Defeasible reasoning and modeling

I have a quick question about modeling. I have a vague understanding of the issue, but I'm not quite sure if I'm right or how to handle it.

Say I'm trying to model something like cars, where each car has a make and model. It might seem reasonable to model a model (Camry) as a class and my particular car as an instance of that class. That's nice because I can infer a bunch of properties of my car given its model (<myCar> a :Camry), but say I change the wheels. I can't really say that it's a Camry anymore; either that, or I can't infer anything based on the model of car.
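For concreteness, here's a rough sketch of the kind of modeling I mean (the prefix and property names are made up):

```turtle
@prefix :     <http://example.org/cars#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# The model is a class; every Camry is asserted to come with
# stock wheels via a hasValue restriction.
:Camry a owl:Class ;
    rdfs:subClassOf [
        a owl:Restriction ;
        owl:onProperty :hasWheels ;
        owl:hasValue :StockCamryWheels
    ] .

# My particular car is an instance, so a reasoner infers
# :myCar :hasWheels :StockCamryWheels -- until I swap the wheels.
:myCar a :Camry .
```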

I think the problem I'm having is that this is non-monotonic, and what I'm looking for is a defeasible reasoner. I've also seen some papers on epistemic reasoning, although I'm not sure what the difference between defeasible and epistemic reasoning is, if any.

Is my intuition about the reasoning correct? If it is, are there best practices for handling this with monotonic reasoning? It seems like heavy use of punning could get you part of the way there, but it doesn't quite seem right. Do I fall back on a probabilistic approach?
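By punning I mean roughly this (OWL 2 lets the same IRI name both a class and an individual; again, the names are made up):

```turtle
# :Camry as a class, for instance-level classification.
:Camry a owl:Class .

# The same IRI punned as an individual, so model-level facts
# attach to the model itself instead of every member.
:Camry a :CarModel ;
    :typicalWheels :StockCamryWheels .

# The car keeps a plain link to the model individual; nothing
# about the wheels is forced onto :myCar by class axioms.
:myCar a :Camry ;
    :hasModel :Camry .
```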

It seems to come up often and it would be nice to be able to either steer people away from this, warn them, or offer a way to handle it. Right now I'm not quite sure what to do.

Most of the information that I've found on defeasible reasoning is somewhat old. Is this something people are still working on?

Yes, you are right re: monotonicity. In OWL (and first-order logic in general), adding a new formula can never lead to a previous inference becoming invalid. So if you add a statement that your car now has custom wheels, it cannot invalidate any of the inferences which hold due to it being a Camry.

If you want to retract those inferences while still using a monotonic reasoner, you'd have to delete the <myCar> a :Camry statement manually. One way to do it is to model the custom wheels predicate such that adding a statement that your car has custom wheels would contradict the statement that it's a Camry (that is, create an inconsistency). A disjoint classes axiom might be sufficient. Then you can ask the reasoner to explain the inconsistency and repair the knowledge base by deleting <myCar> a :Camry. The result won't infer any of the Camry properties.
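A minimal sketch of that setup (the class and property names are just placeholders):

```turtle
@prefix :    <http://example.org/cars#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

:hasCustomWheels a owl:DatatypeProperty .

# Cars with custom wheels form a class that is declared
# disjoint with :Camry.
:CustomWheeledCar a owl:Class ;
    owl:equivalentClass [
        a owl:Restriction ;
        owl:onProperty :hasCustomWheels ;
        owl:hasValue true
    ] ;
    owl:disjointWith :Camry .

# Asserting both statements below now makes the KB inconsistent;
# the repair is to retract :myCar a :Camry .
:myCar a :Camry ;
    :hasCustomWheels true .
```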

The standard difficulty there is that there can be multiple, in the worst case exponentially many, conflict sets for a single inconsistency. However, it may not be that bad in practically interesting cases.

Cheers,
Pavel

PS. Can't comment on whether defeasible reasoning is still an active field of study. I guess it depends on who you ask...

I gave the machine learning modeling a try, and it looks like you can't use a spa:model in a rule :frowning:

I suppose that makes sense, but I had to give it a try. Is it correct that you can't do that, or did I possibly do something wrong? Any chance you might allow something like that in the future?

On a side note, I came across an error when poking around: something about not being able to parse the RDF because the rule was longer than 1024. I'll see if I can reproduce it and make a proper report, unless there really is a length restriction on rules like that.

Has Stardog by any chance looked into Probabilistic Soft Logic (PSL)? I've poked around it a little from time to time. They don't appear to be using semantic web technologies in any way, but every time I look at it, it seems like it would be a great fit, at least as a data source. They seem to have a somewhat odd way of defining their data. Could PSL be an alternative inference engine to FOL? Or possibly coexist using property attributes?

No, I don't think we've ever considered it seriously but, OTOH, I can imagine integrating some statistical relational learning framework into Stardog, probably as an extension to our ML offering. Then it'd co-exist with ontology/rules in the same way as ML models co-exist now.

Best,
Pavel
