I (Jared Wigmore, aka JaredW) have recently implemented a general forward chainer for PLN. (See "Forward chaining" and "Backward chaining" on Wikipedia.) Joel had previously implemented a prototype forward chainer, but it only supported deduction. PLN has a wide variety of inference rules, each requiring different sorts of input atoms, so a forward (or backward) chainer for PLN needs to be able to find appropriate atoms for each rule.
There’s a new ‘pln fc’ command in the CogServer shell, which runs some forward-chaining (FC) inference.
Here are some pics of the new forward chainer running on a demo dataset about toys, object persistence, etc. They show the AtomSpace before and after inference.
Following is a general explanation of how it works. The idea is that each PLN Rule can provide templates for the Atoms it requires as input. In each inference step, the forward chainer picks a Rule, and then looks up a sequence of Atoms that match the input templates.
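To make the pick-a-rule-then-match-templates step concrete, here is a minimal sketch in Python. It is not the actual implementation (which lives in the OpenCog C++ codebase); the atom-as-tuple representation, the single-uppercase-letter variable convention, and the `forward_step` / `match` names are all assumptions of this sketch.

```python
import random

# Atoms are represented as nested tuples, e.g. ("Inheritance", "Socrates", "man").
# Single uppercase letters act as template variables (a convention of this
# sketch only; the real chainer uses PLN's own Atom and template types).

def is_variable(x):
    return isinstance(x, str) and len(x) == 1 and x.isupper()

def match(template, atom, bindings):
    """Try to unify a template with an atom, extending bindings; None on failure."""
    if is_variable(template):
        if template in bindings:
            return bindings if bindings[template] == atom else None
        new = dict(bindings)
        new[template] = atom
        return new
    if isinstance(template, tuple) and isinstance(atom, tuple) \
            and len(template) == len(atom):
        for t, a in zip(template, atom):
            bindings = match(t, a, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if template == atom else None

def substitute(template, bindings):
    """Replace any bound variables in a template with their values."""
    if is_variable(template):
        return bindings.get(template, template)
    if isinstance(template, tuple):
        return tuple(substitute(t, bindings) for t in template)
    return template

def forward_step(atomspace, rules):
    """One inference step: pick a rule, then find atoms for each input
    template in turn, threading variable bindings between templates."""
    rule = random.choice(rules)
    results = []

    def search(i, bindings):
        if i == len(rule["inputs"]):
            results.append(substitute(rule["output"], bindings))
            return
        for atom in atomspace:
            b = match(substitute(rule["inputs"][i], bindings), atom, bindings)
            if b is not None:
                search(i + 1, b)

    search(0, {})
    return results
```

The key point the sketch shows: once the first template is matched, its variable bindings constrain the lookup for the second template, so the chainer never pairs arbitrary atoms.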
Here’s an example with DeductionRule, solving the classic “Mortal Socrates” problem, explained on the OpenCog wiki.
DeductionRule requires two Atoms, in the form:
(Inheritance A B)
(Inheritance B C)
which basically means that A is a B and B is a C. It then produces:
(Inheritance A C)
For the first argument, the forward chainer looks up any Atom that matches (Inheritance A B), that is, any InheritanceLink in the system.
Suppose it finds “Socrates is a man”:
(Inheritance Socrates man)
Now it has A = Socrates and B = man. So to find the second argument, it looks for:
(Inheritance man C)
i.e. “man/men is/are <something>”. Suppose it finds “Men are mortal”:
(Inheritance man mortal)
Then it feeds these two premises into the DeductionRule, which produces:
(Inheritance Socrates mortal)
Remember that since this is forward chaining, it could have found all sorts of other things. If it had found, for the second argument, “Men tend to be bald”, then it would have produced “Socrates is probably bald” 😉
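The whole walkthrough, including the "probably bald" branch, can be sketched as a few lines of Python. This is an illustrative toy, not the real DeductionRule: the names and the pair representation are assumptions, and real PLN deduction also computes a truth value for each conclusion, which is omitted here.

```python
# A toy AtomSpace of Inheritance links, as (subject, object) pairs.
inheritance = [
    ("Socrates", "man"),
    ("man", "mortal"),
    ("man", "bald"),   # "Men tend to be bald"
]

# Forward-chain deduction exhaustively: for every (A, B) and (B, C),
# conclude (A, C).
def deduce_all(links):
    conclusions = set()
    for a, b1 in links:
        for b2, c in links:
            if b1 == b2 and a != c:
                conclusions.add((a, c))
    return conclusions

print(deduce_all(inheritance))
# Contains both ("Socrates", "mortal") and ("Socrates", "bald")
```

Because forward chaining takes whatever second premise it finds, both conclusions come out; a backward chainer, by contrast, would only pursue premises relevant to a given target.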