# Fun with first-order inference

Joel Pitt has done some experiments testing first-order PLN inference in OpenCog, on some very simple data.

These experiments don’t use the indefinite probability formulas, but rather the good old-fashioned SimpleTruthValue PLN formulas.
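For reference, the core of the SimpleTruthValue approach for this kind of inference is the independence-based deduction strength formula. Here is a minimal Python sketch (illustrative only; the actual OpenCog code also tracks confidence/count, which is omitted here, and the strength values are made up):

```python
def deduction_strength(sAB, sBC, sB, sC):
    """Independence-based PLN deduction: estimate the strength of A->C
    given the strengths of A->B and B->C and the node probabilities of
    B and C."""
    if sB >= 1.0:
        return sC  # degenerate case: B is certain, so the correction term vanishes
    return sAB * sBC + (1.0 - sAB) * (sC - sB * sBC) / (1.0 - sB)

# Made-up example values, purely for illustration:
s = deduction_strength(sAB=0.5, sBC=0.5, sB=0.5, sC=0.5)  # -> 0.5
```

The first term covers the case where A leads to B; the second term corrects for the probability of reaching C without passing through B.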

What they involve is using PLN to extrapolate indirect word associations from direct word associations mined from text (by some statistical text-mining software created for OpenCog by Linas Vepstas).

This obviously does not stress the generality of PLN as an inference framework (no VariableNodes! no quantifiers! no intension! no fuzzy MemberLinks!). There is nothing particularly revolutionary AI-wise here; it’s just some fairly straightforward, state-of-the-art statistical NLP, and Hebbian learning on a neural net, among many other techniques, could do basically the same thing. But it is a reasonable “smoke test” of the ability to load a bunch of nodes and links into OpenCog and perform some basic inference processes on them. One nice point about PLN is that it can handle relatively simple, associative-neural-netty stuff like this, as well as more complex reasoning involving variables and quantifiers and such, all seamlessly within the same mathematical, conceptual and software approach.

The reason I decided to write a blog post on this is that Joel produced some nifty pictures based on his work, using the open-source graph visualization package Tulip.

Here is a big nasty network of nodes and links in OpenCog, before inference:

Here is the same network, after some first-order PLN inference, with the inferred links in green:

Obviously the above don’t tell you too much. Tulip was configured so that nodes representing words that are more similar (in terms of their statistical association) would generally be placed closer together in the visualization. Slightly more insight is given by zooming in with Tulip to see some of the nodes and links close up. Again, the links in green are the products of inference:

Note that in the example immediately above, building the associative link between “foreign” and “administration” requires the system to make two inferences in sequence:
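The two-step chaining can be sketched as two successive applications of the deduction formula. The intermediate words “w1” and “w2” and all strength values below are invented placeholders (the actual intermediate links are shown in the picture, not reproduced here):

```python
def deduction_strength(sAB, sBC, sB, sC):
    """Independence-based PLN deduction strength for A->C given A->B and B->C."""
    if sB >= 1.0:
        return sC
    return sAB * sBC + (1.0 - sAB) * (sC - sB * sBC) / (1.0 - sB)

# Hypothetical direct associations mined from text (placeholder values):
links = {
    ("foreign", "w1"): 0.4,
    ("w1", "w2"): 0.5,
    ("w2", "administration"): 0.6,
}
node_p = {"w1": 0.1, "w2": 0.1, "administration": 0.1}

# Inference 1: foreign->w1 and w1->w2 yield an inferred foreign->w2 link.
s_foreign_w2 = deduction_strength(
    links[("foreign", "w1")], links[("w1", "w2")],
    node_p["w1"], node_p["w2"])

# Inference 2: the inferred foreign->w2 link plus w2->administration
# yield the indirect foreign->administration link.
s_foreign_admin = deduction_strength(
    s_foreign_w2, links[("w2", "administration")],
    node_p["w2"], node_p["administration"])
```

The output of the first inference becomes a premise of the second, which is exactly the sequencing the picture illustrates.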
