PLN forward chainer

I (Jared Wigmore aka JaredW) have recently implemented a general forward chainer for PLN. (See Forward and Backward chaining on Wikipedia). Joel had previously implemented a prototype forward chainer, but it only supported deduction. PLN has a wide variety of inference rules. They each require different sorts of input atoms, and so a forward (or backward) chainer for PLN needs to be able to find appropriate atoms for each inference rule.

There’s a new ‘pln fc’ command in the CogServer shell, which runs some forward-chaining (FC) inference.

Here are some pics of the new forward chainer on a demo dataset about toys, object persistence, etc. They show the AtomSpace before and after inference.

Following is a general explanation of how it works. The idea is that each PLN Rule can provide templates for the Atoms it requires as input. In each inference step, the forward chainer picks a Rule, and then looks up a sequence of Atoms that match the input templates.

Here’s an example with DeductionRule, solving the classic “Mortal Socrates” problem, explained on the OpenCog wiki.

DeductionRule requires two Atoms, in the form:

(Inheritance A B)
(Inheritance B C)

which basically means, A is a B and B is a C. It then produces:

(Inheritance A C)

For the first argument, the forward chainer looks up any Atom that matches (Inheritance A B), that is, any InheritanceLink in the system.
Suppose it finds “Socrates is a man”:

(Inheritance Socrates man)

Now it has A = Socrates and B = man. So to find the second argument, it looks for:

(Inheritance man C)

i.e. “man/men is/are <something>”. Suppose it finds “Men are mortal”:

(Inheritance man mortal)

Then it feeds these two premises into the DeductionRule, which produces:

(Inheritance Socrates mortal)

Remember that since this is forward chaining, it could have found all sorts of other things. If it had found, for the second argument, “Men tend to be bald”, then it would have produced “Socrates is probably bald” 😉
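
For the curious, here is a minimal Python sketch of the template-binding idea described above. The tuple-based Atom representation, the Rule class and the function names are all invented for illustration; the real chainer works directly on the AtomSpace inside the CogServer, this is only a sketch of the control flow.

import random

# Atoms are represented here as nested tuples, e.g. ("Inheritance", "Socrates", "man").
# Template variables are strings starting with "$".

def is_var(x):
    return isinstance(x, str) and x.startswith("$")

def match(template, atom, bindings):
    """Try to unify a template with an atom, extending the bindings (or return None)."""
    if is_var(template):
        if template in bindings:
            return bindings if bindings[template] == atom else None
        extended = dict(bindings)
        extended[template] = atom
        return extended
    if isinstance(template, tuple):
        if not isinstance(atom, tuple) or len(template) != len(atom):
            return None
        for t, a in zip(template, atom):
            bindings = match(t, a, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if template == atom else None

def substitute(template, bindings):
    if is_var(template):
        return bindings[template]
    if isinstance(template, tuple):
        return tuple(substitute(t, bindings) for t in template)
    return template

class DeductionRule:
    inputs = (("Inheritance", "$A", "$B"), ("Inheritance", "$B", "$C"))
    output = ("Inheritance", "$A", "$C")

def forward_step(atomspace, rule):
    """One forward-chaining step: bind the rule's input templates against the atomspace."""
    bindings = {}
    for template in rule.inputs:
        matches = [m for m in (match(template, atom, bindings) for atom in atomspace)
                   if m is not None]
        if not matches:
            return None                      # dead end: an earlier random pick didn't pan out
        bindings = random.choice(matches)    # forward chaining: any matching Atom will do
    return substitute(rule.output, bindings)

atomspace = [("Inheritance", "Socrates", "man"), ("Inheritance", "man", "mortal")]
conclusion = None
while conclusion is None:                    # retry until the random picks line up
    conclusion = forward_step(atomspace, DeductionRule)
print(conclusion)                            # ('Inheritance', 'Socrates', 'mortal')

Note the random choice: forward chaining simply grabs whatever Atoms happen to fit the templates, which is exactly why it can just as well wander off and conclude that Socrates is probably bald.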


OpenCog OSX Support

Due to a recent contract involving development on OSX (unrelated to OpenCog, unfortunately), I now have a MacBook Pro. Since we often have people attempting to build OpenCog on OSX, with varying levels of success, I decided to go through the process and document it. With the help of various people more familiar with OSX than myself, I managed to get the core subset of OpenCog compiling and passing unit tests.

Instructions are on the wiki, although I’ll also place them in a README.osx file within the repository.

There are bugs, mostly around library linking (e.g. I recently found that the Ubigraph module isn’t linked to libxmlrpc_client), so if you find anything that’s broken, please file a bug at Launchpad.

The ultimate goal is to package OpenCog as an OSX application package (and an Ubuntu deb too) – however, I am but one man! If you are familiar with either packaging process, and willing to lend an indirect hand towards developing human-level(+) intelligence, then your help would be most appreciated.


OpenCog REST support (and web UI)

OpenCog now has a REST interface that is loaded and runs on port 17034 by default. It has only recently been completed to a functional level where clients can:

  • make POST requests to create a new atom.
  • make POST requests to a specific atom URL to update an atom’s truth value, STI, or LTI.
  • make a query via a GET request for a particular type of atom, sorted by a variety of attributes and paginated, so that clients can specify the maximum number of results returned and fetch more if necessary.
  • initiate shell requests, such as telling PLN to backwards chain to increase the confidence of a target atom.

There is the potential to add many more utility methods, such as a URL to get all neighbours within a number of hops from a focus atom, or to get the configuration status and find out which modules are currently loaded.
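
To give a flavour of the interface, here is a rough Python sketch (using the requests library) of a client exercising the capabilities listed above. The URL paths and JSON field names are placeholders made up for illustration; the actual URL scheme and payload format are documented on the wiki.

# Placeholder paths and fields only; see the wiki for the real URL scheme.
import requests

BASE = "http://localhost:17034"   # the default port mentioned above

# POST to create a new atom (hypothetical payload shape)
resp = requests.post(BASE + "/atoms",
                     json={"type": "ConceptNode", "name": "Socrates"})
handle = resp.json()["handle"]

# POST to a specific atom URL to update its truth value, STI or LTI
requests.post(BASE + "/atoms/" + str(handle),
              json={"sti": 100, "truthvalue": {"strength": 0.9, "confidence": 0.8}})

# GET a paginated query for a particular atom type, sorted by STI
resp = requests.get(BASE + "/atoms",
                    params={"type": "ConceptNode", "sort": "sti", "max": 50})
for atom in resp.json()["atoms"]:
    print(atom["name"], atom["sti"])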

If someone feels like helping, it’s pretty easy to add a new URL/target to the web module and I can help guide them.

More details on the wiki.

There is also now a basic web interface which tabulates queries and has hyperlinks to allow textual navigation of the AtomSpace. This is a prelude to having HTML5 canvas based graph visualisation via the OpenCog web interface.


Improvements to virtual world based NLP functionality

Here is an update on some recent OpenCog work that may be relevant to some of you…

(Email samir@vettalabs.com if you have detailed questions)

The main update is that the “virtual pet QA system” now answers questions regarding many more spatial relationships; for the full list see

http://www.opencog.org/wiki/EmbodimentLanguageComprehension_Questions

(The work here was in creating rules to identify when these relationships hold between objects as perceived and recorded in the LocalSpaceMap. Ideally these rules would be learned of course, but for a short-cut we’ve hand-coded them…. Making these rules is easy in itself, but required various improvements to the virtual-pet perception infrastructure, which were useful improvements anyway…)

Along the way to doing this, the SpaceServer and LocalSpaceMap inside OpenCog were extended to use 3D rather than just 2D like before. What Samir and Fabricio are working on now is:

1) Loading FrameNet into the Atomspace to enable better reference resolution (this will also be useful for PLN work). They are using an XML version of FrameNet so they don’t need to spider the website…. Note that our license for FrameNet covers only research uses; you need to pay to use it for commercial apps…

2) Building a simple OpenGL-based visualizer so as to visualize what the virtual pet is seeing at each point in time (this should be useful for debugging issues with spatial reasoning, etc.). Basically this is a visualizer for the 3D LocalSpaceMap structure inside OpenCog…


Meaning-Text Theory

During some recent reading, it struck me that a useful framework for thinking and talking about sentence generation is the MTT or “meaning-text theory” of Igor Mel’čuk et al. Here is one readable reference:

Igor A. Mel’čuk and Alain Polguère, (1987) “A Formal Lexicon in Meaning-Text Theory”, Computational Linguistics, vol. 13, pp. 261-275.

portal.acm.org/citation.cfm?id=48160.48166
www.aclweb.org/anthology/J/J87/J87-3006.pdf

Within the context of that theory, the output of the Stanford parser is strictly at the SSynR or “surface syntactic representation” level, while, as a general rule, RelEx attempts to generate the DSynR or “deep syntactic representation” structure. Some of what I’ve been trying to do with OpenCog is aimed at the “SemR” structure, as described in that paper.

The more I read about MTT, the more it seems to capture some of what we are trying to do (de facto are doing) with NLP within OpenCog. In particular, the MTT concept of a “lexical function” (which is not really described in that paper) could be a particularly strong way of guaranteeing correct syntactic output for segsim, NLGen or NLGen2.

— Linas Vepstas


An Update

It’s time we posted a status update!

OpenCog has been a little quieter than usual over the last couple of months. The developers list is still sporadically active, but some of the main developers are having to spend time on other work-related projects, meaning less AGI-driven focus (want to change that? donate here). We’re pursuing several options for establishing further funding for the end of 2009 and through 2010, but we’ll see how that goes.

Instead of writing a long summary post, I’ll just give some bullet points:

  • Dr. Ben Goertzel spoke on building beneficial AGI at the Singularity Summit last month (video here).
  • Cassio Pennachin and Dr. Joel Pitt attended the GSoC Mentor’s Summit at the Googleplex in Mountain View, which led to meeting FOSS developers from around the world. This also allowed them to meet up with Moshe Looks (MOSES and PLOP author) for dinner and discussions around AGI, with a foray into Newcomb’s Paradox.
  • Dr. Linas Vepstas released RelEx 1.2.1, an affiliated OpenCog project, along with the related project/dependency Link Grammar 4.6.5.

I’m sure there are other items of note, so to the other contributors reading this, please feel free to comment and I’ll update this post 😉


Semantic dependency relations

I spent the weekend comparing the Stanford parser to RelEx, and learned a lot. RelEx really does deserve to be called a “semantic relation extractor”, and not just a “dependency relation extractor”. It provides a more abstract, more semantic output than the Stanford parser, which sticks very narrowly to the syntactic structure of a sentence.

I wrote up a few paragraphs on the most prominent differences; most of my updates were to the RelEx dependency relations page.

Here are the main bullet points:

  • RelEx attempts basic entity extraction, and thus avoids generating nn noun modifier relations for named entities.
  • RelEx will collapse the object and complement of a preposition into one. Stanford will do this for some, but not all relationships.
  • RelEx will convert passive subjects into objects, and instead indicate passiveness by tagging the verb with a passive tense feature.
  • RelEx avoids generating copulas, if at all possible, and instead indicates copular relations as predicative adjectives, or in other ways.
  • RelEx extracts semantic variables from questions, with the intent of simplifying question answering. For example, “Where is the ball?” generates _pobj(_%atLocation, _$qVar) _psubj(_%atLocation, ball), which can then pattern-match a plausible answer: _pobj(under, couch).
  • RelEx attempts to extract comparison variables.

It’s also clear to me that I could split the RelEx processing into two stages: one which generates Stanford-style syntactic relations, and a second stage that generates the more abstract relations. This might be a wise move; since RelEx is already more than 3x faster than the Stanford parser, this could attract new users.

— Linas Vepstas


Sentence Patterns

I’ve recently resumed work on the question-answering chatbot, and am trying to get it to comprehend a broader range of questions and statements. The “big idea” is to create a number of “sentence patterns” that the pattern matcher can recognize and respond to. The reason this is a “big” idea is that I am trying to avoid anything algorithmic or procedural: everything is to be done by specifying OpenCog hypergraphs, and NOT by writing C++ code, or Scheme code (or Python code, etc.). The reason for working entirely with patterns and hypergraphs, rather than with C++ or Scheme, is that this puts the “knowledge” of the system into a form that AI routines can manipulate: learning algos can learn new hypergraphs; statistical algos can gather usage information on which hypergraphs get triggered, and so on. This is all easier said than done: although I’ve eliminated a fair amount of question-answering code previously written in C++, I’ve also had to write some new Scheme code. Bummer. 🙁

Pattern matching is now used throughout the OpenCog NLP pipeline, although not in a unified manner. The Link Grammar parser uses patterns (called “disjuncts”) to determine how the words in a sentence can link to one another, thus “parsing”, or pulling the grammatical structure out of a sentence (this paper provides an excellent overview). The RelEx dependency relation extractor applies patterns to the link-grammar output to extract syntactic relations. For example, the sentence “John threw a rock” becomes

_obj(throw, rock)
_subj(throw, John)

after RelEx gets done with it. And now, there are a dozen patterns inside of OpenCog that can pick out certain kinds of questions and statements from RelEx output, and pattern-match questions to find answers to them.

For example, the new OpenCog patterns convert “The capital of France is Paris” into

capital_of(France, Paris)

and similarly, “What is the capital of France?” into

capital_of(France, what)

Treating “what” as a variable, there is yet another pattern that matches up the form of the question to the form of the answer, thus deducing that “what” must be “Paris”.
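
To make the “what”-as-a-variable idea concrete, here is a toy Python sketch of that matching step. The triple representation and the answer() function are purely illustrative; inside OpenCog the triples are hypergraphs and the matching is done by the pattern matcher, not by procedural code (which, as noted above, is precisely the point).

# Toy illustration only; not OpenCog's actual representation or matcher.
knowledge = [("capital_of", "France", "Paris"),
             ("capital_of", "Germany", "Berlin")]

QUERY_VARS = {"what", "who"}

def answer(question):
    """Unify a question triple (containing a query variable) against known triples."""
    for fact in knowledge:
        bindings = {}
        matched = True
        for q, f in zip(question, fact):
            if q in QUERY_VARS:
                bindings[q] = f        # bind the variable to whatever the fact has here
            elif q != f:
                matched = False
                break
        if matched and bindings:
            return bindings
    return None

# "What is the capital of France?"  ->  capital_of(France, what)
print(answer(("capital_of", "France", "what")))   # {'what': 'Paris'}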

Somewhat harder is using patterns to distinguish similar from dissimilar concepts, so that sentences like “John threw a green ball” aren’t used as answers to questions such as “Did John throw a red ball?”: the word “ball” with modifier “green” has to be detected as a different entity than the word “ball” with modifier “red” (such entities are called “semes” in the code). In fact, out of laziness, I’ve punted on this one: the promotion of word-instances to “semes” is done by code, rather than by pattern matching. But soon, I hope, this will change. In the meanwhile, the README file provides a more detailed discussion.

Here are some patterns that work these days:

<me>         John threw a green ball.
<me>         Fred threw a red ball
<me>         Mary threw a blue rock
<me>         who threw a ball?
<cogita-bot> Syntax pattern match found: Fred John
<me>         who threw a red ball?
<cogita-bot> Syntax pattern match found: Fred

<me>         Did Fred throw a ball?
<cogita-bot> Truth query determined “yes”: throw

<me>         Did Fred throw a red ball?
<cogita-bot> Truth query determined “yes”: throw

<me>         The color of the book is red.
<me>         What is the color of the book?
<cogita-bot> Triples abstraction found: red

<me>         the cat sat on the mat
<me>         what did the cat sit on?
<cogita-bot> Triples abstraction found: mat

And here are some that don’t yet work: “Did Fred throw a green ball?” gets no reply, because the system can’t find an answer, and doesn’t make the common-sense leap of “can’t find an answer -> the answer must be no”. Another common-sense problem is illustrated by “Did Fred throw a round ball?”: the system doesn’t know that balls are round, and simply assumes that a “round ball” is some special kind of “ball”. Oh well. There’s work to be done.

You can try out the chatbot yourself (when it’s up, and not broken!) on the IRC chat channel #opencog on the freenode.net chat servers.

— Linas Vepstas


Frequency of grammatical disjuncts

The link-grammar parser uses labeled links to connect together pairs of words.  In order to capture the idea of proper grammatical construction, any given word is only allowed to have very specific links to its right or left: for example, verbs have their subject on the left, and an object on the right.  Link-grammar defines hundreds of different link types, and there are typically dozens or even hundreds of ways that these can attach to a word. Each allowed set of links is called a “disjunct”. So, for example:

MVp- Js+

is a disjunct that says “there must be an MVp link from this word, going to the left, and a Js link, going to the right”. This disjunct commonly connects prepositions to a verb on their left (the MV- link) and to the object of the preposition on the right (the J+ link).

A good way to think about disjuncts is to imagine them as very fine-grained part-of-speech tags. Thus, when one sees “MVp- Js+” associated with a word, one knows not only that the word is a preposition, but even a bit more: it’s a preposition that took a singular object. Disjuncts classify words not just into crude part-of-speech categories, but into much finer ones: thus verbs are not classified simply as transitive or intransitive, but might be transitive verbs that take both direct and indirect objects, or participles, etc.
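
To make the notation concrete, here is a tiny Python sketch that splits a disjunct string such as “MVp- Js+” into its left-going and right-going connectors. This is purely illustrative and is not how the link-grammar parser represents disjuncts internally.

# Illustrative only: split a disjunct string into the connectors that must
# attach to the left ("-") and to the right ("+").
def parse_disjunct(disjunct):
    left, right = [], []
    for connector in disjunct.split():
        name, direction = connector[:-1], connector[-1]
        (left if direction == "-" else right).append(name)
    return left, right

print(parse_disjunct("MVp- Js+"))
# (['MVp'], ['Js']): an MVp link to the verb on the left, and a Js link to a
# singular object on the right -- the classic preposition disjunct.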

Siva Reddy, a GSOC 2009 summer student, prepared a table of the frequency of occurrence of different disjuncts in a large collection of text. The top six entries are

Ds+         950275.635843
Xp-         838569.90527
A+          616522.664867
AN+         566658.997313
MVp- Js+    563082.649325
MVp- Jp+    446487.310222

and these are exactly what one might expect:

  • Ds+ connects the determiner “the” to nouns; and of course, “the” is the most frequent word in the English language.
  • Xp- connects the period at the end of the sentence to the start of the sentence, so of course it’s frequently observed.
  • A+ connects adjectives to nouns, AN+ connects noun modifiers to nouns.
  • As noted above, MV connects verbs to modifying phrases, and J connects prepositions to objects, so that MV- J+ is the disjunct that most prepositions will get. Js connects to a singular object, Jp connects to a plural count or mass noun.

A graph of rank vs. frequency is shown below:

Disjunct rank vs. frequency of occurrence

As can be seen, the distribution is more or less Zipfian, with a power-law exponent of 1.5. The fact that the long tail appears to be linear indicates that grammatical construction in the English language appears to be more or less scale-free: difficult and awkward constructions are increasingly rare. The fact that the graph is not purely Zipfian, but instead has a knee for the most common grammatical connections, suggests that the most common grammatical constructions are “less common than they should be”: almost as if English speakers are resisting the use of formulaic sentence constructions. So, for example, since adjectives and noun-modifiers appear near the top of the ranking, this suggests that English speakers “could have” used more adjectives and noun-modifiers, but didn’t. Quite why this is so is not clear. Perhaps the use of anaphora and references in general helps decrease the need for lots of modifiers.
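
For anyone who wants to reproduce the fit, here is a small Python sketch that reads a two-column table of disjunct counts (the file name and format are assumptions based on the excerpt above), plots rank against frequency on a log-log scale, and fits the exponent over the tail. It uses numpy and matplotlib.

import numpy as np
import matplotlib.pyplot as plt

# Read a two-column "disjunct  count" table like the excerpt above.
# The file name is made up; the actual table may be formatted differently.
counts = []
with open("disjunct_counts.txt") as f:
    for line in f:
        parts = line.split()
        if parts:
            counts.append(float(parts[-1]))

counts = np.array(sorted(counts, reverse=True))
ranks = np.arange(1, len(counts) + 1)

# Fit log(frequency) = -k * log(rank) + c over the tail; the claim above is k ~ 1.5.
tail = slice(10, None)   # skip the "knee" at the very top of the ranking
k, c = np.polyfit(np.log(ranks[tail]), np.log(counts[tail]), 1)
print("power-law exponent:", -k)

plt.loglog(ranks, counts, marker=".", linestyle="none")
plt.xlabel("rank")
plt.ylabel("frequency")
plt.show()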

The open questions are then:

  1. Why a power law of 1.5?
  2. Why is there a knee?
  3. Does this result hold for other languages?

The corpus used here consists of approximately 1 million sentences, obtained by parsing entire Wikipedia articles, Voice of America news stories, and 10 books from Project Gutenberg, including War and Peace, works by Jane Austen, and some scientific or medical texts.

— Linas Vepstas


Visualizing PLN inference

Recently Jared Wigmore, a student of Waikato University, New Zealand, created a tool for visualizing PLN as part of a visualisation project.

BIT visualizer

In my opinion, the BIT visualiser shows great promise as a tool for understanding the complexities of BIT expansion. In particular, the cross joins between sub-trees make it much clearer how sharing of sub-trees is occurring. The size of the BITNodes reflects their fitness evaluation in determining which node of the inference tree will be expanded next, and this will inevitably be useful when we get to the stage of tuning the fitness heuristic.
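
As a rough illustration of what fitness-driven expansion means here, the Python sketch below repeatedly expands whichever node currently scores highest. The node structure and the fitness function are stand-ins for the sake of the example, not PLN’s actual heuristic.

import heapq

# Illustration only: best-first expansion of an inference tree, where the node
# with the highest fitness score is expanded next.
def best_first_expand(root, expand, fitness, max_steps=20):
    counter = 0                                   # tie-breaker so heapq never compares nodes
    frontier = [(-fitness(root), counter, root)]  # min-heap, so negate fitness
    expanded = []
    for _ in range(max_steps):
        if not frontier:
            break
        _, _, node = heapq.heappop(frontier)
        expanded.append(node)
        for child in expand(node):
            counter += 1
            heapq.heappush(frontier, (-fitness(child), counter, child))
    return expanded

# Tiny demo: nodes are (depth, label) pairs; shallower nodes score higher.
demo_expand = lambda node: [(node[0] + 1, node[1] + c) for c in "ab"] if node[0] < 2 else []
demo_fitness = lambda node: 1.0 / (1 + node[0])
print(best_first_expand((0, ""), demo_expand, demo_fitness, max_steps=5))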

As it is still a prototype, there is plenty of scope for continued development; a couple of the many ideas that immediately come to mind are:

  • expansion of BITNodes by clicking on them (this would require OpenCog to provide an XML-RPC interface first however), and
  • thematic colouring of rules so that it’s easier to distinguish between the subtrees.

This is part of the bigger challenge of general AtomSpace visualisation. How do we convey knowledge about the processes that are going on in a digital mind to humans in a meaningful way?
