Improvements to virtual-world-based NLP functionality

Here is an update on some recent OpenCog work that may be relevant to some of you…

(Email samir@vettalabs.com if you have detailed questions)

The main update is that the “virtual pet QA system” now answers questions regarding many more spatial relationships; for the full list see

http://www.opencog.org/wiki/EmbodimentLanguageComprehension_Questions

(The work here consisted of writing rules that identify when these relationships hold between objects, as perceived and recorded in the LocalSpaceMap. Ideally these rules would be learned, of course, but as a short-cut we’ve hand-coded them… Writing the rules themselves is easy, but doing so required various improvements to the virtual-pet perception infrastructure, which were useful in their own right…)
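To give a flavor of what one of these hand-coded rules amounts to, here is a minimal, self-contained sketch of two spatial predicates (“near” and “above”) evaluated over 3D bounding boxes. The class name, function names, and threshold are purely illustrative assumptions; they are not the actual LocalSpaceMap API. In the real system the analogous rules are evaluated against the objects recorded in the LocalSpaceMap so the QA pipeline can answer questions about them.

```python
# Illustrative sketch only -- the names (Entity, near, above) and the
# threshold value are hypothetical, not OpenCog's LocalSpaceMap API.
from dataclasses import dataclass

@dataclass
class Entity:
    """A perceived object with a 3D axis-aligned bounding box (min/max corners)."""
    name: str
    min_corner: tuple  # (x, y, z)
    max_corner: tuple  # (x, y, z)

    def center(self):
        return tuple((lo + hi) / 2.0 for lo, hi in zip(self.min_corner, self.max_corner))

def near(a: Entity, b: Entity, threshold: float = 2.0) -> bool:
    """'near' holds when the two centers are within a fixed distance of each other."""
    ax, ay, az = a.center()
    bx, by, bz = b.center()
    dist = ((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2) ** 0.5
    return dist <= threshold

def above(a: Entity, b: Entity) -> bool:
    """'above' holds when a's bottom is over b's top and the boxes overlap in x/y."""
    overlap_x = a.min_corner[0] <= b.max_corner[0] and b.min_corner[0] <= a.max_corner[0]
    overlap_y = a.min_corner[1] <= b.max_corner[1] and b.min_corner[1] <= a.max_corner[1]
    return overlap_x and overlap_y and a.min_corner[2] >= b.max_corner[2]

# Example: "Is the ball near the tree?"
ball = Entity("ball", (0.0, 0.0, 0.0), (0.5, 0.5, 0.5))
tree = Entity("tree", (1.0, 0.5, 0.0), (1.5, 1.0, 3.0))
print(near(ball, tree))   # True
print(above(ball, tree))  # False
```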

Along the way, the SpaceServer and LocalSpaceMap inside OpenCog were extended to handle 3D rather than only 2D as before. What Samir and Fabricio are working on now is:

1) Loading FrameNet into the Atomspace to enable better reference resolution (this will also be useful for PLN work). They are using an XML version of FrameNet, so they don’t need to spider the website… (a rough sketch of what loading the XML might look like appears after this list). Note that our license for FrameNet covers only research uses; you need to pay to use it for commercial apps…

2) Building a simple OpenGL-based visualizer showing what the virtual pet is seeing at each point in time (this should be useful for debugging spatial-reasoning issues, etc.). Basically it is a viewer for the 3D LocalSpaceMap structure inside OpenCog… (a sketch of what such a rendering loop might look like is also included below).
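Regarding item 1, here is a hedged sketch of how frames, frame elements, and lexical units might be pulled out of the FrameNet XML distribution and flattened into simple triples for loading into the Atomspace. The directory layout, element and attribute names, and the triple format are assumptions about the general shape of a FrameNet release (they vary between versions), not the actual loader being written.

```python
# A rough sketch, not the actual loader: assumes a FrameNet release where each
# frame sits in its own XML file with <frame name=...>, <FE name=...> and
# <lexUnit name=...> elements (exact tags/attributes differ between releases).
import glob
import xml.etree.ElementTree as ET

def local_name(tag):
    """Strip any XML namespace so the sketch works whether or not one is present."""
    return tag.split('}', 1)[-1]

def frame_to_triples(path):
    """Flatten one frame file into (link_type, frame_name, target_name) triples."""
    root = ET.parse(path).getroot()
    frame = root.get('name')
    triples = []
    for elem in root.iter():
        tag = local_name(elem.tag)
        if tag == 'FE':
            triples.append(('FrameElementLink', frame, elem.get('name')))
        elif tag == 'lexUnit':
            triples.append(('LexicalUnitLink', frame, elem.get('name')))
    return triples

# Hypothetical usage over an unpacked FrameNet distribution:
for path in glob.glob('framenet/frame/*.xml'):
    for triple in frame_to_triples(path):
        print(triple)  # the real system would insert these into the Atomspace
```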
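And for item 2, a minimal sketch (assuming PyOpenGL/GLUT and made-up entity data) of a rendering loop that draws each object in the 3D map as a wireframe box; the real visualizer would of course pull its boxes from the LocalSpaceMap rather than from a hard-coded list.

```python
# Minimal wireframe viewer sketch (PyOpenGL + GLUT); the entities below are
# hard-coded stand-ins for whatever the 3D LocalSpaceMap actually contains.
from OpenGL.GL import *
from OpenGL.GLU import *
from OpenGL.GLUT import *

# (center_x, center_y, center_z, size_x, size_y, size_z) per perceived object
ENTITIES = [
    (0.0, 0.0, 0.5, 1.0, 1.0, 1.0),   # e.g. the pet itself
    (3.0, 1.0, 1.5, 1.0, 1.0, 3.0),   # e.g. a tree
]

def display():
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    gluPerspective(60.0, 800.0 / 600.0, 0.1, 100.0)
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    gluLookAt(8.0, 8.0, 6.0,   # camera position
              0.0, 0.0, 0.0,   # look-at point
              0.0, 0.0, 1.0)   # "up" is +z
    for cx, cy, cz, sx, sy, sz in ENTITIES:
        glPushMatrix()
        glTranslatef(cx, cy, cz)
        glScalef(sx, sy, sz)
        glutWireCube(1.0)      # unit cube scaled to the entity's bounding box
        glPopMatrix()
    glutSwapBuffers()

glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH)
glutInitWindowSize(800, 600)
glutCreateWindow(b"LocalSpaceMap viewer (sketch)")
glEnable(GL_DEPTH_TEST)
glutDisplayFunc(display)
glutMainLoop()
```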
