Progress update

[cross-posted from The Singularity Institute Blog]

This blog post constitutes an update on the current state of work on the OpenCog open-source AI project.

No particular event occasioned my writing this post — no dramatic milestone has been reached — it just seemed like a good time for an update, since a lot of things are going on and not many people know about most of them.

While the OpenCog project is still at an early stage, progress has been exciting on a variety of fronts. After reviewing the work that’s been done and is underway, I’ll make a few comments on where I hope things will be by the end of the year, OpenCog-wise … and then (always future-oriented!) look ahead a bit to next year and beyond.

On the most practical front, Gustavo Gama, on the SIAI/OpenCog team, has been working on getting the core OpenCog Framework codebase in shape for an official release later this fall. The codebase has already been opened up to the public and made available to developers interested in participating in the early-stage development of the platform; the official release will signify that the code is ready for a wider variety of developers to participate, including those who want to use OpenCog as a platform for their own work rather than contributing to the development of the framework.

On the theoretical side, Dr. Ben Goertzel, SIAI Director of Research, released a wikibook comprising the equivalent of several hundred pages, outlining a detailed and specific design for an advanced AGI system based on the OpenCog framework. This design is called OpenCog Prime and is heavily inspired by the Novamente Cognition Engine. A weekly series of online tutorial sessions on OpenCogPrime is being offered; the first was held on September 10, and the series is scheduled to continue into early 2009.

Dr. Joel Pitt, on the SIAI/OpenCog team, has been working on adding AI functionality to OpenCog, with the OpenCogPrime design as well as more general AGI utility in mind. A port of the Probabilistic Logic Networks (PLN) framework from the proprietary Novamente Cognition Engine codebase into OpenCog is underway and should be completed by mid-fall. Also, earlier this year Joel successfully implemented an initial version of an artificial-economics-based system for allocating attention within OpenCog; leadership of this code’s development has now been taken over by Dr. Matthew Iklé of Adams State College.
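For readers who haven’t encountered PLN before, here’s a rough idea of the kind of inference it performs. One core rule, first-order deduction, estimates the truth-value strength of “A implies C” from the strengths of “A implies B” and “B implies C”, using an independence assumption. The standalone sketch below (plain Python, not the actual OpenCog code or API) shows only the strength part of that rule; the real implementation also tracks confidence values and many other rule types.

```python
def pln_deduction_strength(s_ab, s_bc, s_b, s_c):
    """Estimate the strength of (A -> C) from (A -> B) and (B -> C).

    Uses PLN's independence-based first-order deduction formula, where
    s_b and s_c are the term probabilities of B and C. This is a toy
    illustration of the rule's flavor only; the OpenCog implementation
    also propagates confidence, not just strength.
    """
    if s_b >= 1.0:  # degenerate case: B covers everything
        return s_c
    return s_ab * s_bc + (1.0 - s_ab) * (s_c - s_b * s_bc) / (1.0 - s_b)


# Example: "cats are mammals" (0.99), "mammals are furry" (0.8),
# with base rates P(mammal) = 0.1 and P(furry) = 0.15.
print(pln_deduction_strength(0.99, 0.8, 0.1, 0.15))  # ~0.79
```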

Dr. Linas Vepstas, a Novamente LLC researcher, has integrated a number of natural language processing tools into OpenCog, based on various statistical algorithms, as well as the Carnegie-Mellon link parser and the related RelEx language processing framework. This code provides powerful mechanisms for turning English sentences into logical relationships residing in OpenCog’s knowledge base.
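To give a feel for what “turning English sentences into logical relationships” means in practice, here’s a toy, standalone sketch — not the actual RelEx or OpenCog API. RelEx emits binary relations between words, roughly of the form shown below, and these end up stored as typed links between concept nodes in OpenCog’s hypergraph knowledge store. The atom-type names here are modeled loosely on OpenCog’s, but the data structures are just Python tuples for illustration.

```python
# Toy illustration (not the actual RelEx or OpenCog API) of the kind of
# transformation the pipeline performs: a parsed English sentence becomes
# binary relations, which are then stored as typed links between concept
# nodes in a hypergraph-style knowledge base.

# RelEx-style output for "The cat ate the fish" (format approximated here):
relations = [
    ("_subj", "eat", "cat"),   # the cat is the subject of "eat"
    ("_obj",  "eat", "fish"),  # the fish is its object
]

# Translate each relation into an "EvaluationLink"-like tuple over nodes.
knowledge_base = []
for rel, head, dependent in relations:
    knowledge_base.append(
        ("EvaluationLink",
         ("PredicateNode", rel),
         ("ListLink", ("ConceptNode", head), ("ConceptNode", dependent)))
    )

for atom in knowledge_base:
    print(atom)
```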

Novamente LLC researchers Dr. Predrag Janicic and Dr. Nil Geissweiller, working with Google researcher Dr. Moshe Looks, have integrated a new version of the MOSES probabilistic evolutionary learning framework (initially described in Dr. Looks’ 2006 PhD thesis, available at metacog.org) into OpenCog.
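For those unfamiliar with MOSES: it learns small programs (Boolean formulae, agent control procedures and so forth) by evolutionary search guided by probabilistic modeling. The sketch below is a drastically simplified, standalone illustration of the program-evolution idea only — it evolves a Boolean formula that reproduces XOR from examples. Real MOSES organizes the search into “demes” of program variants, normalizes programs into a canonical form, and builds probabilistic models over program subcomponents rather than relying on the blind mutation used here.

```python
import random

TARGET = {(a, b): a != b for a in (0, 1) for b in (0, 1)}  # learn XOR from examples

OPS = {
    "AND":  lambda x, y: x and y,
    "OR":   lambda x, y: x or y,
    "NAND": lambda x, y: not (x and y),
    "NOR":  lambda x, y: not (x or y),
}

def random_program(depth=2):
    """A candidate program is either a variable name or a nested (op, left, right) tuple."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["a", "b"])
    op = random.choice(list(OPS))
    return (op, random_program(depth - 1), random_program(depth - 1))

def evaluate(prog, a, b):
    if prog == "a":
        return bool(a)
    if prog == "b":
        return bool(b)
    op, left, right = prog
    return bool(OPS[op](evaluate(left, a, b), evaluate(right, a, b)))

def fitness(prog):
    """Number of training cases the program gets right."""
    return sum(evaluate(prog, a, b) == want for (a, b), want in TARGET.items())

def mutate(prog, depth=2):
    """Replace a randomly chosen subtree with a freshly generated one."""
    if not isinstance(prog, tuple) or random.random() < 0.3:
        return random_program(depth)
    op, left, right = prog
    if random.random() < 0.5:
        return (op, mutate(left, depth - 1), right)
    return (op, left, mutate(right, depth - 1))

# Simple elitist evolutionary loop: keep the best programs, mutate them, repeat.
population = [random_program() for _ in range(50)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    elite = population[:10]
    population = elite + [mutate(random.choice(elite)) for _ in range(40)]

best = max(population, key=fitness)
print("best program:", best, "score:", fitness(best), "/", len(TARGET))
```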

A team of Novamente LLC engineers led by Cassio Pennachin (and including Welter Silva, Carlos Lopes and Samir Araujo), in collaboration with Jani Pirkola and others on the RealXTend team, has been working on porting to OpenCog a substantial amount of Novamente LLC code concerned with the control of intelligent agents in 3D virtual worlds. Scheduled for completion in October, this project will initially result in an OpenCog system capable of controlling intelligent, adaptive virtual dogs in the open-source online world RealXTend (a modification of the OpenSim codebase, which began as an open-source analogue to the proprietary Second Life virtual world platform). However, the system is intended to extend beyond virtual dogs, enabling general OpenCog-based control of agents in virtual worlds.

Two Chinese PhD students, Rui Liu at Wuhan University and Lian Ruiting at Xiamen University, have been working with Novamente LLC and SIAI on creating OpenCog-based code for natural language generation: that is, for taking knowledge in an OpenCog knowledge base and translating it into English. A prototype system exists that works for simple sentences, and Ruiting is currently figuring out how to generalize it.
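To illustrate the generation direction, here’s another toy standalone sketch — again, not the actual prototype, which works over OpenCog’s hypergraph representation. Given subject/object relations like the ones in the comprehension example above, it realizes a simple declarative English sentence from a hard-coded template, with none of the tense, agreement or determiner handling a real generator needs.

```python
# Toy illustration of the generation direction: going from relation triples
# in a knowledge base back to a simple English sentence. The relation format
# and the surface template below are illustrative only.

def generate_sentence(relations):
    """Render subject-verb-object relations as a simple declarative sentence."""
    subj = obj = verb = None
    for rel, head, dependent in relations:
        if rel == "_subj":
            verb, subj = head, dependent
        elif rel == "_obj":
            verb, obj = head, dependent
    if not (subj and verb and obj):
        raise ValueError("only simple SVO structures are handled in this sketch")
    # Naive surface realization: no tense, agreement or determiner handling.
    return f"The {subj} {verb}s the {obj}."


print(generate_sentence([("_subj", "chase", "dog"), ("_obj", "chase", "cat")]))
# -> "The dog chases the cat."
```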

Last but definitely not least, 11 interns were funded to work on OpenCog for Summer 2008 via the Google Summer of Code program (thanks, Google!!). Their projects covered a variety of areas, and some were dramatically successful. To name just two among many deserving examples: Filip Maric designed and implemented a new approach to grammar parsing based on Boolean satisfiability, and integrated it with the Carnegie-Mellon link parser used by OpenCog’s RelEx natural language processing framework; and Cesar Maracondes refactored the implementation of key portions of the Probabilistic Logic Networks framework. A full recounting of the GSoC work may be found at http://brainwave.opencog.org/2008/09/, which links to a page that in turn links to the individual student project pages.

There are a lot of things going on with OpenCog and I’m aware I’ve left some interesting things out … but I hope I’ve given a reasonable overall flavor of the state of progress.

Where do we hope to be by the end of the year? An officially-released OpenCog Framework … with PLN, MOSES, attention allocation, virtual-agent control in Multiverse and RealXTend, and basic natural language comprehension and generation integrated. All these things are underway, under intense development, and not too far from completion. With all these things in place, we’ll be poised for a really exciting 2009 for OpenCog.

With luck, during 2009 we will make serious progress toward creating a “virtually embodied artificial infant” (which may take humanoid, animal or virtual-robot form: that’s not the point) based on the OpenCogPrime design, and will also see a variety of other AGI approaches implemented within OpenCog by a diversity of researchers.

Also, the vague and sketchy OpenCogPrime roadmap recently posted is scheduled to be turned into a more thorough and precise document with careful attention to evaluation and metrics at each envisioned stage.

As the roadmap indicates, there is a clear and definite plan in place that seems to have a plausible chance of leading from the current early stage of OpenCog development through a series of progressively more advanced stages, defined loosely by reference to human childhood cognitive development. Although the specifics of OpenCogPrime, and of OpenCog more generally, are not closely tied to human biopsychology, human development still seems a useful rough analogue for the developmental progress of virtually or physically embodied AI systems based on OpenCogPrime and other OpenCog-based designs.

This year the focus is on getting the basic AI mechanisms in place … next year (though for sure more work on AI mechanisms will continue!) we hope to segue into more of a focus on artificial baby-building … and with hard work and just a little luck, a few years down the road we’ll have a robust and highly intelligent OpenCog-based AGI dude on our hands. But I don’t want to digress too far into the glorious future … I’ve written enough on that already elsewhere … the point of this post was supposed to be to summarize the state of current progress. So, that’s all for now!

“May you live in interesting times.” 😉
