Preview of a virtual learning environment

It’s been a while since the last update, but be assured we’ve been very busy working away on the embodiment code and developing our virtual learning environment. With AGI-11 now in progress at the Googleplex, we’ve put together a few videos to give an outline of what we’re working on. There isn’t any overly advanced learning going on yet, but they give you a feel for where the project is going.

First up is a video demonstrating a human-controlled player navigating and interacting with the world. This world is built in Unity3D, so eventually we’ll be able to put this environment online, make it an iPad app, or whatever else, and let you guys interact with and teach OpenCog directly.

The things to note are that it’s based on a Minecraft-like environment, which means the player and AI will be able to modify the terrain and build things. Other objects can also be moved around and interacted with. We’ve got a very flexible action system that allows new action types to be added easily, and OpenCog will be able to learn the causal effect of executing these previously unknown actions.
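To make that idea a little more concrete, here is a rough sketch (in Python, not the actual OpenCog embodiment code) of what a pluggable action system might look like: action types are registered by name, and every execution is logged as a (state-before, action, state-after) triple that a learner can later mine for causal effects. All names here (ActionRegistry, World, snapshot, and so on) are hypothetical.

```python
# Hypothetical sketch of a pluggable action system: new action types can be
# registered at runtime, and each execution is logged so the causal effect
# of an unknown action can be learned from experience.

class World:
    """Toy world state: just a set of (object, property) facts."""
    def __init__(self, facts=None):
        self.facts = set(facts or [])

    def snapshot(self):
        return frozenset(self.facts)


class ActionRegistry:
    def __init__(self):
        self.actions = {}        # name -> callable(world, **params)
        self.experience = []     # list of (before, action_name, params, after)

    def register(self, name, fn):
        """Add a new action type; the agent need not know its effect in advance."""
        self.actions[name] = fn

    def execute(self, name, world, **params):
        before = world.snapshot()
        self.actions[name](world, **params)
        after = world.snapshot()
        self.experience.append((before, name, params, after))
        return after


# Usage: register a previously unknown "press" action and execute it.
registry = ActionRegistry()
registry.register("press", lambda w, target: w.facts.add((target, "pressed")))

world = World({("switch_1", "exists")})
registry.execute("press", world, target="switch_1")
print(registry.experience[-1])   # the learner mines triples like this one
```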

Next is a demonstration of 3D pathfinding (implemented by Troy Deheng), along with the satisfaction of a single “demand” for energy by consuming a battery object. It also shows simple planning: if OpenCog can’t find a battery in the world, it asks the player for one. After being asked, the player spawns a new battery with a switch, and OpenCog detects the correlation between using the switch and new batteries appearing, in essence learning a new behaviour to satisfy its goals.

Jared Wigmore has been working on Fishgram, which is a frequent sub-hypergraph miner that detects patterns in the AtomSpace (and so by extension, also in the virtual world). This is the component used to detect that pushing a switch creates a new battery.
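Fishgram itself operates on OpenCog’s hypergraph AtomSpace, so the following is only a toy illustration of the underlying idea of frequent pattern mining: counting which new facts reliably follow which actions across an experience log like the one in the earlier sketch. The function name, log format, and thresholds are all made up for illustration.

```python
from collections import Counter

def mine_action_effects(experience, min_support=3):
    """Toy frequent-pattern miner: count (action, new-fact) pairs across the
    experience log and keep those that recur often enough."""
    pair_counts = Counter()
    action_counts = Counter()
    for before, action, params, after in experience:
        action_counts[action] += 1
        for fact in after - before:            # facts that appeared after acting
            pair_counts[(action, fact)] += 1

    patterns = {}
    for (action, fact), n in pair_counts.items():
        if n >= min_support:
            # rough confidence: how often the effect follows the action
            patterns[(action, fact)] = n / action_counts[action]
    return patterns

# After several logged executions this might yield something like
# {("press", ("battery_2", "exists")): 0.9}
# i.e. "pressing the switch usually makes a battery appear".
```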

Last is a shorter video showing demands beyond just energy. Demands are analogous to goals, though not quite the same: they are slightly different in that they impel the system towards certain goals. The new demand in this video is one for integrity, which is roughly analogous to fatigue/health. In the video, the house is known to satisfy this demand and increase integrity, and the character oscillates between which demand is most important: integrity, then energy, then back again. Zhenhua Cai has already added a couple more demands: Competence and Certainty.
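As a rough sketch of how demand-driven behaviour can produce that oscillation (this is a simplification for illustration, not OpenPsi’s actual implementation), imagine each demand as a level in [0, 1] that depletes over time; the agent pursues whichever demand is currently most urgent, so satisfying one lets the other fall behind and take over.

```python
# Simplified demand selection: levels deplete each tick, urgency = 1 - level,
# and the agent acts on the most urgent demand, boosting its level back up.

demands = {"energy": 0.9, "integrity": 0.6}
decay   = {"energy": 0.05, "integrity": 0.03}   # per-tick depletion rates

def most_urgent(levels):
    return max(levels, key=lambda d: 1.0 - levels[d])

for tick in range(10):
    for d in demands:
        demands[d] = max(0.0, demands[d] - decay[d])
    target = most_urgent(demands)
    demands[target] = min(1.0, demands[target] + 0.2)   # e.g. eat battery / enter house
    print(tick, target, {d: round(v, 2) for d, v in demands.items()})
```

Running this, the chosen demand alternates between integrity and energy over the ticks, which is the same back-and-forth behaviour visible in the video.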

Thanks to Cord Krohn for putting the videos together, as well as doing environment and character design, and thanks to Michael Jia for 3D modelling and art assets.


8 Responses to Preview of a virtual learning environment

  1. Pingback: OpenCog 3D Virtual Learning Environment | AI Revolution

  2. Matt says:

    Cool. Is there a tutorial for setting this up like the old one for Multiverse on the cogbuntu page?

    • J P says:

      Not yet I’m afraid – the Unity world is under heavy development, so until it settles down we’ll probably wait until we can create a decent tutorial and supporting documentation.

  3. Free Thought says:

    I’m slightly confused about whether these videos demonstrate anything that moves in a direction closer to AGI, or whether they demonstrate narrow AI. Is there any chance of seeing a demonstration soon that might help to show some behaviours that are clearly solving varied and/or unique problems using a single methodology? It’s currently hard to visualize how this system will be able to perform tasks outside of its narrow constraints, given the hard-coded requirements the AI tries to satisfy. Even though it may be learning to satisfy these requirements in its own way, how far will one ultimately be able to expand the capabilities of such an AI?

    • J P says:

      Hi – My aim is to be as transparent as possible about what we’re doing. We’re not interested in trying to deceive people with narrow scripted behaviours.

      That said, these videos are mostly a demonstration of the virtual world. OpenCog has specialised path-finding algorithms, but these can be invoked as part of an “action plan”. So a plan might include an action to “move to the battery cube” (where battery cube is a unique id related to a previously seen object).

      So we’re abstracting actions at a certain level, but not confining OpenCog to only know about certain objects. We can easily add new actions to objects, and it’s up to OpenCog to detect the patterns of what happens when those actions are initiated.

      We are building on an older embodiment system that was quite limited in what it could learn. Now a lot of time is being spent on pulling out the old parts and replacing them with general systems.

  4. Sa3vis says:

    I’m just wondering if curiosity and memory will be added to this project. Curiosity, in that if there is a new object or button it will go and observe it, and then, depending on what it does, it will either ignore it, stay away from it, be cautious with it, or use it. Maybe even hoard it. And with memory, if it sees something it doesn’t need at the moment it would keep a virtual memory map of where it is located and then find it when needed. Those are just a few things I was thinking would be neat. Keep up the great work and good luck.

    • J P says:

      There is definitely an aspect like curiosity. In OpenPsi (the emotion and motivation system), there are demands for “competence” and “certainty”. “Certainty” will lead the character to try to understand the state of the environment, while “competence” will lead the character to try to get better at influencing and predicting the results of its actions.

  5. MDude1350 says:

    That Bomberman Hero music.
    This virtual learning environment sounds like something I’ve been wanting for a while. The bots you have seem nice, but I think it’d be fun to see what I can make on my own. Would I be able to use the environment to embody other AI systems without them being a part of OpenCog, or would I need to roll my own system for that?
