I thought I should write a post to let everyone know where we are at
with the Hong Kong AI project.
We’ve got ourselves established in the M-Lab currently, although Gino may want to relocate us to the School of Design on the PolyU campus. We’ve submitted an order to the University requisition people for 3 reasonably specced workstations, and we’ll no doubt order more for the rest of the team when they arrive (see below). In the meantime, there are older computers that are functional enough for now, and we have our personal laptops, which we’ve been using.
After a review of available game engines, and based on the expertise of the team, we’ve decided that Unity3D will be the best engine for the project. While Lucid2 was initially proposed, it only supports an old version of DirectX and the original development team isn’t available for round-the-clock support. Unity3D has an active community and supports cross-platform development, including iOS (although no Linux support ;-( ).
Initially we focused on speech interaction in Unity3D, and we managed to get pocketsphinx, the speech recognition library, working as a DLL plugin for Unity3D and controlling a robot character with very basic commands (“go left”, “go right”, “explode”, “stop”, etc.). I started playing with festival as a speech synthesis plugin, but then we realised that these aspects probably shouldn’t be a primary focus at the beginning, given the trickiness of getting speech working right. Instead, for the time being, we’ll just use text input/output. We can build on that once the core dynamics are working correctly (or once we have more people involved).
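To give a flavour of how the recognised phrases drive the character: the recogniser hands back a hypothesis string, and we look it up in a table of actions. This is just an illustrative sketch (the class and method names are made up for this post, not our actual plugin code):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: route phrases reported by the recogniser
// to simple actions on a robot character.
class RobotController
{
    public string LastAction = "";
    public void TurnLeft()  => LastAction = "left";
    public void TurnRight() => LastAction = "right";
    public void Stop()      => LastAction = "stop";
    public void Explode()   => LastAction = "explode";
}

class CommandRouter
{
    // Each phrase the recogniser can report maps to an action delegate.
    private readonly Dictionary<string, Action> commands;

    public CommandRouter(RobotController robot)
    {
        commands = new Dictionary<string, Action>(StringComparer.OrdinalIgnoreCase)
        {
            { "go left",  robot.TurnLeft },
            { "go right", robot.TurnRight },
            { "stop",     robot.Stop },
            { "explode",  robot.Explode },
        };
    }

    // Called with the hypothesis string; returns false for anything
    // outside the small command vocabulary, which we simply ignore.
    public bool Dispatch(string phrase)
    {
        if (commands.TryGetValue(phrase.Trim(), out var action))
        {
            action();
            return true;
        }
        return false;
    }
}
```

The nice thing about a table like this is that the same Dispatch entry point works whether the phrase arrives from pocketsphinx or from the text input we’re falling back to for now.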
Thus, I am focused on creating a network interface for connecting Unity3D to OpenCog. This has involved understanding the existing network architecture of the embodiment system. At this stage it seems it will first involve making a NetworkElement class in C#, then generating the right XML messages. I think this can be improved with zeromq and protocol buffers, but we’ll wait until the existing pipeline works and we better understand the embodiment system.
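Very roughly, the C# NetworkElement will wrap a TCP connection to the OpenCog side and serialise outgoing messages as XML. The sketch below shows the shape of the idea only; the element and attribute names are placeholders, not the real embodiment message schema, which we’re still working out:

```csharp
using System;
using System.Net.Sockets;
using System.Text;
using System.Xml;

// Rough sketch of the NetworkElement idea: hold a TCP connection to
// the OpenCog router and serialise outgoing messages as XML.
// Element/attribute names are placeholders, not the actual schema.
class NetworkElement
{
    private readonly string id;
    private readonly TcpClient client;

    public NetworkElement(string id, string host, int port)
    {
        this.id = id;
        client = new TcpClient(host, port);
    }

    // Build a minimal XML message envelope; the real format will
    // follow whatever the embodiment system actually expects.
    public static string BuildMessage(string from, string to, string payload)
    {
        var sb = new StringBuilder();
        var settings = new XmlWriterSettings { OmitXmlDeclaration = true };
        using (var w = XmlWriter.Create(sb, settings))
        {
            w.WriteStartElement("message");
            w.WriteAttributeString("from", from);
            w.WriteAttributeString("to", to);
            w.WriteString(payload);
            w.WriteEndElement();
        }
        return sb.ToString();
    }

    public void Send(string to, string payload)
    {
        byte[] bytes = Encoding.UTF8.GetBytes(BuildMessage(id, to, payload));
        client.GetStream().Write(bytes, 0, bytes.Length);
    }
}
```

If we do later swap the transport for zeromq and the envelope for protocol buffers, only the Send/BuildMessage layer here would change; the rest of the Unity side shouldn’t care.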
Lester has been getting more familiar with Unity3D development, importing game characters from another in-house game project so that we have models to prototype with (until we get some graphic artists/modellers). He’ll also help with the Unity side of the OpenCog–Unity connection.
Cord is busy with game design. Loosely, we’ve been thinking about a game around the idea of the Incredible Machine meets Creatures (well, very roughly at least). There will be levels to solve, where you have to teach the AI character(s) to do certain behaviours and can interact with them by talking… and, to a limited extent, by “playing god” (limited, because “playing god” uses “magic”/“psi”, of which you only have a certain amount). In each level your AI characters will remember what you taught them earlier, and you’ll be able to get them to interact with new objects they haven’t seen before by telling them the object is similar to one they’ve already seen. This will be more fully detailed by the end of the week.
Progress is happening, and on a personal note it feels good to be spending all my time and mental energy on OpenCog rather than on distracting contracts to pay the bills.
I’ll endeavour to post regular updates, and perhaps a video or two once we have a functional demo.