OpenCog navigating Nao robot

Recently the team at Xiamen University have been working on integrating OpenCog with Nao robots. This recent video shows them using voice commands to tell the robot to walk from one object to the next. The robot hears, parses, and understands the command, then uses pathfinding to reach the destination.
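For readers curious about the pathfinding step, here is a minimal sketch of A* search on a 2D occupancy grid. This is purely illustrative: the post does not describe the team's actual planner or code, and the grid representation, function name, and Manhattan heuristic here are assumptions for the example.

```python
import heapq

def astar(grid, start, goal):
    """Shortest path on a 2D grid (0 = free, 1 = obstacle) using A*
    with a Manhattan-distance heuristic. Returns a list of (row, col)
    cells from start to goal, or None if no path exists."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: admissible for 4-connected grid movement.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries are (f, g, cell)
    came_from = {}
    best_g = {start: 0}

    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            # Walk back through predecessors to reconstruct the path.
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        if g > best_g.get(cell, float("inf")):
            continue  # stale heap entry; a cheaper route was found
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cell
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal unreachable
```

A real robot would, of course, plan over a map built from sensor data and feed the resulting waypoints to its walking controller; this sketch only shows the search itself.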

The team involved were: Professor Min Jiang and students Huang Deheng, Ye Yang, and Shuo Chen.

This entry was posted in Development.

4 Responses to OpenCog navigating Nao robot

  1. Bob Mottram says:

It’s always difficult to judge demonstrations like this, because they depend not so much on what the robot does as on how it does it, and on whether the methods used can generalise. This sort of demo was done long ago with Shakey, but with a lot of contrivances, out-takes and men behind curtains.

  2. Joel Pitt says:

    Yes, I totally understand what you mean Bob.

    Here I know that speech recognition is occurring (I don’t know if it’s biased towards recognising “go to” statements), and then the path is worked out (probably using A*). I believe they are also using a rooftop camera to position things. The latter is obviously a hack, but it’s a start.

    Unfortunately I only get sporadic updates from Xiamen, but I’ll likely be moving to Hong Kong where I’ll be able to get a better feel for the progress and how it’s implemented.

  3. JB says:

    Does the robot already know the location of the ball, or does it locate the ball and then determine a path?

    • Anonymous says:

      Here it already “knows” the location of the ball from being informed by a rooftop camera about the position of objects. Vision processing to detect and recognise features is something we’re working on…
