The Status of AGI and OpenCog

AGI stands for Artificial General Intelligence: it is the idea of creating a thinking machine, as smart and clever and powerful as a human being. OpenCog is a project pursuing basic research in AGI. It is attempting to build such a thinking machine.

Let’s compare the search for AGI to astronomy. An amateur with binoculars can do astronomy and have lots of fun, but will not contribute to scientific advance. For that, you need highly-trained scientists, and you need sophisticated instruments (viz., fancy telescopes).

OpenCog-the-project is a loose affiliation of about a half-dozen “scientists” of varying talents and varying time devoted to it. It has a loosely articulated research program; some roughly-agreed-upon directions to go in.

OpenCog-the-software is a telescope. It’s the primary tool that I (the author of this blog entry) personally use for research. Others use it, but they struggle to be effective. They lose interest after a while. (Seriously: if you had full-time access to some major scientific instrument, some big telescope or particle accelerator, would you be effective? Would you lose interest after a while? What would you even do with it? Use it to explode cans of soda and post the videos on YouTube?)

It’s worth mentioning what OpenCog isn’t: it is not a grand theory of AGI. It is not a master blue-print of a machine, waiting to be built, certain of success once completed. We hope to find some portion of that grand theory. We hope to build some portion of that machine. Bit by bit.

Let’s compare this to mainstream big science. OpenCog-the-project is comparable to a small department at a small university: a handful of half-active people. OpenCog-the-software is… well, I spent a decade building it out. It was not clear what to build, or what it should even do. It’s a bit clearer now. I still think it’s on the right track.

The first particle accelerators were built by people like Ben Franklin, who had no clue that there were particles, or that they were being accelerated. Neither of those ideas gelled until over 100 years later. When they did, the first intentionally-designed particle accelerators were built by single individuals, over the course of a few months or a few years. (The story of early telescopes is similar.) In the meantime, the general idea of why particle accelerators are useful became clear. (It’s not obvious, not the way a telescope is obviously for looking. What would you even look at? The neighbor’s house?)

But once you get the idea of what something is, and why it is useful, you can build bigger and better ones, and when you can’t build it yourself in a few years, you can organize a staff of grad students and assistants. Maybe even a part-time engineer. Eventually, the general idea becomes so very clear, and you are so certain of success, that you can raise funds to pour concrete for a new building to house it in.

Neither OpenCog, nor AGI are at that level yet.  Deep learning is like the invention of the Geiger counter: thousands of people went off and built themselves a Geiger counter (or downloaded one off the net) and started messing with it. And they’re having lots of fun with it; some are making great discoveries, while others are building better Geiger counters.  

OpenCog-the-software is not anywhere near as fun as a Geiger counter.  Is it even on the right track?  Well, ten years ago, OpenCog-the-software was almost the only graph database on the planet. Now there are more than you can easily count, all enjoying huge commercial success. So maybe we did something right, when we designed it to be a graph database.  Meanwhile, OpenCog has grown with new features, capabilities, and stuff that no other graph DB has, or has even contemplated.  So I feel like we’re way ahead in the race. But…
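For readers unfamiliar with what “designed it to be a graph database” means here, the following is a minimal illustrative sketch in plain Python. It is not the actual OpenCog AtomSpace API; it only mimics the core idea OpenCog is built around: knowledge stored as typed nodes and typed links (a hypergraph), with the store deduplicating atoms so the same fact is represented exactly once.

```python
# Illustrative sketch only -- not the real OpenCog AtomSpace API.
# Knowledge is a graph of typed atoms: named nodes, and links
# whose "outgoing set" points at other atoms.

class Atom:
    def __init__(self, atype, name=None, outgoing=()):
        self.atype = atype               # e.g. "Concept", "Inheritance"
        self.name = name                 # nodes carry names; links don't
        self.outgoing = tuple(outgoing)  # links point at other atoms

    def __repr__(self):
        if self.name is not None:
            return f'({self.atype} "{self.name}")'
        return f'({self.atype} {" ".join(map(repr, self.outgoing))})'

class AtomSpace:
    """A tiny store that deduplicates atoms, like a graph DB would."""
    def __init__(self):
        self._atoms = {}

    def add(self, atype, name=None, outgoing=()):
        # Identical (type, name, outgoing) always yields the same atom.
        key = (atype, name, tuple(id(a) for a in outgoing))
        return self._atoms.setdefault(key, Atom(atype, name, outgoing))

space = AtomSpace()
cat = space.add("Concept", "cat")
animal = space.add("Concept", "animal")
link = space.add("Inheritance", outgoing=[cat, animal])
print(link)   # (Inheritance (Concept "cat") (Concept "animal"))
```

The deduplication is the point: asking for `(Concept "cat")` twice returns the same atom, so links naturally share structure and the store behaves like a queryable graph rather than a pile of records.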

But are these features/functions what are actually needed for AGI?  Maybe it’s not a telescope/particle-accelerator at all, but just a crazy back-yard sculpture? How can you tell?

I would like to attract more people to the project, but we (OpenCog) have a way of under-publicizing it. We consistently fail to write up high-quality research; we fail to even write blog posts (I’m making amends here). We fail to tweet about it. We fail to keep compelling demos running. (Ten years ago, we had a decent question-answering chatbot that could have attracted attention. It… bit-rotted.)

There’s some excuse, some reason for that: every time you create an experiment and run it, you will typically get some insightful results. While interesting, those results are not “true AGI”, not by a long shot. So what do you do with your experiment? Announce to the world, “Hey, we once again failed to create AGI”? Do you moth-ball the code? Do you even mention it to newcomers? Perhaps we need to keep a gallery of neat things that we’ve built… or at least, a collection of basic experiments with basic instruments that newcomers need to master.

So this is where the project is at. Come join us!

About Linas Vepstas

Computer Science Researcher - Hanson Robotics

2 Responses to The Status of AGI and OpenCog

  1. Scott Cole says:

    Sounds awful. Let’s do it!

  2. Avi Halabi says:

    “Seriously: if you had full-time access to some major scientific instrument: some big telescope, particle accelerator, would you be effective? Would you lose interest after a while? What would you even do with it?”

    With a telescope, you can, simply put, see really far. Call it superhumanly far. What you do with what you see is up to you — you have to think that part up yourself.
    With an artificial cognition engine, you can think really fast. Eventually superhumanly fast (although speed is not a key aspect of cognition, it’s easy to grasp). What you do with these thoughts is up to you — but now, you have to think about that, too, superhumanly fast.
    So I wouldn’t expect anyone to lose interest in a really fast thinker.
