You would probably want a truly open-source distributed graph database like ArangoDB, just to make sure you can bundle it for research purposes and modify and integrate it into OpenCog for optimization purposes. Distribution lets you scale the knowledge base across multiple servers and many SSDs for faster access. You also need multi-threading to exploit the multiple cores on today's workstations, which can have over 32 cores at the high end.
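As a toy illustration of why both distribution and multi-threading matter (the shard layout, data, and function names here are all invented; a real deployment would let the database handle sharding itself), a sketch of hash-sharded lookups fanned out over worker threads:

```python
# Toy sketch: hash-sharded key lookup fanned out across worker threads.
# Each "shard" stands in for a server holding part of the knowledge base.
from concurrent.futures import ThreadPoolExecutor
from hashlib import sha1

NUM_SHARDS = 4
shards = [dict() for _ in range(NUM_SHARDS)]

def shard_of(key: str) -> int:
    # Stable hash decides which shard (server) owns a key.
    return int(sha1(key.encode()).hexdigest(), 16) % NUM_SHARDS

def put(key: str, value: str) -> None:
    shards[shard_of(key)][key] = value

def get_many(keys: list[str]) -> dict[str, str]:
    # Fan the lookups out over a thread pool, one task per key.
    with ThreadPoolExecutor(max_workers=NUM_SHARDS) as pool:
        results = pool.map(lambda k: (k, shards[shard_of(k)].get(k)), keys)
    return dict(results)

put("cat", "mammal")
put("sparrow", "bird")
print(get_many(["cat", "sparrow"]))  # {'cat': 'mammal', 'sparrow': 'bird'}
```

The point is only structural: independent shards make the store horizontally scalable, and the thread pool keeps many cores busy on concurrent reads.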

With a telescope, you can, simply put, see really far. Call it superhumanly far. What you do with what you see is up to you — you have to think that part up yourself.

With an artificial cognition engine, you can think really fast. Eventually superhumanly fast (speed is not the key aspect of cognition, but it is an easy one to grasp). What you do with these thoughts is up to you — but now you have to think about **that**, too, superhumanly fast.

So I wouldn’t expect anyone to lose interest in a really fast thinker.

Currently there’s a sentence in this post, part of which says, literally, “Pick out the right vertexes (one per word), wire them together so that there are no dangling unconnected edges, and viola!”

As typos go, this one is funny.

Oh, well, OK, if you keep the number of axioms, rules, and premises constant, then, yes, exponential growth provides an absolute upper bound. However, I thought the point of inference learning was to mine for inference chains that can be converted into new rules. A side effect of storing things in the AtomSpace is that the number of premises, viz. the potential starting points for new inferences, also increases.

So, if you look at the “big picture”, the upper bound is exponential. But if you look at it locally, the introduction of new rules and premises makes it combinatorial.
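A back-of-the-envelope count makes the two views concrete (toy numbers, assuming each inference step independently picks one of R rules applied to one of P stored premises):

```python
# Toy count of candidate inference chains of length L, assuming each
# step independently picks one of R rules and one of P premises.
def chains(rules: int, premises: int, length: int) -> int:
    return (rules * premises) ** length

# Fixed rules and premises: growth is plain exponential in chain length.
print(chains(10, 100, 3))   # 1000000000

# If mined rules and new atoms feed back into the search, the base
# itself grows, so each extra step costs more than a constant factor.
print(chains(12, 150, 3))   # 5832000000
```

With a fixed base the bound is exponential in length; letting the rule and premise counts grow mid-search is what makes the local picture worse.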

What makes you say that it grows combinatorially?

I believe my statement is correct. It grows exponentially w.r.t. the proof length; that is what Marcus Hutter’s famous paper “The Fastest and Shortest Algorithm for All Well-Defined Problems” says. He means the length of the binary representation of the proof, but I think it applies here too. There is a finite number of rules and axioms, so ultimately an inference tree of size S can be turned into a binary string of length O(S).
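A minimal sketch of that encoding claim (the rule names and field widths here are invented for illustration): with a finite rule set, each tree node can be written as a fixed-width rule ID plus a small arity field, so a tree of S nodes serializes to O(S) bits.

```python
# Toy sketch: serialize an inference tree with a fixed-width field per
# node, giving a bit string whose length is linear in the tree size S.
import math

RULES = ["modus_ponens", "and_intro", "axiom_a", "axiom_b"]
BITS_PER_RULE = math.ceil(math.log2(len(RULES)))  # 2 bits for 4 rules

def encode(tree) -> str:
    """tree = (rule_name, [subtrees]); preorder, fixed-width fields."""
    rule, children = tree
    bits = format(RULES.index(rule), f"0{BITS_PER_RULE}b")
    bits += format(len(children), "02b")  # toy assumption: arity <= 3
    for child in children:
        bits += encode(child)
    return bits

proof = ("modus_ponens", [("axiom_a", []), ("axiom_b", [])])
print(len(encode(proof)))  # 3 nodes * 4 bits per node = 12
```

Each node costs a constant number of bits, so the total is proportional to S, which is the O(S) observation above.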

Maybe you mean w.r.t. something else. For instance, the complexity w.r.t. the length of the theorem is unbounded.

I don’t believe that there are forces within the device itself driving it towards good or bad. However, as humans, as a society, we do exert an effect. We can train dogs to be nasty, or to be nice. We can do the same for AI. I’m very tempted to say that high-IQ AI will generally be sufficiently self-aware to be nice. On the other hand, I’ve met some high-IQ people who are quite nasty, so …

Gabi, there is a specific proposal and prototype on how to model it, and how to attach it to sensory inputs, motor outputs, and a language subsystem, here: https://github.com/opencog/opencog/blob/master/opencog/eva/architecture/embodiment.pdf
