Tag Archives: Learning

Why Hypergraphs?

I’ve recently been hacking on a new parser for the Link Grammar theory of natural language parsing. I want to couple parsing to machine learning (ML), so that I can use ML to learn natural languages. To do that, I need to place everything in an abstract data representation framework that allows graph rewrite rules, logical reasoning, and Bayesian probabilistic reasoning to be combined. This framework exists in OpenCog, but few people know about it or understand it. That it also has a firm foundation in model theory, category theory (even n-categories!) and type theory is even less well known. To explain all this, I just wrote a simple, easy introduction to these ideas and how they come together. Follow the link for more. Continue reading
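To make the idea a bit more concrete before you click through: here is a tiny sketch, in Python, of what it means to treat a parse as a hypergraph. The Node and Link classes are made up for illustration (this is not the real OpenCog AtomSpace API); the point is that links can connect not just words but other links, so rules about relations live in the same uniform structure that rewrite rules and probabilistic reasoning can operate on.

```python
# Illustrative sketch only -- hypothetical Node/Link classes, not the
# actual OpenCog AtomSpace API.

class Node:
    def __init__(self, kind, name):
        self.kind, self.name = kind, name
    def __repr__(self):
        return f"({self.kind} {self.name!r})"

class Link:
    """A hyperedge: a typed, ordered connection between any atoms,
    including other links -- relations about relations are representable."""
    def __init__(self, kind, *outgoing):
        self.kind, self.outgoing = kind, outgoing
    def __repr__(self):
        return f"({self.kind} " + " ".join(map(repr, self.outgoing)) + ")"

# The Link Grammar parse of "John runs" as a hypergraph: a subject link
# between two word nodes, wrapped so that a rewrite rule or a Bayesian
# rule could later annotate the whole relation with a truth value.
john = Node("WordNode", "John")
runs = Node("WordNode", "runs")
parse = Link("EvaluationLink",
             Node("LinkGrammarRelation", "S"),
             Link("ListLink", john, runs))
print(parse)
```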

Posted in Design, Introduction, Theory | 39 Comments

The MOSES Metapopulation

Or, how to select a promising species for mutation. Continue reading
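As a rough illustration of what “selecting a promising species” means: candidates in the metapopulation carry scores, and the next exemplar to mutate is drawn with a bias toward the fitter ones. The Boltzmann weighting and temperature in the sketch below are illustrative choices, not the exact rule MOSES uses.

```python
import math, random

# Hedged sketch of exemplar selection from a metapopulation; the weighting
# scheme is illustrative, not MOSES's actual formula.
metapopulation = [
    ("program_a", -3.0),   # (candidate, penalized score; higher is better)
    ("program_b", -1.5),
    ("program_c", -0.2),
]

def select_exemplar(metapop, temperature=1.0):
    # Draw a candidate with probability proportional to exp(score / T),
    # so better-scoring species are mutated more often, but not exclusively.
    weights = [math.exp(score / temperature) for _, score in metapop]
    return random.choices([cand for cand, _ in metapop], weights=weights, k=1)[0]

print(select_exemplar(metapopulation))
```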

Posted in Design, Documentation, Theory | 6 Comments

Genetic Crossover in MOSES

MOSES is a system for learning programs from input data. Given a table of input values and a column of outputs, MOSES tries to learn the simplest program that can reproduce the output column from the input values. The … Continue reading
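A toy version of that problem statement, with a made-up truth table: here the “program” to be found is just a Boolean formula that reproduces the output column, which is the kind of candidate MOSES searches over (while preferring simpler ones).

```python
# Toy illustration of the learning problem; the table and the candidate
# formula are invented for this example.
rows = [
    # (a, b, expected_output)
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 1),
]

def candidate(a, b):
    # One candidate program: plain OR.  MOSES would search over many such
    # programs and keep the simplest one that fits the table.
    return a or b

print(all(candidate(a, b) == out for a, b, out in rows))  # True: it fits
```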

Posted in Design, Documentation, Introduction, Theory | 2 Comments

Tuning Metalearning in MOSES

I’ve been studying MOSES recently, with an eye towards performance-tuning it. It turns out that optimization algorithms don’t always behave the way you think they do, and certainly not the way you want them to. Given a table of values, MOSES … Continue reading
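For a feel of the kind of loop whose knobs get tuned, here is a toy hill-climber over bit-strings. It is only an illustration, not MOSES’s actual optimizer, but the same questions show up: how large a neighborhood to explore, and when to stop spending evaluations.

```python
import random

# Toy hill-climber; the scoring function and parameters are invented
# purely to show the shape of the optimization loop being tuned.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]

def score(bits):
    return sum(b == t for b, t in zip(bits, TARGET))

def hillclimb(n_bits=8, max_evals=200, neighborhood=3):
    current = [random.randint(0, 1) for _ in range(n_bits)]
    best, evals = score(current), 1
    while evals < max_evals and best < n_bits:
        cand = current[:]
        for i in random.sample(range(n_bits), neighborhood):
            cand[i] ^= 1                  # flip a few bits
        s = score(cand)
        evals += 1
        if s > best:
            current, best = cand, s
    return best, evals                     # how good, and how expensive

print(hillclimb())
```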

Posted in Design, Development, Documentation, Theory | 2 Comments

Hacking on Link-Grammar

I hack, heads-down, on link-grammar every now and then. Yesterday, I fixed another round of broken parse rules: making sure that sentences like “John is altogether amazingly quick.”, “That one is marginally better”, “I am done working”, “I asked Jim … Continue reading
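If you want to check sentences like these yourself, the link-grammar Python bindings make it a few lines. The snippet below assumes the bindings are installed alongside link-grammar, and the exact API can differ between versions.

```python
from linkgrammar import Sentence, ParseOptions, Dictionary

po = ParseOptions()
en = Dictionary()   # the default (English) dictionary

for text in ["John is altogether amazingly quick.",
             "That one is marginally better",
             "I am done working"]:
    linkages = Sentence(text, en, po).parse()
    for linkage in linkages:
        print(text)
        print(linkage.diagram())   # ASCII picture of the links found
        break                      # just the first linkage per sentence
```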

Posted in Design, Development, Theory | 6 Comments

Spaced repetition and memory

An article in Wired from a while back on Piotr Wozniak (no relation to Steve), a researcher of optimal memory and learning strategies, got me thinking about learning theory and memorization in the context of OpenCog. From the article (emphasis … Continue reading
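The article doesn’t boil down to a single formula, but Wozniak’s classic SM-2 scheduling rule gives a concrete flavor of spaced repetition: each recall grade updates an “easiness factor” that stretches the next review interval. The sketch below is a plain transcription of that rule, not anything OpenCog-specific.

```python
# Wozniak's SM-2 update rule (SuperMemo 2), transcribed for illustration.
def sm2_next(interval_days, repetitions, easiness, grade):
    """One review step; grade is recall quality 0-5.
    Returns (next_interval_days, repetitions, easiness)."""
    easiness = max(1.3, easiness + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02))
    if grade < 3:
        return 1, 0, easiness            # failed recall: start over tomorrow
    repetitions += 1
    if repetitions == 1:
        return 1, repetitions, easiness
    if repetitions == 2:
        return 6, repetitions, easiness
    return round(interval_days * easiness), repetitions, easiness

# Three successful reviews (grade 4) from the default state:
state = (0, 0, 2.5)
for _ in range(3):
    state = sm2_next(*state, grade=4)
    print(state)                         # intervals grow: 1, 6, 15 days
```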

Posted in Design, Theory | Leave a comment