Deep learning and neural nets are all the rage today, and have displaced symbolic AI systems in most applications. It’s commonly believed that the two approaches have nothing to do with each other: that they’re just completely different, and that’s that. But this is false: there are some profound similarities; they are not only variants… Continue reading Symbolic and Neural Nets: Two Sides of the Same Coin
I’ve recently been hacking on a new parser for the Link Grammar theory of natural language parsing. I want to couple parsing to machine learning (ML), so that I can use ML to learn natural languages. To do that, I need to place everything in an abstract data representation framework that allows graph rewrite rules, logical reasoning, and Bayesian probabilistic reasoning to be combined. This framework exists in OpenCog, but few people know or understand it. That it also has a firm foundation in model theory, category theory (even n-categories!), and type theory is even less well known. To explain all this, I’ve just written a simple introduction to these ideas and how they come together. Follow the link for more.
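To make the idea of that representation concrete, here is a minimal Python sketch of a typed hypergraph in which parse links, rewrite rules, and probabilities all live in one structure. This is emphatically not the OpenCog API: every name below (Atom, EvaluationLink, the subject-connector rewrite) is a hypothetical illustration of the pattern, nothing more.

    # A toy sketch of a typed-hypergraph representation. This is NOT the
    # OpenCog API; all names here are hypothetical, chosen only to show how
    # parse links, rewrite rules, and probabilities can share one structure.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Atom:
        type: str        # e.g. "WordNode", "EvaluationLink"
        name: str = ""   # label, for leaf nodes
        out: tuple = ()  # outgoing set, for links
        tv: float = 1.0  # crude stand-in for a probabilistic truth value

    def word(w):
        return Atom("WordNode", name=w)

    def link(relation, *args, tv=1.0):
        pred = Atom("PredicateNode", name=relation)
        return Atom("EvaluationLink", out=(pred,) + args, tv=tv)

    # A Link Grammar connector between two words, with a probability
    # attached, sits in the same graph as any logical assertion:
    parse_edge = link("Ss", word("dog"), word("runs"), tv=0.9)

    # A graph rewrite rule is just a function from atoms to atoms, so a
    # syntactic link can be rewritten into a semantic relation:
    def rewrite_subject(atom):
        """Rewrite a subject connector (Ss) into a semantic _subj relation."""
        if atom.type == "EvaluationLink" and atom.out[0].name == "Ss":
            return link("_subj", atom.out[2], atom.out[1], tv=atom.tv)
        return atom

    print(rewrite_subject(parse_edge))  # _subj(runs, dog), keeping tv=0.9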
The new Viterbi decoder for Link Grammar should offer better integration with higher-level semantic algorithms!
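The post doesn’t show the decoder itself; as a reminder of what the name refers to, here is the textbook Viterbi algorithm in Python: a dynamic program that keeps, at each position, only the best-scoring partial analysis so far. The real Link Grammar decoder works over connectors and disjuncts rather than HMM states, and the probabilities below are made up; this sketch only illustrates the underlying idea the name borrows.

    # Textbook Viterbi decoding for a small HMM. Not the Link Grammar parser
    # itself, only the dynamic-programming idea behind the name: keep, for
    # each position, the best-scoring partial path ending in each state.

    def viterbi(observations, states, start_p, trans_p, emit_p):
        # best[s] = (probability of the best path ending in state s, that path)
        best = {s: (start_p[s] * emit_p[s][observations[0]], [s]) for s in states}
        for obs in observations[1:]:
            best = {
                s: max(
                    ((score * trans_p[prev][s] * emit_p[s][obs], path + [s])
                     for prev, (score, path) in best.items()),
                    key=lambda t: t[0],
                )
                for s in states
            }
        return max(best.values(), key=lambda t: t[0])

    # Tiny part-of-speech example with made-up probabilities:
    states = ["NOUN", "VERB"]
    start_p = {"NOUN": 0.6, "VERB": 0.4}
    trans_p = {"NOUN": {"NOUN": 0.3, "VERB": 0.7},
               "VERB": {"NOUN": 0.8, "VERB": 0.2}}
    emit_p = {"NOUN": {"dog": 0.7, "runs": 0.1},
              "VERB": {"dog": 0.1, "runs": 0.8}}

    score, path = viterbi(["dog", "runs"], states, start_p, trans_p, emit_p)
    print(path, score)  # ['NOUN', 'VERB'] with the highest joint probability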
I spent the weekend comparing the Stanford parser to RelEx, and learned a lot. RelEx really does deserve to be called a “semantic relation extractor”, and not just a “dependency relation extractor”: it provides a more abstract, more semantic output than the Stanford parser, which sticks very narrowly to the syntactic structure of a sentence… Continue reading Semantic dependency relations
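For a feel of the difference, here is roughly what the two outputs look like on a simple sentence. This is reconstructed from memory of the two formats, so exact relation names and indices may differ; the point is that Stanford stays with inflected surface forms and token positions, while RelEx lemmatizes the verb and abstracts away from word order.

    Sentence: "Alice ate the mushroom."

    Stanford typed dependencies (syntactic; indexed surface forms):
        nsubj(ate-2, Alice-1)
        dobj(ate-2, mushroom-4)
        det(mushroom-4, the-3)

    RelEx semantic relations (lemmatized):
        _subj(eat, Alice)
        _obj(eat, mushroom)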