Tag Archives: NLP

Symbolic and Neural Nets: Two Sides of the Same Coin

Deep learning and neural nets are all the rage today, and have displaced symbolic AI systems in most applications. It’s commonly believed that the two approaches have nothing to do with each other; that they’re just completely different, and that’s … Continue reading

Posted in Theory | 1 Comment

The Viterbi Parser

The new Viterbi decoder for Link Grammar should offer better integration with higher-level semantic algorithms! Continue reading
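The decoder itself lives in the Link Grammar codebase; what follows is only a generic sketch of Viterbi decoding over a toy hidden Markov model, to show the dynamic-programming idea the name refers to. The states, words, and probabilities below are invented for illustration and have nothing to do with the actual Link Grammar data structures.

```python
# A minimal, generic Viterbi decoder over a toy HMM -- not the Link Grammar
# implementation, just an illustration of the underlying algorithm.

def viterbi(observations, states, start_p, trans_p, emit_p):
    """Return the most probable state sequence for the observations."""
    # best[t][s] = probability of the best path ending in state s at time t
    best = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
    back = [{}]

    for t in range(1, len(observations)):
        best.append({})
        back.append({})
        for s in states:
            # pick the predecessor state that maximizes the path probability
            prev = max(states, key=lambda p: best[t - 1][p] * trans_p[p][s])
            best[t][s] = best[t - 1][prev] * trans_p[prev][s] * emit_p[s][observations[t]]
            back[t][s] = prev

    # trace back from the best final state to recover the full path
    last = max(states, key=lambda s: best[-1][s])
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return path

# Toy example: guess part-of-speech tags for a three-word "sentence".
states = ["DET", "NOUN", "VERB"]
obs = ["the", "dog", "barks"]
start_p = {"DET": 0.6, "NOUN": 0.3, "VERB": 0.1}
trans_p = {
    "DET":  {"DET": 0.1, "NOUN": 0.8, "VERB": 0.1},
    "NOUN": {"DET": 0.1, "NOUN": 0.3, "VERB": 0.6},
    "VERB": {"DET": 0.4, "NOUN": 0.4, "VERB": 0.2},
}
emit_p = {
    "DET":  {"the": 0.9, "dog": 0.05, "barks": 0.05},
    "NOUN": {"the": 0.1, "dog": 0.7,  "barks": 0.2},
    "VERB": {"the": 0.1, "dog": 0.1,  "barks": 0.8},
}
print(viterbi(obs, states, start_p, trans_p, emit_p))  # ['DET', 'NOUN', 'VERB']
```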

Posted in Design, Development, Theory | 3 Comments

Meaning-Text Theory

During some recent reading, it struck me that a useful framework for thinking about and talking about sentence generation is the MTT or “meaning-text theory” of Igor Mel’čuk et al. Here is one readable reference: Igor A. Mel’čuk and Alain … Continue reading

Posted in Theory | Leave a comment

Semantic dependency relations

I spent the weekend comparing the Stanford parser to RelEx, and learned a lot. RelEx really does deserve to be called a “semantic relation extractor”, and not just a “dependency relation extractor”. It provides a more abstract, more semantic output … Continue reading
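To make the syntactic-versus-semantic distinction concrete, here is a small, purely hypothetical sketch of the kind of normalization a semantic relation extractor performs on top of surface dependencies, such as folding a passive subject back into an object role. The mapping table and semantic role names are assumptions for illustration, not the actual RelEx or Stanford parser output.

```python
# Illustrative sketch only (not RelEx or Stanford parser code): normalize
# surface dependency labels into more abstract semantic roles.

SURFACE_TO_SEMANTIC = {
    "nsubj":     "subject",   # active subject stays a subject
    "nsubjpass": "object",    # passive "subject" is semantically the object
    "dobj":      "object",
    "agent":     "subject",   # "by"-agent of a passive is the semantic subject
}

def abstract_relations(dependencies):
    """Map (label, head, dependent) triples to more abstract semantic roles."""
    out = []
    for label, head, dep in dependencies:
        role = SURFACE_TO_SEMANTIC.get(label)
        if role:
            out.append((role, head, dep))
    return out

# "The mouse was eaten by the cat."  (hand-written surface dependencies)
deps = [("nsubjpass", "eaten", "mouse"), ("agent", "eaten", "cat")]
print(abstract_relations(deps))
# [('object', 'eaten', 'mouse'), ('subject', 'eaten', 'cat')]
```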

Posted in Design, Development, Documentation, Theory | Leave a comment

Sentence Patterns

I’ve recently resumed work on the question-answering chatbot, and am trying to get it to comprehend a broader range of questions and statements. The “big idea” is to create a number of “sentence patterns” that the pattern matcher can recognize … Continue reading
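As a toy illustration of what such a sentence pattern might look like (this is not the chatbot’s actual pattern matcher), the sketch below pairs a handful of templates with the kind of query or assertion they represent; all pattern names and templates are invented.

```python
import re

# Toy "sentence pattern" matcher: each pattern pairs a template containing
# named slots with the kind of utterance it represents. Purely illustrative.

PATTERNS = [
    (r"^what is (?P<thing>.+)\?$",            "definition-query"),
    (r"^where is (?P<thing>.+)\?$",           "location-query"),
    (r"^(?P<subject>.+) is (?P<value>.+)\.$", "assertion"),
]

def match_sentence(sentence):
    """Return the first pattern type (and captured slots) that fits."""
    for regex, kind in PATTERNS:
        m = re.match(regex, sentence.strip(), re.IGNORECASE)
        if m:
            return kind, m.groupdict()
    return None, {}

print(match_sentence("What is a parser?"))
# ('definition-query', {'thing': 'a parser'})
print(match_sentence("The sky is blue."))
# ('assertion', {'subject': 'The sky', 'value': 'blue'})
```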

Posted in Design, Introduction, Theory | 2 Comments