Tag Archives: NLP

The Viterbi Parser

The new Viterbi decoder for Link Grammar should offer better integration with higher-level semantic algorithms! Continue reading

Posted in Design, Development, Theory | 3 Comments

Meaning-Text Theory

During some recent reading, it struck me that a useful framework for thinking and talking about sentence generation is MTT, the “meaning-text theory” of Igor Mel’čuk et al. Here is one readable reference: Igor A. Mel’čuk and Alain … Continue reading

Posted in Theory | Leave a comment

Semantic dependency relations

I spent the weekend comparing the Stanford parser to RelEx, and learned a lot. RelEx really does deserve to be called a “semantic relation extractor”, and not just a “dependency relation extractor”. It provides a more abstract, more semantic output … Continue reading

Posted in Design, Development, Documentation, Theory | Leave a comment

Sentence Patterns

I’ve recently resumed work on the question-answering chatbot, and am trying to get it to comprehend a broader range of questions and statements. The “big idea” is to create a number of “sentence patterns” that the pattern matcher can recognize … Continue reading
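To illustrate the idea, here is a minimal sketch of what a “sentence pattern” might look like. This is a hypothetical toy, not the chatbot’s actual pattern matcher: a pattern is a sequence of literal words and variables (prefixed with `$`), and matching a tokenized sentence against it binds each variable to the word it lines up with.

```python
def match(pattern, tokens):
    """Return variable bindings if tokens fit the pattern, else None.

    Hypothetical illustration: literals must match (case-insensitively),
    and any "$"-prefixed pattern element binds to the word at its position.
    """
    if len(pattern) != len(tokens):
        return None
    bindings = {}
    for p, t in zip(pattern, tokens):
        if p.startswith("$"):
            bindings[p] = t          # variable: capture the word
        elif p != t.lower():
            return None              # literal mismatch: pattern fails
    return bindings

# A question fitting the pattern yields a binding for $X:
print(match(["what", "is", "a", "$X"], ["What", "is", "a", "parser"]))
# A question that doesn't fit yields None:
print(match(["what", "is", "a", "$X"], ["Who", "wrote", "the", "parser"]))
```

A real matcher would of course work over parse graphs rather than flat token lists, but the core operation, unifying a template with variables against concrete input, is the same.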

Posted in Design, Introduction, Theory | 2 Comments

Frequency of grammatical disjuncts

The link-grammar parser uses labeled links to connect pairs of words. In order to capture the idea of proper grammatical construction, any given word is only allowed to have very specific links to its right or left: for example, … Continue reading
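The constraint described above can be sketched in a few lines. This is a hypothetical simplification, not the link-grammar library’s API: each word carries a “disjunct” listing the connector labels it must satisfy on its left and on its right, and a labeled link between two words is legal only when the left word offers that connector rightward and the right word requires it leftward.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Disjunct:
    """Toy disjunct: the connectors a word needs on each side to be used."""
    word: str
    left: tuple = ()    # connector labels that must link leftward
    right: tuple = ()   # connector labels that must link rightward

# Toy lexicon entries: a determiner offers a D connector to its right;
# a subject noun requires D on its left and offers S on its right.
the = Disjunct("the", right=("D",))
cat = Disjunct("cat", left=("D",), right=("S",))

def can_link(lhs: Disjunct, rhs: Disjunct, label: str) -> bool:
    """A link labeled `label` may join lhs to rhs (lhs on the left)
    only if lhs offers it rightward and rhs accepts it leftward."""
    return label in lhs.right and label in rhs.left

print(can_link(the, cat, "D"))   # determiner-noun D link is allowed
print(can_link(cat, the, "D"))   # reversed order is not
```

Counting how often each disjunct is actually used across a corpus is then just a matter of tallying which `Disjunct` each word was parsed with, which is the frequency question this post is getting at.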

Posted in Development, Theory | 4 Comments