During some recent reading, it struck me that a useful framework for thinking and talking about sentence generation is the MTT or “meaning-text theory” of Igor Mel’čuk et al. Here is one readable reference:
Igor A. Mel’čuk and Alain Polguère (1987), “A Formal Lexicon in Meaning-Text Theory”, Computational Linguistics, vol. 13, pp. 261-275.
portal.acm.org/citation.cfm?id=48160.48166
www.aclweb.org/anthology/J/J87/J87-3006.pdf
Within the context of that theory, the output of the Stanford parser is strictly at the SSynR or “surface syntactic representation” level, while, as a general rule, RelEx attempts to generate the DSynR or “deep syntactic representation” structure. Some of what I’ve been trying to do with OpenCog is aimed at the “SemR” (semantic representation) structure, as described in that paper.
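To make that layering concrete, here is a rough sketch of one sentence at each of the three levels. This is purely illustrative Python: the SSynR relations imitate Stanford-dependency output and the DSynR relations imitate RelEx output, but the exact relation names and the SemR encoding are my own approximations, not the actual output formats of those tools.

    # Illustrative only: MTT-style levels for "John ate an apple."

    ssynr = [  # surface syntax: inflected word forms, determiners kept
        ("nsubj", "ate", "John"),
        ("dobj",  "ate", "apple"),
        ("det",   "apple", "an"),
    ]

    dsynr = [  # deep syntax: lemmas, function words folded into features
        ("_subj", "eat", "John"),
        ("_obj",  "eat", "apple"),
    ]

    semr = [   # semantics: pure predicate-argument structure
        ("eat", {"agent": "John", "patient": "apple", "tense": "past"}),
    ]

Generation, in MTT, is the reverse walk: SemR to DSynR to SSynR to a word string.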
The more I read about MTT, the more it seems to capture some of what we are trying to do (and de facto are doing) with NLP within OpenCog. In particular, the MTT concept of a “lexical function” (which, oddly, is not really described in that paper) could be a particularly strong way of guaranteeing correct syntactic output for segsim, NLGen or NLGen2; a sketch of the idea follows.
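For readers unfamiliar with the term: a lexical function maps a keyword to the word conventionally used to express some standard meaning alongside it. The classic example is Magn, the intensifier: Magn(rain) = heavy, Magn(applause) = thunderous; “strong rain” is syntactically legal but unidiomatic. Here is a minimal sketch of how a generator could consult such a table; the table entries are standard MTT textbook examples, but the lookup function is hypothetical, not anything that exists in NLGen today.

    # Hedged sketch: lexical functions as a collocation table.
    # Entries are standard MTT examples; realize() is hypothetical.

    LEXICAL_FUNCTIONS = {
        # Magn: the standard intensifier for the keyword
        ("Magn", "rain"):       "heavy",
        ("Magn", "smoker"):     "heavy",
        ("Magn", "applause"):   "thunderous",
        # Oper1: the support verb taking the keyword as its object
        ("Oper1", "attention"): "pay",
        ("Oper1", "question"):  "ask",
    }

    def realize(lf, keyword):
        """Return the collocate expressing `lf` applied to `keyword`."""
        try:
            return LEXICAL_FUNCTIONS[(lf, keyword)]
        except KeyError:
            raise ValueError("no %s collocate recorded for %r" % (lf, keyword))

    print(realize("Magn", "rain"), "rain")  # -> heavy rain

The appeal for generation is that collocational choice becomes a table lookup rather than a statistical guess, so the output is idiomatic by construction.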
— Linas Vepstas