History

From Agirre and Edmonds (2007)[1]: "A “modular” view of language processing was firmly established in the mid-20th century by semioticians and structural linguists, who developed cognitive models that describe language understanding as an aggregative processing of various levels of information (syntax/semantics/pragmatics for the semioticians, morpho-phonological/syntactic/lexico-semantic for the structural linguists). This modular view was taken up by the earliest computational linguists, who treated the process of language understanding as a modular system of sub-systems that could be modeled computationally, and it has remained dominant (abetted by cognitive psychology and neuro-science) to this day. It is apparent in the design of “comprehensive” language processing systems, which invariably include multiple modules devoted to isolatable analytic steps, and it informed the “pipeline” approach to linguistic annotation introduced in the mid-Nineties (Ide and Véronis 1994) that has been implemented in major annotation systems since then"

Prevalence

Finkel and Manning (2009)[2]: "unfortunately, it is still common practice to cobble together independent systems for the various types of annotation, and there is no guarantee that their outputs will be consistent."

Hollingshead and Roark (2007)[3]: "Pipeline systems are ubiquitous in natural language processing, used not only in parsing (Ratnaparkhi, 1999; Charniak, 2000), but also machine translation (Och and Ney, 2003) and speech recognition (Fiscus, 1997; Goel et al., 2000), among others."
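The pipeline idea the quotes above describe can be made concrete with a minimal sketch. All stage functions below are illustrative toys, not a real toolkit; the point is the control flow: each stage consumes only the previous stage's output, so decisions are never revisited and upstream errors propagate downstream.

```python
def tokenize(text):
    # naive whitespace tokenizer (stand-in for a real tokenizer)
    return text.split()

def pos_tag(tokens):
    # toy tagger: capitalized words -> "NNP", everything else -> "NN"
    return [(t, "NNP" if t[0].isupper() else "NN") for t in tokens]

def ner(tagged):
    # toy recognizer: runs of NNP tokens become one entity span
    entities, current = [], []
    for tok, tag in tagged:
        if tag == "NNP":
            current.append(tok)
        elif current:
            entities.append(" ".join(current))
            current = []
    if current:
        entities.append(" ".join(current))
    return entities

def pipeline(text):
    # stages run strictly in sequence; no stage can revise an earlier decision
    return ner(pos_tag(tokenize(text)))

print(pipeline("Noam Chomsky taught at MIT"))
```

Because the tagger never sees the recognizer's output, a tagging error (e.g. a lowercased name) is unrecoverable later in the chain, which is exactly the consistency concern Finkel and Manning raise.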

Arguments for pipeline architecture

Finkel and Manning (2009)[2]: "Vapnik has observed (Vapnik, 1998; Ng and Jordan, 2002) that “one should solve the problem directly and never solve a more general problem as an intermediate step,” implying that building a joint model of two phenomena [parsing+NER] is more likely to harm performance on the individual tasks than to help it. Indeed, it has proven very difficult to build a joint model of parsing and semantic role labeling, either with PCFG trees (Sutton and McCallum, 2005) or with dependency trees. The CoNLL 2008 shared task (Surdeanu et al., 2008) was intended to be about joint dependency parsing and semantic role labeling, but the top performing systems decoupled the tasks and outperformed the systems which attempted to learn them jointly."

Alternatives

A pipeline is not the only way to structure text comprehension. From Kintsch (1988)[4]: "Text comprehension is assumed to be organized in cycles, roughly corresponding to short sentences or phrases (for further detail, see Kintsch & van Dijk, 1978[5]; Miller & Kintsch, 1980[6]). In each cycle a new net is constructed, including whatever is carried over in the short-term buffer from the previous cycle. [...] The highly activated nodes constitute the discourse representation formed on each processing cycle. In principle, it includes information at many levels: lexical nodes, text propositions, knowledge-based elaborations (i.e., various types of inferences), as well as macropropositions."
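A loose computational sketch of the cyclical structure Kintsch describes, as opposed to a stage-by-stage pipeline: processing proceeds input cycle by input cycle, and each cycle combines new material with a small short-term buffer carried over from the last cycle. All names and the buffer policy here are illustrative assumptions; the real construction-integration model spreads activation over a network rather than simply accumulating nodes.

```python
def comprehend(phrases, buffer_size=2):
    buffer = []           # short-term buffer carried between cycles
    representation = []   # accumulated discourse representation
    for phrase in phrases:
        # construction: a new net from this cycle's input plus the carryover
        net = buffer + phrase.split()
        # integration: here we simply keep all new nodes
        representation.extend(n for n in net if n not in representation)
        # carry the most recent nodes forward into the next cycle
        buffer = net[-buffer_size:]
    return representation

print(comprehend(["the cat sat", "on the mat"]))
```

Note the contrast with the pipeline: there is no fixed sequence of analysis levels, and earlier material can re-enter processing via the buffer.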

References

  1. Agirre, Eneko, and Philip Edmonds, eds. Word sense disambiguation: Algorithms and applications. Vol. 33. Springer Science & Business Media, 2007.
  2. Finkel, J. R., & Manning, C. D. (2009). Joint parsing and named entity recognition. NAACL, 326–334.
  3. Hollingshead, K., & Roark, B. (2007). Pipeline iteration. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (pp. 952–959). Prague, Czech Republic: Association for Computational Linguistics.
  4. Kintsch, W. (1988). The role of knowledge in discourse comprehension: A construction-integration model. Psychological Review, 95(2), 163–182.
  5. Kintsch, W., & van Dijk, T. A. (1978). Towards a model of text comprehension and production. Psychological Review, 85, 363-394.
  6. Miller, J. R., & Kintsch, W. (1980). Readability and recall of short prose passages: A theoretical analysis. Journal of Experimental Psychology: Human Learning and Memory, 6, 335-354.