Matthew Honnibal of spaCy has commented that joint processing is the "motherhood and apple pie" of NLP. Indeed, everyone agrees in principle that joint models should outperform a cascade of independent models: not because they take more time and effort to build, but because each subtask can draw on information from the others instead of inheriting the errors of an earlier stage. In practice, however, combining your models into one does not guarantee better performance, as the CoNLL-2008 shared task demonstrated[1], and researchers continue to explore methods for actually harnessing the power of joint processing.
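The difference between a pipeline and a joint model can be sketched with a toy example. This is an invented illustration, not code from any of the systems cited below: the scores, the ambiguous token, and the two-model setup are all assumptions made for the sake of the sketch. A pipeline commits to the 1-best output of the first model; a joint decoder searches the combined space and can trade a slightly worse tag for a much better parse.

```python
# Toy scores for one ambiguous token, e.g. "duck" in "I saw her duck".
# All numbers are invented for illustration; no real model is involved.
pos_scores = {"NOUN": 0.6, "VERB": 0.4}  # POS model's scores
parse_scores = {                         # parser's score given a POS hypothesis
    ("NOUN", "obj"): 0.1,
    ("NOUN", "root"): 0.2,
    ("VERB", "obj"): 0.1,
    ("VERB", "root"): 0.9,
}

# Pipeline: commit to the 1-best POS, then parse conditioned on that tag.
best_pos = max(pos_scores, key=pos_scores.get)
pipeline = max((k for k in parse_scores if k[0] == best_pos),
               key=parse_scores.get)

# Joint: search the full (POS, parse) space, summing both models' scores.
joint = max(parse_scores, key=lambda k: pos_scores[k[0]] + parse_scores[k])

print(pipeline)  # ('NOUN', 'root') -- the early POS error propagates
print(joint)     # ('VERB', 'root') -- joint decoding recovers the verb reading
```

Real joint systems replace this brute-force enumeration with beam search or dynamic programming over a factored score, since the combined search space grows multiplicatively with each added task.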

Tasks

Multi-word expression recognition + parsing: Constant and Nivre (2016)[2]

POS tagging + parsing: Bohnet and Nivre (2012)[3], Zhenghua et al. (2014, for Chinese)[4], Nguyen et al. (2017)[5]

Word segmentation + POS tagging + parsing: Hatori et al. (2012)[6]

Denis and Baldridge (2009)[7] solve coreference resolution, anaphora resolution and named-entity classification in a joint global model.

Constituent parsing + NER:

  • Finkel and Manning (2009)[8]: "Despite these earlier results, we found that combining parsing and named entity recognition modestly improved performance on both tasks. Our joint model produces an output which has consistent parse structure and named entity spans, and does a better job at both tasks than separate models with the same features."

Entity identification + relation extraction: Yu and Lam (2010)[9], Yu et al. (2011)[10]

CCG supertagging + parsing: Auli and Lopez (2011)[11]

Entity coreference + typing + linking: Durrett and Klein (2014)[12]

Entity coreference + typing + relation: Singh et al. (2013)[13]

Entity coreference + linking: Hajishirzi et al. (2013)[14], Dutta & Weikum (2015; cross-document)[15]

Subtasks in entity linking (mention detection + linking):

From Ji et al. (2014)[16]: "Some recent work (Sil and Yates, 2013; Meij et al., 2012; Guo et al., 2013; Huang et al., 2014b) proved that mention extraction and mention linking can mutually enhance each other. Inspired by these successes, many teams including IBM (Sil and Florian, 2014), MSIIPL THU (Zhao et al., 2014), SemLinker (Meurs et al., 2014), UBC (Barrena et al., 2014) and RPI (Hong et al., 2014) used the properties in external KBs such as DBPedia as feedback to refine the identification and classification of name mentions."
Coreference resolution + mention head detection: Peng et al. (2015)[17]


Word sense disambiguation + semantic role labeling:

  • Che and Liu (2010)[18]: "We further propose a Markov logic model that jointly labels semantic roles and disambiguates all word senses. By evaluating our model on the OntoNotes 3.0 data, we show that this joint approach leads to a higher performance for word sense disambiguation and semantic role labeling than those pipeline approaches."
  • "The above idea, that the predicate senses and the semantic role labeling can help each other, may be inspired by Hajic et al. (2009), Surdeanu et al. (2008), and Dang and Palmer (2005). They have shown that semantic role features are helpful to disambiguate verb senses and vice versa."

Named-entity recognition and linking: Luo et al. (2015)[19]

Event coreference + temporal relation identification: Teng et al. (2016)[20]

Parsing + NMT: Eriguchi et al. (2017), Wu et al. (2017)


References

  1. Surdeanu, M., & Johansson, R. (2008). The CoNLL-2008 shared task on joint parsing of syntactic and semantic dependencies. In Proceedings of the Twelfth Conference on Computational Natural Language Learning (pp. 159–177). Association for Computational Linguistics.
  2. Constant, M., & Nivre, J. (2016). A transition-based system for joint lexical and syntactic analysis. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 161–171). Association for Computational Linguistics.
  3. Bohnet, B., & Nivre, J. (2012). A transition-based system for joint part-of-speech tagging and labeled non-projective dependency parsing. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics.
  4. Li, Z., et al. (2014). Joint optimization for Chinese POS tagging and dependency parsing. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22(1), 274–286.
  5. Nguyen, D. Q., Dras, M., & Johnson, M. (2017). A novel neural network model for joint POS tagging and graph-based dependency parsing. arXiv preprint arXiv:1705.05952.
  6. Hatori, J., et al. (2012). Incremental joint approach to word segmentation, POS tagging, and dependency parsing in Chinese. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics.
  7. Denis, P., & Baldridge, J. (2009). Global joint models for coreference resolution and named entity classification. Procesamiento del Lenguaje Natural, 42(1), 87–96.
  8. Finkel, J. R., & Manning, C. D. (2009). Joint parsing and named entity recognition. In Proceedings of NAACL 2009 (pp. 326–334).
  9. Yu, X., & Lam, W. (2010). Jointly identifying entities and extracting relations in encyclopedia text via a graphical model approach. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010) (pp. 1399–1407).
  10. Yu, X., King, I., & Lyu, M. R. (2011). Towards a top-down and bottom-up bidirectional approach to joint information extraction. In Proceedings of the 20th ACM International Conference on Information and Knowledge Management (CIKM '11) (p. 847).
  11. Auli, M., & Lopez, A. (2011). A comparison of loopy belief propagation and dual decomposition for integrated CCG supertagging and parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (Volume 1). Association for Computational Linguistics.
  12. Durrett, G., & Klein, D. (2014). A joint model for entity analysis: Coreference, typing, and linking. Transactions of the Association for Computational Linguistics, 2, 477–490.
  13. Singh, S., Riedel, S., Martin, B., Zheng, J., & McCallum, A. (2013). Joint inference of entities, relations, and coreference. In Automated Knowledge Base Construction (AKBC CIKM 2013) (pp. 1–6).
  14. Hajishirzi, H., Zilles, L., Weld, D. S., & Zettlemoyer, L. (2013). Joint coreference resolution and named-entity linking with multi-pass sieves. In Proceedings of EMNLP 2013 (pp. 289–299).
  15. Dutta, S., & Weikum, G. (2015). C3EL: A joint model for cross-document co-reference resolution and entity linking. In Proceedings of EMNLP 2015. Association for Computational Linguistics.
  16. Ji, H., Nothman, J., & Hachey, B. (2014). Overview of TAC-KBP2014 entity discovery and linking tasks. In Proceedings of the Text Analysis Conference (TAC 2014).
  17. Peng, H., Chang, K.-W., & Roth, D. (2015). A joint framework for coreference resolution and mention head detection. In Proceedings of CoNLL 2015 (pp. 12–21).
  18. Che, W., & Liu, T. (2010). Jointly modeling WSD and SRL with Markov logic. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010) (pp. 161–169).
  19. Luo, G., Huang, X., Lin, C., & Nie, Z. (2015). Joint named entity recognition and disambiguation. In Proceedings of EMNLP 2015 (pp. 879–888).
  20. Teng, J., Li, P., Zhu, Q., & Ge, W. (2016). Joint event co-reference resolution and temporal relation identification. In Chinese Lexical Semantics: 17th Workshop (CLSW 2016), Revised Selected Papers (LNCS 10085, p. 426). Springer.