Natural Language Understanding Wiki

TODO: cover Hermann et al. (2015), who automatically construct a large training dataset for reading comprehension from news articles paired with their summary points.
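The construction in Hermann et al. (2015) turns an (article, summary point) pair into a cloze-style (context, query, answer) triple: one entity is deleted from the summary point to form the query, and entities are anonymized with shared markers so the question cannot be answered from world knowledge alone. A minimal sketch of that idea, where the hard-coded entity list stands in for the coreference-based entity detection used in the original work:

```python
def anonymize(text, entities):
    """Replace each entity mention with an @entityN marker shared across the pair."""
    ids = {e: f"@entity{i}" for i, e in enumerate(entities)}
    for name, marker in ids.items():
        text = text.replace(name, marker)
    return text, ids

def make_cloze(article, summary_point, entities):
    """Build one (context, query, answer) example by deleting an entity from the summary."""
    answer = next(e for e in entities if e in summary_point)
    query = summary_point.replace(answer, "@placeholder")
    context, ids = anonymize(article, entities)
    query, _ = anonymize(query, entities)  # same markers as the context
    return context, query, ids[answer]

# Toy example (entity list is hand-supplied here, an assumption of this sketch)
article = "Alice met Bob in Paris. Bob gave Alice a book."
summary = "Bob gave Alice a book in Paris"
entities = ["Alice", "Bob", "Paris"]
context, query, answer = make_cloze(article, summary, entities)
```

Here `query` becomes "@entity1 gave @placeholder a book in @entity2" and the model must recover the marker for the deleted entity from the anonymized context.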

  • macro-reading: processing large text collections and extracting knowledge bases of facts (Etzioni et al., 2006[1]; Carlson et al., 2010[2]; Fader et al., 2011[3])
  • micro-reading: reading a single document to answer comprehension questions that require deep reasoning (Richardson et al., 2013[4]; Kushman et al., 2014[5]; Berant et al., 2014[6])
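The macro-reading setting above can be illustrated with a deliberately crude open-IE style pass over a corpus: match (argument, relation, argument) patterns in each sentence and aggregate the resulting triples into fact counts. Real systems such as ReVerb (Fader et al., 2011) use POS-based lexical constraints and confidence scoring; the capitalized-phrase regex below is only a stand-in for that machinery:

```python
import re
from collections import Counter

# Toy pattern: capitalized phrase, verb-like token, capitalized phrase.
# This is an illustrative assumption, not the pattern used by any cited system.
PATTERN = re.compile(r"([A-Z]\w+(?: [A-Z]\w+)*) (\w+ed|\w+s) ([A-Z]\w+(?: [A-Z]\w+)*)")

def extract_triples(sentences):
    """Aggregate (arg1, relation, arg2) triples over many sentences."""
    facts = Counter()
    for s in sentences:
        for arg1, rel, arg2 in PATTERN.findall(s):
            facts[(arg1, rel, arg2)] += 1
    return facts

corpus = [
    "Google acquired YouTube in 2006.",
    "Reports say Google acquired YouTube.",
    "Microsoft acquired Skype.",
]
facts = extract_triples(corpus)
```

Redundancy across a large corpus is what makes macro-reading work: the same fact surfaces many times, so even a noisy extractor yields reliable aggregate counts.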


  1. Oren Etzioni, Michele Banko, and Michael J. Cafarella. 2006. Machine reading. In Proceedings of AAAI.
  2. Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka Jr., and Tom M. Mitchell. 2010. Toward an architecture for never-ending language learning. In Proceedings of AAAI.
  3. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of EMNLP.
  4. Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of EMNLP.
  5. Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proceedings of ACL.
  6. Jonathan Berant, Vivek Srikumar, Pei-Chun Chen, Abby Vander Linden, Brittany Harding, Brad Huang, …, and Christopher D. Manning. 2014. Modeling biological processes for reading comprehension. In Proceedings of EMNLP.