Natural Language Understanding Wiki

Main reference: Levesque et al. (2011)[1]

Critique: Trichelair et al. (2018)[2]

Human performance: >90%
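A Winograd schema is a pair of twin sentences that differ in one special word, flipping the referent of an ambiguous pronoun. A minimal sketch in Python of how such an item can be represented and scored, using the canonical councilmen example from Levesque et al. (2011) (the field names are illustrative, not a standard format):

```python
# Twin sentences differing in one word ("feared" vs. "advocated"),
# which flips the referent of the pronoun "they".
SCHEMA = [
    {
        "sentence": "The city councilmen refused the demonstrators a permit "
                    "because they feared violence.",
        "pronoun": "they",
        "candidates": ["the city councilmen", "the demonstrators"],
        "answer": "the city councilmen",
    },
    {
        "sentence": "The city councilmen refused the demonstrators a permit "
                    "because they advocated violence.",
        "pronoun": "they",
        "candidates": ["the city councilmen", "the demonstrators"],
        "answer": "the demonstrators",
    },
]

def accuracy(predict, schema):
    """Fraction of items where predict(item) matches the gold answer."""
    return sum(predict(item) == item["answer"] for item in schema) / len(schema)

# A constant first-candidate baseline is right on exactly one twin of each
# pair, which is why chance performance on the WSC is 50%.
print(accuracy(lambda item: item["candidates"][0], SCHEMA))  # 0.5
```

The twin construction is what makes simple selectional-preference heuristics score at chance: any predictor that ignores the special word gets one twin right and the other wrong.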

Thesis: Sharma (2014)[3]

Wikisense: Isaak and Michael (2016)[4], Isaak and Michael (2017)[5]

Language-model fine-tuning: Kocijan et al. (2019)[6]
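Language-model approaches such as Kocijan et al. (2019) score each candidate by substituting it for the pronoun and asking an LM which completed sentence is more probable. A hedged sketch of that substitution step, with the model abstracted as a user-supplied log-probability function (`toy_logprob` below is an illustrative stand-in, not a real model):

```python
import re

def resolve_pronoun(sentence, pronoun, candidates, logprob):
    """Substitute each candidate for the pronoun and keep the candidate
    whose completed sentence the language model scores highest."""
    def substitute(candidate):
        # Whole-word replacement of the first pronoun occurrence only.
        return re.sub(rf"\b{re.escape(pronoun)}\b", candidate, sentence, count=1)
    return max(candidates, key=lambda c: logprob(substitute(c)))

# Illustrative stand-in for an LM: it simply favors the plausible
# collocation "demonstrators advocated". A real system would query a
# pretrained (and possibly fine-tuned) language model here.
def toy_logprob(sentence):
    return 1.0 if "demonstrators advocated" in sentence else 0.0

sentence = ("The city councilmen refused the demonstrators a permit "
            "because they advocated violence.")
print(resolve_pronoun(sentence, "they",
                      ["the demonstrators", "the city councilmen"],
                      toy_logprob))  # the demonstrators
```

The word-boundary match matters in practice: naive string replacement of a pronoun like "it" would corrupt words such as "fit".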

Versions

2016? 2018

State-of-the-art

References

  1. Levesque, H. J., Davis, E., & Morgenstern, L. (2011). The Winograd schema challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning.
  2. Trichelair, P., Emami, A., Cheung, J. C. K., Trischler, A., Suleman, K., & Diaz, F. (2018). On the Evaluation of Common-Sense Reasoning in Natural Language Understanding. Retrieved from http://arxiv.org/abs/1811.01778
  3. Sharma, A. (2014). Solving Winograd Schema Challenge: Using Semantic Parsing, Automatic Knowledge Acquisition and Logical Reasoning (Thesis).
  4. Isaak, N., & Michael, L. (2016). Tackling the Winograd schema challenge through machine logical inferences. In D. Pearce & H. S. Pinto (Eds.), STAIRS, Frontiers in Artificial Intelligence and Applications, vol. 284, pp. 75–86. IOS Press.
  5. Isaak, N., & Michael, L. (2017). How the Availability of Training Material Affects Performance in the Winograd Schema Challenge. Cognitum workshop 2017.
  6. Kocijan, V., Cretu, A. M., Camburu, O. M., Yordanov, Y., & Lukasiewicz, T. (2019). A surprisingly robust trick for the Winograd schema challenge. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), 4837–4842. https://doi.org/10.18653/v1/p19-1478