Main reference: Levesque et al. (2011)[1]

Critique: Trichelair et al. (2018)[2]

Human performance: >90%

A thesis: Sharma (2014)[3]

Wikisense: Isaak and Michael (2016)[4]; Isaak and Michael (2017)[5]

Versions

2016? 2018

State-of-the-art

References

  1. Levesque, H. J., Davis, E., & Morgenstern, L. (2011). The Winograd schema challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning.
  2. Trichelair, P., Emami, A., Cheung, J. C. K., Trischler, A., Suleman, K., & Diaz, F. (2018). On the Evaluation of Common-Sense Reasoning in Natural Language Understanding. Retrieved from http://arxiv.org/abs/1811.01778
  3. Sharma, A. (2014). Solving Winograd Schema Challenge: Using Semantic Parsing, Automatic Knowledge Acquisition and Logical Reasoning (Master's thesis).
  4. Isaak, N., & Michael, L. (2016). Tackling the Winograd schema challenge through machine logical inferences. In D. Pearce & H. S. Pinto (Eds.), STAIRS, Frontiers in Artificial Intelligence and Applications, Vol. 284 (pp. 75–86). IOS Press.
  5. Isaak, N., & Michael, L. (2017). How the Availability of Training Material Affects Performance in the Winograd Schema Challenge. Cognitum Workshop 2017.