Script: prototypical sequences of events and their participants.

To be differentiated from event schema induction and event schema description.

Approaches

TODO: Kampmann et al. 2015

Manual construction of script knowledge bases

These approaches do not scale to complex domains (Mueller, 1998; Gordon, 2001).

Co-occurrence frequency

Pichotta & Mooney (2014)[1] model multi-argument probabilistic scripts by computing co-occurrence frequencies of events in a corpus. They use a heuristic to expand the set of co-occurring pairs, replacing co-referent entities with special placeholder tokens.
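A rough sketch of the co-occurrence idea follows. The event tuples, the `ENT0` placeholder convention, and the conditional-frequency scorer are simplifications for illustration, not the paper's exact formulation:

```python
from collections import Counter
from itertools import combinations

def train_cooccurrence(chains):
    """Count how often two events co-occur in the same chain.

    Each chain is a list of event tuples, e.g. (verb, subj, obj),
    with co-referent entities already replaced by placeholder tokens
    such as 'ENT0' (a toy version of the expansion heuristic).
    """
    pair_counts = Counter()
    event_counts = Counter()
    for chain in chains:
        event_counts.update(chain)
        for a, b in combinations(chain, 2):
            pair_counts[(a, b)] += 1
            pair_counts[(b, a)] += 1
    return pair_counts, event_counts

def score_next(context, candidate, pair_counts, event_counts):
    """Score a candidate event by summed conditional co-occurrence
    with the events already observed in the context."""
    return sum(
        pair_counts[(e, candidate)] / event_counts[e]
        for e in context if event_counts[e]
    )

# Toy corpus: two short chains from a bus scenario.
chains = [
    [("board", "ENT0", "bus"), ("ride", "ENT0", "bus"), ("disembark", "ENT0", "bus")],
    [("board", "ENT0", "bus"), ("pay", "ENT0", "fare"), ("disembark", "ENT0", "bus")],
]
pairs, events = train_cooccurrence(chains)
ctx = [("board", "ENT0", "bus")]
print(score_next(ctx, ("disembark", "ENT0", "bus"), pairs, events))  # 1.0
```

Here "disembark" scores highest because it follows "board" in every training chain; real systems work over parsed corpora and far larger event vocabularies.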

Graphical

These methods exploit either natural texts or crowdsourced data, and, consequently, do not require expensive expert annotation.

Given a text corpus, these methods extract structured representations (i.e., graphs), for example event chains (Chambers & Jurafsky, 2008)[2] or more general directed acyclic graphs (Regneri et al., 2010)[3]. The graphs are scenario-specific; their nodes correspond to events (and are associated with sets of potential event mentions), and their arcs encode the temporal precedence relation. Such graphs can then inform NLP applications (e.g., question answering) by indicating whether one event is likely to precede or succeed another.
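A minimal sketch of such a precedence graph is shown below. The class, its API, and the restaurant events are illustrative inventions, not the data structure used by the cited papers:

```python
from collections import defaultdict

class ScriptGraph:
    """Toy scenario-specific event graph: nodes are events (each with
    a set of potential mentions) and arcs encode temporal precedence."""

    def __init__(self):
        self.mentions = defaultdict(set)  # event -> surface mentions
        self.succ = defaultdict(set)      # event -> direct successors

    def add_event(self, event, mentions):
        self.mentions[event].update(mentions)

    def add_precedence(self, before, after):
        self.succ[before].add(after)

    def precedes(self, a, b):
        """True if a can be ordered before b (reachability in the DAG)."""
        stack, seen = list(self.succ[a]), set()
        while stack:
            node = stack.pop()
            if node == b:
                return True
            if node not in seen:
                seen.add(node)
                stack.extend(self.succ[node])
        return False

g = ScriptGraph()
g.add_event("enter", {"walk in", "go inside"})
g.add_event("order", {"order food"})
g.add_event("pay", {"pay the bill"})
g.add_precedence("enter", "order")
g.add_precedence("order", "pay")
print(g.precedes("enter", "pay"))  # True
print(g.precedes("pay", "enter"))  # False
```

The reachability query is what an application would use to decide whether one event is likely to precede another in the scenario.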

Neural embedding

Figure: computation of a simple neural event representation for the event "the bus disembarked passengers" (from Modi and Titov, 2014).

"Distributed representations of event realizations are computed based on distributed representations of predicates and their arguments, and then these representations are used to predict prototypical event orderings. The parameters of the compositional process for computing the event representations and the ranking component of the model are jointly estimated from texts." (Modi and Titov, 2014)[4]
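The compositional step can be sketched as a single hidden layer over concatenated predicate and argument embeddings. Parameters here are random rather than jointly trained, and the dimensions, vocabulary, and function names are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, hidden = 8, 6

# Toy word embeddings for the predicate and its arguments
# (randomly initialised; in the model they are learned jointly).
vocab = ["disembark", "bus", "passengers"]
emb = {w: rng.normal(size=dim) for w in vocab}

# Composition parameters: one hidden layer over the concatenated
# predicate and argument vectors, plus a scoring vector for ranking.
W = rng.normal(size=(hidden, 3 * dim))
u = rng.normal(size=hidden)

def event_representation(pred, arg1, arg2):
    """Compose an event vector from predicate and argument embeddings."""
    x = np.concatenate([emb[pred], emb[arg1], emb[arg2]])
    return np.tanh(W @ x)

def rank_score(pred, arg1, arg2):
    """Scalar used to order events in a prototypical sequence."""
    return float(u @ event_representation(pred, arg1, arg2))

h = event_representation("disembark", "bus", "passengers")
print(h.shape)  # (6,)
print(rank_score("disembark", "bus", "passengers"))
```

In the actual model both the composition parameters and the ranking component are estimated jointly from text; this sketch only shows the forward computation.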

TODO: Granroth-Wilding and Clark (2016)[5]

Dataset

TODO: Dinners from Hell corpus (Rudinger et al., 2015)[6], InScript corpus (Modi et al., 2016)[7], story cloze corpus (Mostafazadeh et al., 2016[8]).

Multiple-Choice Narrative Cloze (MCNC) task

Introduced by Granroth-Wilding & Clark (2016)[9]; the dataset has been released publicly.
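A sketch of how MCNC evaluation might be scored. The instance format and the word-overlap scorer are toy assumptions for illustration, not the actual dataset format or the model of the paper:

```python
def mcnc_accuracy(instances, score):
    """Multiple-Choice Narrative Cloze: for each instance, pick the
    candidate event with the highest score given the context chain;
    accuracy is the fraction of correctly chosen held-out events."""
    correct = 0
    for context, candidates, gold_index in instances:
        pred = max(range(len(candidates)),
                   key=lambda i: score(context, candidates[i]))
        correct += (pred == gold_index)
    return correct / len(instances)

# Toy scorer: prefer candidates sharing a word with the context.
def overlap_score(context, candidate):
    ctx_words = set(w for ev in context for w in ev.split())
    return len(ctx_words & set(candidate.split()))

instances = [
    (["board bus", "pay fare"], ["disembark bus", "eat cake"], 0),
    (["order food", "eat food"], ["pay bill", "board bus"], 0),
]
print(mcnc_accuracy(instances, overlap_score))  # 1.0
```

Any script model that assigns a score to a candidate event given a context chain can be plugged in as the `score` argument.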

Evaluation

  • Narrative cloze
  • Multiple-choice narrative cloze
  • Perplexity
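The narrative cloze and perplexity metrics can be sketched as follows; the scoring interface and toy numbers are assumptions for illustration:

```python
import math

def narrative_cloze_avg_rank(chains, vocab, score):
    """Classic narrative cloze: hold out each event in turn, rank every
    event in the vocabulary given the remaining chain, and report the
    average rank of the held-out event (lower is better)."""
    ranks = []
    for chain in chains:
        for i, held_out in enumerate(chain):
            context = chain[:i] + chain[i + 1:]
            ordered = sorted(vocab, key=lambda e: -score(context, e))
            ranks.append(ordered.index(held_out) + 1)
    return sum(ranks) / len(ranks)

def perplexity(probs):
    """Perplexity of an event sequence given per-event probabilities
    assigned by the model (lower is better)."""
    return math.exp(-sum(math.log(p) for p in probs) / len(probs))

print(perplexity([0.5, 0.5, 0.5]))  # ~2.0: uniform over two outcomes
```

Multiple-choice narrative cloze replaces the full-vocabulary ranking with a small fixed set of candidate events, which makes accuracy the natural summary statistic.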

References

  1. Pichotta, K., & Mooney, R. (2014). Statistical Script Learning with Multi-Argument Events. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2014), 220–229.
  2. Chambers, N., & Jurafsky, D. (2008). Unsupervised Learning of Narrative Event Chains. In Proceedings of ACL 2008.
  3. Regneri, M., Koller, A., & Pinkal, M. (2010). Learning Script Knowledge with Web Experiments. In Proceedings of ACL 2010.
  4. Modi, A., & Titov, I. (2013). Learning Semantic Script Knowledge with Event Embeddings. arXiv preprint arXiv:1312.5198.
  5. Granroth-Wilding, M., & Clark, S. (2016). What Happens Next? Event Prediction Using a Compositional Neural Network Model. In Proceedings of AAAI 2016, 2727–2733.
  6. Rudinger, R., Demberg, V., Modi, A., Van Durme, B., & Pinkal, M. (2015). Learning to Predict Script Events from Domain-Specific Text. In Proceedings of *SEM 2015, 205–210.
  7. Modi, A., Anikina, T., Ostermann, S., & Pinkal, M. (2016). InScript: Narrative Texts Annotated with Script Information. In Proceedings of LREC 2016.
  8. Mostafazadeh, N., Chambers, N., He, X., Parikh, D., Batra, D., Vanderwende, L., Kohli, P., & Allen, J. (2016). A Corpus and Cloze Evaluation for Deeper Understanding of Commonsense Stories. In Proceedings of NAACL-HLT 2016.
  9. Granroth-Wilding, M., & Clark, S. (2016). What Happens Next? Event Prediction Using a Compositional Neural Network Model. In Proceedings of AAAI 2016, 2727–2733.