Script: a prototypical sequence of events and their participants.
TODO: Kampmann et al. 2015
Manual construction of script knowledge bases
Manually constructed script knowledge bases do not scale to complex domains (Mueller, 1998; Gordon, 2001).
Co-occurrence frequency
Pichotta & Mooney (2014) model multi-argument probabilistic scripts by computing co-occurrence frequencies of events in a corpus. They expand the set of co-occurring pairs with a heuristic that replaces co-referent entities with special placeholder tokens.
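The counting step can be sketched as follows. This is a minimal illustration of co-occurrence-based script statistics, not the paper's actual event format: the toy event tuples and the "ENT0" placeholder (standing in for a coreference chain) are assumptions for the example.

```python
from collections import Counter
from itertools import combinations

# Toy document as a sequence of (predicate, subject, object) events;
# co-referent entities are assumed already replaced by the token "ENT0".
events = [
    ("enter", "ENT0", "restaurant"),
    ("order", "ENT0", "food"),
    ("eat", "ENT0", "food"),
    ("pay", "ENT0", "bill"),
]

event_counts = Counter(events)
# Ordered pairs of events that co-occur in the document.
pair_counts = Counter(combinations(events, 2))

def cond_prob(a, b):
    """Estimate P(b co-occurs after a) from raw co-occurrence counts."""
    return pair_counts[(a, b)] / event_counts[a]
```

With real corpora the counts would be aggregated over many documents, and smoothing would be needed for unseen pairs.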
These methods exploit either natural texts or crowdsourced data and consequently do not require expensive expert annotation.
Given a text corpus, these methods extract structured representations (i.e., graphs), for example chains (Chambers & Jurafsky, 2008) or more general directed acyclic graphs (Regneri et al., 2010). These graphs are scenario-specific; their nodes correspond to events (and are associated with sets of potential event mentions), and their arcs encode the temporal precedence relation. The graphs can then inform NLP applications (e.g., question answering) by indicating whether one event is likely to precede or succeed another.
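A scenario-specific script graph of this kind can be sketched as below. The toy "restaurant" scenario, the event names, and the mention sets are illustrative assumptions; the precedence query is a plain reachability check over the arcs.

```python
# Nodes are events, arcs encode temporal precedence (a -> its successors).
graph = {
    "enter": {"order"},
    "order": {"eat"},
    "eat": {"pay"},
    "pay": set(),
}

# Each node is associated with a set of potential event mentions.
mentions = {
    "enter": {"walk in", "enter the restaurant"},
    "order": {"order food", "ask for the menu"},
    "eat": {"eat the meal"},
    "pay": {"pay the bill", "settle the check"},
}

def precedes(g, a, b):
    """True if a path of precedence arcs leads from event a to event b."""
    stack, seen = list(g[a]), set()
    while stack:
        node = stack.pop()
        if node == b:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(g[node])
    return False
```

A question-answering system could use `precedes(graph, "order", "pay")` to judge that ordering typically happens before paying in this scenario.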
Neural embedding
"Distributed representations of event realizations are computed based on distributed representations of predicates and their arguments, and then these representations are used to predict prototypical event orderings. The parameters of the compositional process for computing the event representations and the ranking component of the model are jointly estimated from texts." (Modi and Titov, 2014)
TODO: Granroth-Wilding and Clark (2016)
Multiple-Choice Narrative Cloze (MCNC) task
Introduced by Granroth-Wilding & Clark (2016); publicly available at http://mark.granroth-wilding.co.uk/papers/what_happens_next/
- Multiple-choice narrative cloze
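The evaluation protocol can be sketched as below: given a chain of context events, the model must pick the held-out next event from a small candidate set (one correct entry plus sampled distractors). The scoring function here is a hypothetical stand-in for a trained event-prediction model, not the compositional network of the paper.

```python
def score(context, candidate):
    # Toy heuristic: count token overlap between context events and the candidate.
    return sum(1 for event in context for tok in event if tok in candidate)

def mcnc_accuracy(instances):
    """instances: list of (context_events, candidate_events, gold_index)."""
    correct = 0
    for context, candidates, gold_idx in instances:
        pred = max(range(len(candidates)), key=lambda i: score(context, candidates[i]))
        correct += int(pred == gold_idx)
    return correct / len(instances)
```

Reported MCNC numbers are accuracies of exactly this form, computed over many held-out chains.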
- ↑ Pichotta, K., & Mooney, R. (2014). Statistical script learning with multi-argument events. In Proceedings of EACL 2014, 220–229. Retrieved from http://www.cs.utexas.edu/users/ml/papers/pichotta.eacl14.pdf
- ↑ Chambers, N., & Jurafsky, D. (2008). Unsupervised learning of narrative event chains. In Proceedings of ACL 2008.
- ↑ Regneri, M., Koller, A., & Pinkal, M. (2010). Learning script knowledge with web experiments. In Proceedings of ACL 2010.
- ↑ Modi, A., & Titov, I. (2013). Learning semantic script knowledge with event embeddings. arXiv preprint arXiv:1312.5198.
- ↑ Granroth-Wilding, M., & Clark, S. (2016). What happens next? Event prediction using a compositional neural network model. In Proceedings of AAAI 2016, 2727–2733.
- ↑ Rudinger, R., Demberg, V., Modi, A., Van Durme, B., & Pinkal, M. (2015). Learning to predict script events from domain-specific text. In Proceedings of *SEM 2015, 205–210.
- ↑ Modi, A., Anikina, T., Ostermann, S., & Pinkal, M. (2016). InScript: Narrative texts annotated with script information. In Proceedings of LREC 2016.
- ↑ Mostafazadeh, N., Chambers, N., He, X., Parikh, D., Batra, D., Vanderwende, L., Kohli, P., & Allen, J. (2016). A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of NAACL-HLT 2016.