Natural Language Understanding Wiki

TODO: state-of-the-art -- multipass sieves http://www.lrec-conf.org/proceedings/lrec2016/pdf/1005_Paper.pdf
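
The sieve idea (familiar from deterministic entity-coreference systems) is: start from singleton clusters and apply a sequence of merge rules ordered from most to least precise, each rule operating on the partial clusters built by the previous ones. Below is a minimal sketch of that general scheme; the two sieve predicates (exact trigger-lemma match, same-sentence co-occurrence) are invented for illustration and are not the sieves of the linked paper.

<syntaxhighlight lang="python">
from typing import Callable, List

class Mention:
    """An event mention, reduced to its trigger lemma and sentence index."""
    def __init__(self, trigger_lemma: str, sent_id: int):
        self.trigger_lemma = trigger_lemma
        self.sent_id = sent_id

# A sieve decides whether two clusters (lists of mentions) may be merged.
Sieve = Callable[[List[Mention], List[Mention]], bool]

def exact_lemma_sieve(c1: List[Mention], c2: List[Mention]) -> bool:
    # High-precision rule: some pair of triggers shares a lemma.
    return any(a.trigger_lemma == b.trigger_lemma for a in c1 for b in c2)

def same_sentence_sieve(c1: List[Mention], c2: List[Mention]) -> bool:
    # Lower-precision rule, purely illustrative: mentions co-occur in a sentence.
    return any(a.sent_id == b.sent_id for a in c1 for b in c2)

def multipass_sieve(mentions: List[Mention], sieves: List[Sieve]) -> List[List[Mention]]:
    clusters = [[m] for m in mentions]        # start from singletons
    for sieve in sieves:                      # most precise sieve first
        merged = True
        while merged:                         # repeat until this sieve stops firing
            merged = False
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    if sieve(clusters[i], clusters[j]):
                        clusters[i].extend(clusters.pop(j))
                        merged = True
                        break
                if merged:
                    break
    return clusters

mentions = [Mention("attack", 0), Mention("bomb", 0), Mention("attack", 3)]
print(len(multipass_sieve(mentions, [exact_lemma_sieve, same_sentence_sieve])))  # 1
</syntaxhighlight>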

TODO: Araki et al. (2014)[1]

TODO: event structure: ECB[2] includes six relation types: subevent, reason, purpose, enablement, precedence, and related. Some datasets annotate only the identity relation (e.g., Lee et al., 2012).
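
As a rough illustration of what such an annotation scheme looks like as data, the sketch below encodes the six ECB relation types plus a plain identity label as an enum attached to pairs of event mentions; the record layout and field names are hypothetical and do not follow any corpus's actual file format.

<syntaxhighlight lang="python">
from dataclasses import dataclass
from enum import Enum, auto

class EventRelation(Enum):
    """Relation labels as listed for ECB, plus the identity relation that
    some corpora (e.g. Lee et al., 2012) annotate exclusively."""
    IDENTICAL = auto()
    SUBEVENT = auto()
    REASON = auto()
    PURPOSE = auto()
    ENABLEMENT = auto()
    PRECEDENCE = auto()
    RELATED = auto()

@dataclass
class EventMentionPair:
    # Hypothetical record layout: ids refer to event mentions in one document.
    doc_id: str
    source_mention_id: int
    target_mention_id: int
    relation: EventRelation

# Example: a bombing event annotated as a subevent of an attack event.
pair = EventMentionPair(doc_id="d01", source_mention_id=3,
                        target_mention_id=7, relation=EventRelation.SUBEVENT)
print(pair.relation.name)  # SUBEVENT
</syntaxhighlight>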

TODO: Cybulska and Vossen (2014)[3]: "It is a difficult task that strongly influences diverse NLP applications. Evaluation of coreference resolution is not straightforward. There is no consensus in the field with regards to evaluation measures used to test approaches to coreference resolution. Some of the commonly used metrics are highly dependent on the evaluation data set, with scores rapidly going up or down depending on the number of singleton items in the data (Recasens and Hovy, 2011)."
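
The singleton effect mentioned by Recasens and Hovy (2011) can be made concrete with B³, which averages per-mention precision and recall: every correctly kept singleton contributes a perfect per-mention score, so the same clustering error is scored higher when the data contains more singletons. A minimal sketch with toy clusterings (not data from any benchmark):

<syntaxhighlight lang="python">
def b_cubed(key, response):
    """B-cubed F1 over clusterings given as lists of sets of mention ids."""
    mentions = set().union(*key)
    key_of = {m: c for c in key for m in c}        # gold cluster of each mention
    resp_of = {m: c for c in response for m in c}  # system cluster of each mention
    prec = sum(len(key_of[m] & resp_of[m]) / len(resp_of[m]) for m in mentions)
    rec = sum(len(key_of[m] & resp_of[m]) / len(key_of[m]) for m in mentions)
    p, r = prec / len(mentions), rec / len(mentions)
    return 2 * p * r / (p + r)

# A system that wrongly splits one gold cluster of four mentions in half:
key = [{1, 2, 3, 4}]
resp = [{1, 2}, {3, 4}]
print(round(b_cubed(key, resp), 2))      # 0.67

# Same mistake, but the data also contains six correctly kept singletons:
key_s = [{1, 2, 3, 4}] + [{i} for i in range(5, 11)]
resp_s = [{1, 2}, {3, 4}] + [{i} for i in range(5, 11)]
print(round(b_cubed(key_s, resp_s), 2))  # 0.89 -- same error, higher score
</syntaxhighlight>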

TODO: O'Gorman et al. (2016)[4]: "... the “span match” errors observed by (Kummerfeld and Klein, 2013) in recent systems, and some researchers working on coreference have observed the utility of focusing upon headwords, with (Peng et al., 2015) claiming that “identifying and co-referring mention heads is not only sufficient but is more robust than working with complete mentions” (Peng et al. 2015:1)."
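
To see why head-word matching is more forgiving than exact span matching, compare the two criteria on a single gold/system mention pair. The sketch below uses invented token offsets and assumes the syntactic head index is already given: under exact span matching the prediction counts as a "span match" error, while under head matching it is accepted.

<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass(frozen=True)
class Span:
    start: int   # token offset, inclusive
    end: int     # token offset, exclusive
    head: int    # index of the syntactic head token (assumed already known)

def exact_span_match(pred: Span, gold: Span) -> bool:
    return (pred.start, pred.end) == (gold.start, gold.end)

def head_match(pred: Span, gold: Span) -> bool:
    return pred.head == gold.head

# Gold mention: "the devastating earthquake in Haiti" (tokens 2..7, head "earthquake" at 4)
gold = Span(start=2, end=7, head=4)
# System mention: "devastating earthquake" (tokens 3..5, same head)
pred = Span(start=3, end=5, head=4)

print(exact_span_match(pred, gold))  # False: counted as a "span match" error
print(head_match(pred, gold))        # True: head-based matching is satisfied
</syntaxhighlight>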

[[File:Event-coref-def liu-et-al-2014.png|thumb|220x220px|Task definitions from Liu et al. (2014)]]

== Applications ==

From Bejan and Harabagiu (2010)[5]: "solving event coreference has already proved its usefulness in various applications such as topic detection and tracking (Allan et al., 1998), information extraction (Humphreys et al., 1997), question answering (Narayanan and Harabagiu, 2004), textual entailment (Haghighi et al., 2005), and contradiction detection (deMarneffe et al., 2008)."

== Datasets ==

* [http://rgcl.wlv.ac.uk/projects/NP4E/ NP4E]

== Open-source software and experiments ==

== See also ==

== References ==

  1. Araki, J., Liu, Z., Hovy, E. H., & Mitamura, T. (2014). Detecting Subevent Structure for Event Coreference Resolution. In Proceedings of LREC 2014 (pp. 4553–4558).
  2. Bejan, C. A., & Harabagiu, S. (2008). A Linguistic Resource for Discovering Event Structures and Resolving Event Coreference. In Proceedings of LREC 2008 (pp. 2881–2887). Retrieved from http://www.lrec-conf.org/proceedings/lrec2008/pdf/734_paper.pdf
  3. Cybulska, A., & Vossen, P. (2014). Using a sledgehammer to crack a nut? Lexical diversity and event coreference resolution. In Proceedings of LREC 2014 (pp. 4545–4552). Retrieved from http://www.lrec-conf.org/proceedings/lrec2014/pdf/840_Paper.pdf
  4. O'Gorman, T., Wright-Bettner, K., & Palmer, M. (2016). Richer Event Description: Integrating event coreference with temporal, causal and bridging annotation. In Proceedings of the 2nd Workshop on Computing News Storylines (pp. 47–56).
  5. Bejan, C. A., & Harabagiu, S. (2010). Unsupervised Event Coreference Resolution with Rich Linguistic Features. In Proceedings of ACL 2010 (pp. 1412–1422).