Natural Language Understanding Wiki
[[File:Ace-training-stats.png|thumb|Statistics from [http://www.itl.nist.gov/iad/mig//tests/ace/2005/doc/ace05-evalplan.v3.pdf <s>ACE05 evaluation plan</s>]<sup>(dead link - [http://web.archive.org/web/20090902090933/http://www.itl.nist.gov/iad/mig//tests/ace/2005/doc/ace05-evalplan.v3.pdf archived version])</sup>]]
From Durrett & Klein (2014)<ref>Durrett, G., & Klein, D. (2014). A Joint Model for Entity Analysis: Coreference, Typing, and Linking. In ''Transactions of the Association for Computational Linguistics'' (Vol. 2, pp. 477–490). Retrieved from https://transacl.org/ojs/index.php/tacl/article/view/412</ref>: "ACE 2005 corpus (NIST, 2005): this corpus annotates mentions complete with coreference, semantic types (per mention), and entity links (also per mention) later added by Bentivogli et al. (2010). [...] train/test split from Stoyanov et al. (2009), Haghighi and Klein (2010), and Bansal and Klein (2012)."
== Versions ==
* Official contest: test data is only available to participants
* LDC2005E18: enlarged version
* LDC2006T06: further enlarged version
* Stoyanov et al.'s split<ref>V. Stoyanov, N. Gilbert, C. Cardie, and E. Riloff. 2009. Conundrums in Noun Phrase Coreference Resolution: Making Sense of the State-of-the-art. In Proceedings of the Association for Computational Linguistics (ACL).</ref>: newswire only (it is unclear which set the documents are drawn from), 57 documents for training and 24 for testing (ratio 70/30)
* Rahman and Ng's split<ref>A. Rahman and V. Ng. 2009. Supervised models for coreference resolution. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP).</ref>: the full 599 documents from LDC2006T06, split into 482 documents for training and 117 for testing (ratio 80/20), balanced between genres
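The splits above are plain train/test partitions over documents; Rahman and Ng's is additionally balanced between genres. A minimal sketch of how such a genre-balanced 80/20 document split can be drawn (the genre names and document ids below are illustrative placeholders, not the actual ACE 2005 file lists):

```python
import random

def genre_balanced_split(doc_ids_by_genre, test_ratio=0.2, seed=0):
    """Split documents into train/test so that each genre
    contributes the same train/test ratio."""
    rng = random.Random(seed)
    train, test = [], []
    for genre, docs in sorted(doc_ids_by_genre.items()):
        docs = sorted(docs)        # deterministic order before shuffling
        rng.shuffle(docs)
        n_test = round(len(docs) * test_ratio)
        test.extend(docs[:n_test])
        train.extend(docs[n_test:])
    return train, test

# Toy corpus: genre -> document ids (placeholders, not real ACE 2005 files)
corpus = {
    "newswire": [f"nw_{i}" for i in range(100)],
    "broadcast_news": [f"bn_{i}" for i in range(60)],
    "weblog": [f"wl_{i}" for i in range(40)],
}
train, test = genre_balanced_split(corpus)
print(len(train), len(test))  # 160 40
```

With the actual corpus one would map each of the 599 LDC2006T06 documents to its source genre and apply the same per-genre procedure to approximate the 482/117 partition.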
[[File:Ace-evaluation-stats.png|thumb|Statistics from [http://www.itl.nist.gov/iad/mig//tests/ace/2005/doc/ace05-evalplan.v3.pdf <s>ACE05 evaluation plan</s>]<sup>(dead link - [http://web.archive.org/web/20090902090933/http://www.itl.nist.gov/iad/mig//tests/ace/2005/doc/ace05-evalplan.v3.pdf archived version])</sup>]]
== See also ==
* [http://www.itl.nist.gov/iad/mig/tests/ace/2005/ <s>Official website</s>]<sup>(dead link - [http://web.archive.org/web/20130125081331/itl.nist.gov/iad/mig/tests/ace/ace05 archived version])</sup>: documentation, software, resources
* [http://curtis.ml.cmu.edu/w/courses/index.php/ACE_2005_Dataset CMU machine learning wiki]
* [https://catalog.ldc.upenn.edu/LDC2006T06 LDC page] (pay a lot of money to download)
* [[Event coreference resolution (state of the art)]]
   
 
== References ==
<references/>
 
[[Category:Entity linking]]
 
 
[[Category:Coreference resolution]]
 
[[Category:ACE]]
