{| class="wikitable"
|+ Comparison of coreference resolution systems
! System !! Syntax feature? !! Semantic role feature? !! Semantic type feature? [note 1] !! Word-window feature? !! Mention-pair? !! Entity-mention? !! Mention-ranking? !! Cluster-ranking? !! Cluster-pair? !! Rule-based? !! Base ML model !! Integer linear programming? !! Reference !! Notes
|-
| cort || Yes [note 2] || No || Yes [note 3] || Yes || Yes || Yes || No || No || No || || perceptron || No || Martschat and Strube (2015)[1] ||
|-
| nn_coref || Yes [note 4] || No || Yes [note 5] || No || Yes || No || No || No || No || || neural net (RNN for encoding clusters) || No || Wiseman et al. (2016)[2] ||
|-
| huggingface's neural coref || Yes || No || Yes [note 6] || Yes [note 7] || No || No || Yes || No || No || No || neural net || No || Medium post || impl. of Clark and Manning (2016)[3]
|-
| deep-coref || Yes || No || Yes [note 6] || Yes [note 7] || Yes || No || Yes || Yes || No || No || neural net || No || Clark and Manning (2016)[4] ||
|-
| hcoref (Hybrid Coref) || Yes [note 8] || No || Yes [note 9] || No || No || Yes? || No || No || Yes? || Yes [note 10] || random forest [note 10] || No || Lee et al. (2017)[5] ||
|-
| dcoref (Stanford sieve) || Yes || No || Yes [note 11] || No || No || No || No || No || Yes || Yes || None || No || Lee et al. (2013)[6] || part of [http://stanfordnlp.github.io/CoreNLP/ Stanford CoreNLP]
|-
| Berkeley CR || No || No || Yes || No || No || Yes [note 12] || Yes [note 13] || No || No || No || log-linear || No || Durrett and Klein (2013)[7] ||
|-
| Illinois CR || || || || || || || || || || || || || ||
|-
| xrenner || || || || || || || || || || || || || || eXternally configurable REference and Non Named Entity Recognizer
|-
| e2e-coref || || || || || || || || || || || || || || end-to-end coreference resolution system from AllenAI
|}
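To make the model-family columns in the table above concrete (Mention-pair?, Mention-ranking?, and so on), here is a minimal sketch of how the same pairwise scorer is used differently by a mention-pair classifier and a mention-ranking model. It is an illustration only, not the implementation of any system listed; the `score` function, the span-based mention representation, and the threshold are placeholder assumptions.

<syntaxhighlight lang="python">
from typing import Callable, List, Optional, Tuple

# Placeholder mention representation: a (start, end) token span.
Mention = Tuple[int, int]

def mention_pair_decisions(mentions: List[Mention],
                           score: Callable[[Mention, Mention], float],
                           threshold: float = 0.0) -> List[Tuple[Mention, Mention]]:
    """Mention-pair: every (antecedent, anaphor) pair gets an independent
    coreferent/not-coreferent decision; links are later merged into clusters."""
    links = []
    for j, anaphor in enumerate(mentions):
        for antecedent in mentions[:j]:
            if score(antecedent, anaphor) > threshold:
                links.append((antecedent, anaphor))
    return links

def mention_ranking_decisions(mentions: List[Mention],
                              score: Callable[[Optional[Mention], Mention], float]
                              ) -> List[Tuple[Optional[Mention], Mention]]:
    """Mention-ranking: each anaphor picks the single highest-scoring candidate
    antecedent, where None stands for the 'new entity' (non-anaphoric) choice."""
    links = []
    for j, anaphor in enumerate(mentions):
        candidates: List[Optional[Mention]] = [None] + mentions[:j]
        best = max(candidates, key=lambda ante: score(ante, anaphor))
        links.append((best, anaphor))
    return links

if __name__ == "__main__":
    # Toy example: three mention spans and a dummy scorer that prefers close antecedents.
    mentions = [(0, 1), (5, 6), (9, 10)]
    def dummy_score(ante, ana):
        if ante is None:
            return 0.1  # small bias toward starting a new entity
        return 1.0 / (1 + abs(ana[0] - ante[0]))
    print(mention_pair_decisions(mentions, dummy_score))
    print(mention_ranking_decisions(mentions, dummy_score))
</syntaxhighlight>

Entity-mention, cluster-ranking, and cluster-pair models differ from both sketches above mainly in what the scorer consumes: partially built clusters (entities) rather than individual antecedent mentions.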

Notes

  1. Unlike semantic role features, these are features of a mention on its own: its semantic type (person/object/number), NER type (person/location/organization), or other taxonomies.
  2. deprel: dependency relation of a mention to its governor
  3. sem_class: one of 'PERSON', 'OBJECT', 'NUMERIC' and 'UNKNOWN'; head_ner: named entity tag of the mention's head word
  4. From syntactic ancestry features in BASIC+ (Wiseman et al. 2015)
  5. From entity type features in BASIC+ (Wiseman et al. 2015)
  6. In Clark and Manning (2016): "The type of the mention (pronoun, nominal, proper, or list)"
  7. From Clark and Manning (2016): "first word, last word, two preceding words, and two following words of the mention. Averaged word embeddings of the five preceding words, five following words, all words in the mention, all words in the mention’s sentence, and all words in the mention’s document." (see the sketch after this list)
  8. Feature: "The path in the parse tree from the root to the (antecedent/anaphor)"
  9. Feature: "named entity type attributes of (antecedent/anaphor)"
  10. They combine rule-based and statistical classifiers.
  11. "NER label – from the Stanford NER"
  12. TRANSITIVE model: "each mention to maintain its own distributions over values for a number of properties; these properties could include gender, named-entity type, or semantic class. Then, we will require each anaphoric mention to agree with its antecedent on the value of each of these properties"
  13. BASIC model: "This approach is similar to the mention-ranking model of Rahman and Ng (2009)."
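The mention-window features quoted in notes 6 and 7 can be pictured with a small sketch. The function below is an illustrative approximation, not Clark and Manning's actual feature extractor: the dictionary-based embedding lookup, the 50-dimensional random vectors, and the single-sentence scope (the document-level average from the quote is left out) are all assumptions made for the example.

<syntaxhighlight lang="python">
import numpy as np

def word_window_features(tokens, start, end, embeddings, dim=50, window=2):
    """Illustrative word-window features in the spirit of notes 6-7:
    words around a mention span [start, end) plus averaged embeddings.
    `embeddings` is any dict-like word -> vector lookup; unknown words get zeros."""
    def vec(word):
        return embeddings.get(word, np.zeros(dim))

    def avg(words):
        return np.mean([vec(w) for w in words], axis=0) if words else np.zeros(dim)

    sparse = {
        "first_word": tokens[start],
        "last_word": tokens[end - 1],
        "preceding_words": tokens[max(0, start - window):start],
        "following_words": tokens[end:end + window],
    }
    dense = np.concatenate([
        avg(tokens[max(0, start - 5):start]),   # five preceding words
        avg(tokens[end:end + 5]),               # five following words
        avg(tokens[start:end]),                 # all words in the mention
        avg(tokens),                            # all words in the sentence
    ])
    return sparse, dense

if __name__ == "__main__":
    sent = "the president of the company said she would resign".split()
    emb = {w: np.random.rand(50) for w in sent}  # stand-in for pretrained embeddings
    sparse, dense = word_window_features(sent, 0, 5, emb)  # mention: "the president of the company"
    print(sparse, dense.shape)
</syntaxhighlight>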

References

  1. Sebastian Martschat and Michael Strube. 2015. Latent structures for coreference resolution. Transactions of the Association for Computational Linguistics, 3:405–418.
  2. Sam Wiseman, Alexander M. Rush, and Stuart M. Shieber. 2016. Learning global features for coreference resolution. arXiv preprint arXiv:1604.03035.
  3. Kevin Clark and Christopher D. Manning. 2016. Deep reinforcement learning for mention-ranking coreference models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016), 2256–2262.
  4. Kevin Clark and Christopher D. Manning. 2016. Improving coreference resolution by learning entity-level distributed representations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), 643–653. http://doi.org/10.18653/v1/P16-1061
  5. Heeyoung Lee, Mihai Surdeanu, and Dan Jurafsky. 2017. A scaffolding approach to coreference resolution integrating statistical and rule-based models. Natural Language Engineering, 1–30.
  6. Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolution based on entity-centric, precision-ranked rules. Computational Linguistics, 39(4).
  7. Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In EMNLP 2013, 1971–1982.