In CoNLL 2008[1], SRL evaluation was formulated as the F1 score over semantic dependencies. A dependency is created between each predicate and each of its arguments, labeled with the argument's role. In addition, a dependency is created between every predicate and ROOT, labeled with the predicate's sense. This way, an SRL system still receives some credit when it makes a mistake in predicate disambiguation. The shared task report gives the following example:

For example, for the correct proposition:

verb.01: ARG0, ARG1, ARGM-TMP

a system that generates the following output for the same argument tokens:

verb.02: ARG0, ARG1, ARGM-LOC

receives a labeled precision score of 2/4 because two out of four semantic dependencies are incorrect: the dependency to ROOT is labeled 02 instead of 01 and the dependency to the ARGM-TMP is incorrectly labeled ARGM-LOC.
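A minimal sketch of this scoring scheme is given below. It is not the official scorer; the function names and the word-token placeholders w1-w3 are illustrative. Each proposition is turned into a set of labeled dependencies, including the ROOT dependency that carries the predicate sense, and labeled precision, recall, and F1 are computed by set intersection.

def proposition_to_dependencies(pred_token, sense, arguments):
    """Convert one proposition into a set of labeled semantic dependencies.

    `arguments` maps argument tokens to role labels, e.g. {"w1": "ARG0"}.
    A ROOT -> predicate dependency labeled with the sense is included, so
    predicate disambiguation is scored as one more dependency.
    """
    deps = {("ROOT", pred_token, sense)}
    for arg_token, role in arguments.items():
        deps.add((pred_token, arg_token, role))
    return deps


def labeled_prf(gold_deps, sys_deps):
    """Labeled precision, recall, and F1 over semantic dependencies."""
    correct = len(gold_deps & sys_deps)
    p = correct / len(sys_deps) if sys_deps else 0.0
    r = correct / len(gold_deps) if gold_deps else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1


# The report's example: same argument tokens, but the sense (02 vs. 01) and
# one argument label (ARGM-LOC vs. ARGM-TMP) are wrong, so 2 of 4 dependencies are correct.
gold = proposition_to_dependencies("verb", "01", {"w1": "ARG0", "w2": "ARG1", "w3": "ARGM-TMP"})
sys_out = proposition_to_dependencies("verb", "02", {"w1": "ARG0", "w2": "ARG1", "w3": "ARGM-LOC"})

print(labeled_prf(gold, sys_out))  # (0.5, 0.5, 0.5)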

For joint evaluation of syntax and semantics, they compute macro precision and recall scores:
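Assuming the unweighted averages from the shared task description, where LP_sem and LR_sem denote the labeled precision and recall of the semantic dependencies:

\[
\mathrm{LMP} = \frac{\mathrm{LAS} + \mathrm{LP}_{\mathrm{sem}}}{2},
\qquad
\mathrm{LMR} = \frac{\mathrm{LAS} + \mathrm{LR}_{\mathrm{sem}}}{2}
\]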

Here, LMP stands for labeled macro precision, LMR for labeled macro recall, and LAS for the (syntactic) labeled attachment score.
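Continuing the sketch above (the LAS value is made up for illustration, and the unweighted average is an assumption as noted), the joint scores are then plain averages:

def macro_scores(las, sem_precision, sem_recall):
    """Labeled macro precision/recall: unweighted average of syntactic and semantic scores."""
    lmp = (las + sem_precision) / 2
    lmr = (las + sem_recall) / 2
    return lmp, lmr

print(macro_scores(0.90, 0.5, 0.5))  # (0.7, 0.7) for the toy proposition above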

References

  1. Surdeanu, M., Johansson, R., Meyers, A., Màrquez, L., & Nivre, J. (2008, August). The CoNLL-2008 shared task on joint parsing of syntactic and semantic dependencies. In Proceedings of the Twelfth Conference on Computational Natural Language Learning (pp. 159-177). Association for Computational Linguistics.