
Statistics

Corcoglioniti et al. (2015)[1] report ∼110K text documents, 219 words long on average.
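These figures can be reproduced from a plain-text version of the corpus with a short script. The one-document-per-string layout below is an assumption for illustration, not the actual dump format:

```python
# Hedged sketch: compute document count and mean length (in words)
# for a corpus given as an iterable of document strings.

def corpus_stats(docs):
    """Return (number of documents, average words per document)."""
    lengths = [len(doc.split()) for doc in docs]
    n = len(lengths)
    avg = sum(lengths) / n if n else 0.0
    return n, avg

# Toy usage with two made-up documents:
n, avg = corpus_stats(["the cat sat", "a longer document goes here"])
```

In practice the documents would be streamed from the dump rather than held in memory, but the counting logic is the same.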

NLP Tasks

Used in tasks:

  • Text simplification: many papers (of course!)
  • Semantic relationship extraction: Ruiz-Casado et al. (2005[2], 2007[3])
  • NER:
    • Toral and Muñoz (2006)[4]: build and maintain gazetteers
  • SRL:
    • PIKES (Corcoglioniti et al. 2015)[1]
  • Knowledge extraction:
    • PIKES (Corcoglioniti et al. 2016)[5]
  • Distributional semantics:
    • Structured DSM: Goyal et al. (2013[6])
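As a rough illustration of the gazetteer idea from Toral and Muñoz (2006)[4], a lookup-based tagger can be sketched as below. The gazetteer entries here are made up for the example; the actual system derives its gazetteers from Wikipedia content rather than hard-coding them:

```python
# Minimal sketch of gazetteer-based NER: tag a token span as an entity
# if it appears in a gazetteer. Entries are illustrative only.
GAZETTEER = {
    "rome": "LOC",
    "alan turing": "PER",
}

def tag(tokens, max_span=3):
    """Greedy longest-match lookup of token spans against the gazetteer."""
    tagged = []
    i = 0
    while i < len(tokens):
        match = None
        # Try the longest span first, shrinking down to a single token.
        for width in range(min(max_span, len(tokens) - i), 0, -1):
            span = " ".join(tokens[i:i + width]).lower()
            if span in GAZETTEER:
                match = (width, GAZETTEER[span])
                break
        if match:
            width, label = match
            tagged.append((" ".join(tokens[i:i + width]), label))
            i += width
        else:
            tagged.append((tokens[i], "O"))
            i += 1
    return tagged

tag("Alan Turing visited Rome".split())
# [('Alan Turing', 'PER'), ('visited', 'O'), ('Rome', 'LOC')]
```

The longest-match-first loop is what lets multi-word names like "Alan Turing" win over their single-token pieces.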

References

  1. Corcoglioniti, Francesco, Marco Rospocher, and Alessio Palmero Aprosio. "Extracting Knowledge from Text with PIKES." ISWC, 2015.
  2. Ruiz-Casado, Maria, Enrique Alfonseca, and Pablo Castells. "Automatic extraction of semantic relationships for wordnet by means of pattern learning from wikipedia." International Conference on Application of Natural Language to Information Systems. Springer Berlin Heidelberg, 2005.
  3. Ruiz-Casado, Maria, Enrique Alfonseca, and Pablo Castells. "Automatising the learning of lexical patterns: An application to the enrichment of wordnet by extracting semantic relationships from wikipedia." Data & Knowledge Engineering 61.3 (2007): 484-499.
  4. Toral, Antonio, and Rafael Muñoz. "A proposal to automatically build and maintain gazetteers for Named Entity Recognition by using Wikipedia." Proceedings of EACL. 2006.
  5. Corcoglioniti, Francesco, Marco Rospocher, and Alessio Palmero Aprosio. "A 2-phase frame-based knowledge extraction framework." Proceedings of the 31st Annual ACM Symposium on Applied Computing. ACM, 2016.
  6. Goyal, K., S. K. Jauhar, H. Li, M. Sachan, S. Srivastava, and E. Hovy. "A structured distributional semantic model: Integrating structure with semantics." ACL 2013.