
Crowdsourcing has been applied to a range of NLP annotation tasks:

  • coreference resolution: Phrase Detectives (Chamberlain et al., 2008;[1] Chamberlain et al., 2009[2]) was meant to gather an anaphorically annotated corpus from non-expert players
  • textual entailment: Negri et al. (2011)[3] (multilingual)
  • semantic role labeling: Hong and Baker (2011)[4], Baker (2012)[5]


Verbosity (Von Ahn et al., 2006)[6] was one of the first attempts at gathering annotations with a GWAP (game with a purpose).

Snow et al. (2008)[7] described design and evaluation guidelines for five natural language micro-tasks. However, they explicitly chose a set of tasks that could be easily understood by non-expert contributors, thus leaving the recruitment and training issues open.
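
Their evaluation recipe (collect several redundant non-expert labels per item, aggregate them, and compare the aggregate against expert gold) is easy to reproduce. Below is a minimal sketch of that idea using simple majority voting; the function names and toy data are illustrative, not taken from the paper.

```python
from collections import Counter

def majority_vote(labels):
    # Most frequent label wins; Counter.most_common breaks ties arbitrarily.
    return Counter(labels).most_common(1)[0][0]

def accuracy_vs_gold(crowd_labels, gold):
    # crowd_labels: {item_id: [worker labels]}, gold: {item_id: expert label}.
    correct = sum(majority_vote(crowd_labels[item]) == label
                  for item, label in gold.items())
    return correct / len(gold)

# Toy RTE-style data: three redundant judgments per sentence pair.
crowd = {
    "pair1": ["entailed", "entailed", "not_entailed"],
    "pair2": ["not_entailed", "not_entailed", "not_entailed"],
}
gold = {"pair1": "entailed", "pair2": "not_entailed"}

print(accuracy_vs_gold(crowd, gold))  # 1.0 on this toy data
```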

Platforms

Amazon's Mechanical Turk

Usage: NAACL (2010)[8], Laws et al. (2011)[9]
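
Posting such a micro-task today goes through the MTurk Requester API. Below is a minimal sketch using boto3's MTurk client against the sandbox endpoint; the task text, reward, and timing parameters are placeholder values, and a production HIT form must also POST the assignmentId back to MTurk's externalSubmit URL.

```python
import boto3  # AWS SDK; provides the MTurk Requester API client

# Sandbox endpoint: test HITs cost nothing; drop endpoint_url for production.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# HTMLQuestion payload: the instructions and form shown to workers.
question_xml = """\
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <p>Thank you for helping us build a language-understanding dataset.</p>
    <p>Does sentence A entail sentence B? Answer yes or no.</p>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>"""

hit = mturk.create_hit(
    Title="Judge whether one sentence entails another",  # placeholder metadata
    Description="Read two sentences and decide if the first implies the second.",
    Keywords="nlp, annotation, entailment",
    Reward="0.05",                      # USD per assignment
    MaxAssignments=3,                   # redundant labels for later aggregation
    LifetimeInSeconds=86400,            # HIT stays visible for one day
    AssignmentDurationInSeconds=600,    # 10 minutes per worker
    Question=question_xml,
)
print(hit["HIT"]["HITId"])
```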

Motivation of workers: Antin and Shaw (2012)[10] found that, although monetary reward is the most important motivation drawing workers to the platform, more than half of the workers also come for "fun". They argue that the results obtained by Ipeirotis (2010)[11] are distorted by social desirability bias. Litman et al. (2015)[12] argue that money is the most important motivation and that "data quality is directly affected by compensation rates for India-based participants".

Recommendations:

  • Increase intrinsic motivation: from Paolacci and Chandler (2014)[13]: "Thanking workers and explaining to them the meaning of the task they will complete can stimulate better work (D. Chandler & Kapelner, 2013)[14], as does framing a task as requested by a nonprofit organization (Rogstadius et al., 2011)[15]."

CrowdFlower

Usage: He et al. (2016)[16]

References

  1. Jon Chamberlain, Massimo Poesio, and Udo Kruschwitz. 2008. Phrase Detectives: A web-based collaborative annotation game. Proceedings of I-Semantics, Graz.
  2. Jon Chamberlain, Udo Kruschwitz, and Massimo Poesio. 2009. Constructing an anaphorically annotated corpus with non-experts: Assessing the quality of collaborative annotations. In Proceedings of the 2009 Workshop on The People's Web Meets NLP: Collaboratively Constructed Semantic Resources, pages 57–62. Association for Computational Linguistics.
  3. Matteo Negri, Luisa Bentivogli, Yashar Mehdad, Danilo Giampiccolo, and Alessandro Marchetti. 2011. Divide and conquer: crowdsourcing the creation of cross-lingual textual entailment corpora. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '11, pages 670–679, Stroudsburg, PA, USA. Association for Computational Linguistics.
  4. Jisup Hong and Collin F. Baker. 2011. How good is the crowd at "real" WSD? ACL HLT 2011, page 30.
  5. Collin F. Baker. 2012. FrameNet, current collaborations and future goals. Language Resources and Evaluation, pages 1–18.
  6. Luis Von Ahn, Mihir Kedia, and Manuel Blum. 2006. Verbosity: a game for collecting common-sense facts. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 75–78. ACM.
  7. Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Y. Ng. 2008. Cheap and fast—but is it good?: evaluating non-expert annotations for natural language tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 254–263. Association for Computational Linguistics.
  8. NAACL HLT. (2010). Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, Los Angeles, CA.
  9. Laws, F., Scheible, C., & Schütze, H. (2011). Active Learning with Amazon Mechanical Turk. Proceedings of the Conference on Empirical Methods in Natural Language Processing, 1546–1556.
  10. Antin, J., & Shaw, A. (2012). Social desirability bias and self-reports of motivation. Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems - CHI’12, 2925. http://doi.org/10.1145/2207676.2208699
  11. Panos Ipeirotis. 2010. New demographics of Mechanical Turk. http://behind-the-enemy-lines.blogspot.com/2010/03/new-demographics-of-mechanical-turk.html
  12. Litman, L., Robinson, J., & Rosenzweig, C. (2015). The relationship between motivation, monetary compensation, and data quality among US- and India-based workers on Mechanical Turk. Behavior Research Methods, 47(2), 519–528. http://doi.org/10.3758/s13428-014-0483-x
  13. Paolacci, G., & Chandler, J. (2014). Inside the Turk: Understanding Mechanical Turk as a Participant Pool. Current Directions in Psychological Science, 23(3), 184–188. http://doi.org/10.1177/0963721414531598
  14. Chandler, D., & Kapelner, A. (2013). Breaking monotony with meaning: Motivation in crowdsourcing markets. Journal of Economic Behavior & Organization, 90, 123–133.
  15. Rogstadius, J., Kostakos, V., Kittur, A., Smus, B., Laredo, J., & Vukovic, M. (2011, July). An assessment of intrinsic and extrinsic motivation on task performance in crowdsourcing markets. Paper presented at the 5th International AAAI Conference on Weblogs and Social Media, Barcelona, Spain.
  16. NAACL HLT. (2010). Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, Los Angeles, CA.