Social cues, social biases: Stereotypes in annotations on people images

Human computation is often subject to systematic biases. We consider the case of linguistic biases and their consequences for the words that crowdworkers use to describe people images in an annotation task. Social psychologists explain that when describing others, the subconscious perpetuation of stereotypes is inevitable, as we describe stereotype-congruent people and/or in-group members more abstractly than others. In an MTurk experiment we show evidence of these biases, which are exacerbated when an image’s “popular tags” are displayed, a common feature used to provide social information to workers. Underscoring recent calls for a deeper examination of the role of training data quality in algorithmic biases, results suggest that it is rather easy to sway human judgment.

Otterbacher, J. (2018), 'Social cues, social biases: Stereotypes in annotations on people images', Proceedings of the Sixth AAAI Conference on Human Computation and Crowdsourcing, HCOMP, 5-8 July 2018, Zürich, Switzerland.

Metadata

  • Amazon Mechanical Turk
  • Professional services
  • Online moderately skilled click work
  • Other
  • 2018
  • Research publication
  • Work content, challenges
  • English
  • AAAI Conference on Human Computation and Crowdsourcing (Publisher)
  • Qualitative research
  • Open access