The impact of intergroup bias on trust and approach behaviour towards a humanoid robot

Christopher Deligianis, Christopher Stanton, Craig McGarty, Catherine J. Stevens

Research output: Contribution to journal › Article › peer-review

Abstract

As robots become commonplace, people will need to trust them for successful human-robot interaction to occur. Two experiments were conducted using the "minimal group paradigm" to explore whether social identity processes influence trust formation and impressions of a robot. In Experiment 1, participants were allocated to either a "robot" or "computer" group, and then played a cooperative visual tracking game with an Aldebaran Nao humanoid robot as a partner. We hypothesised that participants in the "robot group" would demonstrate intergroup bias by sitting closer to the robot (proxemics) and trusting the robot's suggested answers more frequently than their "computer group" counterparts. Experiment 2 used an almost identical procedure with a different set of participants; however, all participants were assigned to the "robot group", and three different levels of anthropomorphic robot movement were manipulated. Our results suggest that intergroup bias and humanlike movement can significantly affect human-robot approach behaviour. Significant effects were found for trusting the robot's suggested answers with respect to task difficulty, but not for group membership or robot movement.
Original language: English
Pages (from-to): 4-20
Number of pages: 17
Journal: Journal of Human-Robot Interaction
Volume: 6
Issue number: 3
DOIs
Publication status: Published - 2017

Keywords

  • human-robot interaction
  • group identity
  • trust
  • compliance
