Low-rank human-like agents are trusted more and blamed less in human-autonomy teaming

Research output: Contribution to journal › Article › peer-review


Abstract

If humans are to team with artificial teammates, the factors that influence trust and shared accountability must be considered when designing agents. This study investigates the influence of anthropomorphism, rank, decision cost, and task difficulty on trust in human-autonomy teams (HAT), and how blame is apportioned when shared tasks fail. Participants (N = 31) completed repeated trials with an artificial teammate in a low-fidelity variation of an air-traffic control game. Using a within-subject design, we manipulated anthropomorphism (human-like or machine-like), the military rank of artificial teammates using three-star (superiors), two-star (peers), or one-star (subordinates) agents, the perceived payload of vehicles with people or supplies onboard, and task difficulty with easy or hard missions. Trust was inferred behaviourally when participants accepted agent recommendations, and no trust when recommendations were rejected or ignored. We analysed the trust data using binomial logistic regression. After each trial, blame was apportioned using a 2-item scale and analysed using a one-way repeated measures ANOVA. A post-experiment questionnaire obtained participants’ power distance orientation using a seven-item scale. Possible power-related effects on trust and blame apportioning are discussed. Our findings suggest that artificial agents with higher levels of anthropomorphism and lower levels of rank increased trust and shared accountability, with human team members accepting more blame for team failures.

Original language: English
Article number: 1273350
Number of pages: 14
Journal: Frontiers in Artificial Intelligence
Volume: 7
Publication status: Published - 2024

Bibliographical note

Publisher Copyright:
Copyright © 2024 Gall and Stanton.

Keywords

  • anthropomorphism
  • blame
  • human-autonomy teaming
  • power distance orientation
  • shared tasks
  • status
  • trust
