Mediating explainer for Human Autonomy Teaming

Siri Padmanabhan Poti, Christopher J. Stanton

Research output: Contribution to journal › Article › peer-review


Abstract

This paper examines the environment of mission-critical Human Autonomy Teaming (HAT) operations and conceptually models human intent, Autonomous Artificial Intelligence (AAI) agency, and societal context. The conceptual model employs agency theory to describe the relationship between human principals and AAI agents in HAT. Further, an application of stakeholder theory prompts the inclusion of societal stakeholders' roles in mission-critical HAT operations. The model reveals an opportunity to incorporate an intermediary mechanism, a non-human Mediating Explainer (MeX). MeX offers a novel means of resolving the asymmetries of information and decision-making power in HAT relationships.

Original language: English
Pages (from-to): 201-208
Number of pages: 8
Journal: CEUR Workshop Proceedings
Volume: 3793
Publication status: Published - 2024
Event: Joint of the 2nd World Conference on eXplainable Artificial Intelligence Late-Breaking Work, Demos and Doctoral Consortium (xAI-2024: LB/D/DC) - Valletta, Malta
Duration: 17 Jul 2024 - 19 Jul 2024

Bibliographical note

Publisher Copyright:
© 2024 Copyright for this paper by its authors.

Keywords

  • Agency Theory
  • Social Legitimacy
  • Stakeholder Theory
  • XAI
  • Human Autonomy Teaming
