Enabling affordances for AI governance

Siri Padmanabhan Poti, Christopher J. Stanton

Research output: Contribution to journal › Article › peer-review


Abstract

Organizations operating mission-critical AI-based autonomous systems may need to provide continuous risk management controls and establish means for their governance. To achieve this, organizations must embed trustworthiness and transparency in these systems, with human oversight and accountability. Autonomous systems gain trustworthiness, transparency, quality, and maintainability through the assurance of outcomes, explanations of behavior, and interpretations of intent. However, technical, commercial, and market pressures during the software development lifecycle (SDLC) of autonomous systems can compromise their quality, maintainability, interpretability, and explainability. This paper conceptually models the transformation of the SDLC to enable affordances for assurance, explanation, interpretation, and overall governance in autonomous systems. We argue that opportunities for this transformation arise through concerted interventions such as technical debt management, a shift-left approach, and non-ephemeral artifacts. The paper contributes to the theory and practice of autonomous-systems governance and to building trustworthiness incrementally and hierarchically.
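To make the abstract's ideas concrete, the sketch below shows one possible reading of two of its keywords together: a behavior tree whose every tick is recorded in a persistent execution trace, i.e. a non-ephemeral artifact that a human overseer could later inspect to explain the system's behavior. This is an illustrative assumption, not the paper's implementation; the node names, trace schema, and output file are hypothetical.

```python
# Minimal sketch, assuming a behavior-tree control architecture where each
# node appends its outcome to a persistent trace (a "non-ephemeral artifact")
# that supports after-the-fact explanation and governance review.
import json
import time
from enum import Enum


class Status(Enum):
    SUCCESS = "success"
    FAILURE = "failure"


class Action:
    """Leaf node wrapping a callable; records its outcome in the trace."""

    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

    def tick(self, trace):
        status = Status.SUCCESS if self.fn() else Status.FAILURE
        trace.append({"node": self.name, "status": status.value,
                      "time": time.time()})
        return status


class Sequence:
    """Composite node: succeeds only if all children succeed, in order."""

    def __init__(self, name, children):
        self.name = name
        self.children = children

    def tick(self, trace):
        for child in self.children:
            if child.tick(trace) is Status.FAILURE:
                trace.append({"node": self.name, "status": "failure"})
                return Status.FAILURE
        trace.append({"node": self.name, "status": "success"})
        return Status.SUCCESS


if __name__ == "__main__":
    # Hypothetical mission decomposed into interpretable steps.
    tree = Sequence("mission", [
        Action("check_sensors", lambda: True),
        Action("plan_route", lambda: True),
        Action("execute_route", lambda: False),  # simulated failure
    ])
    trace = []
    tree.tick(trace)
    # Persist the trace so it outlives the run and can be audited.
    with open("run_trace.json", "w") as f:
        json.dump(trace, f, indent=2)
```

The resulting trace file shows, step by step, which node ran and with what outcome, so a reviewer can see that the mission failed at `execute_route` without re-running the system, one concrete way assurance and explainability could be built into the SDLC rather than bolted on afterward.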

Original language: English
Article number: 100086
Number of pages: 10
Journal: Journal of Responsible Technology
Volume: 18
DOIs
Publication status: Published - 2024

Bibliographical note

Publisher Copyright:
© 2024 The Author(s)

Keywords

  • Assurance
  • Behavior tree
  • Explainability
  • Governance
  • Interpretability
  • Shift-left
  • Technical debt

