Governing generative AI for disability and inclusion

Kuansong Victor Zhuang, Gerard Goggin

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

The potential of generative AI for digital accessibility solutions that can benefit the lives of disabled people and enable their participation in society cannot be overstated. Against this new world of possibilities, concerns about disability bias in AI have already been highlighted, specifically in how disability data is always-already excluded from datasets and how disability is not considered within the development of such technologies. Such concerns have not yet been properly registered, let alone addressed, in governance and policy, even as generative AI was launched to global audiences in 2022–2023 via the rapid take-up of applications such as ChatGPT. Generative AI has ushered in a heightened discourse on potential benefits for assistive technologies, yet it also comes with significant ethical issues and problems for disability. Not surprisingly, given the fast pace of the rollout of generative AI, how these technologies exclude disabled people and create disability bias is still little understood. Writing at the theoretical intersections of critical disability studies in conversation with global media policy, Internet studies, media and communication studies, and science and technology studies, we offer a critical analysis of disability and the deployment of generative AI within AI policies and ask: how can we govern generative AI for disability and inclusion?

Original language: English
Number of pages: 17
Journal: Information, Communication & Society
Publication status: E-pub ahead of print (In Press) - 2025

Keywords

  • disability
  • emerging technologies
  • GenAI
  • governance
  • inclusion
