Visualisation for explainable machine learning in biomedical data analysis

Zhonglin Qu, Simeon J. Simoff, Paul J. Kennedy, Daniel R. Catchpoole, Quang Vinh Nguyen

Research output: Chapter in Book / Conference Paper › Chapter

Abstract

This chapter covers innovations in biomedical data mining and interpretation, focusing on the use of visualisation in interpretable machine learning for biomedical data analysis. Visualisation plays an important role in presenting artificial intelligence models and validating machine learning results. In recent years, increasingly complex machine learning methods have been developed to assist decision-making in the medical domain. Most of them are treated as "black boxes", because the training and prediction processes are hidden behind complicated mathematical theory. Visualisation is a way to reveal these processes and help a human understand the cause of a decision. Knowing "why" a prediction was made and "how" the model works can improve users' trust in artificial intelligence results. The chapter introduces different visualisations used to interpret supervised and unsupervised machine learning models for biomedical data. We also provide a discussion and directions for future work on using visualisation to interpret data mining results in the medical domain.
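As an illustrative sketch only (not taken from the chapter), the following Python snippet shows one common way a visualisation can help explain a "black box" supervised model: plotting permutation feature importance on a public biomedical-style dataset. The dataset, model choice, and plot layout are assumptions made for demonstration.

# Illustrative sketch (assumed example, not from the chapter): visualising which
# features drive a "black box" classifier's predictions via permutation importance.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a public biomedical dataset (breast cancer diagnostics).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a model whose internal decision process is not directly readable.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt held-out accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
order = np.argsort(result.importances_mean)[-10:]  # ten most influential features

# Plot the result so an analyst can see which features the model's decisions rely on.
plt.barh(X.columns[order], result.importances_mean[order],
         xerr=result.importances_std[order])
plt.xlabel("Mean decrease in accuracy when feature is permuted")
plt.title("Permutation feature importance (illustrative)")
plt.tight_layout()
plt.show()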
Original language: English
Title of host publication: Data Driven Science for Clinically Actionable Knowledge in Diseases
Editors: Daniel R. Catchpoole, Simeon J. Simoff, Paul J. Kennedy, Quang Vinh Nguyen
Place of Publication: U.S.
Publisher: CRC Press
Pages: 197-214
Number of pages: 18
Edition: First edition
ISBN (Electronic): 9781003800286
ISBN (Print): 9781032273532
DOIs:
Publication status: Published - 6 Dec 2023
