The CRISP-ML approach to handling causality and interpretability issues in machine learning

Inna Kolyshkina, Simeon Simoff

Research output: Chapter in Book / Conference Paper › Conference paper › peer-review

Abstract

Interpretability in machine learning projects, and one of its aspects, causal inference, have recently gained significant interest and attention. Owing to the recent rapid appearance of frameworks, methods, algorithms and software, most of which are at early stages of development, it can be confusing for practitioners and researchers involved in a machine learning project to choose the approach and set of techniques that would efficiently deliver valid insights while minimising the known risks of failure of data-related projects. The CRISP-ML process methodology minimises this confusion by outlining a clear step-by-step process that explicitly addresses interpretability issues at every stage. This paper presents an update of CRISP-ML that incorporates causality in a similar way and supports the formalisation, design and implementation of specific instances of the CRISP-ML process, subject to the required levels of interpretability and causality of results. The approach is demonstrated on examples from the domains of credit risk, public health and healthcare.
Original language: English
Title of host publication: Proceedings of the 2021 IEEE International Conference on Big Data, Dec 15 - Dec 18, 2021, Virtual Event
Publisher: IEEE
Pages: 2306-2312
Number of pages: 7
ISBN (Print): 9781665439022
DOIs
Publication status: Published - 2021
Event: IEEE International Conference on Big Data
Duration: 15 Dec 2021 → …

