Explainability of models and prediction results

Focus on the health sector and practitioner ownership

PhD Project

Research partnership

This PhD project, approved by the Association Nationale de Recherche et Technologie (ANRT), is governed by a CIFRE agreement co-signed by Kaduceo and the Institut de Recherche en Informatique de Toulouse (IRIT).

The aim of the project

This is a three-year collaboration whose main objective is to propose a predictive analysis support system for patient management and care pathways in the hospital sector.

Context

The e-health sector is booming, particularly the use of health data to build and deploy predictive models for care pathways.

More and more solutions are being launched for practitioners and patients. Predictive models are used to support doctors in their practice and patients in their journey. Both groups may lack confidence in tools and applications perceived as “black boxes”.

This notion of the “black box” of AI in health and the importance of explaining models are key elements of the Villani Report [1], “Giving meaning to Artificial Intelligence”, which dedicates a whole section to the use of AI in the health sector. This explanation is necessary for the doctor, but also for the patient, and relates to the patient’s right to be informed when their care pathway involves artificial intelligence.

In the medical sector in particular, doctors’ interest in a “pathway-oriented” rather than “procedure-oriented” approach is growing, especially in hospitals. For doctors and hospital managers, the goal is to consider a pathway as a whole and determine how to improve and predict it, with both a clinical objective for the patient and an organizational objective for the healthcare establishment. A pathway comprises the various consultations and/or hospitalizations, the various biological measurements, and the medical or non-medical data related to it.

The prediction will concern an undesirable or desired event in the pathway; it will be used by the medical team in some cases and may be communicated to the patient in others.

The importance of explainability

Transparency of models:
- Understanding decision making
- Generating trust
- Improving performance
- Respecting regulations
- Reducing ethical bias

Issues and projection

In the health sector, it seems necessary to explain how artificial intelligence and prediction models work. Physicians are not mere users of a prediction model; they need to understand how it works, for example by examining the variables involved in its construction. Prediction here is far from trivial: it directly concerns the patient’s life and health. Physicians must be able to understand and evaluate the predictive model in order to take ownership of it and suggest improvements.

It is therefore essential for the healthcare sector that companies offering predictive analysis solutions produce not only algorithms and predictive models, but also the means to help analyze and interpret these models. These tools must be made available to physicians or any other authorized personnel involved in the healthcare process, i.e., people who are not data analysis specialists but who have business knowledge.

It may even become mandatory in the future for each prediction model to deliver, alongside its result, one or more possible explanations based on the influence of the variables used to build the model.

Resources

Consult the articles and publications of our Kaduceo experts

State of the art: Explainability of AI models


To reduce errors and better understand the predictions made by AI, the explainability of AI models (XAI, for “eXplainable AI”) has emerged as a research field.

Medical imaging: Explainability methods for image classification


Automatic image analysis has seen its performance grow strongly in recent years. These recent advances improve the construction of predictive imaging models, increasing their reliability.

Our partner's approach: IRIT

Chantal Soulé-Dupuy

Julien Aligon

The GIS team at IRIT, of which we are members, has always been deeply invested in user-centered approaches to information access and data processing. In particular, we believe that domain experts must be supported as well as possible in their data analysis process. In the specific case of machine learning, the literature proposes methods considered effective for automatically recommending relevant algorithms (and their corresponding parameters), such as AutoML-type methods. However, explaining the prediction results obtained by these algorithms remains an important problem.

Individual explanation methods (especially the so-called “additive” ones) have enabled a major step forward in helping domain experts take ownership of predictions.
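To illustrate what “additive” means here (a minimal sketch, not the method of the joint work cited below): an additive explanation assigns each feature a contribution such that the contributions sum exactly to the difference between the model’s prediction for the patient and the prediction for a reference baseline. For a small number of features, Shapley values can be computed exactly by enumerating all coalitions; the model, features, and baseline below are toy assumptions.

```python
from itertools import combinations
from math import factorial

def shapley_contributions(f, x, baseline):
    """Exact Shapley values for a small feature set, by enumerating
    all coalitions. Features absent from a coalition are replaced
    by their baseline values when evaluating the model."""
    n = len(x)
    contrib = [0.0] * n

    def value(coalition):
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return f(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                contrib[i] += w * (value(set(S) | {i}) - value(set(S)))
    return contrib

# Toy "risk score" over 3 hypothetical patient features.
model = lambda z: 0.5 * z[0] + 2.0 * z[1] - 1.0 * z[2]
x = [4.0, 1.0, 2.0]          # the patient being explained
baseline = [0.0, 0.0, 0.0]   # reference (e.g. an average patient)

phi = shapley_contributions(model, x, baseline)
# Additivity: the contributions sum to model(x) - model(baseline).
print(phi, sum(phi))
```

For a linear model, each feature’s Shapley value reduces to its coefficient times its deviation from the baseline, which makes the additivity property easy to check by hand; real explainers such as SHAP approximate these values efficiently for complex models.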

Our investment in the field of predictive explanation initially grew out of discussions with researchers from INSERM and RESTORE on developing a system that lets biology and health researchers better understand predictive models on their own.

These issues, together with the health domain, naturally led us to work with Kaduceo, whose concerns are clearly similar. Our joint work [1] on a new approach for computing predictive explanations illustrates the strength of this collaboration, and obtaining a CIFRE thesis with Elodie is its logical continuation. This thesis will allow us, in particular, to deepen the problem of prediction explanation by seeking to surface data with very high explanatory value.

[1] Gabriel Ferrettini, Elodie Escriva, Julien Aligon, Jean-Baptiste Excoffier, Chantal Soulé-Dupuy. Coalitional strategies for efficient individual prediction explanation. Submitted to Information Systems Frontiers (accepted with minor revision), 2021.

"This PhD project topic allows me to specifically adapt model interpretability research to medical data and the critical and sensitive healthcare sector. Thanks to the collaboration between IRIT and Kaduceo in this PhD, I can validate my work within healthcare structures, to participate in the improvement of medical practices using computer science and information tools."
Elodie Escriva
PhD Student in computer science

An idea? A project?

Kaduceo USA

725 Cool Springs Bld, FRANKLIN, TN, 37067

Kaduceo France

31 Allée Jules Guesde, 31400 Toulouse, France