Explainability of models and prediction results
Focus on the health sector and practitioner ownership
PhD Project


Research partnership
The aim of the project
This is a three-year collaboration whose main objective is to propose a predictive analysis support system for patient management and care pathways in the hospital sector.
Context
The e-health sector is booming, in particular the use of health data to build and apply predictive models for care pathways.
More and more solutions are being launched for practitioners and patients. Predictive models are used to support doctors in their practice and patients along their journey. Both groups may lack confidence in tools and applications that they perceive as “black boxes”.
This notion of AI as a “black box” in health, and the importance of explaining models, are key elements of the Villani Report [1], “Giving meaning to Artificial Intelligence”, which devoted a whole section to the use of AI in the health sector. Such explanation is necessary for the doctor but also for the patient, and relates closely to the right to information of patients whose care pathway involves artificial intelligence.
In the medical sector in particular, doctors’ interest in a “pathway”-oriented rather than “procedure”-oriented approach is growing, especially in hospitals. For the doctor and the hospital manager, the aim is to consider a pathway as a whole and to see how to improve and predict it, with both a clinical objective for the patient and an organizational objective for the health establishment. Such a pathway is made up of the different consultations and/or hospitalizations, the various biological measurements, and the medical or non-medical data related to it.
The prediction concerns an undesirable or desired event of the pathway; it will be used by the medical team in some cases and may be communicated to the patient in others.
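To make this concrete, here is a minimal sketch with entirely synthetic data and purely illustrative names (a hypothetical readmission flag as the undesirable event, pathway features such as the number of consultations or the total length of stay), showing how a pathway can be summarised into a feature vector and used to train a predictive model:

```python
# Minimal sketch: summarise each care pathway as a feature vector and train a
# classifier to predict an undesirable event. Data and names are synthetic and
# illustrative only.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
pathways = pd.DataFrame({
    "nb_consultations": rng.integers(1, 15, n),
    "nb_hospitalisations": rng.integers(0, 5, n),
    "total_length_of_stay": rng.integers(0, 40, n),
    "last_creatinine": rng.normal(80, 20, n),   # a biological measurement
    "age": rng.integers(18, 95, n),
})
# Synthetic outcome for the sketch only: the undesirable event to predict.
readmitted = (pathways["nb_hospitalisations"] * 5
              + pathways["total_length_of_stay"] * 0.5
              + rng.normal(0, 5, n) > 15).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    pathways, readmitted, test_size=0.25, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```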
The importance of explainability

Explainability of models matters on several fronts:
- Transparency of models
- Understanding decision-making
- Generating trust
- Improving performance
- Respecting regulations
- Reducing ethical bias
Challenges and outlook
In the health sector, it seems necessary to explain how artificial intelligence and prediction models work. The doctor is not a mere user of a prediction model: they need to understand how it works, for example by examining the variables that contributed to its construction. Prediction here is not trivial; it directly concerns the patient’s life and health. The physician must be able to understand and evaluate the predictive model in order to take ownership of it and suggest improvements.
It is therefore essential for the healthcare sector that companies offering predictive analysis solutions produce not only algorithms and predictive models, but also the means to analyze and interpret these models. These tools must be made available to physicians or any other authorized personnel involved in the healthcare process, i.e., people who are not data analysis specialists but who have domain knowledge.

In the future, it could even become mandatory for each prediction model to provide a result accompanied by one or more possible explanations, based on the influence of the variables used to build the model.
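As an illustration only (not an existing Kaduceo product), a prediction can be packaged together with a simple additive explanation. The sketch below assumes a logistic regression, where each variable’s contribution to the decision score is simply its coefficient times its standardised value; all feature names are hypothetical:

```python
# Sketch: return a prediction together with each variable's signed influence.
# For a logistic regression, coefficient * standardised value decomposes the
# decision score additively. Names and data are illustrative only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = pd.DataFrame({
    "age": rng.integers(20, 90, 400),
    "nb_prior_stays": rng.integers(0, 8, 400),
    "last_lab_value": rng.normal(10.0, 3.0, 400),
})
y = (X["nb_prior_stays"] > 3).astype(int)  # synthetic outcome for the sketch

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def predict_with_explanation(patient_row):
    """Return the predicted risk together with each variable's signed
    contribution (coefficient * standardised value) to the decision score."""
    z = scaler.transform(patient_row.to_frame().T)[0]
    contributions = {name: float(c * v)
                     for name, c, v in zip(X.columns, model.coef_[0], z)}
    return {
        "prediction": float(model.predict_proba(z.reshape(1, -1))[0, 1]),
        "explanation": contributions,  # sums, with the intercept, to the score
    }

print(predict_with_explanation(X.iloc[0]))
```

For non-linear models, the same kind of per-variable influence can be obtained with model-agnostic additive methods such as those cited in the bibliography below.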
Resources
Consult the articles and publications of our Kaduceo experts
State of the art: Explainability of AI models
To reduce errors and better understand the predictions made by AI, the explainability of AI models (XAI, for "eXplainable AI") has emerged as a research…
Medical imaging: Explainability methods for image classification
Automatic image analysis has seen its performance improve considerably in recent years. These advances improve the construction of predictive imaging models, increasing their reliability…
Our partner's approach: IRIT

Chantal Soulé-Dupuy

Julien Aligon
The GIS team of IRIT, of which we are members, has always been strongly invested in user-centered approaches to information access and data processing. In particular, we consider that domain experts must be supported as well as possible in their data analysis process. In the specific case of machine learning, the literature proposes methods considered effective for automatically recommending relevant algorithms (and their corresponding parameters), using AutoML-type approaches. However, explaining the prediction results obtained by these algorithms remains an important problem.
Individual explanation methods (especially the so-called “additive” ones) have represented a major advance, helping domain experts take ownership of predictive models and their results.
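To illustrate the additive principle (a self-contained sketch only, not the coalitional method of [1]): Shapley-value-style explanations give each variable its average marginal contribution over all coalitions of the other variables, so that the contributions sum to the difference between the model’s prediction for a given patient and its average prediction over a background sample. A minimal version, assuming a scikit-learn classifier and illustrative feature names:

```python
# Exact Shapley-style additive explanation of one prediction, tractable here
# because there are only 3 features. Data and feature names are illustrative.
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["age", "nb_prior_stays", "last_lab_value"]  # illustrative only
X = rng.normal(size=(300, 3))
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=300) > 0).astype(int)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def predict(data):
    return model.predict_proba(data)[:, 1]

def coalition_value(coalition, x, background):
    """Average model output when the features in `coalition` are fixed to the
    patient's values and the remaining features vary over a background sample."""
    data = background.copy()
    if coalition:
        data[:, list(coalition)] = x[list(coalition)]
    return predict(data).mean()

def shapley_contributions(x, background):
    """Exact Shapley values: average marginal contribution over all coalitions."""
    p = len(x)
    phi = np.zeros(p)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        for size in range(p):
            for coal in combinations(others, size):
                weight = factorial(size) * factorial(p - size - 1) / factorial(p)
                phi[j] += weight * (coalition_value(coal + (j,), x, background)
                                    - coalition_value(coal, x, background))
    return phi

x = X[0]
phi = shapley_contributions(x, X[:100])
for name, contribution in zip(feature_names, phi):
    print(f"{name}: {contribution:+.3f}")
# Additivity: contributions sum to f(x) minus the average background prediction.
print(predict(x[None, :])[0] - predict(X[:100]).mean(), phi.sum())
```

Exact computation is exponential in the number of features, which is precisely what motivates faster or coalitional approximations such as those studied in [1] and in the SOFSEM and ADBIS papers cited below.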
Our involvement in the field of prediction explanation first grew out of discussions, together with researchers from INSERM and RESTORE, on developing a system allowing biology and health researchers to better understand predictive models on their own.
These issues, as well as the health domain, naturally led us to work with Kaduceo, whose concerns are clearly similar. Our joint work [1] on a new approach for computing prediction explanations illustrates the strength of this collaboration. Obtaining a CIFRE thesis with Elodie is its logical continuation. This thesis will allow us, in particular, to go deeper into the problem of prediction explanation, by seeking to surface the data with the highest explanatory value.
[1] Gabriel Ferrettini, Elodie Escriva, Julien Aligon, Jean-Baptiste Excoffier, Chantal Soulé-Dupuy. Coalitional strategies for efficient individual prediction explanation, submitted to Information Systems Frontiers (accepted with minor revision), 2021.

Bibliographic resources
- Molnar, Christoph. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. 2019.
- Štrumbelj, Erik, and Igor Kononenko. "Explaining prediction models and individual predictions with feature contributions." Knowledge and Information Systems 41, 647–665 (2014).
- Lundberg, Scott M., and Su-In Lee. "A unified approach to interpreting model predictions." Advances in Neural Information Processing Systems. 2017.
- Ferrettini, Gabriel, Julien Aligon, and Chantal Soulé-Dupuy. "Explaining Single Predictions: A Faster Method." In Chatzigeorgiou, A. et al. (eds), SOFSEM 2020: Theory and Practice of Computer Science. 2020.
- Ferrettini, Gabriel, Julien Aligon, and Chantal Soulé-Dupuy. "Improving on Coalitional Prediction Explanation." In Darmont, J., Novikov, B., Wrembel, R. (eds), Advances in Databases and Information Systems (ADBIS). 2020.
