Emmanuel Doumard, Julien Aligon, Elodie Escriva, Jean-Baptiste Excoffier, Paul Monsarrat, Chantal Soulé-Dupuy

*Author details available in the publication

24th International Workshop on Design, Optimization, Languages and Analytical Processing of Big Data

Edinburgh, UK – March 29, 2022

SHAP and LIME, the most widely used additive explanation methods for understanding the predictions of complex machine learning models, suffer from limitations that are rarely measured in the literature. This paper aims to measure these limitations. We illustrate and validate the results on a specific medical dataset, SA-Heart.

Our results reveal that the LIME and SHAP approximations are particularly efficient in high-dimensional settings and generate intelligible global explanations, but they lack precision in their local explanations. Coalition-based methods are computationally expensive in high dimensions, but they provide better local explanations. Finally, we present a roadmap summarizing our work, indicating the most appropriate method depending on the dimension of the dataset and the user's goals.
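As a minimal sketch of the kind of additive explanations compared in the paper (not the authors' exact protocol, and using a synthetic dataset in place of SA-Heart), the snippet below computes SHAP and LIME attributions for a single prediction of a tabular classifier:

```python
# Illustrative sketch only: SHAP and LIME additive explanations for one
# prediction of a tabular classifier. Synthetic data stands in for SA-Heart.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=9, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# SHAP: additive feature attributions from a tree-specific explainer.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test[:1])

# LIME: a local surrogate model fitted around the same instance.
lime_explainer = LimeTabularExplainer(
    X_train,
    mode="classification",
    feature_names=[f"f{i}" for i in range(X.shape[1])],
)
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=X.shape[1]
)
print(lime_exp.as_list())  # (feature condition, weight) pairs
```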

