Gabriel Ferrettini, Elodie Escriva, Julien Aligon, Jean-Baptiste Excoffier &  Chantal Soulé-Dupuy 

*Author details available in the publication

Inf Syst Front (2021). https://doi.org/10.1007/s10796-021-10141-9
Published: 22 May 2021

As machine learning (ML) is now widely applied in many fields, both in research and industry, understanding what happens inside the black box has become a pressing demand, especially among non-experts of these models.

This paper proposes methods based on detecting groups of relevant attributes, called coalitions, that influence a prediction, and compares them with the literature. Our results show that these coalition-based methods are more computationally efficient than existing methods such as SHapley Additive exPlanations (SHAP): computation time is reduced while the accuracy of individual prediction explanations remains acceptable. This allows for wider practical use of explanation methods, increasing trust between the developed ML models, end users, and anyone affected by a decision in which these models played a role.
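The intuition behind coalitions can be sketched with a toy example (this is not the authors' implementation; the linear model, its weights, and the grouping are hypothetical and chosen only for illustration). Exact Shapley values require enumerating every subset of features, which is exponential in the number of features; attributing influence to groups of attributes instead of individual attributes shrinks that enumeration:

```python
from itertools import combinations
from math import factorial

# Toy linear "model": the prediction is a weighted sum of feature values.
# (Hypothetical weights, for illustration only.)
WEIGHTS = [3.0, 1.0, 1.0, 0.5]

def predict(x, present):
    """Predict using only the features in `present`; absent features
    fall back to a baseline value of 0."""
    return sum(WEIGHTS[i] * x[i] for i in present)

def shapley_values(x, players):
    """Exact Shapley values computed by enumerating every coalition.

    `players` is a list of disjoint tuples of feature indices: with
    singletons it yields per-feature attributions; with grouped tuples
    it attributes at the group level, shrinking the number of subsets
    to enumerate from 2^(number of features) to 2^(number of groups).
    """
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                present = [i for q in subset for i in q]
                total += w * (predict(x, present + list(p)) - predict(x, present))
        phi[p] = total
    return phi

x = [1.0, 2.0, 1.0, 1.0]
per_feature = shapley_values(x, [(0,), (1,), (2,), (3,)])  # 2^4 subsets
per_group = shapley_values(x, [(0,), (1, 2), (3,)])        # 2^3 subsets
```

For a linear model, a group's attribution equals the sum of its members' attributions, so grouping loses nothing here; on real models, coalition methods trade a small approximation error for the exponential reduction in the number of model evaluations.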

Other resources you may be interested in

Comparison of explanatory methods: influence of characteristics

Kaduceo, co-author of work presented at the 24th International Workshop on Design, Optimization, Languages and Analytical Processing of Big Data

Comparison of predictive models for cumulative live birth rate after treatment with ART
