Gabriel Ferrettini, Elodie Escriva, Julien Aligon, Jean-Baptiste Excoffier & Chantal Soulé-Dupuy
*Author details available on the publication
As machine learning (ML) is now widely applied in many fields, both in research and industry, there is a growing demand to understand what happens inside the black box, especially from non-experts of these models.
This paper proposes methods based on the detection of groups of relevant attributes, called coalitions, that influence a prediction, and compares them with methods from the literature. Our results show that these coalition methods are more efficient than existing methods such as SHapley Additive exPlanation (SHAP): computational time is reduced while an acceptable accuracy of individual prediction explanations is preserved. This allows for a wider practical use of explanation methods, increasing trust between the developed ML models, end users, and anyone affected by a decision in which these models played a role.
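To see why coalition-based approximations matter, the sketch below computes exact Shapley values by enumerating every coalition of features, which is exponential in the number of features. This is an illustrative toy (the set-function `v` and feature names are invented here), not the paper's method or the SHAP library's implementation.

```python
from itertools import combinations
from math import factorial

def exact_shapley(features, value):
    """Exact Shapley value of each feature for a coalition value function.

    Enumerates all 2^n coalitions, which is why approximation schemes
    (such as SHAP or coalition-based methods) are needed in practice.
    """
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                # Shapley weight for a coalition of size k not containing i.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of feature i to coalition s.
                total += w * (value(set(s) | {i}) - value(set(s)))
        phi[i] = total
    return phi

# Toy "prediction game": v(S) is the sum of fixed per-feature contributions.
contrib = {"age": 2.0, "income": 3.0, "tenure": -1.0}
v = lambda s: sum(contrib[f] for f in s)
print(exact_shapley(list(contrib), v))
```

For an additive game like this toy one, each feature's Shapley value equals its fixed contribution; the cost of the exact computation, however, grows as 2^n, which is the bottleneck that coalition detection aims to avoid.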