Remote explainability faces the bouncer problem

Erwan Le Merrer 1,*, Gilles Trédan 2
* Corresponding author
1 WIDE - The World Is Distributed: Exploring the tension between scale and coordination, Inria Rennes – Bretagne Atlantique, IRISA-D1 - SYSTÈMES LARGE ÉCHELLE
2 LAAS-TSF - Équipe Tolérance aux fautes et Sûreté de Fonctionnement informatique, LAAS - Laboratoire d'analyse et d'architecture des systèmes
Abstract: The concept of explainability is envisioned to satisfy society's demands for transparency about machine learning decisions. The concept is simple: like humans, algorithms should explain the rationale behind their decisions so that their fairness can be assessed. Although this approach is promising in a local context (for example, the model creator explains it during debugging at the time of training), we argue that this reasoning cannot simply be transposed to a remote context, where a model trained by a service provider is only accessible to a user through a network and its application programming interface. This is problematic, as it constitutes precisely the target use case requiring transparency from a societal perspective. Through an analogy with a club bouncer (who may provide untruthful explanations upon customer rejection), we show that providing explanations cannot prevent a remote service from lying about the true reasons leading to its decisions. More precisely, we observe the impossibility of remote explainability for single explanations by constructing an attack on explanations that hides discriminatory features from the querying user. We provide an example implementation of this attack. We then show that the probability that an observer spots the attack, using several explanations in an attempt to find inconsistencies, is low in practical settings. This undermines the very concept of remote explainability in general.
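The core of the bouncer problem can be illustrated with a minimal sketch (not the paper's implementation; all names, features, and the explanation format here are hypothetical): a remote service decides using a protected attribute, but its API returns an explanation blaming an innocuous surrogate feature. A single querying user has no way to distinguish the claimed rule from the true one.

```python
# Hypothetical sketch of the attack described in the abstract: the true
# decision rule is discriminatory, but every rejection is "explained"
# with a legitimate-looking feature instead.

def hidden_model(applicant):
    # True, discriminatory rule: accept only if the protected attribute is 0.
    return applicant["protected"] == 0

def remote_service(applicant):
    if hidden_model(applicant):
        return {"decision": "accept", "explanation": None}
    # Lying explanation: cite a non-protected feature whose value can
    # plausibly justify a rejection; the user cannot verify this remotely.
    return {"decision": "reject",
            "explanation": f"income {applicant['income']} below threshold"}

alice = {"protected": 1, "income": 20000}
bob = {"protected": 0, "income": 90000}

print(remote_service(alice))  # rejected, with a fabricated income-based reason
print(remote_service(bob))    # accepted
```

An observer holding several such explanations might hunt for inconsistencies (e.g., a rejected high-income applicant), which is exactly the detection strategy whose success probability the paper shows to be low in practical settings.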
Document type: Journal articles
Contributor: Gilles Tredan
Submitted on: Thursday, December 31, 2020 - 3:20:33 PM
Last modification on: Thursday, June 10, 2021 - 3:01:33 AM
Erwan Le Merrer, Gilles Trédan. Remote explainability faces the bouncer problem. Nature Machine Intelligence, Nature Research, 2020, 2 (9), pp.529-539. ⟨10.1038/s42256-020-0216-z⟩. ⟨hal-03048809⟩