
Believing in black boxes: machine learning for healthcare does not need explainability to be evidence-based

2021 · 104 Citations
Liam G. McCoy, Connor T. A. Brenna, Stacy S. Chen

Concerns regarding explainability are not limited to MLHC, but extend both to numerous well-validated treatment interventions and to human clinical judgment itself.

Abstract

Ultimately, we conclude that the value of explainability in MLHC is not intrinsic, but instrumental to achieving greater imperatives such as performance and trust. We caution against the uncompromising pursuit of explainability, and advocate instead for the development of robust empirical methods to evaluate increasingly inexplicable algorithmic systems.
