This paper argues that the requirements for explanations in healthcare differ from those for generic images and that existing explanation techniques fall short in the healthcare domain; it compares a number of explanation techniques and analyses whether they provide helpful and adequate explanations.
Explanations for AI are a crucial part of autonomous systems: they increase users' confidence, provide an interpretation of an otherwise black-box system, and can serve as an interface between the user and the AI system. Explanations are set to become mandatory for all AI systems that affect people (see, for example, the upcoming EU AI Act). While explanations of image classifiers have so far focused on images of everyday objects, such as those in ImageNet, there is an important area of application for them, namely healthcare. In this paper we focus on a particular area of healthcare: the use of CNN machine-learning models for cancer detection in brain MRI images. We compare a number of explanation techniques and analyse whether they provide helpful and adequate explanations. We argue that the requirements for explanations in healthcare are different from those for generic images, and that existing explanation techniques fall short in the healthcare domain.