The black-box nature of state-of-the-art machine learning algorithms raises the question of whether Artificial Intelligence (AI) decisions must be made transparent and humanly understandable, particularly in critical scenarios such as healthcare. This work aims to provide the audience with inspiration and starting points regarding potential limitations of XAI techniques in the context of medical decision making, especially at the level of local model explanations. The key idea is that current XAI libraries are not suitable to fully explain and justify a medical diagnosis in the individual case. This is demonstrated via the example of pneumonia detection with a CNN trained on X-ray images.