Explainable Artificial Intelligence (XAI)

88 Citations · 2022
Mazharul Hossain
journal unavailable

A new frontier is opening for researchers in explainability and explainable AI: interpreting the behavior and predictions of neural networks.

Abstract

Complex machine learning models tend to perform better, but we treat them as black boxes. That is where Explainable AI (XAI) comes into play. For many applications, researchers, and decision-makers, understanding why a model makes a specific prediction can be as crucial as its accuracy; in many real-world settings, the explainability and transparency of AI systems are indispensable, and both the research community and industry are paying growing attention to them. Compared to traditional machine learning methods, deep neural networks (DNNs) have been very successful, yet they are comparatively weak at explaining their inference processes and results: the input passes through many layers, and a single prediction can involve millions of mathematical operations. It is difficult for humans to follow the exact mapping from input data to the predicted result, since doing so would require reasoning about millions of weights interacting in complex ways. To interpret the behavior and predictions of neural networks, we therefore need dedicated interpretation methods. Thus, a new frontier is opening for researchers.
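One common family of interpretation methods the abstract alludes to is gradient-based saliency: attributing a prediction to input features via the gradient of the output with respect to the input. The sketch below is illustrative only (the paper does not prescribe this particular method); the tiny two-layer network, its random weights, and the `forward`/`saliency` helpers are all hypothetical, built in plain NumPy so the backward pass can be written out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for a 4-input, 3-hidden-unit, 1-output network.
W1 = rng.normal(size=(4, 3))
b1 = np.zeros(3)
W2 = rng.normal(size=(3, 1))
b2 = np.zeros(1)

def forward(x):
    """Forward pass: tanh hidden layer, linear output (a scalar)."""
    h = np.tanh(x @ W1 + b1)
    return (h @ W2 + b2).item()

def saliency(x):
    """Gradient of the scalar output w.r.t. the input, by manual backprop."""
    h = np.tanh(x @ W1 + b1)
    # d(output)/d(h) = W2;  d(h)/d(pre-activation) = 1 - tanh^2
    dh = W2[:, 0] * (1.0 - h ** 2)
    # Chain rule back to the input: one attribution value per feature.
    return W1 @ dh

x = rng.normal(size=4)
print("prediction:", forward(x))
print("saliency  :", np.abs(saliency(x)))  # larger magnitude -> more influential feature
```

Even for this four-weight-column toy model, the attribution already requires tracing the chain rule through every layer; for a real DNN with millions of weights, the same computation is delegated to automatic differentiation, which is why saliency-style methods scale where manual inspection of the mapping does not.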