Explainable Artificial Intelligence (XAI) Approach for Reinforcement Learning Systems

88 Citations · 2024
Maria J. P. Peixoto, Akramul Azim
Proceedings of the 39th ACM/SIGAPP Symposium on Applied Computing

This study uses a reinforcement learning method called DQN with uncertainty and compares the results with four baseline algorithms (DQN, DDQN, DDQN with Prioritised Replay, and Dueling DDQN) to demonstrate the significance of incorporating uncertainty into Explainable Artificial Intelligence systems.

Abstract

This study highlights the significance of incorporating uncertainty in Explainable Artificial Intelligence (XAI) systems. To achieve our purpose, we utilize Bayesian deep learning and uncertainty-aware planning, which allow us to create visual indicators that demonstrate how autonomous agents perceive their surroundings and take action. We use a reinforcement learning method called DQN with uncertainty and compare the results with four baseline algorithms: DQN, DDQN, DDQN with Prioritised Replay, and Dueling DDQN. Our findings show that the uncertainty estimate can produce a more efficient and stable decision-making model. To explain the behaviour of our agent, we propose an interface that displays the current view of the environment, the agent's view, Q-values for each possible action, the state value, a heatmap over the agent's input, and a visual representation of the uncertainty associated with each Q-value. Our approach can enhance human understanding of the algorithm's decision-making process and confidence in its performance.
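The abstract does not include code, but the core idea of attaching an uncertainty estimate to each Q-value can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a Bayesian-style approximation in which multiple stochastic forward passes of a Q-network (e.g. via Monte Carlo dropout) yield a distribution of Q-values per action, whose mean drives the greedy action and whose standard deviation serves as the per-action uncertainty shown in the proposed interface. The function name and the toy data are hypothetical.

```python
import numpy as np

def q_with_uncertainty(q_samples):
    """Summarize stochastic Q-network outputs.

    q_samples: array of shape (n_samples, n_actions), one row per
    stochastic forward pass (e.g. MC-dropout sample) for one state.
    Returns the mean Q-value and the standard deviation (uncertainty)
    for each action.
    """
    q_samples = np.asarray(q_samples, dtype=float)
    q_mean = q_samples.mean(axis=0)   # point estimate per action
    q_std = q_samples.std(axis=0)     # uncertainty per action
    return q_mean, q_std

# Toy example: 5 stochastic passes over a 3-action state, where the
# second action has the highest expected value but also the widest spread.
rng = np.random.default_rng(0)
samples = rng.normal(loc=[1.0, 2.0, 0.5],
                     scale=[0.1, 0.5, 0.05],
                     size=(5, 3))
q_mean, q_std = q_with_uncertainty(samples)
greedy_action = int(np.argmax(q_mean))
```

In an interface like the one described, `q_mean` would populate the per-action Q-value display and `q_std` the accompanying uncertainty indicator, letting a user see not only which action the agent prefers but how confident it is in that preference.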