Agnostic eXplainable Artificial Intelligence (XAI) Method Based on Volterra Series

J. Contreras, T. Bocklitz (2023)


Abstract

Convolutional Neural Networks (CNNs) have shown remarkable results in several fields in recent years. Traditional performance metrics assess model performance but fail to detect biases in datasets and models. Explainable artificial intelligence (XAI) methods aim to evaluate models, identify biases, and clarify model decisions. We propose an agnostic XAI method based on the Volterra series that approximates a target model. Our architecture is composed of three second-order Volterra layers. Relevant information can be extracted from the approximating model and used to generate relevance maps that explain the contribution of the input elements to the prediction. Our Volterra-XAI learns its Volterra kernels directly from data and is trained on the target model's outputs. Therefore, no labels are required, and even when the original training data is unavailable, an approximation can still be built from similar data. The trustworthiness of our method can be measured by how reliably the Volterra approximation reproduces the original model. We evaluate our XAI method on classification tasks with 1D Raman spectra and 2D images, using two common CNN architectures without hyperparameter tuning, and present relevance maps indicating higher and lower contributions to the approximation's prediction (logit).
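
For context, the following is the standard discrete-time Volterra series truncated at second order, the order used by each of the three layers above; it is the textbook definition rather than a formula quoted from the paper. With memory length M, a constant kernel h_0, a linear kernel h_1, and a quadratic kernel h_2, the output is

y[n] = h_0 + \sum_{\tau_1=0}^{M-1} h_1[\tau_1]\, x[n-\tau_1] + \sum_{\tau_1=0}^{M-1} \sum_{\tau_2=0}^{M-1} h_2[\tau_1,\tau_2]\, x[n-\tau_1]\, x[n-\tau_2].

The quadratic kernel h_2 is what lifts the surrogate beyond a linear approximation, letting it capture pairwise interactions between input elements.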
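
The abstract names the ingredients but not an implementation, so here is a minimal PyTorch sketch of one second-order Volterra layer for 1D signals and a label-free, distillation-style fit against the target model's logits. Everything in it is an assumption made for illustration: the names Volterra2Layer and fit_surrogate are hypothetical, the MSE loss and single-channel 1D setting are choices of this sketch, the paper's actual architecture stacks three such layers, and a final reduction to a scalar logit (e.g., pooling plus a linear head) is left out.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Volterra2Layer(nn.Module):
    """Second-order Volterra layer (hypothetical sketch):
    y[n] = h0 + sum_i h1[i] x[n-i] + sum_{i,j} h2[i,j] x[n-i] x[n-j],
    evaluated over a sliding window of length `memory`."""
    def __init__(self, memory: int):
        super().__init__()
        self.memory = memory
        self.h0 = nn.Parameter(torch.zeros(1))
        self.h1 = nn.Parameter(0.01 * torch.randn(memory))
        self.h2 = nn.Parameter(0.01 * torch.randn(memory, memory))

    def forward(self, x):
        # x: (batch, length); left-pad so the output keeps the input length
        xp = F.pad(x, (self.memory - 1, 0))
        win = xp.unfold(1, self.memory, 1)              # (batch, length, memory)
        linear = win @ self.h1                          # (batch, length)
        quad = torch.einsum('blm,mk,blk->bl', win, self.h2, win)
        return self.h0 + linear + quad

def fit_surrogate(surrogate, target_model, loader, epochs=10, lr=1e-3):
    # Distillation-style fit: match the target model's logits, so no labels
    # are needed and `loader` may yield merely similar (unlabeled) data.
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        for x in loader:
            with torch.no_grad():
                teacher = target_model(x)               # logit(s) to imitate
            loss = F.mse_loss(surrogate(x), teacher)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return surrogate

Once fitted, the learned kernels h1 and h2 carry the interpretable structure: combined with a given input, they indicate which elements (and pairs of elements) push the approximated logit up or down, which is the information the relevance maps visualize.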