
Interpretability of Deep Learning

2022
Z. Huang, Fan Li, Zhanliang Wang
International Journal of Future Computer and Communication

This paper reviews current methodologies and techniques for improving the interpretability of Deep Learning from different research directions and provides an outlook for Deep Learning researchers.

Abstract

Deep Learning achieves surprising performance in many real-world tasks. However, as a black-box approach, its computational techniques have been applied without a strong critical understanding of their properties. In this paper, we review current methodologies and techniques for improving the interpretability of Deep Learning from different research directions. Some works are based on analysis of the learning process, some place more emphasis on interpreting the network architecture, and others aim to design self-interpretable Deep Learning models. This article analyzes popular and advanced works in these fields and provides an outlook for Deep Learning researchers.