
Explainable Artificial Intelligence (XAI) for Malware Analysis: A Survey of Techniques, Applications, and Open Challenges

2 Citations • 2024
Harikha Manthena, Shaghayegh Shajarian, Jeffrey Kimmell
IEEE Access

Abstract

Machine learning (ML) has rapidly advanced in recent years, revolutionizing fields such as finance, medicine, and cybersecurity. In malware detection, ML-based approaches have demonstrated high accuracy; however, their lack of transparency poses a significant challenge. Traditional black-box models often fail to provide interpretable justifications for their predictions, limiting their adoption in security-critical environments where understanding the reasoning behind a detection is essential for threat mitigation and response. Explainable AI (XAI) addresses this gap by enhancing model interpretability while maintaining strong detection capabilities. This survey presents a comprehensive review of state-of-the-art ML techniques for malware analysis, with a specific focus on explainability methods. We examine existing XAI frameworks, their application in malware classification and detection, and the challenges associated with making malware detection models more interpretable. Additionally, we explore recent advancements and highlight open research challenges in the field of explainable malware analysis. By providing a structured overview of XAI-driven malware detection approaches, this survey serves as a valuable resource for researchers and practitioners seeking to bridge the gap between ML performance and explainability in cybersecurity.
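
As a minimal illustration of the kind of XAI-driven workflow this survey reviews, the sketch below applies SHAP (one of the widely used explainability frameworks) to a toy malware classifier. This example is not taken from the paper: the data, feature names, and model are synthetic stand-ins for real static-analysis features such as PE header fields or API-call counts.

    # Minimal sketch (not from the paper): per-prediction feature
    # attributions for a toy "malware" classifier using SHAP.
    import numpy as np
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Synthetic binary dataset standing in for benign (0) vs. malicious (1)
    # samples; in practice these would be extracted static/dynamic features.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    feature_names = [f"feature_{i}" for i in range(X.shape[1])]

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer decomposes each prediction into per-feature
    # contributions, turning the black-box score into an interpretable
    # breakdown an analyst can inspect.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:1])

    # For binary classifiers, shap_values may be a list with one array per
    # class or a single 3-D array, depending on the SHAP version.
    vals = shap_values[1] if isinstance(shap_values, list) else shap_values[0, :, 1]
    ranked = sorted(zip(feature_names, np.ravel(vals)), key=lambda t: -abs(t[1]))
    for name, v in ranked[:5]:
        print(f"{name}: {v:+.3f}")

The printed output ranks the features that pushed this one sample toward the "malicious" class, which is the kind of interpretable justification the abstract argues black-box detectors lack.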