This work introduces an interpretable machine learning methodology for cybersecurity that integrates Explainable AI methods, designed to improve an analyst's or team's ability to both operate a threat detection model and enhance that model in terms of performance, usability, and interpretability.
Cybersecurity increasingly relies on machine learning models to detect and respond to cyber threats. Many modern machine learning models for cybersecurity are opaque and largely unexplainable to users, and therefore pose serious challenges to adoption and trust, especially in high-stakes environments. A "black box" model may output a prediction, such as reporting a threat, without being able to provide any meaningful explanation to users, which understandably frustrates users and security practitioners alike. The purpose of this research study is to introduce an interpretable machine learning methodology for cybersecurity that integrates Explainable AI (XAI) methods, designed to improve an analyst's or team's ability to both operate a threat detection model and enhance that model in terms of performance, usability, and interpretability. The research produced a data-driven XAI framework that makes decision-making by teams of cybersecurity experts interpretable using the underlying machine learning models. The XAI methods supporting decision-making included interpretable algorithms (e.g., decision trees and rule-based algorithms) and post-hoc explanation techniques such as LIME. The study also demonstrated measurable improvements in threat detection accuracy using interpretable machine learning models, while providing human-interpretable, legible, and understandable explanations of model predictions. These benefits aid decision making, reduce response times, and improve communication between data science and cybersecurity practitioners. The framework also uses interactive visualization tools to increase engagement, decrease reliance on black-box models, and encourage informed, data-driven security behaviors.
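To illustrate the kind of explanation support described above, the following is a minimal sketch, not the study's actual implementation: it pairs a hypothetical black-box threat detector with a local LIME explanation of a single alert and a shallow decision-tree surrogate for a global, rule-style view. The dataset, feature names, and model choices are assumed placeholders introduced only for illustration.

```python
# Minimal sketch (assumed setup): a black-box threat detector explained with LIME,
# plus an interpretable decision-tree surrogate. Dataset and feature names are
# synthetic placeholders, not the study's data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in for tabular network-traffic features.
feature_names = ["bytes_sent", "bytes_received", "failed_logins",
                 "session_duration", "dst_port_entropy", "packet_rate"]
X, y = make_classification(n_samples=2000, n_features=len(feature_names),
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Black box" detector that flags malicious activity.
detector = RandomForestClassifier(n_estimators=200, random_state=0)
detector.fit(X_train, y_train)

# Post-hoc, local explanation of a single alert with LIME.
explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                 class_names=["benign", "malicious"],
                                 mode="classification")
alert = X_test[0]
explanation = explainer.explain_instance(alert, detector.predict_proba,
                                         num_features=4)
print(explanation.as_list())  # feature contributions an analyst can read

# Global, rule-style view: a shallow decision tree distilled from the detector.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, detector.predict(X_train))
print(export_text(surrogate, feature_names=feature_names))
```

In this sketch the LIME output gives an analyst a per-alert list of feature contributions, while the surrogate tree provides compact if-then rules approximating the detector's overall behavior; both are intended only as examples of the interpretable and post-hoc methods the abstract names.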