
Interpretable Machine Learning

540 Citations · 2021
Valerie Chen, Jeffrey Li, Joon Sik Kim


Abstract

The emergence of machine learning as a society-changing technology in the past decade has triggered concerns about people's inability to understand the reasoning of increasingly complex models. The field of IML (interpretable machine learning) grew out of these concerns, with the goal of empowering various stakeholders to tackle use cases such as building trust in models, performing model debugging, and generally informing real human decision-making.
