Dive into our curated list of top research papers on XAI, showcasing pivotal advancements in the field of Explainable AI. Gain insight into methodologies and applications from leading experts. Whether you're a student, researcher, or enthusiast, these papers will elevate your understanding and appreciation of XAI.
Ranu Sewada, Ashwani Jangid, Piyush Kumar + 1 more
international journal of food and nutritional sciences
This research paper delves into the essence of XAI, unraveling its significance across diverse domains such as healthcare, finance, and criminal justice. It scrutinizes the delicate balance between interpretability and performance, shedding light on instances where the pursuit of accuracy may compromise explainability.
Michael Ridley
Information Technology and Libraries
The field of explainable artificial intelligence (XAI) advances techniques, processes, and strategies that provide explanations for the predictions, recommendations, and decisions of opaque and complex machine learning systems. Increasingly academic libraries are providing library users with systems, services, and collections created and delivered by machine learning. Academic libraries should adopt XAI as a tool set to verify and validate these resources, and advocate for public policy regarding XAI that serves libraries, the academy, and the public interest.
David Gunning, M. Stefik, Jaesik Choi + 3 more
Science Robotics
This research presents a meta-modelling architecture that automates the labor-intensive, time-consuming, and therefore expensive process of manually cataloging artificial intelligence applications.
Emer Owens, Barry Sheehan, Martin Mullins + 3 more
Risks
The study finds that XAI methods are particularly prevalent in claims management, underwriting and actuarial pricing practices, and proposes an adapted definition of XAI informed by the systematic review of XAI literature in insurance.
V. Chamola, Vikas Hassija, A. Sulthana + 3 more
IEEE Access
This paper presents a comprehensive review of the state of the art on how to build trustworthy and explainable AI, taking into account that AI often operates as a black box with little insight into its underlying structure.
A new frontier is opening for researchers in explainability and explainable AI to interpret the behavior and predictions of neural networks.
Naitik A. Pawar, Sanika g. Mukhmale, A. MuleSaikumar + 1 more
journal unavailable
Explainable artificial intelligence is often discussed in relation to deep learning and plays an important role in the FAT -- fairness, accountability and transparency -- ML model. XAI is useful for organizations that want to build trust when implementing an AI. XAI can help them understand an AI model's behavior, helping to find potential issues such as AI biases. XAI counters the "black box" tendency of machine learning, where even the AI's designers cannot explain why it arrived at a specific decision. XAI helps human users understand the reasoning behind AI and machine learning (ML) algo...
F. Hussain, R. Hussain, E. Hossain
ArXiv
The remarkable advancements in Deep Learning (DL) algorithms have fueled enthusiasm for using Artificial Intelligence (AI) technologies in almost every domain; however, the opaqueness of these algorithms puts a question mark on their applications in safety-critical systems. In this regard, the 'explainability' dimension is essential not only to explain the inner workings of black-box algorithms, but also to add accountability and transparency dimensions that are of prime importance for regulators, consumers, and service providers. eXplainable Artificial Intelligence (XAI) is the set of te...
Erico Tjoa, Cuntai Guan
IEEE Transactions on Neural Networks and Learning Systems
A review of the interpretabilities suggested by different research works is provided, and these are categorized, in the hope that insight into interpretability will come with more consideration for medical practice, and that initiatives to push forward data-based, mathematically grounded, and technically grounded medical education will be encouraged.
Tim Hulsen
AI
This narrative review will have a look at some central concepts in XAI, describe several challenges around XAI in healthcare, and discuss whether it can really help healthcare to advance by increasing understanding and trust.
Timo Speith
Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency
This paper reviews recent approaches to constructing taxonomies of XAI methods, discusses general challenges concerning them as well as their individual advantages and limitations, and proposes and discusses three possible solutions.
Christian Meske, B. Abedin, I. Junglas + 1 more
journal unavailable
The use of Artificial Intelligence in the context of decision analytics and service science has received significant attention in academia and practice alike, but much of the current efforts have focused on advancing underlying algorithms and not on decreasing the complexity of AI systems.
Thomas Rojat, Raphael Puget, David Filliat + 3 more
ArXiv
This paper presents an overview of existing explainable AI (XAI) methods applied on time series and illustrates the type of explanations they produce and provides a reflection on the impact of these explanation methods to provide confidence and trust in the AI systems.
İbrahim Kök, Feyza Yıldırım Okay, Özgecan Muyanlı + 1 more
IEEE Internet of Things Journal
An in-depth and systematic review of recent studies that use XAI models in the IoT domain is provided, classifying the studies according to their methodology and application areas.
Barnaby Crook, Maximilian Schluter, Timo Speith
2023 IEEE 31st International Requirements Engineering Conference Workshops (REW)
It is argued that the supposed trade-off between explainability and performance is best approached in a nuanced way that incorporates resource availability, domain characteristics, and considerations of risk.
David Gunning, D. Aha
AI Mag.
In a series of ongoing evaluations, the developer teams are assessing how well their XAI systems' explanations improve user understanding, user trust, and user task performance.
N. C. Chung, Hongkyou Chung, Hearim Lee + 3 more
ArXiv
This paper analyzes legislative and policy developments in the United States and the European Union, such as the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the AI Act, the AI Liability Directive, and the General Data Protection Regulation, from a right to explanation perspective, to argue that these AI regulations and current market conditions threaten effective AI governance and safety.
G. Chaudhary
Kutafin Law Review
The argument presented is that XAI is crucial in judicial contexts, as it empowers judges to make informed decisions based on algorithmic outcomes; a lack of transparency in decision-making processes can impede judges' ability to do so effectively.
N. Shafiabady, Nick Hadjinicolaou, Nadeesha Hettikankanamage + 3 more
PLOS ONE
This study aims to demystify the decision-making processes of the prediction model using XAI, essential for the ethical deployment of AI, fostering trust and transparency in these systems.
Julie Gerlings, Arisa Shollo, Ioanna D. Constantiou
journal unavailable
A systematic review of xAI literature on the topic identifies four thematic debates central to how xAI addresses the black-box problem and synthesizes the findings into a future research agenda to further the xAI body of knowledge.
David Gunning
Proceedings of the 24th International Conference on Intelligent User Interfaces
DARPA's Explainable Artificial Intelligence (XAI) program endeavors to create AI systems whose learned models and decisions can be understood and appropriately trusted by end users. This talk summarizes the XAI program and presents highlights from its Phase 1 evaluations.
A. Laios, D. de Jong, E. Kalampokis
Translational Cancer Research
This narrative review explores the utilization of model-agnostic explainable artificial intelligence frameworks in oncology and the role these models play in medicine's therapeutic decisions.
Haoyi Xiong, Xuhong Li, Xiaofei Zhang + 5 more
ArXiv
This work distills XAI methodologies into data mining operations on training and testing data across modalities, such as images, text, and tabular data, as well as on training logs, checkpoints, models and other DNN behavior descriptors.
K. Sudar, P. Nagaraj, S. Nithisaa + 3 more
2022 International Conference on Sustainable Computing and Data Communication Systems (ICSCDS)
From this article, one can understand the analysis of Alzheimer's by using XAI with the corresponding feature explanation, so that the result is much more trustworthy and reliable.
Zvjezdana Krstić, Mirjana Maksimović
Proceedings of the 29th International Scientific Conference Strategic Management and Decision Support Systems in Strategic Management
The key role of XAI in shaping future trends in marketing research and its implications for businesses operating in a dynamic market environment are highlighted, with the aim of driving marketing practices in a data-dominated era.
Timo Speith, Markus Langer
2023 IEEE 31st International Requirements Engineering Conference Workshops (REW)
This novel perspective is intended to augment the perspective of other authors by focusing less on the EMs themselves and more on what explainability approaches intend to achieve (i.e., provide good explanatory information, facilitate understanding, satisfy societal desiderata).
Arun Das, P. Rad
ArXiv
A taxonomy is proposed, categorizing XAI techniques based on their scope of explanations, the methodology behind the algorithms, and their explanation level or usage, which helps build trustworthy, interpretable, and self-explanatory deep learning models.
José Luis Corcuera Bárcena, Mattia Daole, P. Ducange + 4 more
journal unavailable
The concept of Federated Learning of eXplainable AI (XAI) models, in short FED-XAI, is purposely designed to address two requirements simultaneously: preserving data privacy and ensuring a certain level of explainability of the system.
Aryan Sethi, Sahiti Dharmavaram, S. K. Somasundaram
2024 3rd International Conference on Artificial Intelligence For Internet of Things (AIIoT)
A predictive model for heart disease with 96.07% accuracy is developed, integrating factors like peak exercise ST segment slope, maximum heart rate, and exercise-induced angina, and enables interpretable predictions, facilitating early detection and informed management.
Janet Hsiao, H. Ngai, Luyu Qiu + 2 more
ArXiv
The current report aims to propose suitable metrics for evaluating XAI systems from the perspective of the cognitive states and processes of stakeholders, and elaborates on seven dimensions: goodness, satisfaction, user understanding, curiosity & engagement, trust & reliance, controllability & interactivity, and learning curve & productivity.
Evandro S. Ortigossa, Thales Gonçalves, L. G. Nonato
IEEE Access
The theoretical foundations of Explainable Artificial Intelligence (XAI) are provided, clarifying diffuse definitions and identifying research objectives, challenges, and future research lines related to turning opaque machine learning outputs into more transparent decisions.
C. I. Nwakanma, Love Allen Chijioke Ahakonye, J. Njoku + 5 more
Applied Sciences
XAI, though in its infancy of application to ICVs, is a promising research direction in the quest to improve the network efficiency of ICVs, and its increased transparency will foster acceptability in the automobile industry.
Krishna P. Kalyanathaya, K. K.
International Journal of Applied Engineering and Management Letters
A common approach to building explainable models that may address the current challenges of XAI is conceptualized, and some open research agenda items are brought out in the findings and future directions.
Jeetesh Sharma, Murari Lal Mittal, G. Soni + 1 more
Recent Patents on Engineering
Based on the findings, XAI techniques can bring new insights and opportunities for addressing critical maintenance issues, resulting in more informed decisions, and the results analysis suggests a viable path for future studies.
Bart M. de Vries, G. Zwezerijnen, G. Burchell + 3 more
Frontiers in Medicine
There is currently no clear consensus on how XAI should be deployed in order to close the gap between medical professionals and DL algorithms for clinical implementation, so the authors advocate for systematic technical and clinical quality assessment of XAI methods.
Het Naik, Priyanka Goradia, Vomini Desai + 2 more
European Journal of Electrical Engineering and Computer Science
This study explores Explainable Artificial Intelligence in general, discusses its potential use for the Indian healthcare system, and demonstrates some XAI techniques on a diabetes dataset with the aim of showing practical implementation.
Pieter Barnard, N. Marchetti, L. Dasilva
IEEE Networking Letters
Experiments conducted on the NSL-KDD dataset show that the solution is able to accurately detect new attacks encountered during testing, while its overall performance is comparable to numerous state-of-the-art works from the cybersecurity literature.
Harsh Mankodiya, M. Obaidat, Rajesh Gupta + 1 more
2021 International Conference on Communications, Computing, Cybersecurity, and Informatics (CCCI)
Artificial intelligence (AI) is the most looked up technology with a diverse range of applications across all the fields, whether it is intelligent transportation systems (ITS), medicine, healthcare, military operations, or others. One such application is autonomous vehicles (AVs), which comes under the category of AI in ITS. Vehicular Adhoc Networks (VANET) makes communication possible between AVs in the system. The performance of each vehicle depends upon the information exchanged between AVs. False or malicious information can perturb the whole system leading to severe consequences. Hence, ...
Khushi Kalasampath, K. N. Spoorthi, S. Sajeev + 3 more
IEEE Access
As AI technologies, particularly deep learning models, have advanced, their inherent "black box" nature has raised significant concerns regarding accountability, fairness, and trust, especially in critical domains such as healthcare, finance, and criminal justice. We present a detailed exploration of XAI, emphasizing its essential role in improving the interpretability and transparency of complex AI systems in various application domains. Health-related applications were notably using XAI, emphasizing diagnostics and medical imaging. Other notable domains of XAI use encompass environm...
Shahid Alam, Zeynep Altıparmak
ArXiv
This paper presents a formal definition of the terms CF and XAI-CF and a comprehensive literature review of previous works that apply and utilize XAI to build and increase trust in CF.
Pieter Barnard, I. Macaluso, N. Marchetti + 1 more
ICC 2022 - IEEE International Conference on Communications
Using real-world data to develop an AI model for STRR, it is demonstrated how the XAI methodology can be used to explain the real-time decisions of the model, to reveal trends about the model’s general behaviour, as well as aid in the diagnosis of potential faults during the model's development.
Isha Hameed, Samuel Sharpe, Daniel Barcklow + 5 more
ArXiv
This work aims to show how varying perturbations and adding simple guardrails can help to avoid potentially flawed conclusions, how treatment of categorical variables is an important consideration in both post-hoc explainability and ablation studies, and how to identify useful baselines for XAI methods and viable perturbation studies.
Rita Ganguly, Dharmpal Singh
International Journal of Advanced Computer Science and Applications
J. Contreras, T. Bocklitz
journal unavailable
This work proposes an agnostic XAI method based on the Volterra series that approximates models and presents relevance maps indicating higher and lower contributions to the approximation prediction (logit).
R. Machlev, M. Perl, J. Belikov + 2 more
IEEE Transactions on Industrial Informatics
By means of this approach, the PQD classifier outputs are optimized to be both accurate and easy to understand, allowing experts to make informed and trustworthy decisions.
Muhammad Monjurul Karim, Yu Li, Ruwen Qin
ArXiv
A Gated Recurrent Unit (GRU) network that learns spatio-temporal relational features for the early anticipation of traffic accidents from dashcam video data is presented, and results confirm that the proposed AI model, with a human-inspired design, can outperform humans in accident anticipation.
J. Bae
The Academic Society of Global Business Administration
The XAI methodology can contribute to increasing the transparency and interpretability of various AI-based prediction models, and to increasing the fairness and reliability of AI results between financial consumers and financial institutions.
Thomas Gniadek, Jason Kang, Talent Theparee + 1 more
Online Journal of Public Health Informatics
A framework for classifying XAI algorithms applied to clinical medicine is presented and communication modalities used to convey an XAI explanation can be used to classify algorithms and may affect clinical outcomes.
Vibekananda Dutta, T. Zielińska
J. Autom. Mob. Robotics Intell. Syst.
An Explainable Artificial Intelligence (XAI) based approach to action forecasting using a structured database and object affordances definition is presented, and the efficiency of the presented solution was compared to other baseline algorithms.
Hana Charaabi, Hiba Mzoughi, R. E. Hamdi + 1 more
2023 International Conference on Cyberworlds (CW)
An overview of the application of XAI in Deep Learning-based Magnetic Resonance Imaging (MRI) image analysis for Brain Tumor (BT) diagnosis is presented; three groups of post-hoc XAI methods improved confidence in the DL-based brain tumor diagnosis.