Explore groundbreaking research papers on Explainable AI that provide deep insights and advancements in making AI systems understandable to humans. Stay up-to-date with leading methodologies and significant findings that are shaping the future of AI explainability. Delve into top research works that ensure AI transparency, helping to bridge the gap between complex models and user comprehension.
Nadeesha D. Hettikankanamage, Niusha Shafiabady, Fiona Chatteur + 3 more
Sensors (Basel, Switzerland)
This systematic cross-domain review surveys XAI techniques applied to quantitative prediction tasks, focusing on their methodological relevance and potential adaptation to biomedical imaging and sensing, and provides guidance for future research aimed at safe and interpretable AI deployment in these domains.
A. Anguita-Ruiz, Alberto Segura-Delgado, R. Alcalá + 2 more
PLoS Computational Biology
A novel rule-based XAI strategy (comprising pre-processing, knowledge extraction, and functional validation) for finding biologically relevant sequential patterns in longitudinal human gene expression data (GED), demonstrating its effectiveness for mining biologically relevant gene-gene temporal relations.
Chalamalla Nikhitha Reddy
International Journal on Science and Technology
This study explores the application of Explainable AI (XAI) to address the inherent limitations of traditional "black-box" AI models, emphasizing transparency and interpretability in high-stakes domains such as natural hazard management.
Sheng Ye, Jiyu Li, Yifan Chai + 3 more
ArXiv
It is shown that an XAI method, applied at the point scale, can be used for cross-scale aggregation of hydrologic responses, a fundamental question in scaling problems, using hydrologic connectivity as a demonstration.
Dipak Prasad Upadhyaya, K. Prantzalos, P. Golnari + 5 more
AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational Science
Amblyopia is a neurodevelopmental disorder affecting children's visual acuity, requiring early diagnosis for effective treatment. Traditional diagnostic methods rely on subjective evaluations of recordings from high-fidelity eye tracking instruments, performed by specialized pediatric ophthalmologists who are often unavailable in rural, low-resource clinics. As such, there is an urgent need for a scalable, low-cost, high-accuracy approach to automatically analyze eye tracking recordings. Large Language Models (LLMs) show promise in the accurate detection of amblyopia; our prior work has...
Utsab Khakurel, D. Rawat
journal unavailable
This paper investigates XAI for algorithmic trustworthiness and transparency through example use cases, using the SHAP (SHapley Additive exPlanations) library to visualize the effect of features, individually and cumulatively, on the prediction process.
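The per-feature attributions that SHAP produces can be illustrated with a self-contained sketch. The function below computes exact Shapley values by subset enumeration, which is what SHAP approximates efficiently for large models; the linear scoring model, feature values, and baseline here are all hypothetical.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for model f at point x, relative to a
    baseline input. Each phi[i] is feature i's contribution to
    f(x) - f(baseline), averaged over all orderings of feature arrival."""
    n = len(x)

    def masked(subset):
        # Features in `subset` take their value from x; the rest stay at baseline.
        return [x[i] if i in subset else baseline[i] for i in range(n)]

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (f(masked(set(S) | {i})) - f(masked(set(S))))
    return phi

# Hypothetical scoring model: a hand-set linear score over 3 features.
w, c = [0.5, -2.0, 1.0], 0.1
model = lambda v: sum(wi * vi for wi, vi in zip(w, v)) + c

x = [4.0, 1.0, 3.0]     # instance to explain
base = [2.0, 1.0, 0.0]  # reference ("average") input
phi = shapley_values(model, x, base)
# For a linear model, phi[i] equals w[i] * (x[i] - base[i]),
# and the attributions always sum to f(x) - f(base).
print([round(p, 6) for p in phi])
```

The cumulative view mentioned in the abstract corresponds to summing these per-feature values: they add up exactly to the difference between the model's prediction and its baseline output.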
Otávio Morato de Andrade, Marco Antônio Sousa Alves
Revista Thesis Juris
The article concludes that XAI can contribute to the identification of biases in algorithmic models, then it is suggested that the ability to “explain” should be a requirement for adopting AI systems in sensitive areas such as court decisions.
Akshansh Mishra, V. Jatti, Eyob Messele Sefene + 1 more
Applied Mechanics
This research work focuses on predicting the surface roughness of additive-manufactured polylactic acid (PLA) specimens using eight different supervised machine learning regression algorithms, and demonstrates that the XGBoost algorithm provides the most accurate surface-roughness predictions among them.
Lauren Hoffman, M. Mazloff, S. Gille + 2 more
Artificial Intelligence for the Earth Systems
This work trains linear regression and convolutional neural network models to make one-day predictions of sea-ice velocity in the Arctic from inputs of present-day wind velocity and previous-day ice velocity and concentration, and shows that wind speed over land is highly relevant for predicting ice motion offshore.
Latifa Dwiyanti, Hidetaka Nambo, Nur Hamid
AI
The results indicated that incorporating XAI and textual summaries significantly improved expert explainability and increased confidence in the model’s ability to support treatment decisions for ADPKD patients.
S. Wagan, S. Sidra
Journal of Industrial Integration and Management
It is suggested that XAI considerably improves trust and understanding among project managers, further boosting the quality of decision-making; the study empirically demonstrates the significance of XAI as a means of building trust and comprehension within the project management environment.
David Gunning, M. Stefik, Jaesik Choi + 3 more
Science Robotics
This research presents a meta-modelling architecture that automates the labor-intensive, time-consuming, and therefore expensive process of manually cataloging artificial intelligence applications.
Tiago Martins, Ana Maria De Almeida, Elsa Cardoso + 1 more
IEEE Access
A new, unified yet simple taxonomy for categorizing XAI methods is provided, together with a survey of the explainability methods currently applied in finance.
W. Choi, Chan-Tong Lam, P. Pang + 1 more
Proceedings of the 30th ACM Conference on Innovation and Technology in Computer Science Education V. 1
It was found that behavioral and academic performance data were the most commonly used features, with prediction goals focused mainly on course failure risk or grades, and that SHAP (SHapley Additive exPlanations) was the most frequently used XAI technique.
P. Biecek, M. Chlebus, Janusz Gajda + 5 more
ArXiv
A comparison of various predictive models (logistic regression, logistic regression with weight-of-evidence transformations, and modern artificial intelligence algorithms) is presented, and it is shown that advanced tree-based models give the best results in predicting client default.
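The weight-of-evidence transformation used in credit scoring can be sketched in a few lines: for a binned feature, each bin's WoE is the log ratio of its share of non-defaulters ("goods") to its share of defaulters ("bads"). The three-bin counts below are hypothetical.

```python
import math

def weight_of_evidence(goods, bads):
    """Per-bin WoE for a binned feature:
    WoE = ln( (% of all goods in bin) / (% of all bads in bin) ).
    Positive WoE marks a lower-risk bin, negative a higher-risk bin."""
    total_goods, total_bads = sum(goods), sum(bads)
    return [math.log((g / total_goods) / (b / total_bads))
            for g, b in zip(goods, bads)]

# Hypothetical income feature split into 3 bins: counts of
# non-defaulters (goods) and defaulters (bads) per bin.
goods = [100, 300, 600]   # 1000 non-defaulters total
bads  = [ 60,  30,  10]   # 100 defaulters total
woe = weight_of_evidence(goods, bads)
# Bin 1: ln((100/1000)/(60/100)) = ln(0.1/0.6) < 0  -> higher-risk bin
# Bin 3: ln((600/1000)/(10/100)) = ln(0.6/0.1) > 0  -> lower-risk bin
```

Replacing raw feature values with their bin's WoE gives logistic regression a monotone, comparably scaled input per feature, which is one reason the transformation remains popular in credit-default models.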
Harikha Manthena, Shaghayegh Shajarian, Jeffrey Kimmell + 3 more
IEEE Access
This survey presents a comprehensive review of state-of-the-art ML techniques for malware analysis, with a specific focus on explainability methods, and examines existing XAI frameworks, their application in malware classification and detection, and the challenges associated with making malware detection models more interpretable.
J. Lötsch, D. Kringel, A. Ultsch
BioMedInformatics
This review focuses on the requirement that XAIs must be able to explain in detail the decisions made by the AI to the experts in the field.
Lokesh Gupta, Dr. Dinesh Chandra Misra
International Research Journal of MMC
An interpretable machine learning methodology for cybersecurity is introduced, integrating Explainable AI methods designed to improve an analyst's or team's ability both to operate a threat detection model and to enhance it in terms of performance, usability, and interpretability.
W. Kongkitkul, S. Youwai, Siwipa Khamsoy + 1 more
ArXiv
A deep learning model with a multi-layer perceptron architecture is used to predict MR and DS based on aggregate gradation parameters derived from Bailey's Method, including coarse aggregate ratio (CA), fine aggregate coarse ratio (FAc), and other mix design variables.
Riza Ibrahim, Hilda Azkiyah
International Journal of Health, Medicine, and Sports
Drug discovery is a complex, lengthy, and costly process with a high failure rate, especially during clinical trials. The integration of Artificial Intelligence (AI) has revolutionized various stages of drug discovery by enabling faster and more accurate analysis of biological and chemical data. However, most AI models in this field operate as “black boxes,” where their decision-making processes are opaque and difficult to interpret. This lack of transparency poses significant challenges in terms of trust, validation, and adoption of AI-generated predictions in both clinical and regulatory set...