
A Review on Attacks against Artificial Intelligence (AI) and Their Defence: Image Recognition and Generation

Machine Learning, Artificial Intelligence

1 Citation • 2024
Md. Tarek Hossain, Rumi Afrin, Mohd. Al-Amin Biswas
Control Systems and Optimization Letters

A review of adversarial attacks, data poisoning, model inversion attacks, and other methods that can compromise the integrity and reliability of AI-based image recognition and generation models, with a focus on attacks targeting AI models used in image recognition and generation tasks.

Abstract

The main objective of this paper is to review adversarial attacks, data poisoning, model inversion attacks, and other methods that can compromise the integrity and reliability of AI-based image recognition and generation models. As artificial intelligence (AI) systems become more widespread across numerous sectors, their vulnerability to attacks has emerged as a major concern. Our review focuses on attacks specifically targeting AI models used in image recognition and generation tasks. We examine a wide range of attack strategies, from traditional to more sophisticated techniques. These attacks exploit weaknesses in machine learning algorithms, frequently resulting in misclassification, generation of falsified images, or unauthorized access to sensitive data. We survey the numerous defense strategies developed by researchers and practitioners to address these challenges, including adversarial training, robust feature extraction, input sanitization, and model distillation. We discuss the effectiveness and limitations of each defense mechanism, highlighting the importance of a comprehensive approach that combines multiple techniques to improve the resilience of AI models. Furthermore, we examine the potential impact of these attacks on real-world applications such as autonomous vehicles, medical imaging systems, and security monitoring, emphasizing the threats to public safety and privacy. The study also covers the legislative and ethical aspects of AI security and the responsibilities of AI developers in establishing adequate defense measures. This analysis highlights the critical need for continued research and collaboration to develop more secure AI systems that can withstand sophisticated attacks. As AI evolves and is integrated into critical domains, a concerted effort must be made to strengthen these systems' resilience against adversarial threats and to ensure their responsible deployment for the benefit of society.
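To make the class of attacks discussed above concrete, the following is a minimal, illustrative sketch (not taken from the paper) of the Fast Gradient Sign Method (FGSM), one of the simplest adversarial attacks on image classifiers. It assumes a PyTorch classifier `model`, an input batch `images` with pixel values in [0, 1], and ground-truth `labels`; the perturbation budget `epsilon` is a hypothetical value chosen only for illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """One-step FGSM: perturb inputs along the sign of the loss gradient
    so that a small, bounded change tends to induce misclassification."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # L-infinity-bounded perturbation in the direction that increases the loss
    adv_images = images + epsilon * images.grad.sign()
    # Keep the adversarial images in the valid pixel range
    return adv_images.clamp(0.0, 1.0).detach()
```

Adversarial training, one of the defenses surveyed in the paper, would then mix such perturbed batches back into the training objective so the model learns to classify them correctly.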