
Security Risks of ML Models: Adversarial Machine Learning

Aidos Askhatuly, D. Berdysheva, D. Yedilkhan
2024 IEEE 4th International Conference on Smart Information Systems and Technologies (SIST)

By implementing and analyzing an adversarial attack on image classification models, this work demonstrates its detrimental effects on model performance and robustness and underscores the urgent need for proactive measures to safeguard machine learning systems against security threats.

Abstract

Machine learning has emerged as a transformative technology with diverse applications across various sectors. Despite its widespread adoption, the susceptibility of machine learning algorithms to security threats presents a significant challenge. While conventional security risks at hardware and network levels are well-documented, the unique vulnerabilities of machine learning systems require specialized attention.

In this paper, we delve into the critical issue of security risks in machine learning, emphasizing the growing concerns surrounding adversarial attacks. To illustrate the practical implications of these security risks, we present a detailed case study on adversarial attacks. By implementing and analyzing an adversarial attack on image classification models, we demonstrate its detrimental effects on model performance and robustness.

Our findings underscore the urgent need for proactive measures to safeguard machine learning systems against security threats. By raising awareness and fostering collaboration between researchers and practitioners, we can pave the way for more secure and resilient machine learning frameworks.
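To make the kind of attack the abstract describes concrete, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the most common adversarial attacks on image classifiers. The paper does not specify which attack its case study implements; FGSM, the toy logistic-regression "classifier," and all names below are illustrative assumptions, not the authors' code.

```python
import numpy as np

# NOTE: this is an illustrative sketch, not the paper's implementation.
# A toy logistic-regression classifier stands in for an image model.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, b, x, y):
    # Binary cross-entropy for a single example.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_perturb(w, b, x, y, eps):
    # Gradient of the loss w.r.t. the INPUT x (not the weights):
    # for logistic regression, dL/dx = (p - y) * w.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    # FGSM step: nudge every input feature by eps in the direction
    # that increases the loss.
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=16)      # fixed "trained" weights
b = 0.0
x = rng.normal(size=16)      # a clean input ("pixels")
y = 1.0 if sigmoid(w @ x + b) > 0.5 else 0.0  # model's own prediction as label

x_adv = fgsm_perturb(w, b, x, y, eps=0.5)
# The adversarial input is only eps away per feature, yet its loss is
# strictly higher, degrading the model's confidence in its prediction.
print(loss(w, b, x, y), loss(w, b, x_adv, y))
```

Because the loss of a linear model is monotone in `w @ x`, the signed step provably increases the loss here; on deep networks the same one-step gradient heuristic is what makes small, visually imperceptible perturbations so damaging to accuracy.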