
Security of AI Agents

Yifeng He, Ethan Wang, Yuyang Rong
2025 IEEE/ACM International Workshop on Responsible AI Engineering (RAIE)

This paper identifies and describes, from a system security perspective, vulnerabilities in the current development of AI agents, and delineates methods to make AI agents safer and more reliable.

Abstract

AI agents have been boosted by large language models: they can function as intelligent assistants and complete tasks on behalf of their users, with access to tools and the ability to execute commands in their environments. Through studying the workflow of typical AI agents and working with them, we have raised several concerns regarding their security. These potential vulnerabilities are not addressed by the frameworks used to build the agents, nor by research aimed at improving them. In this paper, we identify and describe these vulnerabilities in detail from a system security perspective, emphasizing their causes and severe effects. Furthermore, we introduce defense mechanisms corresponding to each vulnerability, with designs and experiments to evaluate their viability. Altogether, this paper contextualizes the security issues in the current development of AI agents and delineates methods to make AI agents safer and more reliable.