The rapid development of artificial intelligence models, in particular large language models (LLMs), has enabled the industry to perform a wide range of tasks across many fields, from machine translation to the construction of complex question-answering systems. However, alongside its obvious advantages, the rapid growth in the use of these models also carries a number of risks for information system security. The paper analyzes the vulnerabilities of systems that use large language models, both from the standpoint of legal regulation and from an engineering and application standpoint. In addition, a detailed analysis of the possible attack paths on LLM-based systems, as compiled by OWASP, is presented.