
Threat Modelling and Risk Analysis for Large Language Model (LLM)-Powered Applications

2024 · 1 citation
Stephen Burabari Tete
ArXiv


Abstract

The advent of Large Language Models (LLMs) has revolutionized various applications by providing advanced natural language processing capabilities. However, this innovation introduces new cybersecurity challenges. This paper explores threat modeling and risk analysis tailored specifically to LLM-powered applications. Focusing on potential attacks such as data poisoning, prompt injection, SQL injection, jailbreaking, and compositional injection, we assess their impact on security and propose mitigation strategies. We introduce a framework combining the STRIDE and DREAD methodologies for proactive threat identification and risk assessment. Furthermore, we examine the feasibility of an end-to-end threat model through a case study of a custom-built LLM-powered application. This model follows Shostack's Four Question Framework, adjusted for the unique threats LLMs present. Our goal is to propose measures that enhance the security of these powerful AI tools, thwart attacks, and ensure the reliability and integrity of LLM-integrated systems.
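To make the DREAD component of the framework concrete, the following is a minimal illustrative sketch of DREAD-style risk scoring, in which each threat is rated 0–10 on five criteria and the risk score is their average. The class name, the example threat, and the specific ratings are assumptions for illustration, not values from the paper.

```python
# Illustrative DREAD risk scoring sketch; names and ratings are hypothetical,
# not taken from the paper's case study.
from dataclasses import dataclass


@dataclass
class DreadScore:
    """Ratings (0-10) for the five DREAD criteria."""
    damage: int            # How bad is the impact if exploited?
    reproducibility: int   # How reliably can the attack be repeated?
    exploitability: int    # How easy is it to launch the attack?
    affected_users: int    # How many users are impacted?
    discoverability: int   # How easy is the flaw to find?

    def risk(self) -> float:
        # DREAD risk is commonly computed as the mean of the five ratings.
        return (self.damage + self.reproducibility + self.exploitability
                + self.affected_users + self.discoverability) / 5


# Hypothetical rating of a prompt-injection threat against an LLM app.
prompt_injection = DreadScore(damage=8, reproducibility=9, exploitability=8,
                              affected_users=7, discoverability=9)
print(prompt_injection.risk())  # 8.2
```

A higher score would prioritize the threat for mitigation; in practice, teams pair such scores with the STRIDE category of each threat to decide which controls to apply first.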