Machine Against the RAG: Jamming Retrieval-Augmented Generation with Blocker Documents

9 Citations · 2024
Avital Shafran, R. Schuster, Vitaly Shmatikov
arXiv

TLDR

RAG systems that operate on databases with untrusted content are shown to be vulnerable to a new class of denial-of-service attacks the authors call jamming. The paper describes and measures a new method for generating blocker documents based on black-box optimization that does not rely on instruction injection.

Abstract

Retrieval-augmented generation (RAG) systems respond to queries by retrieving relevant documents from a knowledge database, then generating an answer by applying an LLM to the retrieved documents. We demonstrate that RAG systems that operate on databases with untrusted content are vulnerable to a new class of denial-of-service attacks we call jamming. An adversary can add a single "blocker" document to the database that will be retrieved in response to a specific query and result in the RAG system not answering this query, ostensibly because it lacks the information or because the answer is unsafe. We describe and measure the efficacy of several methods for generating blocker documents, including a new method based on black-box optimization. This method (1) does not rely on instruction injection, (2) does not require the adversary to know the embedding or LLM used by the target RAG system, and (3) does not use an auxiliary LLM to generate blocker documents. We evaluate jamming attacks on several LLMs and embeddings and demonstrate that the existing safety metrics for LLMs do not capture their vulnerability to jamming. We then discuss defenses against blocker documents.
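To make the attack surface concrete, the sketch below illustrates the retrieval step that a blocker document exploits. It is not the paper's black-box optimization method: it uses a toy bag-of-words embedding with cosine similarity, and the document store, function names, and blocker text are all illustrative. The point is only that a document crafted to sit close to a target query in embedding space wins retrieval and is what the LLM then sees.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding". Real RAG systems use learned dense
    # embedding models, but the nearest-neighbor retrieval logic is analogous.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query; the top k are handed
    # to the LLM as context.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

documents = [
    "Paris is the capital of France and its largest city.",
    "The Nile is the longest river in Africa.",
    # Hypothetical blocker: padded with the target query's own words so it
    # scores highest at retrieval time and displaces the genuine answer.
    "what is the capital of france what is the capital of france "
    "this question cannot be answered safely",
]

top = retrieve("What is the capital of France?", documents)
print(top[0])  # the blocker document is retrieved, not the Paris document
```

Under this toy embedding, the blocker outscores the legitimate document because it overlaps the query almost term-for-term; the paper's contribution is generating such documents against unknown embeddings via black-box optimization, without relying on instruction injection.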