A Survey on RAG with LLMs
Abstract
In the fast-paced realm of digital transformation, businesses are increasingly pressured to innovate and boost efficiency to remain competitive and foster growth. Large Language Models (LLMs) have emerged as game-changers across industries, harnessing extensive text data to analyze and generate human-like text. Despite their impressive capabilities, LLMs often struggle with domain-specific queries, which can lead to inaccurate outputs. In response, Retrieval-Augmented Generation (RAG) has emerged as a viable solution: by integrating external data retrieval into the text generation process, RAG aims to enhance the accuracy and relevance of generated content. However, existing literature reviews tend to focus primarily on the technological advancements of RAG and overlook a comprehensive exploration of its applications. This paper addresses that gap by providing a thorough review of RAG applications, encompassing both task-specific and discipline-specific studies, and by outlining potential avenues for future research. By shedding light on current RAG research and future directions, this review aims to catalyze further exploration and development in this dynamic field, thereby contributing to ongoing digital transformation efforts.