
Large Language Models (LLMs): Deployment, Tokenomics and Sustainability

Haiwei Dong, Shuang Xie
arXiv, 2024 · 2 citations

This paper explores the deployment strategies, economic considerations, and sustainability challenges associated with state-of-the-art LLMs, and discusses the deployment debate between Retrieval-Augmented Generation (RAG) and fine-tuning, highlighting their respective advantages and limitations.

Abstract

The rapid advancement of Large Language Models (LLMs) has significantly impacted human-computer interaction, epitomized by the release of GPT-4o, which introduced comprehensive multi-modality capabilities. In this paper, we first explore the deployment strategies, economic considerations, and sustainability challenges associated with state-of-the-art LLMs. More specifically, we discuss the deployment debate between Retrieval-Augmented Generation (RAG) and fine-tuning, highlighting their respective advantages and limitations. After that, we quantitatively analyze the xPU requirements for training and inference. Additionally, for the tokenomics of LLM services, we examine the balance between performance and cost from the perspective of end users' quality of experience (QoE). Lastly, we envision a future hybrid architecture for LLM processing and its corresponding sustainability concerns, particularly its environmental carbon footprint. Through these discussions, we provide a comprehensive overview of the operational and strategic considerations essential for the responsible development and deployment of LLMs.
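The paper's quantitative xPU analysis is not reproduced on this page, but the widely used back-of-envelope estimate of training compute (roughly 6 FLOPs per parameter per token for a dense transformer) conveys the shape of such a calculation. The model size, token count, throughput, and utilization figures below are illustrative assumptions for the sketch, not numbers taken from the paper.

```python
import math

def gpus_for_training(params, tokens, peak_flops, mfu, days):
    """Estimate how many accelerators are needed to train a dense
    transformer within a wall-clock budget, using the common
    ~6 * params * tokens approximation for total training FLOPs."""
    total_flops = 6 * params * tokens
    sustained_flops = peak_flops * mfu            # effective per-device throughput
    flops_per_device = sustained_flops * days * 86400
    return math.ceil(total_flops / flops_per_device)

# Illustrative assumptions (not from the paper): a 70B-parameter model
# trained on 2T tokens, ~9.89e14 peak BF16 FLOP/s per accelerator,
# 40% model FLOPs utilization (MFU), and a 30-day training budget.
n = gpus_for_training(70e9, 2e12, 989e12, 0.40, 30)
print(n)  # a cluster on the order of several hundred accelerators
```

The same arithmetic, run in reverse, is what turns a fixed cluster size into an achievable model scale or training duration.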