
Large Language Models (LLMs) and Generative AI in Cybersecurity and Privacy: A Survey of Dual-Use Risks, AI-Generated Malware, Explainability, and Defensive Strategies

2 Citations
Kiarash Ahi, Saeed Valizadeh
2025 Silicon Valley Cybersecurity Conference (SVCC)

Abstract

Large Language Models (LLMs) and generative AI (GenAI) systems, such as ChatGPT, Claude, Gemini, LLaMA, Copilot, and Stable Diffusion (by OpenAI, Anthropic, Google, Meta, Microsoft, and Stability AI, respectively), are revolutionizing cybersecurity, enabling both automated defense and sophisticated attacks. These technologies power real-time threat detection, phishing defense, secure code generation, and vulnerability exploitation at unprecedented scales. LLM-generated malware alone is projected to account for 50% of detected threats in 2025, up from just 2% in 2021, emphasizing the need for next-generation security frameworks.

This paper presents a comprehensive survey of the beneficial and malicious applications of LLMs in cybersecurity, including zero-day detection, DevSecOps, federated learning, synthetic content analysis, and explainable AI (XAI). Drawing on a review of over 70 academic papers, industry reports, and technical documents, this work synthesizes insights from real-world case studies across platforms like Google Play Protect, Microsoft Defender, Amazon Web Services (AWS), Apple's App Store, OpenAI Plugin Stores, Hugging Face Spaces, and GitHub, alongside emerging initiatives like the SAFE Framework and AI-driven anomaly detection.

We conclude with practical recommendations for responsible and transparent LLM deployment, including model watermarking, adversarial defense, and cross-industry collaboration. In doing so, this work sets a benchmark for rigorous, holistic cybersecurity research at the intersection of AI and threat defense, and offers a roadmap for secure, scalable LLM systems that serves as a reference for researchers, engineers, and security leaders navigating the challenges of AI-driven cybersecurity.