This work explores a machine learning approach to evaluating Governance, Risk, and Compliance (GRC) risks associated with Large Language Models (LLMs), enabling organizations to improve efficiency, foster innovation, and deliver customer value while meeting compliance and regulatory requirements.
In today’s AI-driven digital world, Governance, Risk, and Compliance (GRC) has become vital for organizations as they leverage AI technologies to drive business success and resilience. GRC represents a strategic approach that helps organizations use Large Language Models (LLMs) to automate tasks and enhance customer service while navigating regulatory complexity across various industries and regions. This paper explores a machine learning approach to evaluating GRC risks associated with LLM usage. It utilizes Azure OpenAI Service logs to construct a representative dataset, with key features including response_time_ms, model_type, temperature, tokens_used, is_logged, data_sensitivity, compliance_flag, bias_score, and toxicity_score. These features are used to train a model that predicts GRC risk levels in LLM interactions, enabling organizations to improve efficiency, foster innovation, and deliver customer value while meeting compliance and regulatory requirements.
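As a rough illustration of how such a model could be trained, the sketch below assumes the logged features are gathered into a pandas DataFrame with a hypothetical grc_risk_level target label, and fits an off-the-shelf classifier (a random forest, chosen here only for illustration; the paper does not prescribe a specific algorithm or label encoding).

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical rows mimicking features extracted from Azure OpenAI Service logs.
# Column names follow the feature list above; "grc_risk_level" is an assumed target label.
df = pd.DataFrame({
    "response_time_ms": [420, 1350, 800, 2100, 650, 1800],
    "model_type":       ["gpt-35-turbo", "gpt-4", "gpt-4", "gpt-35-turbo", "gpt-4", "gpt-35-turbo"],
    "temperature":      [0.2, 0.9, 0.7, 0.0, 0.3, 1.0],
    "tokens_used":      [512, 2048, 1024, 256, 768, 4096],
    "is_logged":        [1, 0, 1, 1, 1, 0],
    "data_sensitivity": [0, 2, 1, 0, 1, 2],   # assumed encoding: 0=public, 1=internal, 2=restricted
    "compliance_flag":  [0, 1, 0, 0, 0, 1],
    "bias_score":       [0.05, 0.40, 0.15, 0.02, 0.10, 0.55],
    "toxicity_score":   [0.01, 0.30, 0.10, 0.00, 0.05, 0.45],
    "grc_risk_level":   ["low", "high", "medium", "low", "low", "high"],
})

# One-hot encode the categorical model_type feature; the remaining features are numeric.
X = pd.get_dummies(df.drop(columns="grc_risk_level"), columns=["model_type"])
y = df["grc_risk_level"]

# Fit an illustrative classifier on the labeled interactions.
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X, y)

# Score a new LLM interaction drawn from the same log schema.
new_interaction = pd.DataFrame([{
    "response_time_ms": 950, "model_type": "gpt-4", "temperature": 0.8,
    "tokens_used": 1500, "is_logged": 0, "data_sensitivity": 2,
    "compliance_flag": 1, "bias_score": 0.35, "toxicity_score": 0.25,
}])
new_X = pd.get_dummies(new_interaction, columns=["model_type"]).reindex(columns=X.columns, fill_value=0)
print("Predicted GRC risk level:", clf.predict(new_X)[0])
```

In practice the toy rows above would be replaced by features engineered from real Azure OpenAI Service logs, and the risk labels would come from an organization's own GRC review process.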