
Evalita-LLM: Benchmarking Large Language Models on Italian

2025
Bernardo Magnini, Roberto Zanoli, Michele Resta
ArXiv

Evalita-LLM, a new benchmark designed to evaluate Large Language Models on Italian tasks, is described, and an iterative methodology is proposed in which candidate tasks and candidate prompts are validated against a set of LLMs used for development.

Abstract

We describe Evalita-LLM, a new benchmark designed to evaluate Large Language Models (LLMs) on Italian tasks. The distinguishing and innovative features of Evalita-LLM are the following: (i) all tasks are native Italian, avoiding issues of translating from English and potential cultural biases; (ii) in addition to well-established multiple-choice tasks, the benchmark includes generative tasks, enabling more natural interaction with LLMs; (iii) all tasks are evaluated against multiple prompts, thereby mitigating model sensitivity to specific prompts and allowing a fairer and more objective evaluation. We propose an iterative methodology, where candidate tasks and candidate prompts are validated against a set of LLMs used for development. We report experimental results from the benchmark's development phase, and provide performance statistics for several state-of-the-art LLMs.
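
To make the multi-prompt evaluation idea concrete, the following is a minimal sketch, not the authors' code, of scoring a model on a multiple-choice task under several prompt templates and averaging the per-prompt accuracies, so that no single prompt wording dominates the result. The prompt templates, the `model_answer` callable, and the example format are hypothetical.

```python
# Minimal sketch of multi-prompt evaluation (hypothetical; not the Evalita-LLM implementation).
# A task is scored once per prompt template, and the scores are averaged to reduce
# sensitivity to any particular prompt formulation.

from statistics import mean
from typing import Callable

# Hypothetical prompt templates for an Italian multiple-choice task.
PROMPT_TEMPLATES = [
    "Domanda: {question}\nOpzioni: {options}\nRisposta:",
    "Scegli l'opzione corretta.\n{question}\n{options}\nRisposta:",
    "{question}\nQuale delle seguenti opzioni è corretta? {options}",
]

def evaluate_multi_prompt(
    model_answer: Callable[[str], str],   # returns the model's chosen option label
    examples: list[dict],                 # each: {"question", "options", "gold"}
) -> dict:
    """Return per-prompt accuracies and their average."""
    per_prompt = []
    for template in PROMPT_TEMPLATES:
        correct = 0
        for ex in examples:
            prompt = template.format(
                question=ex["question"],
                options=" / ".join(ex["options"]),
            )
            if model_answer(prompt).strip() == ex["gold"]:
                correct += 1
        per_prompt.append(correct / len(examples))
    return {"per_prompt": per_prompt, "average": mean(per_prompt)}
```

In this sketch the aggregate score is a plain average over prompts; other aggregations (e.g. best- or worst-prompt scores) are equally possible, and the paper should be consulted for the metric actually used.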