
Sentiment Analysis for Software Engineering: How Far Can Pre-trained Transformer Models Go?

109 Citations · 2020
Ting Zhang, Bowen Xu, Ferdian Thung
journal unavailable

This work is the first to fine-tune pre-trained Transformer-based models for the SA4SE task; the fine-tuned models outperform existing SA4SE tools by 6.5-35.6% in terms of macro/micro-averaged F1 scores.

Abstract

Extensive research has been conducted on sentiment analysis for software engineering (SA4SE). Researchers have invested much effort in developing customized tools (e.g., SentiStrength-SE, SentiCR) to classify the sentiment polarity for Software Engineering (SE) specific contents (e.g., discussions in Stack Overflow and code review comments). Even so, there is still much room for improvement. Recently, pre-trained Transformer-based models (e.g., BERT, XLNet) have brought considerable breakthroughs in the field of natural language processing (NLP). In this work, we conducted a systematic evaluation of five existing SA4SE tools and variants of four state-of-the-art pre-trained Transformer-based models on six SE datasets. Our work is the first to fine-tune pre-trained Transformer-based models for the SA4SE task. Empirically, across all six datasets, our fine-tuned pre-trained Transformer-based models outperform the existing SA4SE tools by 6.5-35.6% in terms of macro/micro-averaged F1 scores.
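The abstract reports gains in macro- and micro-averaged F1, the standard metrics for multi-class sentiment polarity (e.g., positive/neutral/negative). As a point of reference, here is a minimal, self-contained sketch of how those two averages are computed for single-label classification; this is illustrative only and not the authors' evaluation code (the label names and toy data below are hypothetical).

```python
def f1_scores(y_true, y_pred, labels):
    """Per-class F1 plus macro- and micro-averaged F1 for single-label classification."""
    per_class = {}
    tp_total = fp_total = fn_total = 0
    for c in labels:
        # Count true positives, false positives, false negatives for class c.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        tp_total += tp
        fp_total += fp
        fn_total += fn
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        per_class[c] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    # Macro-F1: unweighted mean of per-class F1 (sensitive to minority classes).
    macro = sum(per_class.values()) / len(labels)
    # Micro-F1: computed from pooled counts; for single-label tasks it equals accuracy.
    micro_prec = tp_total / (tp_total + fp_total) if tp_total + fp_total else 0.0
    micro_rec = tp_total / (tp_total + fn_total) if tp_total + fn_total else 0.0
    micro = (2 * micro_prec * micro_rec / (micro_prec + micro_rec)
             if micro_prec + micro_rec else 0.0)
    return per_class, macro, micro


# Toy 3-class sentiment example (hypothetical data).
y_true = ["pos", "pos", "neu", "neg", "neg", "neu"]
y_pred = ["pos", "neu", "neu", "neg", "pos", "neu"]
per_class, macro, micro = f1_scores(y_true, y_pred, ["pos", "neu", "neg"])
```

The macro average weights each polarity class equally, which matters for SE datasets where neutral comments typically dominate; the micro average reduces to plain accuracy in this single-label setting.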
