As the performance of Large Language Models (LLMs) continues to improve, the demand for reliable methods to distinguish AI-generated text from human-written content is growing steadily. This study presents a novel detection framework based on a hybrid architecture that integrates BERT, a BiLSTM, and a Talking-Heads Attention mechanism. By leveraging BERT's contextual embeddings, the BiLSTM's capture of sequential dependencies, and an enhanced attention layer, the proposed model achieves significant improvements in classification accuracy and robustness. Experiments on the GPT Reddit Dataset (GRiD) demonstrate the model's efficacy: it achieves an accuracy of 98.5% and an F1-score of 96.3%, outperforming baselines such as SVM, ALBERT, and RoBERTa. The findings indicate that this approach successfully addresses the challenges posed by the increasingly human-like text generated by advanced LLMs, offering a promising solution for maintaining digital content integrity.
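To make the described pipeline concrete, the following is a minimal sketch (not the authors' released code) of the hybrid architecture the abstract outlines: BERT contextual embeddings feed a BiLSTM, whose outputs pass through a Talking-Heads Attention layer and a binary classification head. The model name, hidden sizes, head count, and pooling choice are illustrative assumptions, not values reported in the paper.

```python
# Hedged sketch of a BERT + BiLSTM + Talking-Heads Attention detector.
# Hyperparameters and the pretrained checkpoint are assumptions for illustration.
import torch
import torch.nn as nn
from transformers import BertModel

class TalkingHeadsAttention(nn.Module):
    """Self-attention with learned mixing across heads before and after the
    softmax (Shazeer et al., 2020)."""
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Head-mixing projections applied to attention logits and weights.
        self.pre_softmax = nn.Linear(num_heads, num_heads, bias=False)
        self.post_softmax = nn.Linear(num_heads, num_heads, bias=False)

    def forward(self, x):                              # x: (batch, seq, dim)
        b, s, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape each projection to (batch, heads, seq, head_dim).
        q, k, v = (t.view(b, s, self.num_heads, self.head_dim).transpose(1, 2)
                   for t in (q, k, v))
        logits = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5   # (b, h, s, s)
        # Mix information across heads before the softmax...
        logits = self.pre_softmax(logits.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        weights = logits.softmax(dim=-1)
        # ...and again after the softmax.
        weights = self.post_softmax(weights.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        out = (weights @ v).transpose(1, 2).reshape(b, s, d)
        return self.proj(out)

class HybridDetector(nn.Module):
    def __init__(self, bert_name="bert-base-uncased", lstm_hidden=256, num_heads=8):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        self.bilstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.attention = TalkingHeadsAttention(2 * lstm_hidden, num_heads)
        self.classifier = nn.Linear(2 * lstm_hidden, 2)  # human vs. AI-generated

    def forward(self, input_ids, attention_mask):
        emb = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        seq, _ = self.bilstm(emb)                 # sequential dependencies
        attended = self.attention(seq)            # talking-heads attention
        pooled = attended.mean(dim=1)             # mean pooling is an assumption
        return self.classifier(pooled)            # logits over {human, AI}
```

A tokenized batch from a BERT tokenizer (`input_ids`, `attention_mask`) can be passed directly to `HybridDetector` and trained with a standard cross-entropy loss; the paper's actual pooling, classifier depth, and training configuration may differ.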