PulseAugur

LLM variability in evidence screening raises concerns for software engineering SLRs

A new study evaluated 12 large language models (LLMs) from OpenAI, Google Gemini, and Anthropic, alongside four classical machine learning models, on screening research papers for systematic literature reviews. The research found significant variability and non-determinism among LLMs, even at temperature zero. While abstract availability was crucial for performance, adding titles and keywords did not consistently improve results. The study concluded that LLMs did not consistently outperform traditional models, and that adoption decisions should weigh operational factors such as reproducibility and cost.
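The run-to-run non-determinism the study reports can be quantified per abstract by repeating the same temperature-zero prompt and measuring how often runs agree. A minimal sketch of such an agreement metric (the `screening_agreement` helper and the example run counts are illustrative, not taken from the paper):

```python
from collections import Counter

def screening_agreement(decisions):
    """Fraction of repeated runs that agree with the majority label.

    decisions: list of 'include'/'exclude' labels produced by repeated
    runs of the same model, same prompt, same abstract, temperature 0.
    A value of 1.0 means perfectly deterministic screening.
    """
    counts = Counter(decisions)
    majority_count = counts.most_common(1)[0][1]
    return majority_count / len(decisions)

# Hypothetical example: 10 repeated temperature-0 runs on one abstract;
# two runs flip the decision, so agreement is 0.8 rather than 1.0.
runs = ["include"] * 8 + ["exclude"] * 2
print(screening_agreement(runs))  # → 0.8
```

Averaging this score over all screened abstracts gives a simple reproducibility figure to compare against the deterministic classical baselines.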

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Highlights LLM variability and lack of consistent superiority over traditional models in systematic literature reviews, cautioning against uncritical adoption.

RANK_REASON Academic paper evaluating LLM performance on a specific task.


COVERAGE [1]

  1. arXiv cs.AI TIER_1 · Gilberto Sussumu Hida, Danilo Monteiro Ribeiro, Erika Yahata

    Beyond Accuracy: LLM Variability in Evidence Screening for Software Engineering SLRs

    arXiv:2604.27006v1 · Announce Type: cross · Abstract: Context: Study screening in systematic literature reviews is costly, inconsistency-prone, and risk-asymmetric, since false negatives can compromise validity. Despite rapid uptake of Large Language Models (LLMs), there is limited e…