PulseAugur
research · [1 source]

LLMs exhibit 'anchored confabulation,' amplifying confident hallucinations with partial evidence

Researchers have identified a new phenomenon in large language models called "anchored confabulation," in which providing partial evidence can paradoxically increase a model's tendency to hallucinate confidently. The effect, formalized as Parametric Hallucination Confidence (PHC), was observed across multiple model families and is predicted by a new law, the Anchoring Threshold Law. The findings have implications for retrieval-augmented generation (RAG) systems: a proposed LearnedRouter demonstrates significant performance gains by exploiting PHC.
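To make the routing idea concrete, here is a minimal sketch of how a retriever-vs-answer decision could take anchored confabulation into account: high confidence reached under only partial evidence is treated as a trigger to retrieve more, not as a reason to answer directly. This is not the paper's LearnedRouter; the function names, thresholds, and heuristic rule are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's implementation): a retrieval router
# that treats high model confidence under *partial* evidence as a warning
# sign rather than a reason to skip retrieval. All names and thresholds
# here are assumptions for illustration.

from dataclasses import dataclass


@dataclass
class RouteDecision:
    retrieve_more: bool
    reason: str


def route(answer_confidence: float,
          evidence_coverage: float,
          confidence_threshold: float = 0.8,
          coverage_threshold: float = 0.9) -> RouteDecision:
    """Decide whether to fetch more evidence before answering.

    answer_confidence: model's self-reported confidence in its answer (0-1).
    evidence_coverage: estimated fraction of the reasoning chain already
        supported by retrieved facts (0-1).
    """
    if evidence_coverage >= coverage_threshold:
        # Evidence chain is essentially complete; confidence can be trusted.
        return RouteDecision(retrieve_more=False, reason="full evidence")
    if answer_confidence >= confidence_threshold:
        # High confidence with only partial evidence is the regime where
        # anchored confabulation is reported to spike, so keep retrieving.
        return RouteDecision(retrieve_more=True,
                             reason="confident under partial evidence")
    # Low confidence: retrieve as a standard RAG fallback.
    return RouteDecision(retrieve_more=True, reason="low confidence")


# Example: a confident answer backed by only 40% of the needed facts is
# routed back to retrieval rather than emitted directly.
print(route(answer_confidence=0.92, evidence_coverage=0.4))
```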

Summary written by gemini-2.5-flash-lite from 1 source. How we write summaries →

IMPACT Identifies a new LLM failure mode that can be exploited to improve RAG systems and reduce hallucinations.

RANK_REASON Academic paper detailing a novel LLM behavior and its implications.

Read on arXiv cs.CL →

COVERAGE [1]

  1. arXiv cs.CL TIER_1 · Ashish Balkishan Lathkar

    Anchored Confabulation: Partial Evidence Non-Monotonically Amplifies Confident Hallucination in LLMs

    arXiv:2604.25931v1 Announce Type: new Abstract: We identify a previously unknown calibration property of large language models: providing one confirmed intermediate fact toward a multi-step reasoning chain increases the model's confident-wrong-answer rate before full evidence eli…
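The abstract describes measuring how the confident-wrong-answer rate changes as intermediate facts are supplied before full evidence. A rough sketch of such a measurement loop is below; it assumes a `generate()` callable returning an answer and a confidence score, plus question records with ordered intermediate facts and a gold answer, and the 0.8 confidence cutoff is an assumption rather than the paper's protocol.

```python
# Hypothetical measurement sketch: estimate the confident-wrong-answer rate
# while varying how many intermediate facts of a multi-step question are
# supplied in the prompt. The data schema, generate() signature, and cutoff
# are assumptions, not the paper's setup.

from typing import Callable, Sequence


def confident_wrong_rate(questions: Sequence[dict],
                         generate: Callable[[str], tuple[str, float]],
                         facts_supplied: int,
                         confidence_cutoff: float = 0.8) -> float:
    """Fraction of questions answered incorrectly with high confidence.

    Each question dict is assumed to hold 'prompt', 'facts' (ordered
    intermediate facts), and 'answer' (gold answer string).
    """
    confident_wrong = 0
    for q in questions:
        supplied = "\n".join(q["facts"][:facts_supplied])
        prompt = f"{supplied}\n{q['prompt']}" if supplied else q["prompt"]
        answer, confidence = generate(prompt)
        if confidence >= confidence_cutoff and answer.strip() != q["answer"]:
            confident_wrong += 1
    return confident_wrong / len(questions)

# Sweeping facts_supplied from 0 up to the full chain would show whether the
# rate rises at intermediate evidence levels, as the abstract describes.
```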