PulseAugur

Distributed output templates, not single positions, drive LLM in-context learning

Researchers have demonstrated that in-context learning in large language models is driven by distributed output templates rather than single-position activations. Through multi-position intervention they achieved up to 96% task transfer, pinpointing layer 8 as a causal locus for in-context learning task identity. The finding holds across multiple model architectures, suggesting a universal intervention window at roughly 30% of network depth.
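The intervention the summary describes can be sketched in miniature: capture a layer's hidden states from one run, then overwrite them at several output positions (rather than a single position) during another run. This is a toy illustration under stated assumptions, not the authors' code: the stack of per-position `tanh` layers, the layer index, and the patched positions are all placeholders standing in for a real transformer.

```python
import numpy as np

# Toy multi-position activation patching (illustrative assumption, not the
# paper's implementation). A "model" here is a stack of per-position linear
# layers with tanh nonlinearities; a real experiment would hook a
# transformer's residual stream instead.
rng = np.random.default_rng(0)
n_layers, d_model, seq_len = 12, 16, 8
weights = [rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
           for _ in range(n_layers)]

def run(x, patch_layer=None, patch_positions=None, patch_values=None):
    """Forward pass; optionally overwrite hidden states at one layer."""
    h = x
    cache = []
    for i, W in enumerate(weights):
        h = np.tanh(h @ W)
        if i == patch_layer and patch_values is not None:
            h = h.copy()
            h[patch_positions] = patch_values[patch_positions]  # multi-position patch
        cache.append(h)
    return h, cache

source = rng.standard_normal((seq_len, d_model))
target = rng.standard_normal((seq_len, d_model))

# Capture the donor activations at layer 8 from the source run.
_, src_cache = run(source)
donor = src_cache[8]

# Patch several positions at once into the target run, then compare to clean.
positions = [4, 5, 6, 7]
patched_out, _ = run(target, patch_layer=8,
                     patch_positions=positions, patch_values=donor)
clean_out, _ = run(target)
```

Because these toy layers act position-wise, only the patched positions change downstream, which makes the contrast between single- and multi-position intervention easy to inspect; in a real transformer, attention would propagate the patch across positions.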

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Reveals that in-context learning relies on distributed output templates, not single positions, potentially impacting how models are trained and prompted.

RANK_REASON Academic paper detailing new findings on in-context learning mechanisms in LLMs.


COVERAGE [1]

  1. arXiv cs.LG TIER_1 · Bryan Cheng, Jasper Zhang

    Single-Position Intervention Fails: Distributed Output Templates Drive In-Context Learning

    arXiv:2605.04061v1 · Abstract: Understanding how large language models encode task identity from few-shot demonstrations is a central open problem in mechanistic interpretability. Prior work uses linear probing to localize task representations, reporting high cla…