PulseAugur
research · [2 sources]

New research reveals language models encode social role granularity

Researchers have identified a "Granularity Axis" within large language models: a latent direction along which these models internally represent social roles, from micro-level individual experience to macro-level institutional reasoning. This axis accounts for a substantial share of the variance in role representations and is consistent across model layers and prompt variations. The study further shows that granularity can be causally manipulated through activation steering, shifting the model's responses toward more micro or macro perspectives.
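The activation-steering intervention described above can be sketched in a few lines: add a scaled unit vector (the hypothesized granularity axis) to a layer's hidden states. This is a minimal toy illustration, not the paper's implementation; the `steer` function, the random stand-in axis, and the sign convention (positive strength → macro) are all assumptions for demonstration.

```python
import numpy as np

def steer(hidden_states, direction, alpha):
    """Shift hidden states along a unit-norm latent direction.

    hidden_states: (seq_len, d_model) activations at some layer
    direction:     (d_model,) stand-in for the granularity axis
    alpha:         signed steering strength (sign convention assumed,
                   not taken from the paper)
    """
    unit = direction / np.linalg.norm(direction)
    return hidden_states + alpha * unit

# Toy demonstration with random activations in place of a real model.
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))      # 4 tokens, 8-dim toy hidden states
axis = rng.normal(size=8)        # hypothetical learned axis
unit = axis / np.linalg.norm(axis)

h_macro = steer(h, axis, alpha=3.0)

# Every token's projection onto the axis shifts by exactly alpha.
print(np.round((h_macro - h) @ unit, 6))  # → [3. 3. 3. 3.]
```

In practice such a vector would be applied via a forward hook at a chosen layer during generation; the projection check above is just a sanity test that the intervention moves representations uniformly along the axis.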

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Reveals a structured latent dimension for social role granularity in LLMs, suggesting potential for more nuanced control over model persona and reasoning.

RANK_REASON Academic paper detailing a novel finding about internal representations in language models.

Read on arXiv cs.CL →

COVERAGE [2]

  1. arXiv cs.CL TIER_1 · Chonghan Qin, Xiachong Feng, Ziyun Song, Xiaocheng Feng, Jing Xiong, Lingpeng Kong

    The Granularity Axis: A Micro-to-Macro Latent Direction for Social Roles in Language Models

    arXiv:2605.06196v1 · Abstract: Large language models (LLMs) are routinely prompted to take on social roles ranging from individuals to institutions, yet it remains unclear whether their internal representations encode the granularity of such roles, from micro-l…

  2. arXiv cs.CL TIER_1 · Lingpeng Kong

    The Granularity Axis: A Micro-to-Macro Latent Direction for Social Roles in Language Models

    Large language models (LLMs) are routinely prompted to take on social roles ranging from individuals to institutions, yet it remains unclear whether their internal representations encode the granularity of such roles, from micro-level individual experience to macro-level organiza…