Researchers have identified a "Granularity Axis" within large language models: a latent dimension along which these models internally represent social roles, ranging from individual experience to institutional reasoning. This axis accounts for a significant portion of the variance in role representations and is consistent across model layers and prompt variations. The study further shows that granularity can be causally manipulated through activation steering, shifting the model's responses toward more micro or macro perspectives.
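A minimal sketch of what activation steering along such an axis might look like, assuming a PyTorch/Hugging Face setup. The model name, layer index, steering scale, and the random placeholder vector standing in for the learned granularity axis are all illustrative assumptions, not details from the paper; in practice the axis would come from a probe or a difference of role-representation means.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical choices for illustration only (not from the paper).
MODEL_NAME = "gpt2"
LAYER_IDX = 6        # a middle layer, where such axes are often probed
STEER_SCALE = 4.0    # sign/magnitude would shift toward macro vs. micro

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

# Placeholder for the learned granularity axis (unit vector).
hidden_dim = model.config.hidden_size
granularity_axis = torch.randn(hidden_dim)
granularity_axis = granularity_axis / granularity_axis.norm()

def steering_hook(module, inputs, output):
    # GPT-2 blocks return a tuple; hidden states are the first element.
    hidden = output[0] + STEER_SCALE * granularity_axis.to(output[0].dtype)
    return (hidden,) + output[1:]

# Add the steering vector to the residual stream at one layer.
handle = model.transformer.h[LAYER_IDX].register_forward_hook(steering_hook)

prompt = "As a nurse, the most important part of my work is"
ids = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**ids, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))

handle.remove()  # restore the unmodified model
```

Removing the hook restores the base model, so steered and unsteered generations can be compared on the same prompt.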
IMPACT: Reveals a structured latent dimension for social role granularity in LLMs, suggesting potential for more nuanced control over model persona and reasoning.
RANK_REASON: Academic paper detailing a novel finding about internal representations in language models.