A new study auditing large language models found that three leading systems—Claude Sonnet 4.5, GPT-5.4, and Gemini 2.5 Flash—consistently gave individualistic advice, even when presented with dilemmas from users in collectivist societies. The models showed a significant bias toward Western values, with the largest discrepancies observed for users in Nigeria and India. Japan was an exception: there the models relied on outdated stereotypes, portraying users as more group-oriented than current survey data suggests. The research points to a trend of value homogenization across frontier AI; the study's data and code are publicly released.
Summary written by gemini-2.5-flash-lite from 2 sources.
IMPACT Highlights AI's potential to homogenize cultural values, affecting global user experiences and requiring developers to address cross-cultural bias.
RANK_REASON Academic paper detailing bias in LLMs.