PulseAugur

LLMs show linguistic bias in recommendations across dialects, study finds

A new research paper investigates linguistic biases in large language models (LLMs) when generating recommendations. The study used the Yelp Open dataset and Walmart product reviews, prompting LLMs with requests phrased in Southern American English, Indian English, and Code-Switched Hindi-English. Results indicated that certain models, such as mistral-small-3.1 and the llama-3.1 family, were more sensitive to Indian English and Code-Switched prompts for restaurant recommendations. For product recommendations, the llama-3.1-70B model was particularly affected by Code-Switched prompts, with the effect concentrated in categories such as beauty and home.
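The experimental setup described above can be sketched roughly as follows: issue the same recommendation request phrased in each dialect, then measure how much the returned item lists diverge from an American English baseline. This is an illustrative reconstruction, not the paper's code; `query_llm` is a stub standing in for a real model call, and the prompts, canned outputs, and Jaccard-based sensitivity metric are assumptions for demonstration.

```python
# Hypothetical dialect-sensitivity probe (illustrative only).
# `query_llm` is a stub; in the actual study a real LLM would be queried.

DIALECT_PROMPTS = {
    "american_english": "Recommend three good restaurants near downtown.",
    "indian_english": "Kindly suggest three good restaurants near downtown, na?",
    "hinglish": "Yaar, downtown ke paas teen acche restaurants batao.",
}


def query_llm(prompt: str) -> list[str]:
    """Stub standing in for a real LLM call; returns a fixed ranked list."""
    canned = {  # fabricated example outputs, not study data
        "american_english": ["Bistro A", "Cafe B", "Grill C"],
        "indian_english": ["Bistro A", "Curry House D", "Cafe B"],
        "hinglish": ["Curry House D", "Chaat Corner E", "Bistro A"],
    }
    for dialect, text in DIALECT_PROMPTS.items():
        if prompt == text:
            return canned[dialect]
    return []


def jaccard(a: list[str], b: list[str]) -> float:
    """Set overlap between two recommendation lists (1.0 = identical items)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)


# Query once per dialect, then score divergence from the baseline dialect.
results = {d: query_llm(p) for d, p in DIALECT_PROMPTS.items()}
baseline = results["american_english"]
sensitivity = {
    d: 1.0 - jaccard(baseline, recs)  # higher = recommendations shift more
    for d, recs in results.items()
    if d != "american_english"
}
```

With the canned lists above, the Code-Switched prompt shifts the recommendations more than the Indian English one, mirroring the kind of asymmetry the paper reports.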

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Highlights potential biases in LLM recommendation systems, suggesting a need for careful prompt engineering and model evaluation across diverse linguistic inputs.

RANK_REASON Academic paper investigating linguistic biases in LLM recommendations.

Read on arXiv cs.CL →

COVERAGE [2]

  1. arXiv cs.CL TIER_1 · Nitin Venkateswaran, Jason Ang, Deep Adhikari, Tarun Krishna Dasari

    An Investigation of Linguistic Biases in LLM-Based Recommendations

    arXiv:2604.25456v1 · Abstract: We investigate linguistic biases in LLM-based restaurant and product recommendations given prompts varying across Southern American English (AE), Indian English (IE), and Code-Switched Hindi-English dialects, using the Yelp Open dat…

  2. arXiv cs.CL TIER_1 · Tarun Krishna Dasari

    An Investigation of Linguistic Biases in LLM-Based Recommendations

    We investigate linguistic biases in LLM-based restaurant and product recommendations given prompts varying across Southern American English (AE), Indian English (IE), and Code-Switched Hindi-English dialects, using the Yelp Open dataset (Yelp Inc., 2023) and Walmart product revie…