AI models struggle with emotional nuance as researchers explore new evaluation and generation methods
By PulseAugur Editorial
Summary by gemini-2.5-flash-lite
from 14 sources
Researchers are exploring the nuances of emotion in AI, with several papers focusing on Large Language Models (LLMs) and speech processing. One study investigates how well small language models preserve emotions during machine translation across several European languages. Another paper introduces a new dataset and pipeline for speech captioning that accounts for emotion transitions in discourse. Additionally, research critically examines the metrics used to evaluate emotional expressiveness in speech generation, questioning the reliance on embedding similarity. Finally, a study analyzes how LLMs infer emotions, identifying internal mechanisms and proposing methods to improve their emotion recognition capabilities, while also highlighting the gap between LLM annotations and human judgment.
AI
IMPACT
Advances in understanding and generating emotional AI could lead to more nuanced human-AI interactions and improved affective computing applications.
RANK_REASON
Multiple academic papers published on arXiv exploring various aspects of emotion in AI systems.
arXiv:2604.27345v1 Announce Type: new Abstract: Human annotators frequently disagree on emotion labels, yet most evaluations of Large Language Model (LLM) emotion annotation collapse these judgments into a single gold standard, discarding the distributional information that disagreement encodes. We ask whether LLMs capture the…
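The collapse this abstract describes can be made concrete. The sketch below is not from the paper (all labels are invented): it contrasts a majority-vote gold label, which discards disagreement, with a soft label that keeps the full annotator distribution.

```python
from collections import Counter

# Hypothetical labels from five annotators for one utterance.
annotations = ["joy", "joy", "surprise", "joy", "surprise"]

# Collapsing to a single gold label discards the disagreement...
gold = Counter(annotations).most_common(1)[0][0]

# ...while a soft label keeps the full distribution.
total = len(annotations)
soft = {lab: n / total for lab, n in Counter(annotations).items()}

print(gold)  # -> joy
print(soft)  # -> {'joy': 0.6, 'surprise': 0.4}
```

An LLM annotator evaluated only against `gold` gets no credit for predicting "surprise", even though two of five humans did.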
arXiv:2604.27920v1 Announce Type: cross Abstract: Preserving affective nuance remains a challenge in Machine Translation (MT), where semantic equivalence often takes precedence over emotional fidelity. This paper evaluates the performance of three state-of-the-art Small Language Models (SLMs) -- EuroLLM, Aya Expanse, and Gemma -…
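One simple way to quantify how well a translation preserves emotion, shown here with invented numbers rather than the paper's actual protocol, is to run an emotion classifier on both the source sentence and its translation and measure how far the two predicted distributions drift apart:

```python
import numpy as np

def emotion_shift(p_src: np.ndarray, p_tgt: np.ndarray) -> float:
    """Total-variation distance between the emotion distributions
    predicted for a source sentence and for its translation.
    0.0 means perfectly preserved; 1.0 means fully shifted."""
    return 0.5 * float(np.abs(p_src - p_tgt).sum())

# Toy distributions over (joy, anger, sadness, neutral);
# in practice these would come from a multilingual emotion classifier.
p_src = np.array([0.7, 0.1, 0.1, 0.1])
p_tgt = np.array([0.4, 0.1, 0.2, 0.3])
print(round(emotion_shift(p_src, p_tgt), 3))  # -> 0.3
```

Here the translation has shed much of the source's "joy" mass toward "neutral", a drift that a purely semantic adequacy score would not flag.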
arXiv:2604.26347v1 Announce Type: cross Abstract: Objective metrics for emotional expressiveness are vital for speech generation, particularly in expressive synthesis and voice conversion requiring emotional prosody transfer. To quantify this, the field widely relies on emotion similarity between reference and generated samples.…
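The emotion-similarity metric this abstract questions is typically a cosine similarity between embeddings extracted from a pretrained speech emotion recognizer. A minimal sketch of that computation, with toy vectors standing in for real embeddings:

```python
import numpy as np

def emotion_similarity(ref_emb: np.ndarray, gen_emb: np.ndarray) -> float:
    """Cosine similarity between emotion embeddings of a reference
    and a generated utterance (higher = more similar expressiveness)."""
    ref = ref_emb / np.linalg.norm(ref_emb)
    gen = gen_emb / np.linalg.norm(gen_emb)
    return float(ref @ gen)

# Toy 3-d vectors; real emotion embeddings are high-dimensional
# outputs of a speech emotion recognition model.
ref = np.array([0.9, 0.1, 0.2])
gen = np.array([0.8, 0.2, 0.3])
print(round(emotion_similarity(ref, gen), 3))  # -> 0.983
```

The paper's critique applies at exactly this point: a high cosine score only certifies proximity in the recognizer's embedding space, not that a human would perceive the intended emotion.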
arXiv:2604.26417v1 Announce Type: new Abstract: Emotion perception and adaptive expression are fundamental capabilities in human-agent interaction. While recent advances in speech emotion captioning (SEC) have improved fine-grained emotional modeling, existing systems remain limited to static, single-emotion characterization w…
arXiv cs.CL
TIER_1 · Bangzhao Shu, Arinjay Singh, Mai ElSherief
arXiv:2604.25866v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly used in emotionally sensitive human-AI applications, yet little is known about how emotion recognition is internally represented. In this work, we investigate the internal mechanisms of emotion recognition in LLMs using sparse autoenc…
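Sparse autoencoders of the kind used in this line of interpretability work decompose an LLM hidden state into an overcomplete set of non-negative feature activations, only a few of which fire for any given input. A toy forward pass (dimensions and random weights are arbitrary illustrations, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: d_model for the LLM hidden state,
# d_sae for the larger, overcomplete feature dictionary.
d_model, d_sae = 8, 32
W_enc = rng.normal(size=(d_model, d_sae)) * 0.1
b_enc = np.zeros(d_sae)
W_dec = rng.normal(size=(d_sae, d_model)) * 0.1
b_dec = np.zeros(d_model)

def sae_features(h: np.ndarray) -> np.ndarray:
    """Encode a hidden state into non-negative feature activations;
    the ReLU keeps only features that fire for this input."""
    return np.maximum(0.0, h @ W_enc + b_enc)

def sae_reconstruct(f: np.ndarray) -> np.ndarray:
    """Map sparse features back to the hidden-state space."""
    return f @ W_dec + b_dec

h = rng.normal(size=d_model)   # stand-in for an LLM hidden state
f = sae_features(h)            # sparse feature activations
print((f > 0).sum(), "of", d_sae, "features active")
print("reconstruction error:", float(np.linalg.norm(h - sae_reconstruct(f))))
```

In interpretability studies the individual features `f` are inspected for human-readable meanings (e.g. a feature that activates on angry text), which is what makes them useful for locating where emotion recognition lives inside the model.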
arXiv:2604.25776v1 Announce Type: new Abstract: Critical analyses of emotion recognition technology have raised ethical concerns around task validity and potential downstream impacts, urging researchers to ensure alignment between their stated motivations and practice. However, these discussions have not adequately influenced …
arXiv:2502.04424v4 Announce Type: replace Abstract: With the integration of multimodal large language models (MLLMs) into robotic systems and AI applications, embedding emotional intelligence (EI) capabilities is essential for enabling these models to perceive, interpret, and res…
arXiv:2604.23348v1 Announce Type: new Abstract: Recent multimodal large language models (MLLMs) have shown strong capabilities in perception, reasoning, and generation, and are increasingly used in applications such as social robots and human-computer interaction, where understan…