A new paper on arXiv suggests that language model surprisal, often used as a proxy for contextual predictability and metaphor novelty, may be misleading. The research indicates that lexical frequency predicts metaphor novelty more strongly than surprisal itself. An analysis of eight Pythia model sizes across 154 training checkpoints shows that the surprisal-novelty association shifts over the course of training, mirroring the surprisal-frequency association.
Summary written by gemini-2.5-flash-lite from 2 sources.
IMPACT Challenges the use of LM surprisal as a sole metric for metaphor novelty, suggesting that lexical frequency is a more significant factor.
RANK_REASON The cluster contains an academic paper published on arXiv.