Researchers have developed a new method, RbtAct, to improve the actionability of feedback generated by large language models for scientific peer review. The approach uses existing peer-review rebuttals as implicit supervision, learning which reviewer comments led to concrete revisions. A new dataset, RMR-75K, maps review segments to their corresponding rebuttal segments, enabling models such as Llama-3.1-8B-Instruct to be trained to produce more specific, implementable guidance.
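To illustrate the idea of using review→rebuttal mappings as training signal, here is a minimal sketch of how one such pair might be turned into a supervised fine-tuning example. The field names, prompt wording, and helper function are assumptions for illustration, not taken from the paper or the RMR-75K dataset.

```python
# Hypothetical sketch: convert one review-segment -> rebuttal-segment mapping
# into an instruction-tuning pair. All names and prompt text are assumed.

def to_sft_example(review_segment: str, rebuttal_segment: str) -> dict:
    """Pair a reviewer comment with the concrete revision it prompted."""
    return {
        "prompt": (
            "Rewrite the reviewer comment below as specific, "
            "implementable feedback:\n" + review_segment
        ),
        "completion": rebuttal_segment,
    }

pair = to_sft_example(
    "The evaluation section lacks baselines.",
    "We added comparisons against two retrieval baselines in Table 3.",
)
print(pair["completion"])
```

A fine-tuning run would iterate such pairs so the model learns to ground its feedback in the kinds of comments that historically produced revisions.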
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Enhances AI's ability to provide actionable feedback in scientific peer review, potentially improving research quality.
RANK_REASON This is a research paper introducing a new method and dataset for improving AI-generated peer review feedback.