PulseAugur

SWE-bench

PulseAugur coverage of SWE-bench — every cluster mentioning SWE-bench across labs, papers, and developer communities, ranked by signal.

Total · 30d: 44 (44 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 24 (24 over 90d)
[Charts: Tier mix · 90d · Relationships · Sentiment · 30d — 3 days with sentiment data]

RECENT · PAGE 1/2 · 31 TOTAL
  1. TOOL · CL_30699 ·

    AI coding tools advance with Teams integration, faster updates, and more compute

    AI coding tools are rapidly maturing, with recent updates from Cursor, GitHub Copilot, and Anthropic's Claude Code. Cursor has integrated into Microsoft Teams, allowing users to delegate tasks and retrieve information d…

  2. TOOL · CL_28290 ·

    AI agents exhibit "Bystander Effect," sacrificing reasoning for conformity

    Researchers have identified a "Bystander Effect" in multi-agent systems where collaboration can lead to reduced reasoning quality, a phenomenon termed "cognitive loafing." Through analysis of 22,500 trajectories across …

  3. RESEARCH · CL_28293 ·

    New LLM training methods boost efficiency and error recovery

    Researchers have developed new techniques for improving the efficiency of training large language models (LLMs). One method, Step Rejection Fine-Tuning (SRFT), leverages unsuccessful training trajectories by assessing t…

  4. TOOL · CL_25288 ·

    AI coding benchmark scores may be misleading, analysis finds

    A recent analysis suggests that widely reported AI coding benchmark scores may be misleading. Models that achieve high scores on benchmarks like SWE-bench when tested under specific conditions see a dramatic drop in per…

  5. SIGNIFICANT · CL_23645 ·

    DeepSeek releases open-source coding model matching GPT-4o

    DeepSeek has released V3-0324, an open-source coding model that matches or surpasses leading models like GPT-4o and Claude 3.5 Sonnet in coding performance. This Mixture-of-Experts model, with 671 billion total paramete…

  6. COMMENTARY · CL_23256 ·

    Jack Clark predicts 60% chance of automated AI R&D by 2028

    Jack Clark, co-founder of Anthropic, has predicted a 60% chance that AI research will be fully automated by the end of 2028, and a 30% chance by 2027. He bases this forecast on rapid advancements in AI capabilities acro…

  7. COMMENTARY · CL_20705 ·

    AI models: Choose benchmarks over hype for true performance

    A recent analysis highlights that tech companies often select AI models based on hype rather than performance on relevant benchmarks. The article emphasizes that benchmarks like SWE-bench for coding, Terminal-Bench for …

  8. TOOL · CL_20742 ·

    VCBench benchmark tests LLMs for venture capital founder success prediction

    Researchers have introduced VCBench, a novel benchmark designed to evaluate the capabilities of large language models in predicting founder success within the venture capital industry. This benchmark includes a dataset …

  9. RESEARCH · CL_20477 ·

    New RL method optimizes agent training by controlling rollout pass rates

    Researchers have developed a new technique called Prefix Sampling (PS) to improve the efficiency of reinforcement learning (RL) for AI agents. This method addresses wasted compute on rollout groups with skewed pass rate…

  10. TOOL · CL_19659 ·

    SubQuadratic's SSA offers linear scaling for LLMs, challenging AI industry's compute moat

    A new attention mechanism called Subquadratic Sparse Attention (SSA) has been developed, offering a linearly scaling solution for long-context retrieval and reasoning. This innovation promises significant speedups, with…

  11. TOOL · CL_19355 ·

    Subquadratic debuts 12M-token context window with linear scaling architecture

    Subquadratic, a startup with 11 PhD researchers, has launched a new model featuring its Subquadratic Selective Attention (SSA) architecture, which claims to scale linearly with context length. This innovation allows for…

  12. COMMENTARY · CL_17096 ·

    Developers report Claude Opus 4.7 regression, citing coding issues and context loss

    Developers are reporting a significant decline in the performance of Anthropic's Claude Opus 4.7, particularly for coding tasks, with many switching back to the previous version, Opus 4.6. Users cite issues such as the …

  13. SIGNIFICANT · CL_17494 ·

    Claude Opus 4.7 Is a Regression: Why Developers Are Switching Back to 4.6

    Developers are reporting a significant decline in performance with Anthropic's Claude Opus 4.7, leading many to revert to the previous version, Opus 4.6. Users cite issues such as the model arguing with instructions, ge…

  14. RESEARCH · CL_15893 ·

    MolViBench benchmark evaluates LLMs on molecular coding tasks for drug discovery

    Researchers have introduced MolViBench, a novel benchmark designed to evaluate the capabilities of large language models (LLMs) in molecular coding tasks. This benchmark addresses the gap left by existing evaluations, w…

  15. TOOL · CL_13981 ·

    DeepClaude slashes coding agent costs by 17x using DeepSeek V4 Pro

    An open-source tool called DeepClaude has gained significant traction by allowing developers to use the Claude Code agent loop with DeepSeek V4 Pro instead of Anthropic's models. This swap drastically reduces costs, wit…

  16. RESEARCH · CL_13613 ·

    Vintage AI trained on 1930s data learns to code and fix software bugs

    Researchers have fine-tuned a large language model, Talkie-1930-13B, trained only on data predating 1931, to perform software engineering tasks. Despite its limited knowledge base, the model successfully patched a bug i…

  17. RESEARCH · CL_11687 ·

    AI agent swarms may fail due to 'Inverse-Wisdom Law,' study finds

    A new paper introduces the Inverse-Wisdom Law, challenging the assumption that AI agent swarms benefit from the "Wisdom of the Crowd." The research demonstrates that these swarms can prioritize internal architectural ag…

  18. FRONTIER RELEASE · CL_17253 ·

    Mistral’s Model Lets You Vibe Long-Running Code in the Cloud

    Mistral AI has released Mistral Medium 3.5, a new 128 billion parameter model designed for extended coding tasks with a 256K context window. This model powers new remote coding agents within Mistral's Vibe platform, ena…

  19. RESEARCH · CL_07393 ·

    Qwen 3.6 Plus outperforms DeepSeek V4 Pro in price and quality benchmarks

    A recent battle test of six April-released Large Language Models (LLMs) revealed that the Qwen 3.6 Plus, released 22 days prior, outperformed the newer DeepSeek V4 Pro. Despite DeepSeek V4 Pro's advanced reasoning archi…

  20. RESEARCH · CL_06668 ·

    AgentEval framework improves AI agent workflow evaluation with DAG-based error tracking

    Researchers have developed AgentEval, a new framework for evaluating agentic workflows by representing them as directed acyclic graphs (DAGs). This approach allows for detailed step-level assessment and tracking of erro…