Nous Research's Lighthouse Attention speeds up LLM pretraining

Researchers at Nous Research have developed Lighthouse Attention, a selection-based hierarchical attention mechanism designed to accelerate large language model pretraining at long context. The method achieves a 1.4–1.7× wall-clock speedup over standard FlashAttention by pooling queries, keys, and values symmetrically across a multi-level pyramid. Because the selection logic sits outside the attention kernel, training can still run on optimized dense-attention kernels; the mechanism is applied only during pretraining and is removed afterward.

Summary written by gemini-2.5-flash-lite from 2 sources.
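
Neither source includes code, but the described mechanism can be sketched concretely. Below is a minimal, illustrative sketch, not Nous Research's implementation: it builds a single pooled pyramid level, uses the coarse scores to select key/value blocks outside the attention kernel, and then runs PyTorch's dense scaled_dot_product_attention over the selected blocks. The function names, the block_size and top_k values, the single pyramid level, and the omission of causal masking are all simplifying assumptions.

# Illustrative sketch of selection-based hierarchical attention.
# NOT the official Lighthouse Attention implementation: names, the single
# pyramid level, and the lack of causal masking are assumptions.
import torch
import torch.nn.functional as F

def pool(x, block_size):
    # Average-pool the sequence axis: (B, H, T, D) -> (B, H, T // block_size, D).
    b, h, t, d = x.shape
    return x.view(b, h, t // block_size, block_size, d).mean(dim=3)

def hierarchical_attention(q, k, v, block_size=16, top_k=2):
    b, h, t, d = q.shape
    n = t // block_size  # blocks per pyramid level

    # 1) Pool symmetrically: Q is pooled as well as K (unlike methods that
    #    pool only keys/values); only pooled Q and K are needed for scoring.
    qc, kc = pool(q, block_size), pool(k, block_size)

    # 2) Selection happens OUTSIDE the attention kernel: coarse scores pick
    #    which key/value blocks each query block will attend to.
    scores = qc @ kc.transpose(-2, -1) / d ** 0.5           # (B, H, N, N)
    sel = scores.topk(min(top_k, n), dim=-1).indices        # (B, H, N, K)

    # 3) Gather the selected K/V blocks into a short dense context per
    #    query block, then call an optimized dense kernel (PyTorch SDPA).
    kb = k.view(b, h, 1, n, block_size, d).expand(-1, -1, n, -1, -1, -1)
    vb = v.view(b, h, 1, n, block_size, d).expand(-1, -1, n, -1, -1, -1)
    idx = sel[..., None, None].expand(-1, -1, -1, -1, block_size, d)
    k_sel = torch.gather(kb, 3, idx).flatten(3, 4)          # (B, H, N, K*bs, D)
    v_sel = torch.gather(vb, 3, idx).flatten(3, 4)
    qb = q.view(b, h, n, block_size, d)
    out = F.scaled_dot_product_attention(qb, k_sel, v_sel)  # dense kernel
    return out.reshape(b, h, t, d)

q = k = v = torch.randn(1, 2, 64, 32)
print(hierarchical_attention(q, k, v).shape)  # torch.Size([1, 2, 64, 32])

Because selection is ordinary tensor arithmetic outside the kernel, the inner call can be any optimized dense attention (e.g., FlashAttention via SDPA), which is the efficiency argument the summary makes.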

IMPACT Accelerates LLM pretraining for long contexts, potentially enabling more efficient development of advanced models.

RANK_REASON The cluster describes a new research paper proposing a novel method for improving LLM training efficiency.

COVERAGE [2]

  1. MarkTechPost TIER_1 · Asif Razzaq

    Nous Research Proposes Lighthouse Attention: A Training-Only Selection-Based Hierarchical Attention That Delivers 1.4–1.7× Pretraining Speedup at Long Context

    Nous Research has published Lighthouse Attention, a selection-based hierarchical attention mechanism that wraps around standard scaled dot-product attention during pretraining and is removed afterward. Unlike prior methods such as NSA and HISA that pool only keys and values, L…

  2. Mastodon — sigmoid.social TIER_1 · [email protected]

    Nous Research has introduced Lighthouse Attention, a selection-based hierarchical mechanism for long-context LLM pretraining that pools queries, keys and values

    Nous Research has introduced Lighthouse Attention, a selection-based hierarchical mechanism for long-context LLM pretraining that pools queries, keys and values across a multi-resolution pyramid. The approach achieves 1.4-1.7x wall-clock speedup against standard FlashAttention. h…