PulseAugur

Mamba

PulseAugur coverage of Mamba: every cluster mentioning Mamba across labs, papers, and developer communities, ranked by signal.

Total · 30d: 63 (63 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 61 (61 over 90d)
SENTIMENT · 30D

3 days with sentiment data

RECENT · PAGE 2/3 · 41 TOTAL
  1. TOOL · CL_15733 ·

    FractalMamba++ scales vision models across resolutions using Hilbert curves

    Researchers have introduced FractalMamba++, an enhanced vision backbone designed to improve the performance of Mamba-based models, particularly with high-resolution inputs. This new architecture leverages the geometric …
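
The Hilbert-curve idea can be sketched independently of the paper: a space-filling curve gives a 1-D ordering of image patches that keeps spatially adjacent patches close together in the scan, which is what a Mamba-style sequential pass over patches benefits from. A minimal sketch of the standard Hilbert index computation (function names are illustrative, not from FractalMamba++):

```python
def hilbert_index(n, x, y):
    """Map grid coordinates (x, y) to a 1-D Hilbert-curve index; n is the
    grid side length and must be a power of two."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:  # rotate/flip the quadrant so the curve stays continuous
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def hilbert_scan_order(n):
    """Row-major patch indices reordered along the Hilbert curve."""
    coords = [(x, y) for y in range(n) for x in range(n)]
    coords.sort(key=lambda p: hilbert_index(n, *p))
    return [y * n + x for x, y in coords]
```

Because the curve is defined recursively, the same ordering rule applies at any power-of-two resolution, which is the property a resolution-scalable backbone would exploit.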

  2. TOOL · CL_15792 ·

    COREY scheduler optimizes Mamba SSMs but static tuning remains faster

    Researchers have developed COREY, a new runtime scheduler designed to optimize the performance of Mamba selective state space models (SSMs). COREY maps activation entropy to chunk sizes, aiming to improve the efficiency…
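
The entropy-to-chunk-size mapping the summary describes can be illustrated in miniature. Everything below (bin count, candidate sizes, the 4-bit ceiling) is invented for the sketch and not taken from COREY:

```python
import math

def activation_entropy(acts, bins=16):
    """Shannon entropy (bits) of a histogram over activation values."""
    lo, hi = min(acts), max(acts)
    width = (hi - lo) / bins or 1.0  # degenerate case: all values equal
    counts = [0] * bins
    for a in acts:
        counts[min(int((a - lo) / width), bins - 1)] += 1
    total = len(acts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def chunk_size(entropy, sizes=(32, 64, 128, 256), max_bits=4.0):
    """Map higher entropy to smaller chunks (finer-grained scheduling);
    the thresholds here are a guess, purely for illustration."""
    frac = min(entropy / max_bits, 1.0)
    idx = min(int(frac * len(sizes)), len(sizes) - 1)
    return sizes[len(sizes) - 1 - idx]
```

The headline's caveat is worth keeping in mind: if the entropy computation itself costs more than the scheduling saves, a statically tuned chunk size can win.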

  3. TOOL · CL_16032 ·

    Rhamba framework integrates attention and Mamba for fMRI self-supervised learning

    Researchers have developed Rhamba, a novel framework for self-supervised learning on resting-state fMRI data. This framework combines region-aware masking with hybrid Attention-Mamba architectures to improve the analysi…
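
Region-aware masking, in the generic sense, hides whole anatomical regions rather than scattered ROIs, so the pretext task cannot be solved by local interpolation. A toy sketch (the region labels and masking fraction are made up, not Rhamba's):

```python
import random

def region_masks(n_rois, region_of, mask_frac=0.3, seed=0):
    """Mask entire regions (groups of ROIs) at once, forcing reconstruction
    from anatomical context; region_of[i] is the region label of ROI i."""
    rng = random.Random(seed)
    regions = sorted(set(region_of))
    k = max(1, round(mask_frac * len(regions)))
    masked = set(rng.sample(regions, k))
    return [region_of[i] in masked for i in range(n_rois)]
```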

  4. RESEARCH · CL_16254 ·

    MedMamba and MambaSL advance time series classification with state space models

    Researchers have developed MedMamba, a novel architecture based on the Mamba state space model, specifically designed for classifying medical time series data like ECGs and EEGs. This approach addresses limitations of t…

  5. RESEARCH · CL_14356 ·

    New AI models tackle image and video restoration with advanced techniques

    Researchers have developed several new methods for image and video restoration tasks. One approach, Continuous Expert Assembly (CEA), uses a dynamic parameterization framework to adapt to diverse local degradation patte…

  6. RESEARCH · CL_13315 ·

    Group theory reveals limited options for language model positional encodings

    A machine learning researcher at Jane Street has explored the mathematical structure of positional encodings used in attention mechanisms. By formalizing desirable properties of these encodings, the research reveals tha…
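
One concrete instance of the structure in question: rotary encodings represent positions as rotations, and because rotations compose as a group, the score between a rotated query and key depends only on the positional offset. A one-dimensional complex-number sketch (not the paper's formalism):

```python
import cmath

def rope(z, pos, theta=0.1):
    """Rotary-style encoding: multiply by a unit complex number (a rotation)."""
    return z * cmath.exp(1j * theta * pos)

def score(q, k, m, n, theta=0.1):
    """Attention-style score between query at position m and key at position n.
    Equals Re(q * conj(k) * e^{i*theta*(m-n)}), so only m - n matters."""
    return (rope(q, m, theta) * rope(k, n, theta).conjugate()).real
```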

  7. RESEARCH · CL_14140 ·

    Lost in State Space: Probing Frozen Mamba Representations

    A new research paper investigates the internal workings of Mamba, a recurrent neural network architecture. The study tested the hypothesis that Mamba's state could directly yield semantic sentence summaries without addi…

  8. RESEARCH · CL_11378 ·

    New MSR framework improves CT-MRI cervical spine registration with hybrid modeling

    Researchers have developed a new framework called MSR for rigid-deformable hybrid modeling in CT-MRI registration of the cervical spine. This approach combines rigid alignment of individual vertebrae with deformable mod…

  9. RESEARCH · CL_16114 ·

    Deep learning models show promise in pavement, aero-engine, and affect recognition tasks

    Researchers are exploring deep learning models for predictive maintenance and performance analysis across various domains. One study utilizes CNN and LSTM networks with extensive pavement condition data from Texas to mo…

  10. RESEARCH · CL_10163 ·

    COMMA network enhances 3D dispersed vessel segmentation with coordinate awareness

    Researchers have developed a new network architecture called COMMA for segmenting 3D dispersed blood vessels in medical imaging. This Coordinate-aware Modulated Mamba Network utilizes both global and local branches to m…

  11. RESEARCH · CL_08676 ·

    Mamba backbone powers new efficient neural combinatorial optimization framework

    Researchers have developed ECO, an efficient framework for Neural Combinatorial Optimization that utilizes a Mamba backbone. This approach separates trajectory generation from gradient updates, employing a supervised wa…

  12. FRONTIER RELEASE · CL_07710 ·

    NVIDIA launches Nemotron 3 Nano Omni, unifying multimodal AI for efficiency

    NVIDIA has released Nemotron 3 Nano Omni, an open multimodal model capable of processing text, images, audio, and video. This model aims to unify these modalities into a single architecture, improving efficiency and ena…

  13. RESEARCH · CL_06939 ·

    AdaMamba framework integrates adaptive frequency analysis for improved time series forecasting

    Researchers have introduced AdaMamba, a new framework designed for long-term time series forecasting. This model addresses the challenge of cross-domain heterogeneity in real-world data by adaptively integrating frequen…
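
What a frequency-adaptive branch might condition on can be shown with a bare DFT: find the dominant non-DC frequency of a series and let downstream components adapt to it. Nothing here is from the AdaMamba paper; it is a generic illustration:

```python
import cmath

def dominant_frequency(xs):
    """Index of the strongest non-DC DFT bin of a real-valued series; a
    stand-in for the kind of spectral feature an adaptive branch could use."""
    n = len(xs)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2 + 1):
        coeff = sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(xs))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k
```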

  14. RESEARCH · CL_06932 ·

    New Mamba model variant enhances memory retention and bilinear computation

    Researchers have introduced Bilinear Input Modulation (BIM) to enhance Selective State Space Models (SSMs), specifically Mamba, by incorporating state-input products. This augmentation allows for improved memory retenti…
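
The state-input product can be seen in a scalar toy model: a plain diagonal recurrence updates the state linearly in the input, while a bilinear term lets the current input modulate how much of the previous state is carried forward. The coefficients below are arbitrary, chosen only to make the sketch concrete:

```python
def ssm_scan(xs, a, b):
    """Plain diagonal SSM recurrence: h_t = a*h_{t-1} + b*x_t."""
    h, ys = 0.0, []
    for x in xs:
        h = a * h + b * x
        ys.append(h)
    return ys

def bim_scan(xs, a, b, c):
    """Bilinear augmentation (sketch): adds a state-input product c*h_{t-1}*x_t,
    so the input gates the retention of past state."""
    h, ys = 0.0, []
    for x in xs:
        h = a * h + b * x + c * h * x
        ys.append(h)
    return ys
```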

  15. RESEARCH · CL_06871 ·

    Sequence models predict heart failure patient instability and mortality

    Researchers have developed sequence models to predict one-year clinical instability and mortality in heart failure patients using electronic health records. The study, conducted on a Swedish cohort of over 42,000 patien…

  16. RESEARCH · CL_05160 ·

    MambaCSP model offers hardware-efficient CSI prediction with hybrid attention

    Researchers have developed MambaCSP, a new AI model designed for efficient channel state prediction in wireless networks. This model utilizes a hybrid-attention state space architecture, combining the linear-time effici…

  17. RESEARCH · CL_02901 ·

    New AI models enhance image and video super-resolution with diffusion and efficient architectures

    Researchers are developing new methods for image and video super-resolution using advanced AI techniques. Several papers explore diffusion models for joint spatiotemporal super-resolution, enabling adaptation across dif…

  18. RESEARCH · CL_03019 ·

    Memristor-based AI systems show promise for efficient learning and neuromorphic computing

    Researchers are exploring Self-Organising Memristive Networks (SOMNs) as a physical alternative to conventional hardware for artificial intelligence, aiming for energy-efficient, brain-like continual learning. These net…

  19. RESEARCH · CL_01130 ·

    Apple enables parallel RNN training, challenging transformer dominance

    Apple researchers have developed ParaRNN, a new framework that enables parallel training of nonlinear Recurrent Neural Networks (RNNs). This advancement overcomes the historical sequential bottleneck in RNN training, ac…
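
For the special case of a linear recurrence the parallelization trick is well known: h_t = a_t*h_{t-1} + b_t is an affine map, affine maps compose associatively, so all prefixes can be computed with a log-depth scan. ParaRNN's contribution is extending this to nonlinear cells, which this sketch does not attempt:

```python
def combine(f, g):
    """Compose affine maps: applying f then g to h gives (a1*a2)*h + (a2*b1 + b2)."""
    a1, b1 = f
    a2, b2 = g
    return (a1 * a2, a2 * b1 + b2)

def prefix_scan(elems):
    """Inclusive Kogge-Stone scan over (a, b) pairs: log2(n) sequential steps,
    each of which a parallel framework would run concurrently."""
    out = list(elems)
    n = len(out)
    step = 1
    while step < n:
        prev = list(out)
        for i in range(step, n):
            out[i] = combine(prev[i - step], prev[i])
        step *= 2
    return out
```

With h_0 = 0, the second component of each scanned pair is exactly the hidden state a sequential loop would produce, as the test below checks.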

  20. RESEARCH · CL_01131 ·

    Apple researchers unveil parallel RNN training and enhanced SSMs at ICLR 2026

    Apple researchers are presenting new work at ICLR 2026, focusing on advancements in recurrent neural networks (RNNs) and state space models (SSMs). Their paper "ParaRNN" introduces a parallelized training framework that…