PulseAugur
LIVE 03:41:14
ENTITY LRU

LRU

PulseAugur coverage of LRU — every cluster mentioning LRU across labs, papers, and developer communities, ranked by signal.

Total · 30d
3
3 over 90d
Releases · 30d
0
0 over 90d
Papers · 30d
3
3 over 90d
TIER MIX · 90D
RECENT · PAGE 1/1 · 3 TOTAL
  1. TOOL · CL_20119 ·

    Apple researchers unveil SpecMD for faster MoE model inference

    Apple's machine learning research team has published a paper detailing SpecMD, a new framework for evaluating Mixture-of-Experts (MoE) model caching policies. Their experiments show that traditional caching assumptions …
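The paper's framework is not detailed in this summary; as a general illustration of how caching policies are evaluated, a minimal sketch is a trace simulator that replays an access sequence against a policy and reports its hit rate (the function name and trace below are hypothetical, not from the paper):

```python
from collections import OrderedDict

def lru_hit_rate(trace, capacity):
    """Replay an access trace against a plain LRU cache and return the hit rate."""
    cache = OrderedDict()  # keys in least- to most-recently-used order
    hits = 0
    for key in trace:
        if key in cache:
            hits += 1
            cache.move_to_end(key)        # refresh recency on a hit
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict the least-recently-used key
            cache[key] = True
    return hits / len(trace)

# e.g. a toy "expert access" trace: 2 hits out of 6 accesses at capacity 2
print(lru_hit_rate([1, 2, 1, 3, 1, 2], capacity=2))
```

Comparing such hit rates across traces is the usual way to test whether a caching assumption holds for a given workload.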

  2. RESEARCH · CL_05173 ·

New ML-based GPU caching algorithm LALRU boosts LLM inference speed

    Researchers have developed a new GPU caching algorithm called Learning-Augmented LRU (LALRU) designed to improve efficiency during AI inference. This algorithm integrates learned predictions with caching policies to ens…
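The summary does not specify LALRU's actual mechanism; one common way a learning-augmented LRU can work is to keep LRU ordering but, on eviction, consult a learned reuse predictor over the least-recently-used candidates. The class, predictor interface, and candidate window below are illustrative assumptions, not the published algorithm:

```python
from collections import OrderedDict

class LearningAugmentedLRU:
    """LRU cache whose eviction choice is steered by a learned reuse predictor
    (hypothetical sketch; predict_reuse returns an estimated time-to-next-use)."""

    def __init__(self, capacity, predict_reuse, window=3):
        self.capacity = capacity
        self.cache = OrderedDict()        # least- to most-recently-used
        self.predict = predict_reuse
        self.window = window              # how many LRU entries to consider

    def access(self, key):
        """Touch `key`; return True on a hit, False on a miss."""
        if key in self.cache:
            self.cache.move_to_end(key)
            return True
        if len(self.cache) >= self.capacity:
            # Among the `window` least-recently-used entries, evict the one
            # the predictor expects to be reused farthest in the future.
            candidates = list(self.cache)[: self.window]
            victim = max(candidates, key=self.predict)
            del self.cache[victim]
        self.cache[key] = True
        return False
```

Restricting the predictor to a small LRU-ordered candidate window is a standard way to bound the damage from bad predictions while still capturing their benefit.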

  3. RESEARCH · CL_03019 ·

    Memristor-based AI systems show promise for efficient learning and neuromorphic computing

    Researchers are exploring Self-Organising Memristive Networks (SOMNs) as a physical alternative to conventional hardware for artificial intelligence, aiming for energy-efficient, brain-like continual learning. These net…