PulseAugur

CIFAR-10

PulseAugur coverage of CIFAR-10 — every cluster mentioning CIFAR-10 across labs, papers, and developer communities, ranked by signal.

Total · 30d: 63 (63 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 63 (63 over 90d)
[Panels: TIER MIX · 90D · RELATIONSHIPS · SENTIMENT · 30D (4 days with sentiment data)]

RECENT · PAGE 1/3 · 54 TOTAL
  1. TOOL · CL_27615 ·

    New OUIDecay method adapts CNN regularization layer-by-layer

    Researchers have introduced OUIDecay, a novel adaptive weight decay method for convolutional neural networks. This technique dynamically adjusts regularization strength for each layer based on online activation patterns…
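
A minimal sketch of what layer-wise adaptive weight decay driven by activation statistics can look like; the actual OUIDecay rule is not spelled out in this excerpt, so the choice of statistic and the scaling formula below are assumptions:

```python
# Hypothetical illustration of layer-wise adaptive weight decay. This is not the
# published OUIDecay rule; the activation statistic and scaling are assumptions.
def adaptive_decay_step(param_groups, base_decay, layer_stats, lr):
    """Apply decoupled (AdamW-style) weight decay whose strength varies per layer.

    layer_stats maps a parameter-group name to a scalar in [0, 1] summarizing that
    layer's recent activations (for example, the fraction of strongly active units).
    """
    for group in param_groups:
        stat = layer_stats.get(group["name"], 0.5)
        decay = base_decay * (1.0 + stat)          # stronger decay for "busier" layers
        for p in group["params"]:
            if p.requires_grad:
                p.data.mul_(1.0 - lr * decay)      # multiplicative shrink toward zero
```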

  2. TOOL · CL_27734 ·

    Muon optimizer fails on convex Lipschitz functions, study finds

    A new paper challenges the theoretical underpinnings of the Muon optimization algorithm, demonstrating that it does not converge on convex Lipschitz functions. The research suggests that Muon's practical success likely …

  3. RESEARCH · CL_25801 ·

    New framework corrects target shift in online learning systems

    Researchers have developed a new framework to analyze and improve online learning systems that encounter distributional shifts. Their work, focusing on kernel regression, reveals that online learning effectively uses sh…

  4. TOOL · CL_25579 ·

    OrScale optimization method improves neural network training

    Researchers have introduced OrScale, a novel optimization technique designed to enhance neural network training. OrScale builds upon the Muon method by incorporating layer-wise trust-ratio scaling, which measures the Fr…
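
The summary is cut off mid-sentence, but layer-wise trust-ratio scaling usually means a LARS/LAMB-style rescaling of each layer's update by the ratio of the weight's norm to the update's norm. A generic sketch of that idea, assuming Frobenius norms and not necessarily OrScale's exact rule:

```python
import torch

def trust_ratio_scale(weight, update, eps=1e-8, max_ratio=10.0):
    """LARS/LAMB-style layer-wise trust ratio: rescale a layer's proposed update so
    its Frobenius norm stays proportional to the weight's Frobenius norm.
    Generic illustration only; not necessarily the published OrScale rule."""
    w_norm = torch.linalg.norm(weight)     # Frobenius norm of the layer's weights
    u_norm = torch.linalg.norm(update)     # Frobenius norm of the proposed update
    ratio = (w_norm / (u_norm + eps)).clamp(max=max_ratio)
    return ratio * update

# usage (assumed names): weight -= lr * trust_ratio_scale(weight, muon_update)
```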

  5. TOOL · CL_25770 ·

    Optical networks achieve superior image denoising via pre-training

    Researchers have developed a novel pre-training method for all-optical image denoising using diffractive networks. This approach involves an initial training phase with a large dataset of 3.45 million images, followed b…

  6. TOOL · CL_25771 ·

    Spectral Surgery method rebalances deep network accuracy post-hoc

    Researchers have developed a new post-hoc optimization method called Spectral Surgery to improve deep network classification performance. This technique directly perturbs model weights along specific "spike eigenvectors…
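
The exact procedure is truncated above; as a loose illustration of post-hoc perturbation along spectral "spike" directions (the published Spectral Surgery method may differ substantially), one could rescale the top singular modes of a layer's weight matrix:

```python
import torch

def perturb_along_spikes(weight, k=1, alpha=0.05):
    """Illustrative post-hoc edit (not the published Spectral Surgery procedure):
    rescale a weight matrix along its top-k singular directions, the 'spike' modes
    of its spectrum, while leaving all other directions untouched."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    S_new = S.clone()
    S_new[:k] = S[:k] * (1.0 + alpha)      # alpha > 0 amplifies, alpha < 0 damps the spikes
    return U @ torch.diag(S_new) @ Vh

# k and alpha would typically be tuned post-training on a held-out validation split
```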

  7. TOOL · CL_25620 ·

    New STMD method speeds diffusion model inference without teacher

    Researchers have developed Stochastic Transition-Map Distillation (STMD), a novel framework designed to accelerate the inference process for diffusion models without requiring a pre-trained teacher model. This method di…

  8. TOOL · CL_25657 ·

    New SWAP-Score metric evaluates neural networks without training

    Researchers have introduced SWAP-Score, a novel zero-shot metric designed to evaluate neural networks without requiring training. This method measures a network's expressivity using sample-wise activation patterns and d…
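
In the spirit of that description, a training-free expressivity proxy can be computed by counting distinct sample-wise ReLU activation patterns over a single minibatch. The sketch below follows that generic recipe; any layer-selection or regularization details of the actual SWAP-Score are omitted:

```python
import torch
import torch.nn as nn

def swap_like_score(model, batch):
    """Training-free expressivity proxy in the spirit of SWAP-Score: count the distinct
    per-sample binary ReLU activation patterns produced by one minibatch."""
    patterns, hooks = [], []

    def record(_module, _inputs, output):
        patterns.append((output > 0).flatten(1))         # per-sample on/off pattern

    for m in model.modules():
        if isinstance(m, nn.ReLU):
            hooks.append(m.register_forward_hook(record))
    with torch.no_grad():
        model(batch)
    for h in hooks:
        h.remove()

    if not patterns:
        return 0
    full = torch.cat(patterns, dim=1)                     # one concatenated pattern per sample
    return len({tuple(row.tolist()) for row in full})     # number of unique patterns in the batch
```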

  9. RESEARCH · CL_22009 ·

    GONO optimizer adapts Adam's momentum using directional consistency for better convergence

    Researchers have introduced the GONO framework, an optimization approach built around a signal designed to improve deep learning training by addressing the decoupling of directional alignment and loss convergence. Unlike existing optimizers th…
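
The mechanism behind "directional consistency" is only hinted at here. One common way to couple momentum to gradient direction is to shrink the momentum coefficient when the incoming gradient disagrees with the momentum buffer; the sketch below shows that generic idea and should not be read as the GONO update itself:

```python
import torch
import torch.nn.functional as F

def directional_momentum_update(momentum, grad, beta=0.9, eps=1e-8):
    """Hypothetical sketch of momentum adapted by directional consistency (not the
    published GONO rule): the effective beta shrinks toward zero whenever the new
    gradient points away from the accumulated momentum direction."""
    cos = F.cosine_similarity(momentum.flatten(), grad.flatten(), dim=0, eps=eps)
    beta_eff = beta * (0.5 * (1.0 + cos)).clamp(0.0, 1.0)   # maps cos in [-1, 1] to [0, beta]
    return beta_eff * momentum + (1.0 - beta_eff) * grad
```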

  10. RESEARCH · CL_22003 ·

    New research details efficient data reconstruction techniques for neural networks

    Researchers have developed new techniques for data reconstruction attacks on neural networks, aiming to recover sensitive training data. Their unified optimization formulation, based on initial and trained parameter val…

  11. RESEARCH · CL_21794 ·

    New parameter E predicts Mixture-of-Experts model health, preventing dead experts

    Researchers have introduced a new dimensionless control parameter, E = T*H/(O+B), to predict the health of expert ecologies in Mixture-of-Experts (MoE) models. This parameter, derived from four hyperparameters, can prev…

  12. TOOL · CL_20375 ·

    New MetaAdamW optimizer uses self-attention for adaptive learning rates

    Researchers have developed MetaAdamW, a novel optimizer that enhances adaptive learning rates and weight decay by employing a self-attention mechanism. This Transformer-based approach dynamically adjusts hyperparameters…

  13. TOOL · CL_20379 ·

    Lookahead Drifting Model improves image generation with sequential drifting terms

    Researchers have introduced a novel 'lookahead drifting model' for distribution mapping, building upon the existing 'drifting model' paradigm. This new approach computes a sequence of drifting terms at each training ite…

  14. RESEARCH · CL_20296 ·

    LLMs accelerate neural architecture search with novel delta-based code generation

    Researchers are exploring novel methods for Neural Architecture Search (NAS) using Large Language Models (LLMs). One approach, SPARK, aims to improve LLM knowledge integration by explicitly selecting functional factors …

  15. RESEARCH · CL_18735 ·

    AI research tackles layer free-riding and enhances data privacy for models

    Researchers have identified a phenomenon in Forward-Forward networks called layer free-riding, where later layers can inherit tasks already partially handled by earlier layers, leading to gradient decay. Three loca…

  16. RESEARCH · CL_18836 ·

    Researchers accelerate discrete autoregressive models with Wasserstein flow and Jacobi decoding

    Researchers have developed a new method to accelerate the inference of discrete autoregressive normalizing flows, a type of generative model. The proposed technique, Selective Jacobi Decoding, allows for parallel iterat…
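
For context, plain Jacobi decoding treats autoregressive generation as a fixed-point problem: guess every position, then repeatedly recompute all positions in parallel from the current draft until nothing changes. The selective variant described above presumably refines which positions get recomputed; the sketch below shows only the plain baseline, with `next_token` as an assumed greedy-decoding interface:

```python
def jacobi_decode(next_token, prompt, length, max_iters=50):
    """Plain (non-selective) Jacobi decoding baseline. Guess all `length` tokens,
    then repeatedly recompute every position from the current draft (conceptually
    in parallel) until the draft stops changing. `next_token(seq, i)`, returning
    the greedy token for position i given seq[:i], is an assumed interface."""
    draft = [0] * length                                  # arbitrary initial guess
    for _ in range(max_iters):
        seq = list(prompt) + draft
        new = [next_token(seq, len(prompt) + i) for i in range(length)]
        if new == draft:                                  # fixed point reached; matches
            break                                         # left-to-right sequential decoding
        draft = new
    return draft
```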

  17. RESEARCH · CL_18341 ·

    GEM-FI: Gated Evidential Mixtures with Fisher Modulation

    Researchers have introduced GEM-FI, a novel family of models designed to improve uncertainty estimation in deep learning. This approach addresses limitations of existing Evidential Deep Learning methods, which can be ov…

  18. RESEARCH · CL_18343 ·

    Researchers develop Evolutionary Dynamic Loss for distribution-free pretraining

    Researchers have developed a new framework called Evolutionary Dynamic Loss (EDL) for pretraining classification losses. EDL learns a transferable loss function using synthetic data, avoiding the need for real samples d…

  19. TOOL · CL_26961 ·

    New AI framework learns classification losses without real data

    Researchers have developed a new framework called Evolutionary Dynamic Loss (EDL) for pretraining classification losses without using real data. EDL learns a transferable loss function by generating synthetic prediction…

  20. TOOL · CL_15651 ·

    Researchers develop DUNE, a dual-branch method to create robust unlearnable examples for AI models

    Researchers have developed DUNE, a novel dual-branch approach to create robust unlearnable examples for AI model training. This method optimizes perturbations in both spatial and color domains to degrade model generaliz…