PulseAugur

Tiny-ImageNet

PulseAugur coverage of Tiny-ImageNet — every cluster mentioning Tiny-ImageNet across labs, papers, and developer communities, ranked by signal.

Total · 30d: 5 (5 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 5 (5 over 90d)
TIER MIX · 90D
RECENT · 6 TOTAL
  1. TOOL · CL_21937 ·

    New AS-LoRA method improves privacy in federated learning

    Researchers have developed AS-LoRA, a novel framework for adaptive selection of LoRA components in privacy-preserving federated learning. This method addresses aggregation errors common in such setups by allowing each l…
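The aggregation errors mentioned above are a known pitfall when servers combine per-client LoRA factors: averaging the A and B matrices separately is not the same as averaging the low-rank updates the clients actually applied. A minimal numpy sketch of that discrepancy (the dimensions and client factors here are toy values, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, k = 8, 2, 3  # hidden dim, LoRA rank, number of clients

# Hypothetical per-client LoRA factors; each client's weight update is B_i @ A_i.
As = [rng.normal(size=(r, d)) for _ in range(k)]
Bs = [rng.normal(size=(d, r)) for _ in range(k)]

# Exact average of the low-rank updates the clients actually applied.
true_avg = sum(B @ A for B, A in zip(Bs, As)) / k

# Naive server-side aggregation: average A and B separately, then multiply.
naive_avg = (sum(Bs) / k) @ (sum(As) / k)

# The two disagree: the product of averages is not the average of products.
err = np.linalg.norm(true_avg - naive_avg)
print(f"aggregation error: {err:.4f}")
```

AS-LoRA's adaptive component selection presumably targets exactly this kind of mismatch; the sketch only demonstrates why naive averaging is lossy.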

  2. TOOL · CL_20416 ·

    New Covariance-Aware Goodness method boosts Forward-Forward learning performance

    Researchers have developed a new method called Covariance-Aware Goodness (BiCovG) to improve the performance of the Forward-Forward (FF) learning algorithm, particularly in convolutional neural networks. This approach a…
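For context, Hinton's original Forward-Forward algorithm trains each layer locally by pushing a scalar "goodness" of positive samples above a threshold and of negative samples below it. The baseline goodness is the sum of squared activations; BiCovG's covariance-aware replacement is in the paper and is not reproduced here. A minimal sketch of the plain FF per-layer loss:

```python
import numpy as np

def goodness(h):
    # Baseline Forward-Forward goodness: sum of squared activations.
    # BiCovG swaps this for a covariance-aware variant (see the paper).
    return (h ** 2).sum(axis=-1)

def ff_layer_loss(h_pos, h_neg, theta=2.0):
    # Local per-layer objective: positive samples should have goodness
    # above theta, negative samples below it (logistic formulation).
    p_pos = 1.0 / (1.0 + np.exp(-(goodness(h_pos) - theta)))
    p_neg = 1.0 / (1.0 + np.exp(-(goodness(h_neg) - theta)))
    return -(np.log(p_pos + 1e-9) + np.log(1.0 - p_neg + 1e-9)).mean()

rng = np.random.default_rng(0)
h_pos = rng.normal(1.0, 0.5, size=(4, 16))  # toy "real" activations
h_neg = rng.normal(0.0, 0.1, size=(4, 16))  # toy "negative" activations
print(ff_layer_loss(h_pos, h_neg))
```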

  3. RESEARCH · CL_18735 ·

    AI research tackles layer free-riding and enhances data privacy for models

    Researchers have identified a phenomenon in Forward-Forward networks called layer free-riding, where later layers can inherit tasks already partially handled by earlier layers, leading to a decay in gradient. Three loca…
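The gradient decay described above can be illustrated with the standard FF logistic loss (the paper's exact local losses may differ): once an earlier layer already pushes goodness well past the threshold, the local learning signal reaching a later layer shrinks toward zero.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pos_loss_grad(g, theta=2.0):
    # Gradient of the per-sample FF positive loss -log(sigmoid(g - theta))
    # with respect to the goodness g; it vanishes once g >> theta.
    return -(1.0 - sigmoid(g - theta))

# If an earlier layer has already solved the task (large goodness),
# the local gradient left for a later "free-riding" layer decays:
for g in (2.0, 5.0, 10.0):
    print(g, pos_loss_grad(g))
```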

  4. RESEARCH · CL_21948 ·

    New AI unlearning methods balance data removal with model utility

    Researchers have developed new methods for machine unlearning, a process that removes specific data from AI models without full retraining. One approach, SHRED, uses self-distillation and logit demotion to identify and …
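The general idea behind demotion-style distillation targets can be sketched as follows. This is a hypothetical illustration, not SHRED's actual rule: the forget class's logit is pushed a fixed margin below the runner-up, and the demoted logits could then serve as a self-distillation target.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def demote_logits(logits, forget_class, margin=5.0):
    # Hypothetical "logit demotion": push the forget class's logit
    # below the runner-up by a fixed margin; other logits untouched.
    # SHRED's exact formulation is in the paper.
    out = logits.copy()
    others = np.delete(logits, forget_class, axis=-1)
    out[..., forget_class] = others.max(axis=-1) - margin
    return out

logits = np.array([3.0, 1.0, 0.5])  # class 0 is currently predicted
target = demote_logits(logits, forget_class=0)
print(softmax(target).argmax())     # the demoted class no longer wins
```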

  5. RESEARCH · CL_08682 ·

    JEPAMatch paper introduces geometric shaping for semi-supervised learning

    Researchers have introduced JEPAMatch, a novel approach to semi-supervised learning that aims to improve model performance when labeled data is scarce. This method moves beyond traditional confidence-based pseudo-labeli…
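The confidence-based pseudo-labeling that JEPAMatch is said to move beyond works as follows (a FixMatch-style baseline, not the paper's method): keep a model prediction as a pseudo-label only when its maximum class probability clears a threshold.

```python
import numpy as np

def confidence_pseudo_labels(probs, tau=0.95):
    # Classic confidence-thresholded pseudo-labeling: a prediction
    # becomes a pseudo-label only if max probability >= tau.
    conf = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    mask = conf >= tau
    return labels[mask], mask

probs = np.array([[0.98, 0.01, 0.01],   # confident -> kept
                  [0.50, 0.30, 0.20]])  # uncertain -> discarded
labels, mask = confidence_pseudo_labels(probs)
print(labels, mask)
```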

  6. RESEARCH · CL_06359 ·

    New research tackles Fast Adversarial Training with dynamic guidance and a fair benchmark

    Researchers have developed a new strategy called Distribution-aware Dynamic Guidance (DDG) to improve the robustness of AI models trained using Fast Adversarial Training (FAT). DDG addresses issues like catastrophic ove…
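For context, Fast Adversarial Training gets its speed from a single-step FGSM perturbation, which is also what makes it prone to catastrophic overfitting. The sketch below shows only that baseline step; DDG's distribution-aware guidance is not reproduced here.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=8 / 255):
    # Single-step FGSM used in Fast Adversarial Training: step by eps
    # in the sign of the loss gradient, then clip to the valid range.
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

x = np.array([0.2, 0.5, 0.999])   # toy pixel values
g = np.array([1.0, -2.0, 3.0])    # toy loss gradients w.r.t. the input
print(fgsm_perturb(x, g))
```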