Tiny-ImageNet
PulseAugur coverage of Tiny-ImageNet — every cluster mentioning Tiny-ImageNet across labs, papers, and developer communities, ranked by signal.
-
New AS-LoRA method improves privacy in federated learning
Researchers have developed AS-LoRA, a novel framework for adaptive selection of LoRA components in privacy-preserving federated learning. This method addresses aggregation errors common in such setups by allowing each l…
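The aggregation errors mentioned above arise because naively averaging LoRA factors across clients is not the same as averaging their low-rank products. A minimal sketch of that baseline setup (plain FedAvg over LoRA factors; AS-LoRA's adaptive selection itself is not public, so this only illustrates the problem it targets — all names here are illustrative):

```python
import numpy as np

class LoRALinear:
    """Frozen base weight W plus a low-rank update B @ A (Hu et al., 2021).
    In a federated setup, only the small A and B factors travel to the server."""

    def __init__(self, d_in, d_out, rank, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_out, d_in))       # frozen, never aggregated
        self.A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
        self.B = np.zeros((d_out, rank))               # trainable up-projection, zero-init

    def forward(self, x):
        # Effective weight is W + B @ A.
        return x @ (self.W + self.B @ self.A).T

def fedavg_lora(clients):
    """Naive FedAvg over the LoRA factors. Note mean(B_i) @ mean(A_i) is
    generally NOT mean(B_i @ A_i) — one source of the aggregation error
    that adaptive-selection schemes like AS-LoRA aim to reduce."""
    A = np.mean([c.A for c in clients], axis=0)
    B = np.mean([c.B for c in clients], axis=0)
    for c in clients:
        c.A, c.B = A.copy(), B.copy()
```

After `fedavg_lora`, every client holds identical factors; the base weights `W` stay local and untouched.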
-
New Covariance-Aware Goodness method boosts Forward-Forward learning performance
Researchers have developed a new method called Covariance-Aware Goodness (BiCovG) to improve the performance of the Forward-Forward (FF) learning algorithm, particularly in convolutional neural networks. This approach a…
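For context, the standard Forward-Forward "goodness" of a layer is the mean squared activation, trained to be high for positive data and low for negative data. Below is that baseline plus a hypothetical covariance-aware variant; the actual BiCovG objective is not reproduced here, and the decorrelation penalty and `alpha` knob are assumptions for illustration only:

```python
import numpy as np

def ff_goodness(activations):
    """Standard FF goodness (Hinton, 2022): mean of squared activations
    per sample. `activations` has shape (n_samples, n_features)."""
    return np.mean(activations ** 2, axis=-1)

def covariance_aware_goodness(activations, alpha=0.1):
    """Hypothetical sketch: subtract a penalty on off-diagonal feature
    covariance, so goodness rewards strong *and* decorrelated activity.
    This is an assumed form, not the paper's BiCovG definition."""
    g = ff_goodness(activations)
    centered = activations - activations.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / max(len(activations) - 1, 1)
    off_diag = cov - np.diag(np.diag(cov))
    penalty = np.sum(off_diag ** 2) / cov.size
    return g - alpha * penalty
```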
-
AI research tackles layer free-riding and enhances data privacy for models
Researchers have identified a phenomenon in Forward-Forward networks called layer free-riding, in which later layers inherit tasks already partially handled by earlier layers, leading to decaying gradients. Three loca…
-
New AI unlearning methods balance data removal with model utility
Researchers have developed new methods for machine unlearning, a process that removes specific data from AI models without full retraining. One approach, SHRED, uses self-distillation and logit demotion to identify and …
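SHRED's full objective is not described in the snippet, but "logit demotion" can be sketched generically: suppress the model's output for the class (or sample) to be forgotten while leaving other logits intact. Everything below — the function name, the margin parameter, the exact rule — is a hypothetical illustration, not SHRED's method:

```python
import numpy as np

def demote_logits(logits, forget_class, margin=2.0):
    """Hypothetical logit demotion: push the forget class's logit to at
    least `margin` below the best remaining class, so the model no longer
    predicts it. Other logits are untouched."""
    out = logits.copy()
    others = np.delete(out, forget_class, axis=-1)
    target = others.max(axis=-1) - margin
    out[..., forget_class] = np.minimum(out[..., forget_class], target)
    return out
```

In a self-distillation setup, demoted logits like these would serve as teacher targets while the student is fine-tuned on retained data.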
-
JEPAMatch paper introduces geometric shaping for semi-supervised learning
Researchers have introduced JEPAMatch, a novel approach to semi-supervised learning that aims to improve model performance when labeled data is scarce. This method moves beyond traditional confidence-based pseudo-labeli…
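The confidence-based pseudo-labeling that JEPAMatch reportedly moves beyond works by keeping only unlabeled predictions whose max probability clears a threshold (as in FixMatch). A minimal sketch of that baseline, for contrast:

```python
import numpy as np

def confidence_pseudo_labels(probs, tau=0.95):
    """Confidence-thresholding baseline: `probs` has shape (n, n_classes).
    Returns pseudo-labels for samples whose top probability >= tau,
    plus the boolean mask of which samples were kept."""
    conf = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    mask = conf >= tau
    return labels[mask], mask
```

Geometric shaping approaches instead use the structure of the representation space, rather than raw confidence, to decide which unlabeled points to trust.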
-
New research tackles Fast Adversarial Training with dynamic guidance and a fair benchmark
Researchers have developed a new strategy called Distribution-aware Dynamic Guidance (DDG) to improve the robustness of AI models trained using Fast Adversarial Training (FAT). DDG addresses issues like catastrophic ove…
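Fast Adversarial Training builds on single-step FGSM perturbations, whose fixed-step aggressiveness is one driver of catastrophic overfitting. A sketch of that core step (DDG's per-example dynamic guidance is not shown; `eps` is the standard L-infinity budget):

```python
import numpy as np

def fgsm_perturb(x, grad, eps=8 / 255):
    """Single-step FGSM: move each input by eps in the sign of the loss
    gradient, then clip back to the valid pixel range [0, 1]. FAT methods
    like those guided by DDG would modulate this step per example."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)
```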