DP-SGD
PulseAugur coverage of DP-SGD — every cluster mentioning DP-SGD across labs, papers, and developer communities, ranked by signal.
3 days with sentiment data
-
KANs trained with DP-SGD analyzed for population risk bounds
Researchers have established the first population risk bounds for Kolmogorov-Arnold Networks (KANs) trained with differentially private mini-batch stochastic gradient descent (DP-SGD). This new analysis c…
-
New DP-LAC method enhances private federated LLM fine-tuning
Researchers have developed DP-LAC, a new method for differentially private federated fine-tuning of language models. This technique improves upon existing adaptive clipping methods by estimating an initial clipping thre…
-
New DP-SGD subsampling methods offer improved privacy-utility trade-offs
Two new research papers explore optimized subsampling techniques for Differentially Private Stochastic Gradient Descent (DP-SGD). The first paper, focusing on random shuffling, provides tight upper and lower bounds with…
-
Researchers reveal supply-chain attacks can steal secrets from local LLM fine-tuning
Researchers have developed a novel method to steal sensitive information from locally fine-tuned large language models by exploiting vulnerabilities in their supply chain code. This technique moves beyond passive weight…
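For readers new to the technique these clusters track: the DP-SGD primitive behind the clipping, subsampling, and privacy-accounting work above is a per-example gradient clip followed by calibrated Gaussian noise. A minimal sketch, using plain NumPy and illustrative parameter names (`clip_norm`, `noise_multiplier` are conventional but the values here are arbitrary):

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD update on a mini-batch:
    1. clip each per-example gradient to L2 norm <= clip_norm,
    2. sum the clipped gradients,
    3. add Gaussian noise with std noise_multiplier * clip_norm,
    4. average over the batch.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # scale down only gradients whose norm exceeds the bound
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

# toy batch of four per-example gradients in R^3
grads = [np.array([3.0, 4.0, 0.0]), np.array([0.1, 0.2, 0.3]),
         np.array([-2.0, 1.0, 2.0]), np.array([0.0, 0.0, 5.0])]
update = dp_sgd_step(grads, clip_norm=1.0)
```

The clipping bound is what the adaptive-clipping work (e.g. the DP-LAC item above) tunes, and the choice of how the batch is drawn (Poisson sampling vs. random shuffling) is what the subsampling papers analyze; this sketch shows only the update itself, not the privacy accounting.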