PulseAugur

Federated Learning advances balance privacy, utility, and fairness

Researchers are exploring advanced techniques to enhance privacy in Federated Learning (FL), a method where models train on decentralized data without sharing it. One study compares Differential Privacy (DP) and Homomorphic Encryption (HE) for cardiovascular disease risk modeling on Swedish healthcare data, finding HE comparable to centralized training but with higher computational overhead, while DP caused greater performance degradation for certain models. Another approach, FedPF, introduces a differentially private, fairness-aware FL algorithm that treats fairness and utility as competing objectives, demonstrating significant discrimination reduction with competitive accuracy and a low computational footprint. A third paper combines DP with adaptive quantization to improve communication efficiency and privacy in non-IID FL settings, showing substantial reductions in transmitted data on image datasets while maintaining accuracy and robust privacy guarantees.
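The ingredients in the third paper's recipe, per-client clipping, Gaussian noise for DP, and quantization of updates before transmission, can be sketched in a few lines. This is an illustrative toy, not the papers' actual algorithms: the clipping norm, noise multiplier, and spread-based bit-width heuristic are all assumptions for the sake of the example.

```python
import math
import random

def clip(update, max_norm):
    # L2-clip a client update to bound per-client sensitivity
    norm = math.sqrt(sum(u * u for u in update))
    scale = min(1.0, max_norm / (norm + 1e-12))
    return [u * scale for u in update]

def add_gaussian_noise(update, max_norm, noise_multiplier, rng):
    # Gaussian mechanism: noise scale tied to the clipping norm
    sigma = noise_multiplier * max_norm
    return [u + rng.gauss(0.0, sigma) for u in update]

def adaptive_bits(update, lo_bits=2, hi_bits=8, threshold=0.5):
    # Hypothetical heuristic: spend more bits when the update has large spread
    spread = max(update) - min(update)
    return hi_bits if spread > threshold else lo_bits

def quantize(update, bits):
    # Uniform quantization to 2**bits - 1 levels over the update's range
    lo, hi = min(update), max(update)
    levels = (1 << bits) - 1
    step = (hi - lo) / levels if hi > lo else 1.0
    return [lo + round((u - lo) / step) * step for u in update]

def fed_round(client_updates, max_norm=1.0, noise_multiplier=0.5, seed=0):
    # One simulated round: clip, privatize, and compress each client's
    # update, then average on the server
    rng = random.Random(seed)
    processed = []
    for upd in client_updates:
        upd = clip(upd, max_norm)
        upd = add_gaussian_noise(upd, max_norm, noise_multiplier, rng)
        upd = quantize(upd, adaptive_bits(upd))
        processed.append(upd)
    n = len(processed)
    return [sum(coords) / n for coords in zip(*processed)]
```

In a real system the quantized values would be packed into `bits`-wide integers before transmission (that is where the communication savings come from), and the privacy accounting across rounds would follow a DP composition analysis rather than a fixed noise multiplier.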

Summary written by gemini-2.5-flash-lite from 4 sources.

IMPACT Advances in privacy-preserving federated learning could enable more secure and efficient collaborative AI development in sensitive domains like healthcare and edge computing.

RANK_REASON Multiple arXiv papers detailing novel research in privacy-preserving federated learning techniques.


COVERAGE [4]

  1. arXiv cs.LG TIER_1 · Gaurang Sharma, Juha Pajula, Aada Illikainen, Markus Rautell, Noora Lipsonen, Petri Alhainen, Mika Hilvo ·

    Privacy-Preserving Federated Learning via Differential Privacy and Homomorphic Encryption for Cardiovascular Disease Risk Modeling

    arXiv:2604.27598v1 Announce Type: new Abstract: Protecting sensitive health data while enabling collaborative analysis is a central challenge in healthcare. Traditional machine learning (ML) requires institutions to pool anonymized patient records, centralizing analytical develop…

  2. arXiv cs.LG TIER_1 · Mika Hilvo ·

    Privacy-Preserving Federated Learning via Differential Privacy and Homomorphic Encryption for Cardiovascular Disease Risk Modeling

    Protecting sensitive health data while enabling collaborative analysis is a central challenge in healthcare. Traditional machine learning (ML) requires institutions to pool anonymized patient records, centralizing analytical development and privacy risks at a single site. Privacy…

  3. arXiv cs.AI TIER_1 · Kangkang Sun, Jun Wu, Minyi Guo, Jianhua Li, Jianwei Huang ·

    FedPF: Accurate Target Privacy Preserving Federated Learning Balancing Fairness and Utility

    arXiv:2510.26841v2 Announce Type: replace-cross Abstract: Federated Learning (FL) enables collaborative model training without data sharing, yet participants face a fundamental challenge, e.g., simultaneously ensuring fairness across demographic groups while protecting sensitive …

  4. arXiv cs.CV TIER_1 · Emre Ardıç, Yakup Genç ·

    Enhanced Privacy and Communication Efficiency in Non-IID Federated Learning with Adaptive Quantization and Differential Privacy

    arXiv:2604.23426v1 Announce Type: new Abstract: Federated learning (FL) is a distributed machine learning method where multiple devices collaboratively train a model under the management of a central server without sharing underlying data. One of the key challenges of FL is the c…