Secure Federated Learning: Challenges and Mitigations

Speaker:  Dipankar Dasgupta – Memphis, USA
Topic(s):  Artificial Intelligence, Machine Learning, Computer Vision, Natural Language Processing

Abstract

Federated Learning (FL) promises collaborative, decentralized intelligence without centralized data sharing, but its distributed design introduces serious security challenges. This talk discusses two fundamental threats undermining FL’s trustworthiness: client data poisoning, where adversaries inject corrupted local samples to destabilize global convergence, and client model poisoning, where untrusted clients manipulate or exfiltrate local models to degrade global learnability and performance. I will present our Secure Federated Learning (SFL) framework, which embeds confidentiality, integrity, and authenticity directly into the learning lifecycle, transforming FL into a trust-based AI paradigm.
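Both threats can be illustrated with a minimal numpy sketch (all names and numbers here are illustrative, not taken from the SFL work): label flipping corrupts a client's local training data, while a single boosted malicious update skews a naive averaging aggregator.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Client data poisoning: flip a fraction of local labels ---
y_local = (rng.normal(size=200) > 0).astype(int)
flip_idx = rng.choice(len(y_local), 40, replace=False)   # 20% of samples poisoned
y_poisoned = y_local.copy()
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

# --- Client model poisoning: one boosted update skews the aggregate ---
honest_updates = [np.array([1.0, 1.0]), np.array([1.1, 0.9])]
malicious_update = np.array([-50.0, -50.0])              # scaled to dominate
global_update = np.mean(honest_updates + [malicious_update], axis=0)
# A single malicious client drags the global model far from the honest mean.
```

With unweighted averaging, the aggregate lands near (-16, -16) instead of near (1, 1), which is why the talk argues filtering and isolation must happen before aggregation.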
 
Our SFL framework rests on two core innovations: (a) a globally trained Ensemble of Classifiers (EOCL), deployed at the active client edges, which proactively filters noisy samples (potentially injected by malicious edges or remote adversaries) out of the local private training data, preventing them from corrupting local model updates and, ultimately, safeguarding the integrity of the global aggregation; and (b) Secure Local Enclaves (MODELARMOR), located on active edges to isolate each client’s local model and EOCL from unauthorized access, thus defending against tampering and parameter manipulation. The security of communications between the server and active edges is assumed to be protected by state-of-the-art cryptographic protocols that ensure end-to-end model confidentiality. At the beginning of each global iteration, every active edge receives a regularly updated EOCL and a verified local replica of the global model, ensuring authenticity and traceable integrity throughout training.
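The EOCL-style pre-training filter described in (a) can be sketched as follows (the function names, agreement threshold, and toy threshold classifiers are illustrative assumptions, not the actual SFL implementation): each local sample is kept only if its label agrees with a qualified majority of the ensemble's votes, so flipped labels never reach local training.

```python
import numpy as np

def eocl_filter(X, y, ensemble, agreement=0.6):
    """Drop samples whose label disagrees with the ensemble majority.

    ensemble:  list of callables mapping X -> predicted labels.
    agreement: minimum fraction of ensemble votes that must match y.
    """
    votes = np.stack([clf(X) for clf in ensemble])   # shape (n_clf, n_samples)
    keep = (votes == y).mean(axis=0) >= agreement
    return X[keep], y[keep]

# Toy ensemble: three threshold classifiers standing in for the
# globally trained EOCL members distributed to each active edge.
ensemble = [lambda X, t=t: (X[:, 0] > t).astype(int) for t in (-0.05, 0.0, 0.05)]

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 1))
y = (X[:, 0] > 0).astype(int)
y[:10] = 1 - y[:10]                    # 10 poisoned (label-flipped) samples

X_clean, y_clean = eocl_filter(X, y, ensemble)
# The flipped samples fail the agreement check and are filtered out
# before they can corrupt the local model update.
```

In this toy run the ten flipped samples are removed while the clean samples pass, which is the intended effect: local training, and hence the global aggregation, only ever sees ensemble-vetted data.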

The talk will highlight that SFL offers a deployable security blueprint for privacy-preserving AI in critical applications, including healthcare networks, autonomous UAV/UGV swarms, and IoT ecosystems.

References:
A. Roy and D. Dasgupta, "Secure Federated Learning Framework for Resilient and Trustworthy Collaboration," under submission, IEEE, 2025.
X. Wu et al., "A Dual-Defense Self-balancing Framework Against Bilateral Model Attacks in Federated Learning," in Proc. ICA3PP 2024, Lecture Notes in Computer Science, Springer, Singapore. https://doi.org/10.1007/978-981-96-1525-4_14
A. Roy and D. Dasgupta, "Enhancing Federated Learning Security: Combating Clients’ Data Poisoning with Classifier Ensembles," in Proc. 2024 IEEE 6th International Conference on Cognitive Machine Intelligence (CogMI), Washington, DC, USA, 2024.

About this Lecture

Number of Slides:  50
Duration:  60 minutes
Languages Available:  English
Last Updated:  03/12/2025

Request this Lecture

To request this particular lecture, please complete this online form.

Request a Tour

To request a tour with this speaker, please complete this online form.

All requests will be sent to ACM headquarters for review.