Gradient Guard: Robust Federated Learning using Saliency Maps
Journal
2024 3rd International Conference for Innovation in Technology, INOCON 2024
Date Issued
2024-01-01
Author(s)
Nagaraj, Yeshwanth
Gupta, Ujjwal
Abstract
This paper presents a novel approach to counter Directed Deviation Attacks (DDAs) in the domain of Federated Learning (FL). DDAs exploit gradient manipulation, disrupting model learning and increasing test error rates. This study introduces a defense mechanism employing Saliency Maps, a tool that highlights influential input regions, to detect gradient anomalies caused by malicious clients.

Existing defenses struggle against DDAs, prompting the exploration of an alternative solution. By quantitatively measuring the Structural Similarity Index (SSI) between Saliency Maps of benign and potentially malicious client updates, abnormal gradient patterns can be swiftly identified. This method safeguards FL models by isolating the contributions of suspicious clients. Its efficacy is demonstrated across diverse algorithms, architectures, and datasets.

Empirical evaluations show that the proposed approach ensures Byzantine-robustness against up to 50% malicious clients, compared to the 20-30% limit of traditional defenses and to recent work, FLAIR, which offers Byzantine-robustness up to 45% malicious clients. This paper introduces a pioneering technique combining Saliency Maps and SSI to protect FL models and underscores the need for proactive measures in the evolving landscape of decentralized learning.

The rise of Federated Learning (FL) brings decentralized learning to the forefront, enabling efficient and private model training across devices. However, FL's decentralized nature exposes it to adversarial challenges, including Directed Deviation Attacks (DDAs). DDAs manipulate gradients to divert models from optimal paths, increasing test errors. Traditional defenses fall short against DDAs, necessitating innovative solutions.

This research harnesses Saliency Maps, traditionally used to understand model behavior, for defense against DDAs. Coupled with the Structural Similarity Index (SSI), they provide a dual visual and quantitative strategy. The paper thoroughly explores this approach's foundations, mechanics, and effectiveness through empirical analysis, highlighting the need for proactive defenses in the decentralized learning era.
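The sketch below illustrates the general idea described in the abstract: compute a saliency map for each client-updated model, compare it against the saliency map of a benign reference model using SSIM, and flag clients whose maps deviate strongly. It is a minimal, hypothetical example written in Python with PyTorch and scikit-image, not the authors' published implementation; names such as saliency_map, flag_suspicious_clients, probe_inputs, and ssim_threshold are assumptions made for illustration.

    # Illustrative sketch only: saliency-map + SSIM screening of client updates.
    # All function and variable names here are hypothetical.
    import copy
    import torch
    import torch.nn as nn
    from skimage.metrics import structural_similarity

    def saliency_map(model, inputs, targets):
        # Vanilla gradient saliency: |d loss / d input|, reduced to one 2-D map
        # (max over channels, mean over the probe batch).
        model.eval()
        inputs = inputs.clone().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(inputs), targets)
        loss.backward()
        return inputs.grad.abs().max(dim=1).values.mean(dim=0)

    def flag_suspicious_clients(global_model, client_states, probe_inputs,
                                probe_targets, ssim_threshold=0.5):
        # Compare each client's saliency map against the benign reference via SSIM;
        # clients whose maps are structurally dissimilar are treated as suspicious.
        reference = saliency_map(global_model, probe_inputs, probe_targets).detach().numpy()
        suspicious = []
        for idx, state in enumerate(client_states):
            candidate = copy.deepcopy(global_model)
            candidate.load_state_dict(state)
            cand_map = saliency_map(candidate, probe_inputs, probe_targets).detach().numpy()
            data_range = float(max(reference.max(), cand_map.max())
                               - min(reference.min(), cand_map.min())) + 1e-12
            score = structural_similarity(reference, cand_map, data_range=data_range)
            if score < ssim_threshold:
                suspicious.append(idx)
        return suspicious

In such a scheme, the server would exclude (or down-weight) the flagged clients' updates before aggregation; the 0.5 threshold above is an arbitrary placeholder, and in practice it would need to be tuned to the dataset and model architecture.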
Subjects