A Study on Criteria for Training Collaborator Selection in Federated Learning
Date Issued
01-01-2022
Author(s)
Shambhat, Vishruth
Maurya, Akansh
Danannavar, Shubham Subhas
Kalla, Rohit
Anand, Vikas Kumar
Indian Institute of Technology, Madras
Abstract
Federated learning is an important enabler for the deployment of deep learning: it addresses data privacy and security concerns by keeping data decentralized. In a federated learning setup, models are trained locally at the data centers, called collaborators, and the model weights are aggregated by a central server. We present our work towards the efficient aggregation of trained model weights from multiple institutions, which is Task 1 of the federated learning challenge (FeTS 2021). We devised a scoring system for selecting appropriate collaborators in every round and aggregating their weights by simple averaging. The score is calculated from the sensitivity, Dice coefficient, and Hausdorff distance of each segmentation class on the validation data of every collaborator. The best collaborators are chosen serially based on their scores, with higher scores indicating better-performing networks. Our approach gave mean Dice scores of 0.58 ± 0.259, 0.639 ± 0.176, and 0.55 ± 0.201 on Enhancing Tumor, Whole Tumor, and Tumor Core, respectively.
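The following Python sketch illustrates the round-wise procedure described in the abstract: score each collaborator from its per-class validation metrics, keep the top performers, and aggregate their weights by simple averaging. The function names (collaborator_score, select_collaborators, average_weights) and the exact way the three metrics are combined are illustrative assumptions; the abstract does not give the paper's precise scoring formula.

import numpy as np

def collaborator_score(metrics):
    """Combine per-class validation metrics into a single score.

    `metrics` maps each segmentation class (ET, WT, TC) to a dict with
    'sensitivity', 'dice', and 'hausdorff'. Higher sensitivity and Dice
    and lower Hausdorff distance give a higher score; the weighting used
    here is an assumption, not the paper's published formula.
    """
    score = 0.0
    for cls_metrics in metrics.values():
        score += cls_metrics["sensitivity"]
        score += cls_metrics["dice"]
        score -= cls_metrics["hausdorff"] / 100.0  # roughly rescale HD into [0, 1]
    return score

def select_collaborators(all_metrics, top_k):
    """Rank collaborators by score and keep the top_k performers for this round."""
    ranked = sorted(all_metrics,
                    key=lambda c: collaborator_score(all_metrics[c]),
                    reverse=True)
    return ranked[:top_k]

def average_weights(weight_dicts):
    """Simple (unweighted) averaging of model weights from the selected collaborators."""
    keys = weight_dicts[0].keys()
    return {k: np.mean([w[k] for w in weight_dicts], axis=0) for k in keys}

In each federated round, the server would call select_collaborators on the latest validation metrics, collect the chosen collaborators' locally trained weights, and use average_weights to produce the aggregated model sent back for the next round.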
Volume
12963 LNCS