Light-Weight Federated Learning with Augmented Knowledge Distillation for Human Activity Recognition

Abstract
The field of deep learning has experienced significant growth in recent years across various domains where data can be collected and processed. However, as data plays a central role in the deep learning revolution, moving data from where it is produced to central servers and data centers for processing carries risks. To address this issue, federated learning (FL) was introduced as a framework for collaboratively training a global model on distributed data. Deploying FL, however, comes with several unique challenges, including communication overhead as well as system and statistical heterogeneity. While FL is inherently private in the sense that clients do not share their local data, privacy remains a concern in the FL context, since sensitive information can still be leaked from the exchanged gradients.
To address these challenges, this thesis proposes incorporating techniques such as knowledge distillation (KD) and differential privacy (DP) into FL. Specifically, a model-agnostic FL algorithm based on KD is proposed, called the Federated learning algorithm based on Augmented Knowledge Distillation (FedAKD). FedAKD utilizes a shared dataset as a proxy dataset on which clients compute knowledge in the form of soft labels; these soft labels are sent to the server for aggregation and broadcast back to the clients, which train on them in addition to performing their local training (a minimal sketch of this exchange is given below). Additionally, we elaborate on applying local differential privacy (LDP), where clients apply gradient clipping and noise injection according to differentially private stochastic gradient descent (DP-SGD), as illustrated in the second sketch below. The FedAKD algorithm is evaluated on human activity recognition (HAR) datasets in terms of accuracy and communication efficiency.
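As an informal illustration of the soft-label exchange described above, the following Python sketch uses hypothetical helpers (`client_soft_labels`, `server_aggregate`, with a simple unweighted mean) and toy linear models; it is not the thesis implementation, only an outline of how knowledge could be transferred through a shared proxy dataset.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax turning client logits into soft labels.
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def client_soft_labels(predict_logits, proxy_x, temperature=2.0):
    # A client runs its local model over the shared proxy dataset and
    # uploads only the resulting soft labels (no raw data, no weights).
    return softmax(predict_logits(proxy_x), temperature)

def server_aggregate(uploaded_soft_labels):
    # The server averages the clients' soft labels on the proxy dataset
    # and broadcasts the aggregate back for local distillation.
    return np.mean(np.stack(uploaded_soft_labels, axis=0), axis=0)

# Toy round: three clients with random linear "models", a proxy set of
# 5 samples with 8 features and 3 activity classes.
rng = np.random.default_rng(0)
proxy_x = rng.normal(size=(5, 8))
client_models = [
    (lambda x, W=rng.normal(size=(8, 3)): x @ W) for _ in range(3)
]
uploads = [client_soft_labels(m, proxy_x) for m in client_models]
global_soft_labels = server_aggregate(uploads)
print(global_soft_labels.shape)  # (5, 3) distillation targets for every client
```

Each client would then minimize a distillation loss (for example, the divergence between its own temperature-scaled outputs and `global_soft_labels`) alongside its ordinary local training objective; the unweighted averaging rule and the temperature used here are placeholders, not the settings studied in the thesis.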
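Likewise, the LDP mechanism mentioned above, per-example gradient clipping followed by Gaussian noise injection in the style of DP-SGD, can be sketched as follows; the clipping norm, noise multiplier, and learning rate are illustrative values only.

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=None):
    # One DP-SGD-style update: clip each per-sample gradient to an L2 norm
    # of at most clip_norm, average, add Gaussian noise calibrated to the
    # clipping norm, then take a plain SGD step.
    rng = rng if rng is not None else np.random.default_rng()
    clipped = [g / max(1.0, np.linalg.norm(g) / clip_norm)
               for g in per_sample_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise_std = noise_multiplier * clip_norm / len(per_sample_grads)
    noisy_grad = mean_grad + rng.normal(0.0, noise_std, size=mean_grad.shape)
    return params - lr * noisy_grad

# Toy usage on a 4-parameter model with a batch of two per-sample gradients.
params = np.zeros(4)
batch_grads = [np.array([1.0, -2.0, 0.5, 3.0]),
               np.array([0.2, 0.1, -0.3, 0.0])]
params = dp_sgd_step(params, batch_grads)
print(params)
```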