While federated learning is promising for efficient collaborative learning without revealing local data, it remains vulnerable to white-box privacy attacks, suffers from high communication overhead, and struggles to adapt to heterogeneous models. Here, the authors present a federated distillation method that tackles these challenges by leveraging the strengths of knowledge distillation in a federated learning setting.
- Jiawei Shao
- Fangzhao Wu
- Jun Zhang
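To make the idea concrete, below is a minimal PyTorch sketch of one federated distillation round, under the common assumption that clients exchange soft predictions (logits) on a shared proxy dataset instead of model weights; this is what cuts communication cost and allows heterogeneous architectures. The names (`federated_distillation_round`, `public_x`, `temperature`) are illustrative, not taken from the paper.

```python
# Hypothetical sketch of federated distillation, not the authors' exact method.
import torch
import torch.nn.functional as F

def federated_distillation_round(local_models, public_x, temperature=3.0,
                                 distill_steps=10, lr=1e-3):
    # 1. Each client shares only its logits on the public proxy data --
    #    far smaller than full model weights, and architecture-agnostic.
    with torch.no_grad():
        client_logits = [model(public_x) for model in local_models]

    # 2. The server aggregates the shared knowledge by averaging logits
    #    and turning them into temperature-softened targets.
    consensus = torch.stack(client_logits).mean(dim=0)
    soft_targets = F.softmax(consensus / temperature, dim=1)

    # 3. Each client distills the consensus back into its own model.
    for model in local_models:
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(distill_steps):
            opt.zero_grad()
            log_probs = F.log_softmax(model(public_x) / temperature, dim=1)
            loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")
            loss = loss * temperature ** 2  # standard KD gradient scaling
            loss.backward()
            opt.step()

# Usage: two clients with *different* architectures, random proxy data.
clients = [
    torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 4)),
    torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 4)),
]
federated_distillation_round(clients, torch.randn(64, 8))
```

Because only per-sample logits cross the network, the round also avoids exposing raw model parameters, which is what white-box privacy attacks typically exploit.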