Fig. 3: Privacy evaluation.
From: Towards fairness-aware and privacy-preserving enhanced collaborative learning for healthcare

The existing model-heterogeneous FL methods and Vanilla FedAvg offer little defense against gradient inversion attacks: a malicious central server can exploit the received gradients to reconstruct clients' private data, posing a substantial privacy risk. Applying established encryption techniques within FL can mitigate this, but often at the cost of system efficiency and model accuracy. DynamicFL offers a more effective and robust alternative. In the DynamicFL framework, the central server does not know which private model each client employs; this lack of information severely restricts the server's ability to reconstruct semantic information, reducing the potential for privacy breaches and strengthening the protection of client data. Source data are provided as a Source Data file.
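To see why raw gradients leak private data, consider the simplest case: for a linear layer the weight gradient is an outer product of the output gradient and the input, so the input can be recovered exactly from the shared gradients. The sketch below is a minimal, hypothetical illustration of this leakage (not DynamicFL's setting or any specific attack from the paper), using a toy linear model and squared-error loss:

```python
import numpy as np

# Toy illustration of gradient inversion on a single linear layer.
# For y = W x + b with any loss L: dL/dW = (dL/dy) x^T and dL/db = dL/dy,
# so a server holding the gradients can recover x exactly via
# x = (dL/dW)[i] / (dL/db)[i] for any row i with a nonzero bias gradient.

rng = np.random.default_rng(0)
x = rng.normal(size=4)           # a client's private input
W = rng.normal(size=(3, 4))      # model parameters known to the server
b = rng.normal(size=3)

# Client-side forward pass and squared-error loss against a dummy target.
y = W @ x + b
target = np.zeros(3)
dL_dy = 2.0 * (y - target)       # gradient of the loss w.r.t. the layer output

# Gradients the client would transmit in vanilla FL.
grad_W = np.outer(dL_dy, x)
grad_b = dL_dy

# The "malicious server" reconstructs x from the gradients alone.
i = int(np.argmax(np.abs(grad_b)))   # pick a row with a nonzero bias gradient
x_reconstructed = grad_W[i] / grad_b[i]

assert np.allclose(x_reconstructed, x)  # exact recovery of the private input
```

Deeper networks require iterative optimization-based attacks rather than this closed-form recovery, but the underlying leakage channel is the same, which is why withholding the clients' model architectures, as DynamicFL does, impedes reconstruction.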