Table 8 Key steps in the federated learning workflow.
Step | Description
---|---
Client selection | Randomly selecting a subset of clients for each training round to improve computational efficiency
Local model training | Training each client's model independently on its local dataset
Gradient clipping | Bounding gradient magnitudes so that no single data point exerts excessive influence
Differential privacy (DP) | Adding calibrated Gaussian noise to the clipped gradients to preserve privacy
Model update transmission | Sending the noise-protected model updates from the clients to the central server
Global model aggregation | Aggregating all client updates on the server using the Federated Averaging (FedAvg) algorithm
Model synchronization | Redistributing the updated global model to the clients for further training
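
As a concrete illustration of the client-side steps in Table 8 (local training, gradient clipping, and DP noise), consider the following minimal NumPy sketch. The quadratic toy loss, clipping bound `CLIP_NORM`, noise scale `NOISE_STD`, and learning rate `LR` are illustrative assumptions, not values taken from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

CLIP_NORM = 1.0  # gradient clipping bound C (assumed value)
NOISE_STD = 0.5  # std of the Gaussian DP noise (assumed value)
LR = 0.5         # local learning rate (assumed value)


def local_update(model, data):
    """One local training step with gradient clipping and DP noise.

    Returns the noise-protected update the client would transmit,
    along with its local dataset size (used later for FedAvg weighting).
    """
    # Local model training: gradient of 0.5 * ||model - mean(data)||^2,
    # a toy stand-in for the client's real loss function.
    grad = model - data.mean(axis=0)

    # Gradient clipping: rescale so the L2 norm is at most CLIP_NORM,
    # limiting the influence any single contribution can have.
    grad *= min(1.0, CLIP_NORM / (np.linalg.norm(grad) + 1e-12))

    # Differential privacy: add Gaussian noise to the clipped gradient.
    grad += rng.normal(scale=NOISE_STD, size=grad.shape)

    # Model update transmission: only this noisy update leaves the
    # client; the raw local data never does.
    return -LR * grad, len(data)
```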
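The server-side steps of Table 8 (client selection, FedAvg aggregation, and synchronization) might then be sketched as below, reusing `local_update` from the previous snippet. The client population size, sampling rate, and synthetic local datasets are again assumptions made for illustration.

```python
DIM = 10                      # model dimension (assumed)
NUM_CLIENTS = 100             # total client population (assumed)
CLIENTS_PER_ROUND = 10        # clients sampled per round (assumed)
TARGET = 2.0 * np.ones(DIM)   # synthetic optimum shared by all clients

# Each client holds a private local dataset (synthetic here).
client_data = [TARGET + rng.normal(size=(50, DIM)) for _ in range(NUM_CLIENTS)]
global_model = np.zeros(DIM)

for round_idx in range(20):
    # Client selection: sample a random subset for this round.
    selected = rng.choice(NUM_CLIENTS, size=CLIENTS_PER_ROUND, replace=False)

    updates, sizes = zip(*(local_update(global_model, client_data[c])
                           for c in selected))

    # Global model aggregation (FedAvg): average of the client updates,
    # weighted by each client's local dataset size.
    global_model = global_model + np.average(
        np.stack(updates), axis=0, weights=np.asarray(sizes, dtype=float))

    # Model synchronization: a real server would now broadcast
    # global_model back to the clients for the next round.
    print(f"round {round_idx:2d}: distance to optimum = "
          f"{np.linalg.norm(global_model - TARGET):.3f}")
```

Note that this sketch clips and perturbs each client's whole local update once per round for brevity; production DP training (e.g., DP-SGD) typically clips per-example gradients during local SGD and calibrates the noise scale to a target (ε, δ) privacy budget.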