Table 1 Comparison of FL schemes on key metrics.

From: Quantization-based chained privacy-preserving federated learning

| Scheme | Communication volume (GB/round) | Security | Convergence speed (epochs) |
|---|---|---|---|
| FedAvg [9] | High (5.6) | Low | 30 |
| Chain-PPFL [18] | Medium (3.8) | Moderate | 28 |
| Q-Chain FL (proposed) | Low (2.1) | High | 20 |

  1. Security "High" indicates that compressed model-parameter differences are transmitted rather than the model parameters themselves.
  2. Convergence speed is the number of epochs required to reach 90% model accuracy on the MNIST dataset.
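The compression mechanism referenced in footnote 1 can be sketched as follows. This is a minimal, hypothetical illustration of transmitting a quantized parameter *difference* instead of the full parameters; the function names and the uniform 8-bit quantizer are assumptions for illustration, not the paper's exact scheme.

```python
# Illustrative sketch (not the paper's exact algorithm): a client sends a
# quantized delta between its updated parameters and the previous global
# parameters, rather than the raw parameters themselves.

def quantize(delta, bits=8):
    """Uniform symmetric quantization of a list of floats to small ints."""
    max_abs = max(abs(d) for d in delta) or 1.0
    scale = max_abs / (2 ** (bits - 1) - 1)   # map [-max_abs, max_abs] to int range
    return [round(d / scale) for d in delta], scale

def dequantize(q, scale):
    """Recover approximate float deltas from quantized ints."""
    return [v * scale for v in q]

old = [0.50, -1.20, 0.75]                     # previous global parameters
new = [0.55, -1.10, 0.70]                     # locally updated parameters
delta = [n - o for n, o in zip(new, old)]     # parameter difference to transmit
q, scale = quantize(delta)                    # small ints + one float scale
recovered = [o + d for o, d in zip(old, dequantize(q, scale))]
```

Because deltas are typically small and are sent as low-bit integers plus one scale factor, the per-round payload shrinks relative to sending full-precision parameters, which is consistent with the lower communication volume reported for Q-Chain FL in the table.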