Fig. 2: Overview of different FL design choices.

FL topologies—the communication architecture of a federation. a Centralised (hub-and-spoke): an aggregation server coordinates the training iterations, collecting the models from the training nodes, aggregating them and distributing the updated model back. b Decentralised: each training node is connected to one or more peers, and aggregation occurs on each node in parallel. c Hierarchical: federated networks can be composed of several sub-federations, which can themselves combine peer-to-peer and aggregation-server federations (d). FL compute plans—the trajectory of the model across the participating partners. e Sequential training/cyclic transfer learning. f Aggregation server. g Peer-to-peer.
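To make the centralised topology (a) with an aggregation-server compute plan (f) concrete, the sketch below shows a minimal round-based loop on toy data; the names local_update and aggregate, the least-squares objective and the dataset-size-weighted averaging are illustrative assumptions and not part of the figure.

```python
# Minimal sketch of a hub-and-spoke federation (panels a/f), under the
# assumptions stated above; real deployments add secure communication,
# node scheduling, failure handling and privacy mechanisms.
from typing import List

import numpy as np


def local_update(global_w: np.ndarray, data: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """Training node: refine the received global model on local data
    (here, one gradient step of a toy least-squares objective)."""
    X, y = data[:, :-1], data[:, -1]
    grad = X.T @ (X @ global_w - y) / len(y)
    return global_w - lr * grad


def aggregate(updates: List[np.ndarray], sizes: List[int]) -> np.ndarray:
    """Aggregation server: dataset-size-weighted average of node models."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))


rng = np.random.default_rng(0)
node_data = [rng.normal(size=(50, 4)) for _ in range(3)]   # three training nodes, private data
global_w = np.zeros(3)                                     # model held by the server
for _ in range(10):                                        # training iterations (rounds)
    updates = [local_update(global_w, d) for d in node_data]
    global_w = aggregate(updates, [len(d) for d in node_data])
print(global_w)
```

The decentralised (b) and peer-to-peer (g) variants would differ mainly in where aggregate runs: every node would exchange models with its peers and perform the averaging locally, instead of relying on a single server.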