Fig. 2: ML architecture.

a, For each element in the gate set, we train a separate decoder module. There is a one-to-one mapping between the logical gates (left box) and the ML decoder modules (right box). b, Inner structure of the single-qubit decoder module, based on an LSTM core. The hidden state \({H}_{t-1}=({c}_{t-1},{h}_{t-1})\) (we omit the superscript (q)) has two components, \({c}_{t-1}\) and \({h}_{t-1}\), designed to track long-term and short-term history, respectively. Together with the new syndrome \({S}_{t}\), \({H}_{t-1}\) follows the internal information flow shown in the box and outputs the updated hidden state \({H}_{t}\). The red-boxed operations represent a linear layer followed by the corresponding activation function (sigmoid \(\sigma\) or \(\tanh\)). The blue-circled operations represent element-wise addition (+), multiplication (×) and \(\tanh\). c, To handle correlated decoding, we use a module that takes as input the hidden states \({H}_{t-1}^{(C)},{H}_{t-1}^{(T)}\) and new syndromes \({S}_{t}^{(C)},{S}_{t}^{(T)}\) from both the control (C) and target (T) logical qubits. The two hidden states are updated simultaneously.
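The information flow in panel b is the standard LSTM cell update. As an illustration only, a minimal NumPy sketch of one step is given below; the weight names and dimensions are our own assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(s_t, c_prev, h_prev, params):
    """One LSTM update: H_{t-1} = (c_{t-1}, h_{t-1}) -> H_t = (c_t, h_t).

    s_t is the new syndrome input; params holds the weights of the four
    linear layers (the red-boxed operations): forget, input, candidate
    and output gates. Names Wf/Wi/Wg/Wo, bf/bi/bg/bo are illustrative.
    """
    x = np.concatenate([h_prev, s_t])              # short-term state + new syndrome
    f = sigmoid(params["Wf"] @ x + params["bf"])   # forget gate (sigma)
    i = sigmoid(params["Wi"] @ x + params["bi"])   # input gate (sigma)
    g = np.tanh(params["Wg"] @ x + params["bg"])   # candidate update (tanh)
    o = sigmoid(params["Wo"] @ x + params["bo"])   # output gate (sigma)
    c_t = f * c_prev + i * g                       # element-wise x and + on long-term state
    h_t = o * np.tanh(c_t)                         # element-wise tanh, then x
    return c_t, h_t
```

The long-term state \(c\) changes only through gated element-wise operations, which is what lets it retain information over many syndrome rounds; \(h\) is recomputed each step from \(c\).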