Extended Data Fig. 1: Overview of the four ML-QEM models and their encoded features.
From: Machine learning for practical quantum error mitigation

(a) Linear regression (specifically ordinary least-squares (OLS)): input features are vectors including circuit features (such as the number of two-qubit gates n2Q and SX gates nSX), noisy expectation values \({\langle \hat{O}\rangle }^{{\rm{noisy}}}\), and observables \(\hat{O}\). The model is a linear function mapping the input features to mitigated values \({\langle \hat{O}\rangle }^{{\rm{mit}}}\). (b) Random forest (RF): the model consists of an ensemble of decision trees and produces a prediction by averaging the predictions of the individual trees. (c) Multilayer perceptron (MLP): the same encoding as for linear regression is used, and the model consists of one or more fully connected layers of neurons. The non-linear activation functions enable the approximation of non-linear relationships. (d) Graph neural network (GNN): graph-structured input data is used, with node and edge features encoding quantum circuit and noise information. The model consists of multiple layers of message-passing operations, capturing both local and global information within the graph and enabling intricate relationships to be modeled.
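The encoding and the two simplest models, (a) and (b), can be sketched as follows. This is a minimal illustration using scikit-learn, not the paper's implementation: the exact feature vector, the synthetic training data, and the toy exponential-decay noise model are all assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def encode_features(n_2q, n_sx, noisy_exp, obs_id):
    """Concatenate circuit features (two-qubit and SX gate counts), the
    noisy expectation value, and a simple observable identifier into one
    input vector, as in the caption's shared encoding for (a) and (c)."""
    return np.array([n_2q, n_sx, noisy_exp, obs_id], dtype=float)

# Synthetic training set (illustrative): the ideal value is attenuated
# with two-qubit gate count, mimicking depolarizing-noise decay.
X, y = [], []
for _ in range(500):
    n_2q = int(rng.integers(0, 50))
    n_sx = int(rng.integers(0, 200))
    ideal = rng.uniform(-1, 1)
    noisy = ideal * np.exp(-0.02 * n_2q)  # toy noise channel
    X.append(encode_features(n_2q, n_sx, noisy, obs_id=0))
    y.append(ideal)
X, y = np.array(X), np.array(y)

# (a) Ordinary least-squares: a linear map from features to mitigated values.
ols = LinearRegression().fit(X, y)

# (b) Random forest: prediction is the average over an ensemble of trees.
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Mitigate a new noisy measurement.
x_new = encode_features(n_2q=30, n_sx=120, noisy_exp=0.3, obs_id=0).reshape(1, -1)
mit_ols = ols.predict(x_new)
mit_rf = rf.predict(x_new)
```

The MLP of panel (c) would take the same `encode_features` vector but pass it through fully connected layers with non-linear activations; the GNN of panel (d) instead consumes a graph of the circuit itself rather than a flat feature vector.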