Extended Data Fig. 3: Integration performance of GLUE under different hyperparameter settings.
From: Multi-omics single-cell data integration and regulatory inference with graph-linked embedding

Integration performance is quantified by a, the overall integration score and b, FOSCTTM (n = 4 repeats with different model random seeds). Error bars indicate mean ± s.d. ‘Dimensionality’ denotes the cell embedding dimensionality. ‘Preprocessing dimensionality’ is the reduced dimensionality used for the first transformation layers of the data encoders (see Methods). ‘Hidden layer depth’ is the number of hidden layers in the data encoders and the modality discriminator. ‘Hidden layer dimensionality’ is the dimensionality of the hidden layers in the data encoders and the modality discriminator. ‘Dropout’ is the dropout rate of the hidden layers in the data encoders and the modality discriminator. ‘Lambda graph’ is the weight of the graph loss (\(\lambda_{\mathcal{G}}\)). ‘Lambda align’ is the weight of the adversarial alignment loss (\(\lambda_{D}\)). ‘Negative sampling rate’ is the number of empirical samples used in negative edge sampling (samples drawn from \(p_{\mathrm{ns}}\)). For each hyperparameter, the center value is the default. To control computational cost, one hyperparameter was varied at a time, with all others fixed at their default values. The performance of GLUE was robust across a wide range of hyperparameter settings, except that alignment failed when the adversarial alignment weight was too low or when no hidden layers were used in the neural networks (equivalent to a linear model with insufficient capacity).
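
The hyperparameters listed above correspond to model and training arguments of the scglue Python package. The sketch below is only an assumed illustration of where each hyperparameter would be set: the argument names (latent_dim, h_depth, h_dim, dropout, lam_graph, lam_align, neg_samples), the file paths and the values shown are assumptions for illustration, not the defaults used in this figure.

# Minimal sketch (assumed argument names and values) of where the hyperparameters
# in this figure would be set when training a GLUE model with the scglue package.
import anndata as ad
import networkx as nx
import scglue

rna = ad.read_h5ad("rna_preprocessed.h5ad")        # placeholder path: preprocessed RNA data
atac = ad.read_h5ad("atac_preprocessed.h5ad")      # placeholder path: preprocessed ATAC data
guidance = nx.read_graphml("guidance.graphml.gz")  # placeholder path: guidance graph

# 'Preprocessing dimensionality' is set by the reduced representation registered
# per dataset (for example, the number of PCA/LSI components stored in use_rep).
scglue.models.configure_dataset(rna, "NB", use_highly_variable=True, use_rep="X_pca")
scglue.models.configure_dataset(atac, "NB", use_highly_variable=True, use_rep="X_lsi")

glue = scglue.models.SCGLUEModel(
    {"rna": rna, "atac": atac}, sorted(guidance.nodes),
    latent_dim=50,   # 'Dimensionality' (cell embedding dimensionality)
    h_depth=2,       # 'Hidden layer depth'
    h_dim=256,       # 'Hidden layer dimensionality'
    dropout=0.2,     # 'Dropout'
)
glue.compile(
    lam_graph=0.02,  # 'Lambda graph' (graph loss weight, lambda_G)
    lam_align=0.05,  # 'Lambda align' (adversarial alignment weight, lambda_D)
)
glue.fit(
    {"rna": rna, "atac": atac}, guidance,
    neg_samples=10,  # 'Negative sampling rate' (negative samples drawn from p_ns)
)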