Extended Data Fig. 9: Loss quantification of the CellLENS model when using different image feature extraction processes.

From: CellLENS enables cross-domain information fusion for enhanced cell population delineation in single-cell spatial omics data

(A) We compared the CellLENS model training losses across three CellLENS variations: 1) Default CellLENS, where image feature extraction is performed by training an AlexNet-like CNN encoder (supervised by the local cell-type neighborhood composition vector). 2) CellLENS with a pre-trained ResNet50, where the image features were extracted directly with the pre-trained ResNet50 model, flattened, and reduced to a 128-dimensional vector. This vector replaces the image feature vector produced by the trained AlexNet-like encoder, and the rest of the CellLENS training process remains the same. 3) CellLENS with a pre-trained ViT (transformer), following the same process as the pre-trained ResNet50 in (2). (B) We compared the losses from three CellLENS variations: 1) Default CellLENS with the AlexNet-like encoder, as described above. 2) CellLENS with the AlexNet architecture swapped for a ResNet50 architecture whose weights were retrained (initialized at the pre-trained weights). 3) CellLENS with the AlexNet architecture swapped for a ViT architecture whose weights were retrained (initialized at the pre-trained weights). The model loss was calculated as described in the Methods section paragraph ‘Information retrieval efficacy evaluation of the LENS-GNN duo module’. In these cases, we used an 80/20 train/test data split: retraining was performed only on the training data, and loss values were calculated on the test data.
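The feature-swap step in variant (2) of panel A can be sketched roughly as follows, assuming a PyTorch/torchvision setup. The variable names, the 224 × 224 crop size, and the linear 2048 → 128 projection are illustrative assumptions, not the authors' implementation; the legend does not specify the reduction method (PCA would be another plausible choice).

```python
import torch
from torchvision.models import resnet50, ResNet50_Weights

# Load an ImageNet-pretrained ResNet50 and drop its classification
# head, so the forward pass returns the 2048-d pooled feature vector.
backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()
backbone.eval()  # frozen: features are extracted directly, no retraining

# Hypothetical dimensionality reduction to 128-d; a linear projection
# stands in here for whatever reduction the authors actually used.
reduce_to_128 = torch.nn.Linear(2048, 128)

# `cell_patches` stands in for a batch of per-cell image crops,
# resized and normalized as the pre-trained weights expect.
cell_patches = torch.randn(16, 3, 224, 224)
with torch.no_grad():
    feats = backbone(cell_patches)       # shape (16, 2048)
    embeddings = reduce_to_128(feats)    # shape (16, 128)

# `embeddings` would replace the image feature vector produced by the
# default AlexNet-like encoder; the rest of CellLENS training is unchanged.
```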
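For the retraining comparison in panel B, the held-out loss computation might look like the following minimal sketch. The dataset, the placeholder model, and the MSE loss are all hypothetical stand-ins; the actual loss is the one defined in the Methods paragraph cited above, and the real targets are the local cell-type neighborhood composition vectors.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

# Hypothetical dataset of (image patch, target) pairs.
full_dataset = TensorDataset(torch.randn(1000, 3, 224, 224),
                             torch.randn(1000, 32))

# 80/20 train/test split; retraining sees only the train portion.
n_train = int(0.8 * len(full_dataset))
train_set, test_set = random_split(
    full_dataset, [n_train, len(full_dataset) - n_train])

# Placeholder for the retrained ResNet50/ViT encoder.
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 224 * 224, 32))
loss_fn = torch.nn.MSELoss()  # stand-in for the actual CellLENS loss

# ... retrain `model` on `train_set` only ...

# Reported loss values are computed on the held-out test data.
model.eval()
test_loss, n = 0.0, 0
with torch.no_grad():
    for x, y in DataLoader(test_set, batch_size=64):
        test_loss += loss_fn(model(x), y).item() * x.size(0)
        n += x.size(0)
print(f"test loss: {test_loss / n:.4f}")
```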