Fig. 1: Schematic representation of using self-supervised representations learnt from whole slide image histology for segmentation of tissue substructures and pathological features, and for understanding morphology-expression-genetic associations using RNAPath.

From: Self-supervised learning for characterising histomorphological diversity and spatial RNA expression prediction across 23 human tissue types

Fig. 1

A Histology whole slide images (WSI) are preprocessed by segmentation and tiling into 63 × 63 μm² square regions. B Self-supervised learning is used to extract morphological features from the tiles. C Using the learned features, tiles are classified with a K-Nearest Neighbours model, and phenotypes, expressed as the extent of the detected regions in the sample, are derived. D The RNAPath model takes tile embeddings as input and predicts both local (tile-level) and global (sample-level) gene expression, together with a heatmap visualising the predicted spatial gene activity.
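The tile classification step in panel C can be illustrated with a minimal sketch, not the authors' code: self-supervised tile embeddings are matched against an annotated reference set with a K-Nearest Neighbours classifier, and the per-class tile fractions summarise the sample phenotype. The embedding dimension, number of neighbours, class labels, and random data below are illustrative assumptions.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Stand-ins for self-supervised tile embeddings: an annotated reference set
# and the unlabelled tiles of a new whole slide image (384-d assumed).
ref_embeddings = rng.normal(size=(500, 384))
ref_labels = rng.integers(0, 3, size=500)        # e.g. 0 = epithelium, 1 = stroma, 2 = fat
wsi_embeddings = rng.normal(size=(2000, 384))

# K-Nearest Neighbours classifier over the reference tiles (K is an assumption).
knn = KNeighborsClassifier(n_neighbors=25)
knn.fit(ref_embeddings, ref_labels)
tile_classes = knn.predict(wsi_embeddings)

# Phenotype: extent of each detected region as a fraction of all tiles in the sample.
classes, counts = np.unique(tile_classes, return_counts=True)
phenotype = {int(c): n / len(tile_classes) for c, n in zip(classes, counts)}
print(phenotype)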
