Fig. 1: Self-supervised deep learning of protein subcellular localization with cytoself. | Nature Methods


From: Self-supervised deep learning encodes high-resolution features of protein subcellular localization


a, Workflow of the learning process. Only images and the protein identifiers are required as input. We trained our model with a second fiducial channel for the cell nuclei, but this channel is optional because its contribution to performance is negligible (Fig. 4). The protein identification pretext task ensures that images corresponding to the same or similar proteins have similar representations. b, Architecture of our VQ-VAE-2 (ref. 37)-based deep-learning model featuring our two innovations: split quantization and the protein identification pretext task. Numbers in the encoders and decoders indicate encoder1, encoder2, decoder1 or decoder2 (Supplementary File 1). The global and local representations use different codebooks. c, The level of codebook use (that is, perplexity) increases and then saturates during training, and is enhanced by applying split quantization.
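The perplexity tracked in panel c is the standard measure of codebook utilization in vector-quantized models: the exponential of the entropy of the empirical distribution over code assignments. The sketch below (not the authors' implementation; function and variable names are illustrative) shows how it can be computed from the code indices emitted by a quantizer:

```python
import numpy as np

def codebook_perplexity(code_indices, codebook_size):
    """Perplexity of codebook usage: exp of the entropy of the
    empirical distribution over assigned code indices.
    Ranges from 1 (a single code used for everything) up to
    codebook_size (all codes used uniformly)."""
    counts = np.bincount(code_indices, minlength=codebook_size)
    probs = counts / counts.sum()
    nonzero = probs[probs > 0]  # 0 * log(0) is taken as 0
    entropy = -np.sum(nonzero * np.log(nonzero))
    return float(np.exp(entropy))

# Skewed usage of a 4-entry codebook gives perplexity < 4
skewed = np.array([0, 0, 0, 0, 1, 1, 2, 3])
print(codebook_perplexity(skewed, 4))   # ~3.364 (2^1.75)

# Uniform usage saturates at the codebook size
uniform = np.array([0, 1, 2, 3])
print(codebook_perplexity(uniform, 4))  # 4.0
```

A rising, saturating perplexity curve during training (as in panel c) indicates that more of the codebook is being exploited; split quantization raises this ceiling because each split of the feature vector is quantized against the codebook independently, multiplying the number of assignments.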
