
Fig. 1: The TA-GAN method.

From: Resolution enhancement with a task-assisted GAN to guide optical nanoscopy image analysis and acquisition


a, Architecture of the TA-GAN_Ax. The losses (circles) are backpropagated to the networks of the same colour: the generator (violet), the discriminator (green) and the task network (blue). DG, discriminator loss for generated images; GEN, generation loss; GAN, GAN loss; DR, discriminator loss for real images; TL, task loss. The TA-GAN_Ax is applied to the axonal F-actin dataset using the segmentation of F-actin rings as an auxiliary task to optimize the generator. b, Representative example chosen out of 52 test images for the comparison of the TA-GAN_Ax with algorithmic super-resolution baselines on the axonal F-actin dataset. The confocal image is the low-resolution input and the STED image is the target ground truth. Insets: segmentation of the axonal F-actin rings (green) predicted by the U-Net_fixed-ax, with the bounding boxes (white lines) corresponding to the manual expert annotations (ref. 13). PSNR and SSIM metrics are indicated on the generated images. Scale bars, 1 μm. c, The TA-GAN_Nano is trained on the simulated nanodomain dataset using the localization of nanodomains as the auxiliary task. d, Representative example chosen out of 75 test images for the comparison of the TA-GAN_Nano with the baselines for nanodomain localization. The black circles represent the positions of the nanodomains on the ground-truth datamap and the blue circles represent the nanodomains identified by an expert on images from the test set (Methods). The intensity scale is normalized for each image by its respective minimum and maximum values. Scale bars, 250 nm.
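Panel a describes how each loss is routed to a single network: the discriminator receives DR and DG, the task network receives TL, and the generator is optimized with the GAN loss, the generation loss GEN and the task loss computed on its outputs. The following is a minimal PyTorch-style sketch of one such training step under assumed toy architectures and loss weights; the class names, optimizers and weighting factors are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps a low-resolution confocal image to a STED-like super-resolved image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores images as real (STED) or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class TaskNet(nn.Module):
    """Auxiliary task network, e.g. F-actin ring segmentation (panel a)
    or nanodomain localization (panel c)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

G, D, T = Generator(), Discriminator(), TaskNet()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_T = torch.optim.Adam(T.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
lambda_gen, lambda_task = 100.0, 10.0  # assumed loss weights

def training_step(confocal, sted, task_target):
    """One TA-GAN update; each loss updates only its own network, mirroring
    the colour coding of the loss circles in panel a."""
    # Discriminator: DR on real STED images + DG on generated images.
    fake = G(confocal).detach()
    pred_real, pred_fake = D(sted), D(fake)
    dr = bce(pred_real, torch.ones_like(pred_real))   # DR
    dg = bce(pred_fake, torch.zeros_like(pred_fake))  # DG
    opt_D.zero_grad(); (dr + dg).backward(); opt_D.step()

    # Task network: TL computed on real STED images.
    tl_real = bce(T(sted), task_target)               # TL
    opt_T.zero_grad(); tl_real.backward(); opt_T.step()

    # Generator: GAN loss + GEN reconstruction loss + task loss on generated images.
    fake = G(confocal)
    pred_fake = D(fake)
    gan = bce(pred_fake, torch.ones_like(pred_fake))  # GAN
    gen = F.l1_loss(fake, sted)                       # GEN
    tl_gen = bce(T(fake), task_target)                # task loss guiding G
    opt_G.zero_grad()
    (gan + lambda_gen * gen + lambda_task * tl_gen).backward()
    opt_G.step()
```

The key design point illustrated here is that the auxiliary task loss on generated images contributes to the generator's objective, so the generator is pushed to synthesize images on which the biologically relevant structures (rings or nanodomains) remain recoverable, not merely images that fool the discriminator.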
