Fig. 3: Domain adaptation.

From: Resolution enhancement with a task-assisted GAN to guide optical nanoscopy image analysis and acquisition

a, The semantic segmentation of F-actin rings (green) and fibres (magenta) is used as the auxiliary task to train the TA-GAN_Dend.

b, Example of confocal, real STED and TA-GAN_Dend synthetic images, chosen from 26 test images. Insets: the regions identified as rings and fibres by the U-Net_fixed-dend trained on real STED images (ref. 13). The white solid line shows the border of the dendritic mask generated from the MAP2 channel, following the methods presented in ref. 13.

c, The same semantic segmentation task is used to train the TA-CycleGAN. The reference to compute the task loss (TL) is the segmentation of real fixed-cell STED images by the U-Net_fixed-dend. The fixed cycle (top) uses the U-Net_fixed-dend to encourage semantic consistency between the input fixed-cell image and the end-of-cycle reconstructed image. The live cycle (bottom) does not use a task network, enabling the use of non-annotated images from the live F-actin dataset. Once trained, the TA-CycleGAN can generate domain-adapted datasets (right). D_L, discriminator loss for live-cell images; D_F, discriminator loss for fixed-cell images; GAN_L, GAN loss for live-cell images; GAN_F, GAN loss for fixed-cell images; CYC, cycle loss; GEN, generation loss; L_rec, live reconstructed; L_gen, live generated; F_rec, fixed reconstructed; F_gen, fixed generated; Live_gen, generated live-cell image; Fixed_gen, generated fixed-cell image.

d, Representative example, chosen from 28 annotated live-cell STED test images, of the segmentation of F-actin nanostructures. The nanostructures in the live-cell STED image (top left) are not properly segmented by the U-Net_fixed-dend (bottom left). The U-Net_Live is trained with synthetic images generated by the TA-CycleGAN to segment the F-actin nanostructures in real live-cell STED images. The segmentation predictions generated by the U-Net_Live (bottom right) are similar to the manual expert annotations (top right).

e, The semantic segmentation task is used to train the TA-GAN_Live. The generator of the TA-GAN_Live takes as input the confocal image, a STED subregion and a decision matrix indicating the position of the STED subregion in the FOV (Methods).

f, Representative example of real and synthetic live-cell STED images of F-actin generated with the TA-GAN_Live, chosen from the initial images of 159 imaging sequences. The annotations of both real and synthetic images are obtained with the U-Net_Live. Colour bar: raw photon counts. Scale bars, 1 μm.
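As a reading aid for panels a and b, the sketch below shows, in PyTorch style, how a task-assisted GAN objective can combine an adversarial loss with a segmentation task loss from a frozen U-Net. This is a minimal sketch, not the authors' implementation: the names (ta_gan_step, generator, discriminator, task_net) and the weight lambda_task are illustrative assumptions.

import torch
import torch.nn.functional as F

def ta_gan_step(confocal, real_sted, generator, discriminator, task_net, lambda_task=1.0):
    # Generate a synthetic STED image from the confocal input.
    synthetic = generator(confocal)

    # Adversarial term: the generator tries to make the discriminator
    # score the synthetic image as real (non-saturating GAN loss).
    pred_fake = discriminator(torch.cat([confocal, synthetic], dim=1))
    gan_loss = F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))

    # Task term: the frozen segmentation network (standing in for
    # U-Net_fixed-dend) should segment rings and fibres the same way
    # on the synthetic image as on the real STED image.
    with torch.no_grad():
        target_seg = torch.sigmoid(task_net(real_sted))
    task_loss = F.binary_cross_entropy_with_logits(task_net(synthetic), target_seg)

    return gan_loss + lambda_task * task_loss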
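Panel c enumerates the TA-CycleGAN loss terms (GAN_L, GAN_F, CYC and the task loss TL). A minimal sketch of how these terms could be assembled is given below, assuming hypothetical generators G_fixed2live and G_live2fixed, discriminators D_live and D_fixed, an L1 cycle loss and weights lambda_cyc and lambda_task; the GEN term is omitted for brevity.

import torch
import torch.nn.functional as F

def adv_loss(logits):
    # Non-saturating GAN loss: push discriminator logits towards "real".
    return F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

def ta_cyclegan_losses(fixed, live, G_fixed2live, G_live2fixed, D_live, D_fixed,
                       task_net, lambda_cyc=10.0, lambda_task=1.0):
    # Fixed cycle (top): fixed -> Live_gen -> F_rec, with task consistency.
    live_gen = G_fixed2live(fixed)
    fixed_rec = G_live2fixed(live_gen)
    gan_l = adv_loss(D_live(live_gen))                  # GAN_L
    cyc_f = F.l1_loss(fixed_rec, fixed)                 # CYC (fixed cycle)
    with torch.no_grad():
        seg_ref = torch.sigmoid(task_net(fixed))        # reference segmentation
    task = F.binary_cross_entropy_with_logits(task_net(fixed_rec), seg_ref)  # TL

    # Live cycle (bottom): live -> Fixed_gen -> L_rec. No task network here,
    # so non-annotated live-cell images can be used.
    fixed_gen = G_live2fixed(live)
    live_rec = G_fixed2live(fixed_gen)
    gan_f = adv_loss(D_fixed(fixed_gen))                # GAN_F
    cyc_l = F.l1_loss(live_rec, live)                   # CYC (live cycle)

    return gan_l + gan_f + lambda_cyc * (cyc_f + cyc_l) + lambda_task * task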
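For panel e, the following sketch shows one plausible way to assemble the generator input of the TA-GAN_Live: the confocal image, the STED subregion zero-padded to the full FOV, and a binary decision matrix marking where the subregion was acquired. The function name and the exact encoding of the decision matrix are assumptions; the paper's Methods define the actual format.

import torch

def build_talive_input(confocal, sted_crop, top, left):
    # confocal: (1, H, W) confocal image of the full FOV.
    # sted_crop: (1, h, w) STED acquisition of a subregion.
    _, H, W = confocal.shape
    _, h, w = sted_crop.shape
    sted_full = torch.zeros(1, H, W)
    decision = torch.zeros(1, H, W)
    sted_full[:, top:top + h, left:left + w] = sted_crop
    decision[:, top:top + h, left:left + w] = 1.0  # 1 where STED was acquired
    # Channel-wise concatenation -> (3, H, W) tensor fed to the generator.
    return torch.cat([confocal, sted_full, decision], dim=0)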
