Fig. 2: Principle of E-U-Net training.

From: Live-dead assay on unlabeled cells using phase imaging with computational specificity

a The E-U-Net architecture includes an EfficientNet as the encoding path and five stages of decoding. The decoding path comprises a Down+Conv+BN+ReLU block and seven other blocks. The Down+Conv+BN+ReLU block is a chain of a down-sampling layer, a convolutional layer, a batch-normalization layer, and a ReLU layer. Similarly, the Conv+BN+ReLU block is a chain of a convolutional layer, a batch-normalization layer, and a ReLU layer. b The network architecture of EfficientNet-B3. Different blocks are marked in different colors; they correspond to the layer blocks of the EfficientNet in a. c The major layers inside the MBConvX module. X = 1 and X = 6 indicate that ReLU and ReLU6, respectively, are used in the module. The skip connection between the input and the output of the module is not used in the first MBConvX module of each layer block. d Training and validation loss vs. epochs, plotted on a log scale.
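The blocks named in panels a and c can be summarized as short layer chains. The following is a minimal PyTorch sketch, not the authors' implementation: the kernel sizes, channel counts, 2x max-pool down-sampling, and the xX channel expansion inside MBConvX are illustrative assumptions rather than details taken from the figure.

    # Minimal sketch of the blocks described in the caption (assumed hyperparameters).
    import torch
    import torch.nn as nn

    class ConvBNReLU(nn.Sequential):
        """Conv+BN+ReLU: convolution -> batch normalization -> ReLU."""
        def __init__(self, in_ch, out_ch, k=3):
            super().__init__(
                nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )

    class DownConvBNReLU(nn.Sequential):
        """Down+Conv+BN+ReLU: down-sampling followed by a Conv+BN+ReLU chain."""
        def __init__(self, in_ch, out_ch):
            super().__init__(
                nn.MaxPool2d(2),          # assumed 2x down-sampling operator
                ConvBNReLU(in_ch, out_ch),
            )

    class MBConvX(nn.Module):
        """MBConvX sketch: X = 1 uses ReLU, X = 6 uses ReLU6; the skip connection
        is applied only when input and output shapes match, so it is dropped in
        the first module of each layer block (where channels or stride change)."""
        def __init__(self, in_ch, out_ch, x=6, stride=1):
            super().__init__()
            act = nn.ReLU(inplace=True) if x == 1 else nn.ReLU6(inplace=True)
            mid = in_ch * x               # expansion ratio X, an assumption
            self.block = nn.Sequential(
                nn.Conv2d(in_ch, mid, 1, bias=False), nn.BatchNorm2d(mid), act,
                nn.Conv2d(mid, mid, 3, stride=stride, padding=1,
                          groups=mid, bias=False),      # depthwise convolution
                nn.BatchNorm2d(mid), act,
                nn.Conv2d(mid, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
            )
            self.use_skip = stride == 1 and in_ch == out_ch

        def forward(self, x):
            y = self.block(x)
            return x + y if self.use_skip else y

In this sketch the full E-U-Net would chain an EfficientNet-B3 encoder built from such MBConvX modules with five decoding stages built from the Conv+BN+ReLU and Down+Conv+BN+ReLU blocks, as laid out in panel a.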