Figure 7

From: Critical evaluation of artificial intelligence as a digital twin of pathologists for prostate cancer pathology
Structures of the (A) inception, (B) residual, and (C) soft attention blocks used in the present study. (D) In the PlexusNet architecture, two consecutive blocks are connected by an average pooling layer that halves the width and height of the feature maps passed to the next block. Channel numbers are converted to integers using the "round half up" approach. BN: batch normalization; MN: max normalization; LN: layer normalization; B/LN: either batch or layer normalization. γ is the compression rate used to reduce the number of channels, similar to the compression ratio in DenseNet49; l: the level index. H: height; W: width; C: channel. The level subscript is omitted for H and W to emphasize that H and W do not change during tensor processing within each block.
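For illustration, below is a minimal PyTorch-style sketch of the transition described in (D): average pooling halving H and W, followed by channel compression by γ with "round half up" integer conversion. The names (round_half_up, BlockTransition), the use of a 1×1 convolution for compression, and the pool-then-compress ordering are assumptions made for this sketch, not the authors' published PlexusNet implementation.

```python
import math

import torch
import torch.nn as nn


def round_half_up(x: float) -> int:
    # "Round half up": 31.5 -> 32, unlike Python's built-in round(),
    # which rounds ties to the nearest even integer.
    return math.floor(x + 0.5)


class BlockTransition(nn.Module):
    """Hypothetical transition between two consecutive blocks (panel D):
    average pooling halves H and W, and a 1x1 convolution applies the
    channel compression rate gamma."""

    def __init__(self, in_channels: int, gamma: float):
        super().__init__()
        out_channels = round_half_up(in_channels * gamma)  # integer channel count
        self.pool = nn.AvgPool2d(kernel_size=2, stride=2)  # (H, W) -> (H/2, W/2)
        self.compress = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.compress(self.pool(x))


x = torch.randn(1, 48, 32, 32)                    # (N, C, H, W)
print(BlockTransition(48, gamma=0.75)(x).shape)   # torch.Size([1, 36, 16, 16])
```

In the usage line, 48 × 0.75 = 36.0 rounds half up to 36 channels, while the 32 × 32 feature maps shrink to 16 × 16.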