Fig. 3: Deep learning architecture for MIDAS. | Nature Communications

From: MIDAS: rapid, multiplexed molecular profiling for integrated host–pathogen analysis

a Particle classifier. The input to this module was a particle-like raw diffraction image (128 × 128 pixel 2D image) proposed by a maximally stable extremal region blob detection algorithm. The image was processed through a trained convolutional neural network (NN) consisting of two convolutional layers (Conv2D), two pooling layers (Max pool), and two fully connected layers (FC). The NN labeled the image as a particle (1) or a non-particle (0) diffraction pattern. ReLU, rectified linear unit; Pd, dropout rate. b The particle detection NN was trained with 930 original particle and 88,357 non-particle diffraction patterns, as well as an augmented dataset. The held-out test set accuracy reached ~99.2% for particle detection. c Code classifier. Diffraction images classified as particles in (a) entered the next module for code classification. The code classification NN had eight Conv2D and three FC layers and produced the final output of shape classes, or codes. PReLU, parametric ReLU. d The code classification network was trained with 930 original diffraction images and an augmented dataset. The held-out test set accuracy was ~94.1%. Source data are provided as a Source Data file.
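The caption specifies only the layer types and counts of the particle classifier (two Conv2D, two Max pool, two FC acting on a 128 × 128 input), not the kernel sizes, strides, or channel widths. As a rough illustration of how such a stack reduces the spatial dimensions before the FC layers, the sketch below traces the square feature-map size through an assumed configuration (5 × 5 convolutions, 2 × 2 pooling); these hyperparameters are illustrative assumptions, not values from the paper.

```python
# Shape trace for a particle-classifier-style CNN stack (panel a).
# Kernel sizes and strides below are assumed for illustration; the
# caption only fixes the sequence Conv2D -> Max pool -> Conv2D ->
# Max pool -> FC -> FC on a 128 x 128 input.

def layer_out(size, kernel, stride=1, padding=0):
    """Spatial output size of a square Conv2D or MaxPool layer."""
    return (size + 2 * padding - kernel) // stride + 1

def trace_particle_classifier(size=128):
    shapes = [("input", size)]
    size = layer_out(size, kernel=5)             # Conv2D + ReLU (assumed 5x5)
    shapes.append(("conv1", size))
    size = layer_out(size, kernel=2, stride=2)   # Max pool (assumed 2x2)
    shapes.append(("pool1", size))
    size = layer_out(size, kernel=5)             # Conv2D + ReLU (assumed 5x5)
    shapes.append(("conv2", size))
    size = layer_out(size, kernel=2, stride=2)   # Max pool (assumed 2x2)
    shapes.append(("pool2", size))
    # Two FC layers then map the flattened features to the binary
    # particle (1) / non-particle (0) output.
    return shapes

print(trace_particle_classifier())
# [('input', 128), ('conv1', 124), ('pool1', 62),
#  ('conv2', 58), ('pool2', 29)]
```

The same arithmetic extends to the deeper code classifier in panel c (eight Conv2D and three FC layers), whose exact hyperparameters are likewise not given in the caption.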
