Fig. 1: Visualization of DNN feature space via manifold learning. | Nature Communications

From: Revealing hidden patterns in deep neural network feature space continuum via manifold learning


a Cartoon illustration of the manifolds of input data, DNN features, and outputs in a 1D deep-learning regression task on medical images. For segmentation, registration, reconstruction, and super-resolution, the output manifold is high-dimensional, whereas for 1D continuous prediction and classification, the outputs are 1D continuous and discrete values, respectively. There is a one-to-one relationship among the data points in the input-data/DNN-feature manifold and the output manifold. Here, 'Conv', 'ReLU', and 'Max Pool' refer to the convolutional, rectified linear unit, and max-pooling layers, respectively, in a DNN. b The purpose of this work is to discover the output manifold and use that information to visualize the manifold of extracted DNN features in a lower dimension. To this end, MDA first computes distances between the DNN-estimated labels, enabling construction of the outline of the manifold of the estimated labels in high dimension (HD). This outline provides the basis for grouping the labels by their distances on the manifold surface. Next, a Bayesian approach is used to embed the HD feature points at a specific DNN layer, constrained by the sorted label groups. Finally, deep learning is employed to transform the projected features to a lower dimension for visualization and analysis. This figure was created with BioRender.com.
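The first MDA step described above (computing distances between DNN-estimated labels and grouping them along the output manifold) can be sketched for the 1D regression case, where the label manifold is simply a line and distance-based grouping reduces to sorting and binning. The function name, binning strategy, and example values below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def group_labels(estimated_labels, n_groups):
    """Sort 1D DNN-estimated labels and split them into contiguous groups.

    In 1D, distance along the output manifold reduces to distance on the
    real line, so sorting orders points along the manifold outline and
    contiguous splitting groups nearby labels together.
    """
    labels = np.asarray(estimated_labels, dtype=float)
    order = np.argsort(labels)                # order along the 1D manifold
    groups = np.array_split(order, n_groups)  # contiguous index groups
    return [g.tolist() for g in groups]

# Example: six noisy regression outputs grouped into 3 bins
labels = [0.12, 0.95, 0.51, 0.48, 0.10, 0.90]
groups = group_labels(labels, n_groups=3)
# Each group holds indices of labels that lie close together on the manifold
```

These sorted groups would then serve as the constraint for the subsequent Bayesian embedding of the HD feature points.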