Table 1 Summary of the experiments and results
From: Revealing hidden patterns in deep neural network feature space continuum via manifold learning
Experiments | Dataset | DNN | Results | Key conclusion |
---|---|---|---|---|
Analysis of feature datasets of DNNs of different complexities in different biomedical disciplines | BraTS, TCGA, LINCS L1000, ISIC-2019, DR, COVID | Dense-UNet, MLP, VAE-MLP, SRGAN, ResNet, AlexNet, U-Net, mCNN (Table S6), fMLP (Table S5) | | MDA significantly outperforms existing data analysis methods such as t-SNE, UMAP, LLE, and Isomap. |
Robustness test of DNNs against noise | BraTS, COVID | Dense-UNet, U-Net, ResNet, AlexNet | | MDA shows the robustness of a DNN to noise through feature-space visualization. |
Generalizability test of DNNs | TCGA, LINCS L1000 | MLP, VAE-MLP | | MDA reveals the generalizability of a DNN to unknown datasets more accurately than other methods. |
Neural collapse in DNNs for regression tasks | MNIST, TCGA | mCNN, fMLP | | Novel phenomena such as neural collapse can be discovered from MDA visualizations, which is not possible with other visualization methods. |
Quantification of manifold structure | BraTS, MNIST | ResNet, mCNN | Supplementary Fig. S27 | MDA preserves the high-dimensional manifold structure in the low-dimensional representation more accurately than existing methods. |
Neural network behavior for extrapolation task | MNIST, TCGA | mCNN, fMLP | | MDA offers meaningful visualization of the DNNs’ feature space in extrapolation tasks. |
Change in DNNs’ feature space with epoch | BraTS, COVID | Dense-UNet, ResNet | Supplementary Fig. S22 | MDA captures the gradual improvement of the manifold properties of the DNN feature space over the training epochs. |
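
The table above benchmarks MDA against standard manifold-learning baselines applied to DNN feature datasets. As a rough illustration of that pipeline only, the sketch below extracts penultimate-layer features from a toy MLP (trained on scikit-learn's 8×8 digits set as a stand-in for the datasets listed above, not the paper's models or data) and embeds them with baselines named in the table, scoring each 2-D map with scikit-learn's trustworthiness as a simple proxy for the manifold-structure quantification step. MDA itself is the authors' method and is not reproduced here; UMAP is omitted only to avoid the extra umap-learn dependency and could be added analogously.

```python
# Hypothetical sketch (not the paper's MDA method): extract a DNN's
# penultimate-layer features and compare the baseline embeddings that
# Table 1 benchmarks MDA against (t-SNE, Isomap, LLE).
import torch
import torch.nn as nn
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE, Isomap, LocallyLinearEmbedding, trustworthiness
from sklearn.preprocessing import StandardScaler

# Small stand-in dataset (8x8 digits) instead of MNIST/BraTS/TCGA/etc.
X, y = load_digits(return_X_y=True)
X = StandardScaler().fit_transform(X).astype("float32")

# Toy MLP classifier; its penultimate layer defines the "feature space".
model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 32), nn.ReLU(),   # 32-d penultimate features
    nn.Linear(32, 10),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
xb, yb = torch.from_numpy(X), torch.from_numpy(y).long()
for _ in range(200):                 # brief full-batch training, illustration only
    opt.zero_grad()
    loss_fn(model(xb), yb).backward()
    opt.step()

# Extract penultimate-layer activations as the feature dataset.
with torch.no_grad():
    feats = model[:4](xb).numpy()    # layers up to and including the second ReLU

# Embed the features with standard manifold-learning baselines and score how
# well each 2-D map preserves local neighborhoods (trustworthiness in [0, 1]).
embedders = {
    "t-SNE": TSNE(n_components=2, init="pca", random_state=0),
    "Isomap": Isomap(n_components=2, n_neighbors=15),
    "LLE": LocallyLinearEmbedding(n_components=2, n_neighbors=15),
}
for name, emb in embedders.items():
    Z = emb.fit_transform(feats)
    print(f"{name}: trustworthiness = {trustworthiness(feats, Z, n_neighbors=15):.3f}")
```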