Table 1 Comparative summary of existing deep learning methods for hyperspectral image classification and the gap addressed by HSICNet.
| Author / Model | Architecture highlights | Dataset(s) used | Key limitation | Gap addressed by HSICNet |
|---|---|---|---|---|
| Ashraf et al.6 | 3D UNet with spectral-spatial encoding | Indian Pines | Heavy computation, overfitting | Lightweight dual-branch structure |
| Chhapariya et al.5 | Residual attention CNN (DSSpRAN) | Indian Pines, Salinas | Shallow depth, weak spatial learning | Stronger 2D spatial stream |
| Reddy et al.3 | 3D CNN with self-attention | Indian Pines | High complexity, difficult real-time use | Lightweight fusion with attention |
| Farooque et al.4 | Swin Transformer + 3D Atrous CNN | Salinas | Overhead in transformer fusion | Efficient attention-guided fusion |
| Dong et al.36 | CNN + Graph Attention Network (GAT) | Pavia University | Tuning complexity, less interpretability | Simpler fusion, interpretable weights |
| Sun et al.9 | Low-cost sparse CNN (LCTCS) | Pavia University | Accuracy drops under class imbalance | Improved per-class generalisation |
| Zhu et al.15 | SS-ConvNeXt for denoising and spatial features | Indian Pines | High computational cost | PCA + efficient convolution layers |
| Esmaeili et al.7 | CNN + Genetic Algorithm for band selection | Salinas | Unsuitable for real-time use | End-to-end integration of PCA |
| Sellami et al.48 | Semi-supervised hypergraph CNN | Indian Pines | Poor scalability to large datasets | Generalisable across three datasets |
| Bai et al.11 | Spectrum Complementary Learning Network | Indian Pines | Does not use generative fusion | Dynamic attention-based fusion |