Table 1 Comparison of different amplitude-coded compressive spectral imaging methods

From: Spectral imaging with deep learning

| Article | CSI architecture | Performance (PSNR) | Reconstruction model | Deep-learning techniques |
| --- | --- | --- | --- | --- |
| AutoEncoder30 | SD/DD/SS CASSI | 32.46 on CAVE (SS-CASSI) | Autoencoder equation (Eq. (11) in ref. 30) | Autoencoder prior |
| HyperReconNet39 | SD CASSI | 33.63 on ICVL, 31.36 on Harvard | CNN | Hardware representation layer (joint training) |
| Spatial–spectral prior24 | SD CASSI | 34.13 on ICVL, 32.84 on Harvard, 30.03 on KAIST | Unrolled network | Learned network prior |
| External–internal learning35 | SD CASSI | 35.884 on ICVL, 33.585 on Harvard, 29.055 on CAVE | CNN | Dense structure, back-projection pixel loss |
| λ-Net36 | SD CASSI | 32.29 on ICVL (average of 16 scenes) | Conditional GAN | Self-attention, hierarchical structure |
| DNU42 | SD CASSI | 34.24 on ICVL, 32.71 on Harvard | Unrolled network | Learned network prior |
| HCS2-Net31 | SD/SS CASSI | 34.52 on ICVL (10 scenes), 39.22 on CAVE (SS-CASSI), 29.33 on CAVE (SD-CASSI) | CNN (untrained) | Residual block, attention module, unsupervised learning, hardware code concatenated to the input measurement, deep image prior |
| Deep-Tensor45 | SD CASSI | 30.92 on ICVL, Harvard, and KAIST (best mean) | CNN (untrained) | Learned tensor decomposition |

  1. Evaluation results are collected from each original work.
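All entries in the Performance column are PSNR values in decibels, usually computed per spectral band of the reconstructed hyperspectral cube and then averaged. A minimal sketch of that metric, assuming reconstructions normalized to a peak value of 1.0 (the function names and the (H, W, bands) layout are illustrative, not taken from any of the cited works):

```python
import numpy as np

def psnr(reference, reconstruction, peak=1.0):
    """Peak signal-to-noise ratio in dB between two arrays of equal shape."""
    mse = np.mean((reference - reconstruction) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def cube_psnr(reference, reconstruction, peak=1.0):
    """Band-averaged PSNR over the spectral axis of an (H, W, bands) cube."""
    return float(np.mean([
        psnr(reference[..., b], reconstruction[..., b], peak)
        for b in range(reference.shape[-1])
    ]))
```

Note that papers differ in whether they average PSNR per band, per scene, or over the whole cube, which is one reason the numbers above are not strictly comparable across rows.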