Table 2 Restoration performance comparison on unseen sampling ratios (5× and 7×) for the brain dataset. UNet128-FT and ViT-L-FT follow the same pretraining-finetuning pipeline as MR-IPT to ensure a fair evaluation of generalization performance. Results demonstrate MR-IPT’s superior adaptability to novel acceleration factors, outperforming both UNet128-FT and ViT-L-FT in PSNR and SSIM.
From: Magnetic resonance image processing transformer for general accelerated image restoration
Brain – Cartesian Equispaced

| Method | ACC = 5× PSNR | ACC = 5× SSIM | ACC = 7× PSNR | ACC = 7× SSIM |
|---|---|---|---|---|
| UNet128 | 35.20 | 0.9554 | 32.66 | 0.9339 |
| UNet128-FT | 35.52 | 0.9554 | 32.96 | 0.9346 |
| ViT-L | 35.58 | 0.9447 | 33.60 | 0.9296 |
| ViT-L-FT | 36.29 | 0.9490 | 34.60 | 0.9378 |
| MR-IPT-type | 39.71 | 0.9757 | 36.56 | 0.9619 |
| MR-IPT-level | 39.92 | 0.9763 | 36.69 | 0.9626 |
| MR-IPT-split | 39.78 | 0.9758 | 36.49 | 0.9615 |
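For reference, the PSNR values in the table are reported in dB on the restored magnitude images. A minimal NumPy sketch of the PSNR computation is shown below (SSIM is more involved and is typically computed with a windowed implementation such as scikit-image's `structural_similarity`); the function name and the toy images are illustrative, not from the paper.

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between a reference and a restored image."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

# Toy example on a [0, 1]-scaled image: a uniform error of 0.01 gives MSE = 1e-4,
# so PSNR = 10 * log10(1 / 1e-4) = 40 dB.
ref = np.zeros((64, 64))
rec = ref + 0.01
print(round(psnr(ref, rec), 2))  # 40.0
```

Higher PSNR and SSIM both indicate closer agreement with the fully sampled reference, which is why the MR-IPT rows dominate the table.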