Table 2. Existing image fusion methods.
| Ref | Technique | Advantages and limitations |
|---|---|---|
| Spatial domain methods | | |
| Wang and Xing14 | Principal Component Analysis (PCA) | Simple and quick to implement. Images must be converted into 1-D vectors, resulting in a loss of row-column correlation |
| Nawaz et al.55 | 2D-PCA | Addresses the shortcomings of PCA without requiring prior vectorization |
| Calhoun and Adali56 | Independent Component Analysis (ICA) | Finds statistically independent vectors in a linear generative model and can represent visual characteristics more accurately |
| Liu et al.15 | Sparse representation (SR)-based methods | Fail to accurately capture fine details such as edges and textures; computationally expensive and time-consuming; prone to noise |
| Transform-domain approaches | | |
| Wang et al.49 | Wavelet transform (WT) | Analyzes the direction of frequency sub-bands in image fusion and combines images efficiently while preserving both time and frequency information, but does not effectively capture the diverse directional aspects of the scene being fused |
| Suriya and Rangarajan19 | Discrete wavelet transform (DWT) | Better at separating high frequencies from low frequencies, but loses shift-invariance because of the down-sampling process; the lack of translation invariance and artifacts such as aliasing and directional inconsistency lead to the pseudo-Gibbs phenomenon |
| Jin et al.17 | Pyramid transform (PT) | Simple frequency-domain fusion approach, but produces unwanted edges and suffers from blocking artifacts |
| Mankar and Daimiwal50 | Contourlet transform (CoT) | Improves on the isotropic nature of wavelets and captures the intrinsic geometrical elements of the image |
| Guo and Labate51 | Shearlet transform (ST) | Addresses the directionality issue, but is not shift-invariant because of the subsampling process |
| Hybrid approaches | | |
| Atyali and Khot24 | Principal component analysis-discrete wavelet transform (PCA-DWT) | Hybrid of PCA and DWT; fails to capture the details of both source images in the fused image |
| Li et al.53 | Coupled neural P multi-modality medical image fusion (CNP-MIF) | Integrates the coupled neural P (CNP) system with the non-subsampled ST (NSST), but falls short in delivering adequate contrast |
| Li et al.54 | Dynamic threshold neural P systems multi-modality medical image fusion (DTNP-MIF) | Integrates DTNP systems with the non-subsampled CoT (NSCT); limited to multi-modality images and has higher computational complexity |
| Wang et al.16 | Multi-dictionary linear sparse representation and region fusion model (MDLSR-RFM) | Introduces MDLSR for focus detection in the source images and an RFM for boundary-region fusion, but does not maintain the critical features of the source images and produces unwanted artifacts that distort local features |
| Intuitionistic fuzzy based methods | | |
| Balasubramaniam and Ananthi25 | Method based on the IF entropy of Vlachos and Sergiadis37 | Eliminates ambiguity but does not address all uncertainties and does not effectively retain the key features of the source images |
| Palanisami et al.13 | Sugeno complements and IF entropy-based method | The complexity of its steps makes the computational cost somewhat high |
| Jiang et al.12 | Gaussian filtering and IF entropy-based method | IF entropy is used to fuse the detail images, and a simple binary decision map is generated, which fails to effectively preserve the details of the base images in the fused image |
| Jiang et al.3 | IFSM and Laplacian pyramid decomposition | The IFSM used exhibits drawbacks: it yields counter-intuitive results and fails to discern variations in membership and non-membership values, which affects the image fusion outcome |
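For reference alongside the spatial-domain entries above, the following minimal sketch illustrates the classic PCA-weighted fusion rule that the first rows describe. It is not code from any of the cited works; the function name, the assumption of two pre-registered 8-bit grayscale source images of equal size, and the use of NumPy are illustrative choices only.

```python
import numpy as np

def pca_fusion(img1, img2):
    """Fuse two pre-registered grayscale images using PCA-derived weights.

    Each image is flattened to a 1-D vector (the step that, as noted in the
    table, discards row-column correlation); the dominant eigenvector of the
    2x2 covariance matrix of the two vectors supplies the fusion weights.
    """
    x = np.stack([np.asarray(img1, dtype=np.float64).ravel(),
                  np.asarray(img2, dtype=np.float64).ravel()])  # shape (2, N)
    cov = np.cov(x)                      # 2x2 covariance of the flattened images
    _, eigvecs = np.linalg.eigh(cov)     # eigenvalues returned in ascending order
    v = np.abs(eigvecs[:, -1])           # dominant eigenvector; its sign is arbitrary
    w = v / v.sum()                      # normalize so the two weights sum to 1
    fused = (w[0] * np.asarray(img1, dtype=np.float64)
             + w[1] * np.asarray(img2, dtype=np.float64))
    return np.clip(fused, 0, 255).astype(np.uint8)

# Example with hypothetical inputs: fused = pca_fusion(ct_slice, mri_slice)
```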