Fractal dimension-based multi-focus image fusion via distance-weighted regional energy in curvelet domain
  • Article
  • Open access
  • Published: 04 April 2026

  • Ming Lv1,2,3,
  • Zhenhong Jia1,2,3,
  • Wu Le2,
  • Liangliang Li4 &
  • Hongbing Ma5 

Scientific Reports (2026)

We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Engineering
  • Mathematics and computing

Abstract

To address the challenges of information loss and noise interference in multi-focus image fusion, this paper presents a novel curvelet-domain fusion algorithm based on distance-weighted regional energy (DWRE) and fractal dimension. The proposed method first decomposes the source images using the curvelet transform, obtaining low- and high-frequency sub-bands. The high-frequency sub-bands are fused using DWRE and fractal dimension in conjunction with a consistency verification strategy, ensuring that salient details and structural information are effectively preserved. For the low-frequency sub-band, an averaging-based fusion rule is applied to maintain overall intensity information. Finally, the inverse curvelet transform reconstructs the fused image. To evaluate the effectiveness of the proposed algorithm, experiments are conducted on the Lytro and MFI-WHU benchmark datasets. The results demonstrate that our approach achieves superior fusion performance compared to several state-of-the-art (SOTA) methods, particularly in detail preservation, noise suppression, and visual quality. It also shows clear advantages on the objective evaluation metrics \(Q_{AB/F}\), \(Q_{CB}\), \(Q_{FMI}\), \(Q_{G}\), \(Q_{MI}\), \(Q_{NCIE}\), \(Q_{P}\), \(Q_{MSE}\), \(Q_{PSNR}\) and \(Q_{Y}\).
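The fusion rules described above can be sketched in pure Python. Note this is an illustrative sketch only: the paper does not state its exact window size or distance weighting here, so the 3×3 window and the 1/(1 + d) inverse-distance weight below are assumptions, and the fractal-dimension term and consistency-verification step are omitted. Sub-bands are represented as plain nested lists standing in for curvelet coefficients.

```python
import math

def dwre(band, i, j, r=1):
    """Distance-weighted regional energy at (i, j): squared coefficients
    in a (2r+1)x(2r+1) window, weighted inversely by Euclidean distance
    to the window center (an assumed weighting, not the paper's exact one)."""
    h, w = len(band), len(band[0])
    energy = 0.0
    for m in range(-r, r + 1):
        for n in range(-r, r + 1):
            y, x = i + m, j + n
            if 0 <= y < h and 0 <= x < w:
                weight = 1.0 / (1.0 + math.hypot(m, n))
                energy += weight * band[y][x] ** 2
    return energy

def fuse_high(band_a, band_b, r=1):
    """High-frequency rule: at each position, keep the coefficient from
    the source whose neighborhood has the larger DWRE."""
    h, w = len(band_a), len(band_a[0])
    fused = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if dwre(band_a, i, j, r) >= dwre(band_b, i, j, r):
                fused[i][j] = band_a[i][j]
            else:
                fused[i][j] = band_b[i][j]
    return fused

def fuse_low(band_a, band_b):
    """Low-frequency rule: element-wise average of the two sub-bands."""
    return [[(a + b) / 2.0 for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(band_a, band_b)]
```

In the full method these rules would be applied to each sub-band produced by the forward curvelet transform before inverting it; no curvelet implementation is included here.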


Data availability

All experimental datasets used in this study are publicly available, and the authors declare no conflicts of interest. The Lytro and MFI-WHU datasets can be obtained, respectively, from www.researchgate.net/publication/291522937_Lytro_Multi-focus_Image_Dataset and https://github.com/HaoZhang1018/MFI-WHU.


Funding

This research was funded by the National Natural Science Foundation of China under Grant No. 62261053; by the Tianshan Talent Training Project, Xinjiang Science and Technology Innovation Team Program, under Grant No. 2023TSYCTD0012; and by the Research Project of the Xinjiang Sky-Ground Integrated Intelligent Computing Technology Laboratory under Grant No. 2025A05-1.

Author information

Authors and Affiliations

  1. School of Computer Science and Technology, Xinjiang University, Urumqi, 830046, China

    Ming Lv & Zhenhong Jia

  2. Xinjiang Sky-Ground Integrated Intelligent Computing Technology Laboratory, Changji, 831199, China

    Ming Lv, Zhenhong Jia & Wu Le

  3. Key Laboratory of Signal Detection and Processing, Xinjiang University, Urumqi, 830046, China

    Ming Lv & Zhenhong Jia

  4. School of Computer and Artificial Intelligence, Zhengzhou University, Zhengzhou, 450001, China

    Liangliang Li

  5. Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China

    Hongbing Ma


Contributions

The experiments and data collection were conducted by M.L., Z.J., W.L., L.L., and H.M. The manuscript was drafted by M.L., with contributions from the co-authors. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Zhenhong Jia.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Lv, M., Jia, Z., Le, W. et al. Fractal dimension-based multi-focus image fusion via distance-weighted regional energy in curvelet domain. Sci Rep (2026). https://doi.org/10.1038/s41598-026-44394-8


  • Received: 20 October 2025

  • Accepted: 11 March 2026

  • Published: 04 April 2026

  • DOI: https://doi.org/10.1038/s41598-026-44394-8


Keywords

  • Multi-focus image
  • Image fusion
  • Curvelet
  • DWRE
  • Consistency verification
