Temporally consistent low-light face video enhancement via video-to-video conditional diffusion

  • Article
  • Open access
  • Published: 18 March 2026

  • Xiaofeng Ding1,
  • Kailin He1,
  • Huo Sun2 &
  • Juying Yang1

Scientific Reports, Article number: (2026)

We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note that errors affecting the content may be present, and all legal disclaimers apply.

Subjects

  • Engineering
  • Mathematics and computing

Abstract

Low-light face videos suffer from severe noise and detail loss, limiting their use in surveillance and photography applications. To address these challenges, this paper proposes DL-Diff, a novel low-light face video enhancement framework that formulates this task as a conditional video-to-video (V2V) generation problem based on pre-trained Latent Diffusion Models (LDMs). DL-Diff extends pre-trained text-to-video models through three components: a pseudo-3D UNet backbone, a restoration component for spatial detail recovery, and a temporal component for inter-frame consistency. A multi-stage training strategy enables efficient domain adaptation from images to videos. Experiments on DID and SDSD datasets demonstrate that DL-Diff achieves superior performance in both perceptual quality (FID: 41.29, LPIPS: 0.17) and temporal consistency (AB(Var): 25.40, MABD: 0.08), significantly outperforming existing methods. The framework produces high-quality videos with realistic visual effects and no flickering artifacts, particularly excelling in extremely dark scenarios. This work demonstrates the potential of leveraging pre-trained diffusion models for video enhancement tasks.
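This preview does not include implementation details of the pseudo-3D UNet backbone. As a rough illustration only, the sketch below shows the standard factorized "(2+1)D" convolution commonly used to inflate a pre-trained image diffusion UNet to video: a spatial 2D convolution that can inherit pre-trained image weights, followed by a zero-initialized temporal 1D convolution for inter-frame mixing. The class name PseudoConv3d and all hyperparameters are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (assumption, not the authors' implementation): a factorized
# "pseudo-3D" convolution of the kind used to inflate a pre-trained
# text-to-image UNet into a video model.
import torch
import torch.nn as nn


class PseudoConv3d(nn.Module):  # hypothetical name
    """Factorized (2+1)D conv: per-frame spatial conv + per-pixel temporal conv."""

    def __init__(self, channels: int, kernel: int = 3):
        super().__init__()
        # Spatial 2D conv, applied independently to each frame; in an
        # inflated model this layer would load the pre-trained image weights.
        self.spatial = nn.Conv2d(channels, channels, kernel, padding=kernel // 2)
        # Temporal 1D conv, applied independently at each spatial location.
        self.temporal = nn.Conv1d(channels, channels, kernel, padding=kernel // 2)
        # Zero-init the temporal branch so the block initially reproduces
        # the image model's output (identity along the time axis).
        nn.init.zeros_(self.temporal.weight)
        nn.init.zeros_(self.temporal.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, height, width)
        b, c, t, h, w = x.shape
        y = x.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
        y = self.spatial(y)  # per-frame 2D convolution
        y = y.reshape(b, t, c, h, w).permute(0, 3, 4, 2, 1).reshape(b * h * w, c, t)
        y = y + self.temporal(y)  # residual temporal mixing across frames
        return y.reshape(b, h, w, c, t).permute(0, 3, 4, 1, 2)


if __name__ == "__main__":
    block = PseudoConv3d(channels=8)
    video = torch.randn(2, 8, 16, 32, 32)  # 2 clips, 16 frames of 32x32
    print(block(video).shape)  # torch.Size([2, 8, 16, 32, 32])
```

Zero-initializing the temporal branch makes the inflated block behave exactly like the underlying image model at the start of fine-tuning, which is consistent with the multi-stage image-to-video domain adaptation strategy described in the abstract.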


Data availability

The datasets generated and/or analyzed during the current study are publicly available. The Diagnostic Imaging Data Set (DID) can be accessed at https://digital.nhs.uk/data-and-information/data-collections-and-data-sets/data-sets/diagnostic-imaging-data-set. Additionally, the SDSD Dataset is available at https://doi.org/10.48550/arXiv.2404.00834.


Author information

Authors and Affiliations

  1. Sichuan University Jinjiang College, Meishan, 620860, Sichuan, China

    Xiaofeng Ding, Kailin He & Juying Yang

  2. Sichuan Railway College, Chengdu, 611732, Sichuan, China

    Huo Sun


Contributions

Xiaofeng Ding: Writing—original draft; Writing—review & editing; Conceptualization; Resources; Formal analysis. Kailin He: Writing—review & editing; Methodology; Supervision; Software. Huo Sun: Writing—original draft; Writing—review & editing; Conceptualization; Resources; Data curation. Juying Yang: Writing—review & editing; Methodology; Supervision; Formal analysis. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Xiaofeng Ding.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Ding, X., He, K., Sun, H. et al. Temporally consistent low-light face video enhancement via video-to-video conditional diffusion. Sci Rep (2026). https://doi.org/10.1038/s41598-026-44219-8


  • Received: 24 December 2025

  • Accepted: 10 March 2026

  • Published: 18 March 2026

  • DOI: https://doi.org/10.1038/s41598-026-44219-8


Keywords

  • Low-light face video enhancement (LLFVE)
  • Latent diffusion models (LDMs)
  • Video-to-video generation (V2V)
  • Temporal consistency
  • Pre-trained models