Scientific Reports
Generative adversarial networks for high-fidelity 3D point cloud completion
  • Article
  • Open access
  • Published: 18 March 2026


  • Di Zhao1,
  • Sizhe Mao1,
  • Junhan Shao1 &
  • Hui Huang1

Scientific Reports (2026)


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Engineering
  • Mathematics and computing

Abstract

3D point clouds are essential for representing geometric structures in fields such as autonomous driving and virtual reality. However, real-world data often suffer from incompleteness caused by occlusions and noise, and existing completion methods typically either rely on paired complete–incomplete training data or can recover only relatively small missing regions, which limits their effectiveness under high missing-rate scenarios. This paper introduces a GAN-based method for 3D point cloud completion that reconstructs detailed structures from partial inputs. Our end-to-end framework, consisting of an encoder, a generator, and a discriminator, optimizes topological accuracy and spatial continuity through a multi-term joint loss. Experimental results on the ModelNet40 dataset demonstrate superior performance over traditional and deep learning-based methods, achieving a Chamfer Distance (CD) of 0.085, an Earth Mover's Distance (EMD) of 0.199, and an F-Score of 0.208. The generated high-quality point clouds support downstream tasks such as path planning and robotic grasping. The source code and experimental datasets used in this work are publicly available at https://doi.org/10.5281/zenodo.18421141.
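As background on the evaluation metrics named in the abstract, Chamfer Distance and F-Score are both defined over nearest-neighbor distances between the predicted and ground-truth point sets. The brute-force NumPy sketch below is purely illustrative and is not the authors' implementation; in particular, the `threshold` default is an assumption (completion papers commonly set it to a fraction of the bounding-box diagonal), and real pipelines use KD-trees rather than a full pairwise distance matrix.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between point sets p (N,3) and q (M,3).

    Sum of mean nearest-neighbor distances in both directions.
    """
    # Full pairwise Euclidean distance matrix, shape (N, M).
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def f_score(p, q, threshold=0.01):
    """F-Score: harmonic mean of precision and recall at a distance threshold."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    precision = (d.min(axis=1) < threshold).mean()  # predicted points near ground truth
    recall = (d.min(axis=0) < threshold).mean()     # ground-truth points near prediction
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Two identical point clouds give a Chamfer Distance of 0 and an F-Score of 1, which is a quick sanity check on any implementation of these metrics.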

Data availability

The datasets analyzed during the current study are publicly available in the ModelNet40 dataset, provided by the Princeton ModelNet repository (https://modelnet.cs.princeton.edu). The source code and representative experimental data required to reproduce the results reported in this paper are permanently archived and publicly available via Zenodo at DOI: 10.5281/zenodo.18421141.


Funding

This work was supported by the Major Science and Technology Projects (No. 318J009).

Author information

Authors and Affiliations

  1. School of Mechanical Engineering, Hubei University of Technology, Wuhan, China

    Di Zhao, Sizhe Mao, Junhan Shao & Hui Huang


Contributions

D.Z. was responsible for the overall research framework and study design. S.M. designed the algorithms and performed data analysis and processing. J.S. contributed to the optimization of the model modules. H.H. conducted the experimental design and experimental data analysis. All authors reviewed and approved the final manuscript.

Corresponding authors

Correspondence to Di Zhao or Sizhe Mao.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Zhao, D., Mao, S., Shao, J. et al. Generative adversarial networks for high-fidelity 3D point cloud completion. Sci Rep (2026). https://doi.org/10.1038/s41598-026-44111-5


  • Received: 03 February 2026

  • Accepted: 09 March 2026

  • Published: 18 March 2026

  • DOI: https://doi.org/10.1038/s41598-026-44111-5


Keywords

  • 3D point cloud completion
  • Generative adversarial networks
  • 3D object generation
  • Deep learning