BODIES: BOdy shape parameter and 3D meshes of Individuals basEd on SUPR
  • Data Descriptor
  • Open access
  • Published: 24 March 2026

  • Alberto Cannavò (ORCID: orcid.org/0000-0002-6884-9268)1,
  • Francesco Manigrasso1,
  • Federica Moro1 &
  • Fabrizio Lamberti1

Scientific Data (2026)


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Computational science
  • Software

Abstract

Today, an increasing number of applications in domains such as cultural heritage, healthcare, education, entertainment, and fashion require high-fidelity 3D avatars. However, generating avatars that faithfully reproduce users’ bodies through modeling or acquisition techniques remains challenging and time-consuming, particularly in applications where accurate quantitative reproduction of body shape and precise anthropometric measurements are required. Attention is therefore shifting towards machine learning-based approaches, in particular those able to fit a parametric model representing the avatar to the intended body shape. Among these models, the Sparse Unified Part-Based Human Representation (SUPR) has been shown to offer superior performance compared to other representations. However, its adoption is primarily hindered by the lack of datasets built upon it. This paper addresses this gap by proposing BOdy shape parameter and 3D meshes of Individuals basEd on SUPR (BODIES), a dataset containing 84,000 synthetically generated subjects described using the SUPR model with different numbers of parameters. The paper also presents the results of three experimental studies assessing the improvements brought by the SUPR model over the state of the art when used to feed an existing framework for generating 3D avatar meshes.
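SUPR, like other SMPL-family parametric representations, maps a low-dimensional vector of shape parameters to mesh vertices through learned linear blendshapes applied to a template mesh. The following is a minimal conceptual sketch of that mapping in NumPy; the array names, shapes, and random placeholder values are illustrative assumptions, not the actual SUPR model data or its API:

```python
import numpy as np

# Conceptual sketch of how SMPL/SUPR-family models turn shape parameters
# (betas) into mesh vertices: a mean template mesh is deformed by a
# linear combination of learned shape blendshapes.
# All data below is a randomly generated stand-in for the learned model.

rng = np.random.default_rng(0)

n_vertices = 10475   # illustrative vertex count, not SUPR's actual topology
n_betas = 10         # number of shape parameters used

template = rng.standard_normal((n_vertices, 3))             # mean body mesh
shape_dirs = rng.standard_normal((n_vertices, 3, n_betas))  # blendshape basis

def shaped_vertices(betas: np.ndarray) -> np.ndarray:
    """Apply linear blend shapes: V = T + S @ betas."""
    # (n_vertices, 3, n_betas) @ (n_betas,) -> (n_vertices, 3)
    return template + shape_dirs @ betas

# With all-zero betas the output is exactly the template mesh.
assert np.allclose(shaped_vertices(np.zeros(n_betas)), template)
```

In the real model, the template and blendshape basis are learned from registered 3D scans, and pose-dependent deformations plus skinning are applied on top of the shaped vertices.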


Data availability

The BODIES dataset is available for download at ref. 28.
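Ref. 28 archives the dataset on Zenodo under DOI 10.5281/zenodo.17912003. As a small sketch, assuming the standard Zenodo DOI pattern (`10.5281/zenodo.<record_id>`) and Zenodo's public records REST API, the record's metadata endpoint can be derived from the DOI; the helper name below is hypothetical:

```python
# Build the Zenodo REST API URL for a record from its DOI.
# Assumes the standard Zenodo DOI pattern 10.5281/zenodo.<record_id>
# and the public /api/records endpoint; adjust if the record is versioned.

def zenodo_record_url(doi: str) -> str:
    prefix = "10.5281/zenodo."
    if not doi.startswith(prefix):
        raise ValueError(f"not a Zenodo DOI: {doi}")
    record_id = doi[len(prefix):]
    return f"https://zenodo.org/api/records/{record_id}"

print(zenodo_record_url("10.5281/zenodo.17912003"))
# -> https://zenodo.org/api/records/17912003
```

Fetching that URL (e.g. with `urllib.request`) returns JSON whose files section lists the downloadable archives, provided Zenodo's API behaves as assumed here.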

Code availability

The source code that can be used to run the experiments described in this paper is available at https://tinyurl.com/kmrk4rwj.

References

  1. Weidner, F. et al. A systematic review on the visualization of avatars and agents in AR and VR displayed using head-mounted displays. IEEE Trans. Vis. Comput. Graph. 29, 2596–2606, https://doi.org/10.1109/TVCG.2023.3247072 (2023).

  2. Thota, K. S. P., Suh, S., Zhou, B. & Lukowicz, P. Estimation of 3D body shape and clothing measurements from frontal- and side-view images. Proc. IEEE Int. Conf. Image Process. 2631–2635, https://doi.org/10.1109/ICIP46576.2022.9897520 (2022).

  3. Li, B., Deng, Y., Yang, Y. & Zhao, X. An embeddable implicit IUVD representation for part-based 3D human surface reconstruction. IEEE Trans. Image Process. 33, 4334–4347, https://doi.org/10.1109/TIP.2024.3430073 (2024).

  4. Lin, J., Zeng, A., Wang, H., Zhang, L. & Li, Y. One-stage 3D whole-body mesh recovery with component-aware transformer. Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. 21159–21168, https://doi.org/10.1109/CVPR52729.2023.02027 (2023).

  5. Black, M. J., Patel, P., Tesch, J. & Yang, J. BEDLAM: a synthetic dataset of bodies exhibiting detailed lifelike animated motion. Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. 8726–8737, https://doi.org/10.1109/CVPR52729.2023.00843 (2023).

  6. Xiu, Y., Yang, J., Tzionas, D. & Black, M. ICON: implicit clothed humans obtained from normals. Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. 13286–13296, https://doi.org/10.1109/CVPR52688.2022.01294 (2022).

  7. Peng, S. et al. Neural body: implicit neural representations with structured latent codes for novel view synthesis of dynamic humans. Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. 9054–9063, https://doi.org/10.1109/CVPR46437.2021.00894 (2021).

  8. Xu, H. et al. GHUM & GHUML: generative 3D human shape and articulated pose models. Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. 6184–6193, https://doi.org/10.1109/CVPR42600.2020.00622 (2020).

  9. Tian, Y., Zhang, H., Liu, Y. & Wang, L. Recovering 3D human mesh from monocular images: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 45, 15406–15425, https://doi.org/10.1109/TPAMI.2023.3298850 (2023).

  10. Loper, M., Mahmood, N., Romero, J., Pons-Moll, G. & Black, M. J. SMPL: a skinned multi-person linear model. ACM Trans. Graph. 34, 1–16, https://doi.org/10.1145/2816795.2818013 (2015).

  11. Gu, D., Yun, Y., Tuan, T. T. & Ahn, H. Dense-Pose2SMPL: 3D human body shape estimation from single and multiple images. IEEE Access 10, 75859–75871, https://doi.org/10.1109/ACCESS.2022.3191644 (2022).

  12. Li, X. et al. Learning to infer inner-body under clothing from monocular video. IEEE Trans. Vis. Comput. Graph. 29, 5083–5096, https://doi.org/10.1109/TVCG.2022.3202240 (2022).

  13. Osman, A., Bolkart, T., Tzionas, D. & Black, M. J. SUPR: a sparse unified part-based human representation. Proc. European Conf. on Computer Vision 568–585, https://doi.org/10.1007/978-3-031-20086-1_33 (2022).

  14. Paquette, S. Anthropometric survey (ANSUR II) pilot study: methods and summary statistics. Anthrotch, US Army Natick Soldier Research, Development and Engineering Center (2009).

  15. Cannavò, A., Pesando, R. & Lamberti, F. A framework for animating customized avatars from monocular videos in virtual try-on applications. Proc. Int. Conf. Extended Reality 69–88, https://doi.org/10.1007/978-3-031-43401-3_5 (2023).

  16. Cannavò, A., Offre, G. & Lamberti, F. A semi-automated pipeline for the creation of virtual fitting room experiences featuring motion capture and cloth simulation. IEEE Comput. Graph. Appl. 45, 84–98, https://doi.org/10.1109/MCG.2024.3521716 (2024).

  17. Bartol, K. & Gumhold, S. Protocols for high-quality indoor and outdoor scanning of clothed people. Proc. Int. Conf. and Exhibition on 3D Body Scanning and Processing Technologies 1–10 (2023).

  18. Liu, Y. et al. Implicit-based collision-aware clothed human reconstruction from a single image. Comput. & Graph. 128, 104201, https://doi.org/10.1016/j.cag.2025.104201 (2025).

  19. Tajdari, F. et al. 4D feet: registering walking foot shapes using attention-enhanced dynamic-synchronized graph convolutional LSTM network. IEEE Open J. Comput. Soc. 5, 343–355, https://doi.org/10.1109/OJCS.2024.3406645 (2024).

  20. Pang, H. E. et al. Benchmarking and analyzing 3D human pose and shape estimation beyond algorithms. Proc. Int. Conf. Neural Inf. Process. Syst. 35 (2022).

  21. Gonzalez-Tejeda, Y. & Mayer, H. CALVIS: chest, waist and pelvis circumference from 3D human body meshes as ground truth for deep learning. Proc. Int. Workshop Shape Motion Imaging Data, https://doi.org/10.48550/arXiv.2003.00834, https://github.com/neoglez/calvis (2019).

  22. Varol, G. et al. Learning from synthetic humans. Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 109–117, https://doi.org/10.1109/CVPR.2017.492, https://github.com/gulvarol/surreal (2017).

  23. Patel, P. et al. AGORA: avatars in geography optimized for regression analysis. Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. 13468–13478, https://doi.org/10.1109/CVPR46437.2021.01326, https://github.com/pixelite1201/agora_evaluation (2021).

  24. Mahmood, N., Ghorbani, N., Troje, N. F., Pons-Moll, G. & Black, M. J. AMASS: archive of motion capture as surface shapes. Proc. IEEE/CVF Int. Conf. Comput. Vis. 5442–5451, https://doi.org/10.1109/ICCV.2019.00554, https://github.com/nghorbani/amass (2019).

  25. Pons-Moll, G., Pujades, S., Hu, S. & Black, M. J. ClothCap: seamless 4D clothing capture and retargeting. ACM Trans. Graph. 36, 1–15, https://doi.org/10.1145/3072959.3073711 (2017).

  26. Zhuang, Y. et al. IDOL: instant photorealistic 3D human creation from a single image. Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. 26308–26319, https://doi.org/10.1109/CVPR52734.2025.02450, https://github.com/yiyuzhuang/IDOL (2025).

  27. Xiong, Z. et al. MVHumanNet: a large-scale dataset of multi-view daily dressing human captures. Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. 19801–19811, https://doi.org/10.1109/CVPR52733.2024.01872, https://github.com/GAP-LAB-CUHK-SZ/MVHumanNet (2024).

  28. Cannavò, A., Manigrasso, F., Moro, F. & Lamberti, F. BODIES: Body shape parameters and 3D meshes of individuals based on SUPR. Zenodo https://doi.org/10.5281/zenodo.17912003 (2025).

  29. Rumman, N. A. & Fratarcangeli, M. Skin deformation methods for interactive character animation. Proc. Int. Joint Conf. Comput. Vis. Imaging and Comput. Graph. 153–174, https://doi.org/10.1007/978-3-319-64870-5_8 (2017).

  30. Tanner, J. M. Foetus into man: physical growth from conception to maturity. Harvard Univ. Press (1990).

  31. Brožek, J., Pařízková, J., Mendez, J. & Bartkett, H. The evaluation of body surface, body volume and body composition in human biology research. Anthropologie 25, 235–259 (1987).

  32. Ronneberger, O., Fischer, P. & Brox, T. U-net: convolutional networks for biomedical image segmentation. Proc. Int. Conf. on Medical Image Computing and Computer-Assisted Intervention 234–241, https://doi.org/10.1007/978-3-319-24574-4_28 (2015).

  33. Hasler, N., Stoll, C., Sunkel, M., Rosenhahn, B. & Seidel, H. A statistical model of human pose and body shape. Comput. Graph. Forum 28, 337–346, https://doi.org/10.1111/j.1467-8659.2009.01373.x (2009).

  34. Klambauer, G., Unterthiner, T., Mayr, A. & Hochreiter, S. Self-normalizing neural networks. Proc. Int. Conf. Neural Inf. Process. Syst. 30, 972–981 (2017).

  35. Santurkar, S., Tsipras, D., Ilyas, A. & Madry, A. How does batch normalization help optimization? Proc. Int. Conf. Neural Inf. Process. Syst. 31, 2488–2498 (2018).

  36. Wang, X. et al. ESRGAN: enhanced super-resolution generative adversarial networks. Proc. European Conf. on Computer Vision - Workshops, https://doi.org/10.1007/978-3-030-11021-5_5 (2018).

  37. Chen, X. et al. Robust human matting via semantic guidance. Proc. Asian Conf. Comput. Vis. 2984–2999, https://doi.org/10.1007/978-3-031-26284-5_37 (2022).

  38. Fischer, J. & Gumhold, S. Fast and accurate parameter conversion for parametric human body models. Proc. ACM Comput. Graph. Interact. Tech. 8, 1–21, https://doi.org/10.1145/3747869 (2025).

  39. Liu, L. & Zhao, K. Report on methods and applications for crafting 3D humans. Preprint at https://doi.org/10.48550/arXiv.2406.01223 (2024).

  40. Cao, Q., Yu, H., Charisse, P., Qiao, S. & Stevens, B. Is high-fidelity important for human-like virtual avatars in human computer interactions? Int. J. Netw. Dyn. Intell. 2, 15–23, https://doi.org/10.53941/ijndi0201008 (2023).

  41. Restivo, S. et al. Interacting with ancient Egypt remains in high-fidelity virtual reality experiences. Proc. Eurographics Workshop Graph. Cult. Herit. https://doi.org/10.2312/gch.20231175 (2023).


Acknowledgements

This research was developed in collaboration with Protocube Reply and was supported by PON “Ricerca e Innovazione” 2014-2020 - DM 1062/2021 funds. The authors thank Kundan Sai Prabhu Thota, one of the authors of ref. 2, for his support with the configuration and usage of the framework that was later optimized and used for the experimental studies.

Author information

Authors and Affiliations

  1. Department of Control and Computer Engineering, Politecnico di Torino, Turin, Italy

    Alberto Cannavò, Francesco Manigrasso, Federica Moro & Fabrizio Lamberti


Contributions

Federica Moro and Francesco Manigrasso contributed to the design and development of the dataset, as well as the implementation of the experimental studies. Alberto Cannavò contributed to the design of the experimental studies and to the initial drafting of the paper. Fabrizio Lamberti contributed to the design of the experimental studies, as well as to the writing and revision of the paper.

Corresponding author

Correspondence to Alberto Cannavò.

Ethics declarations

Competing interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Cannavò, A., Manigrasso, F., Moro, F. et al. BODIES: BOdy shape parameter and 3D meshes of Individuals basEd on SUPR. Sci Data (2026). https://doi.org/10.1038/s41597-026-06777-4

  • Received: 18 August 2025

  • Accepted: 31 January 2026

  • Published: 24 March 2026

  • DOI: https://doi.org/10.1038/s41597-026-06777-4

