Abstract
Today, an increasing number of applications in domains such as cultural heritage, healthcare, education, entertainment, and fashion require high-fidelity 3D avatars. However, generating avatars that faithfully reproduce users’ bodies through modeling or acquisition techniques remains challenging and time-consuming, particularly in applications where accurate quantitative reproduction of body shape and precise anthropometric measurements are required. Thus, attention is shifting towards machine learning-based approaches, in particular those able to fit a parametric model representing the avatar to the intended body shape. Among these models, the Sparse Unified Part-Based Human Representation (SUPR) has been shown to offer superior performance compared to other representations. However, its adoption is primarily hindered by the lack of datasets built upon it. This paper addresses this gap by proposing BOdy shape parameter and 3D meshes of Individuals basEd on SUPR (BODIES), a dataset containing 84,000 synthetically generated subjects described using the SUPR model with different numbers of shape parameters. The paper also presents the results of three experimental studies aimed at assessing the improvements brought by the SUPR model over the state of the art when used as input to an existing framework for generating 3D avatar meshes.
Data availability
The BODIES dataset is available for download from Zenodo (ref. 28).
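As a minimal illustration of how a single record of the dataset could be inspected in Python, the sketch below loads the shape parameters and the corresponding 3D mesh of one synthetic subject. The file names, archive key, and mesh format used here are hypothetical placeholders; the actual layout of the dataset is documented in the Zenodo record (ref. 28).

import numpy as np
import trimesh

# Load the SUPR shape (beta) parameters of one synthetic subject.
# "subject_00001_params.npz" and the key "betas" are hypothetical placeholders.
params = np.load("subject_00001_params.npz")
betas = params["betas"]
print(f"Number of shape parameters: {betas.shape[0]}")

# Load the corresponding ground-truth mesh and report its size.
# "subject_00001_mesh.obj" is a hypothetical placeholder.
mesh = trimesh.load("subject_00001_mesh.obj")
print(f"Vertices: {len(mesh.vertices)}, faces: {len(mesh.faces)}")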
Code availability
The source code that can be used to run the experiments described in this paper is available at https://tinyurl.com/kmrk4rwj.
References
Weidner, F. et al. A systematic review on the visualization of avatars and agents in AR and VR displayed using head-mounted displays. IEEE Trans. Vis. Comput. Graph. 29, 2596–2606, https://doi.org/10.1109/TVCG.2023.3247072 (2023).
Thota, K. S. P., Suh, S., Zhou, B. & Lukowicz, P. Estimation of 3D body shape and clothing measurements from frontal- and side-view images. Proc. IEEE Int. Conf. Image Process. 2631–2635, https://doi.org/10.1109/ICIP46576.2022.9897520 (2022).
Li, B., Deng, Y., Yang, Y. & Zhao, X. An embeddable implicit IUVD representation for part-based 3D human surface reconstruction. IEEE Trans. Image Process. 33, 4334–4347, https://doi.org/10.1109/TIP.2024.3430073 (2024).
Lin, J., Zeng, A., Wang, H., Zhang, L. & Li, Y. One-stage 3D whole-body mesh recovery with component-aware transformer. Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. 21159–21168, https://doi.org/10.1109/CVPR52729.2023.02027 (2023).
Black, M. J., Patel, P., Tesch, J. & Yang, J. BEDLAM: a synthetic dataset of bodies exhibiting detailed lifelike animated motion. Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. 8726–8737, https://doi.org/10.1109/CVPR52729.2023.00843 (2023).
Xiu, Y., Yang, J., Tzionas, D. & Black, M. ICON: implicit clothed humans obtained from normals. Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. 13286–13296, https://doi.org/10.1109/CVPR52688.2022.01294 (2022).
Peng, S. et al. Neural body: implicit neural representations with structured latent codes for novel view synthesis of dynamic humans. Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. 9054–9063, https://doi.org/10.1109/CVPR46437.2021.00894 (2021).
Xu, H. et al. GHUM & GHUML: generative 3D human shape and articulated pose models. Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. 6184–6193, https://doi.org/10.1109/CVPR42600.2020.00622 (2020).
Tian, Y., Zhang, H., Liu, Y. & Wang, L. Recovering 3D human mesh from monocular images: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 45, 15406–15425, https://doi.org/10.1109/TPAMI.2023.3298850 (2023).
Loper, M., Mahmood, N., Romero, J., Pons-Moll, G. & Black, M. J. SMPL: a skinned multi-person linear model. ACM Trans. Graph. 34, 1–16, https://doi.org/10.1145/2816795.2818013 (2015).
Gu, D., Yun, Y., Tuan, T. T. & Ahn, H. Dense-Pose2SMPL: 3D human body shape estimation from single and multiple images. IEEE Access 10, 75859–75871, https://doi.org/10.1109/ACCESS.2022.3191644 (2022).
Li, X. et al. Learning to infer inner-body under clothing from monocular video. IEEE Trans. Vis. Comput. Graph. 29, 5083–5096, https://doi.org/10.1109/TVCG.2022.3202240 (2022).
Osman, A., Bolkart, T., Tzionas, D. & Black, M. J. SUPR: a sparse unified part-based human representation. Proc. European Conf. on Computer Vision, 568–585, https://doi.org/10.1007/978-3-031-20086-1_33 (2022).
Paquette, S. Anthropometric survey (ANSUR II) pilot study: methods and summary statistics. Anthrotech, US Army Natick Soldier Research, Development and Engineering Center (2009).
Cannavò, A., Pesando, R. & Lamberti, F. A framework for animating customized avatars from monocular videos in virtual try-on applications. Proc. Int. Conf. Extended Reality 69–88, https://doi.org/10.1007/978-3-031-43401-3_5 (2023).
Cannavò, A., Offre, G. & Lamberti, F. A semi-automated pipeline for the creation of virtual fitting room experiences featuring motion capture and cloth simulation. IEEE Comput. Graph. Appl. 45, 84–98, https://doi.org/10.1109/MCG.2024.3521716 (2024).
Bartol, K. & Gumhold, S. Protocols for high-quality indoor and outdoor scanning of clothed people. Proc. Int. Conf. and Exhibition on 3D Body Scanning and Processing Technologies 1–10 (2023).
Liu, Y. et al. Implicit-based collision-aware clothed human reconstruction from a single image. Comput. & Graph. 128, 104201, https://doi.org/10.1016/j.cag.2025.104201 (2025).
Tajdari, F. et al. 4D feet: registering walking foot shapes using attention-enhanced dynamic-synchronized graph convolutional LSTM network. IEEE Open J. Comput. Soc. 5, 343–355, https://doi.org/10.1109/OJCS.2024.3406645 (2024).
Pang, H. E. et al. Benchmarking and analyzing 3D human pose and shape estimation beyond algorithms. Proc. Int. Conf. Neural Inf. Process. Syst. 35 (2022).
Gonzalez-Tejeda, Y. & Mayer, H. CALVIS: chest, waist and pelvis circumference from 3D human body meshes as ground truth for deep learning. Proc. Int. Workshop Shape Motion Imaging Data, https://doi.org/10.48550/arXiv.2003.00834, https://github.com/neoglez/calvis (2019).
Varol, G. et al. Learning from synthetic humans. Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 109–117, https://doi.org/10.1109/CVPR.2017.492, https://github.com/gulvarol/surreal (2017).
Patel, P. et al. AGORA: avatars in geography optimized for regression analysis. Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. 13468–13478, https://doi.org/10.1109/CVPR46437.2021.01326, https://github.com/pixelite1201/agora_evaluation (2021).
Mahmood, N., Ghorbani, N., Troje, N. F., Pons-Moll, G. & Black, M. J. AMASS: archive of motion capture as surface shapes. Proc. IEEE/CVF Int. Conf. Comput. Vis. 5442–5451, https://doi.org/10.1109/ICCV.2019.00554, url: https://github.com/nghorbani/amass (2019).
Pons-Moll, G., Pujades, S., Hu, S. & Black, M. J. ClothCap: seamless 4D clothing capture and retargeting. ACM Trans. Graph. 36, 1–15, https://doi.org/10.1145/3072959.3073711 (2017).
Zhuang, Y. et al. IDOL: instant photorealistic 3D human creation from a single image. Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. 26308–26319, https://doi.org/10.1109/CVPR52734.2025.02450, https://github.com/yiyuzhuang/IDOL (2025).
Xiong, Z. et al. MVHumanNet: a large-scale dataset of multi-view daily dressing human captures. Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. 19801–19811, https://doi.org/10.1109/CVPR52733.2024.01872, https://github.com/GAP-LAB-CUHK-SZ/MVHumanNet (2024).
Cannavó, A., Manigrasso, F., Moro, F. & Lamberti, F. BODIES: Body shape parameters and 3D meshes of individuals based on SUPR. Zenodo https://doi.org/10.5281/zenodo.17912003 (2025).
Rumman, N. A. & Fratarcangeli, M. Skin deformation methods for interactive character animation. Proc. Int. Joint Conf. Comput. Vis. Imaging and Comput. Graph. 153–174, https://doi.org/10.1007/978-3-319-64870-5_8 (2017).
Tanner, J. M. Foetus into man: physical growth from conception to maturity. Harvard Univ. Press (1990).
Brožek, J., Pařízková, J., Mendez, J. & Bartkett, H. The evaluation of body surface, body volume and body composition in human biology research. Anthropologie 25, 235–259 (1987).
Ronneberger, O., Fischer, P. & Brox, T. U-net: convolutional networks for biomedical image segmentation. Proc. Int. Conf on Medical Image Computing and Computer-Assisted Intervention 234–241, https://doi.org/10.1007/978-3-319-24574-4_28 (2015).
Hasler, N., Stoll, C., Sunkel, M., Rosenhahn, B. & Seidel, H. A statistical model of human pose and body shape. Comput. Graph. Forum 28, 337–346, https://doi.org/10.1111/j.1467-8659.2009.01373.x (2009).
Klambauer, G., Unterthiner, T., Mayr, A. & Hochreiter, S. Self-normalizing neural networks. Proc. Int. Conf. Neural Inf. Process. Syst. 30, 972–981 (2017).
Santurkar, S., Tsipras, D., Ilyas, A. & Madry, A. How does batch normalization help optimization? Proc. Int. Conf. Neural Inf. Process. Syst. 31, 2488–2498 (2018).
Wang, X. et al. ESRGAN: enhanced super-resolution generative adversarial networks. Proc. European Conf. on Computer Vision - Workshops https://doi.org/10.1007/978-3-030-11021-5_5 (2018).
Chen, X. et al. Robust human matting via semantic guidance. Asian Conf. Comput. Vis. 2984–2999, https://doi.org/10.1007/978-3-031-26284-5_37 (2022).
Fischer, J. & Gumhold, S. Fast and accurate parameter conversion for parametric human body models. Proc. ACM Comput. Graph. Interact. Tech. 8, 1–21, https://doi.org/10.1145/3747869 (2025).
Liu, L. & Zhao, K. Report on methods and applications for crafting 3D humans. Preprint at https://doi.org/10.48550/arXiv.2406.01223 (2024).
Cao, Q., Yu, H., Charisse, P., Qiao, S. & Stevens, B. Is high-fidelity important for human-like virtual avatars in human computer interactions? Int. J. Netw. Dyn. Intell. 2, 15–23, https://doi.org/10.53941/ijndi0201008 (2023).
Restivo, S. et al. Interacting with ancient Egypt remains in high-fidelity virtual reality experiences. Proc. Eurographics Workshop Graph. Cult. Herit. https://doi.org/10.2312/gch.20231175 (2023).
Acknowledgements
This research was developed in collaboration with Protocube Reply and was supported by PON “Ricerca e Innovazione” 2014-2020 - DM 1062/2021 funds. The authors wish to thank Kundan Sai Prabhu Thota, one of the authors of ref. 2, for his support with the configuration and use of the framework that was later optimized and used in the experimental studies.
Author information
Authors and Affiliations
Contributions
Federica Moro and Francesco Manigrasso contributed to the design and development of the dataset, as well as the implementation of the experimental studies. Alberto Cannavò contributed to the design of the experimental studies and to the initial drafting of the paper. Fabrizio Lamberti contributed to the design of the experimental studies, as well as to the writing and revision of the paper.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Cannavò, A., Manigrasso, F., Moro, F. et al. BODIES: BOdy shape parameter and 3D meshes of Individuals basEd on SUPR. Sci Data (2026). https://doi.org/10.1038/s41597-026-06777-4
Received:
Accepted:
Published:
DOI: https://doi.org/10.1038/s41597-026-06777-4


