A meta learning framework for few shot personalized gait cycle generation and reconstruction
  • Article
  • Open access
  • Published: 29 January 2026


  • Ram Kumar Yadav1,
  • Avishek Nandi2,
  • Dr. Akhilesh Kumar Sharma1 &
  • Prof. Lalit Garg3 

Scientific Reports (2026)


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Computational biology and bioinformatics
  • Engineering
  • Health care
  • Mathematics and computing

Abstract

Human gait is a complex biometric pattern with high intra- and inter-subject variability. While deep learning models can generate and reconstruct gait, they often require extensive personalized data. This paper introduces MetaGait, a framework that uses meta-learning to personalize gait models from only a few examples. MetaGait applies a Model-Agnostic Meta-Learning (MAML) strategy, training a base model on varied gait-analysis tasks from the Human Gait Database (HuGaDB). Each task adapts the model to a specific walking condition using a small support set of gait cycles. This process teaches the model an initialization from which it can quickly adapt to new subjects. The base model uses a temporal convolutional network (TCN) to capture temporal dependencies in sequence data. We evaluated MetaGait on few-shot gait cycle generation and reconstruction. Quantitative results, measured by Mean Square Error (MSE) and Dynamic Time Warping (DTW), show that our model outperforms conventionally trained baselines in low-data scenarios (1-shot and 5-shot learning). Qualitative assessments confirm that MetaGait produces more natural, subject-specific gait patterns and achieves accurate reconstructions from sparse inputs. By reducing the data required for personalization, MetaGait offers a more practical solution for applications in robotics and clinical gait analysis.
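Since only the abstract is available on this page, the sketch below merely illustrates the general recipe it describes: a dilated temporal convolutional network over gait sequences, a MAML-style inner/outer adaptation loop over few-shot subject tasks, and a plain DTW distance of the kind used for evaluation alongside MSE. All class names, layer sizes, learning rates, and task shapes (TCNGenerator, inner_lr, the six-channel input, and so on) are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch only (PyTorch >= 2.0): a dilated TCN sequence model
# adapted with a MAML-style inner/outer loop, plus a plain DTW metric.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


class TCNGenerator(nn.Module):
    """Dilated causal 1-D convolutions over gait sequences shaped (batch, channels, time)."""

    def __init__(self, in_ch=6, hidden=64, out_ch=6, levels=4, kernel=3):
        super().__init__()
        layers, ch = [], in_ch
        for i in range(levels):
            d = 2 ** i  # exponentially growing dilation enlarges the receptive field
            layers += [
                nn.Conv1d(ch, hidden, kernel, padding=d * (kernel - 1), dilation=d),
                nn.ReLU(),
            ]
            ch = hidden
        self.tcn = nn.Sequential(*layers)
        self.head = nn.Conv1d(hidden, out_ch, 1)

    def forward(self, x):  # x: (batch, in_ch, T)
        T = x.size(-1)
        h = self.tcn(x)[..., :T]  # trim the padded steps so each conv stays causal
        return self.head(h)


def maml_meta_loss(model, tasks, inner_lr=0.01, inner_steps=1):
    """One MAML-style meta-objective over a batch of per-subject tasks.

    Each task is (support_x, support_y, query_x, query_y). The inner loop
    adapts a copy of the parameters on the few-shot support set; the query
    loss of the adapted copy is what the outer optimizer minimizes.
    """
    meta_loss = 0.0
    for sx, sy, qx, qy in tasks:
        fast = dict(model.named_parameters())
        for _ in range(inner_steps):
            pred = torch.func.functional_call(model, fast, (sx,))
            grads = torch.autograd.grad(F.mse_loss(pred, sy),
                                        list(fast.values()), create_graph=True)
            fast = {n: p - inner_lr * g for (n, p), g in zip(fast.items(), grads)}
        qpred = torch.func.functional_call(model, fast, (qx,))
        meta_loss = meta_loss + F.mse_loss(qpred, qy)
    return meta_loss / len(tasks)


def dtw_distance(a, b):
    """Plain O(len(a) * len(b)) dynamic time warping distance between two
    1-D sequences, an alignment-aware complement to pointwise MSE."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(float(a[i - 1]) - float(b[j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]


# Usage sketch: sample 1- or 5-shot tasks per subject/condition, compute the
# meta-loss, and step an ordinary optimizer on it.
# model = TCNGenerator()
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = maml_meta_loss(model, sampled_tasks)
# opt.zero_grad(); loss.backward(); opt.step()
```

The key design point this illustrates is that the outer optimizer never fits any single subject; it only moves the shared initialization so that one or a few gradient steps on a small support set already produce a useful subject-specific model.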


Data availability

The datasets analysed during the current study are available in the Human Gait Database (HuGaDB) repository, https://www.kaggle.com/datasets/romanchereshnev/hugadb-human-gait-database
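For readers who want to experiment with the same data, the snippet below is a minimal loading sketch, assuming HuGaDB's tab-separated text recordings with '#'-prefixed metadata lines as described in the dataset documentation. The example file name, the sensor column names, and the fixed window length used as a stand-in for segmented gait cycles are illustrative assumptions, not the preprocessing used in the paper.

```python
# Minimal HuGaDB loading sketch; file layout, column names, and window length
# are assumptions for illustration only.
import numpy as np
import pandas as pd


def load_hugadb_file(path):
    """Read one recording; rows are samples, columns are sensor channels."""
    return pd.read_csv(path, sep="\t", comment="#")


def to_windows(df, columns, length=100):
    """Cut selected channels into fixed-length, non-overlapping windows that
    stand in for individual gait cycles in this sketch."""
    x = df[columns].to_numpy(dtype=np.float32)
    n = len(x) // length
    return x[: n * length].reshape(n, length, len(columns))


# Example (hypothetical file and column names):
# df = load_hugadb_file("HuGaDB_v1_walking_01_00.txt")
# cycles = to_windows(df, ["acc_rf_x", "acc_rf_y", "acc_rf_z"], length=100)
# cycles.shape -> (num_windows, 100, 3)
```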


Acknowledgements

We thank the creators of the HuGaDB dataset for making their data publicly available.

Funding

Open access funding provided by Manipal University Jaipur. No specific funding was received for this study.

Author information

Authors and Affiliations

  1. Department of Data Science and Engineering, Manipal University Jaipur, Dehmi Kalan, Off Jaipur-Ajmer Expressway, Jaipur, Rajasthan, 303007, India

    Ram Kumar Yadav & Dr. Akhilesh Kumar Sharma

  2. Department of Computer Application, Manipal University Jaipur, Dehmi Kalan, Off Jaipur-Ajmer Expressway, Jaipur, Rajasthan, 303007, India

    Avishek Nandi

  3. Faculty of Commerce and Tourism, Industrial University of Ho Chi Minh City, Ho Chi Minh City, Viet Nam

    Prof. Lalit Garg


Contributions

Ram Kumar Yadav conceived the idea, designed the methodology, conducted conceptual experiments and wrote the manuscript. Avishek Nandi, Dr. Akhilesh Kumar Sharma and Prof. Lalit Garg provided supervision, contributed to the discussion and revised the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Dr. Akhilesh Kumar Sharma.

Ethics declarations

Competing interests

The authors declare no competing interests.

Consent for publication

All authors consent to the publication of this manuscript.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Yadav, R.K., Nandi, A., Sharma, A.K. et al. A meta learning framework for few shot personalized gait cycle generation and reconstruction. Sci Rep (2026). https://doi.org/10.1038/s41598-026-35121-4


  • Received: 10 September 2025

  • Accepted: 02 January 2026

  • Published: 29 January 2026

  • DOI: https://doi.org/10.1038/s41598-026-35121-4


Keywords

  • Gait generation
  • Gait reconstruction
  • Meta learning
  • Few-shot learning
  • Deep learning