Abstract
Human gait is a complex biometric pattern with high intra- and inter-subject variability. While deep learning models can generate and reconstruct gait, they often require extensive personalized data. This paper introduces MetaGait, a framework that uses meta-learning to personalize gait models from only a few examples. MetaGait applies a Model-Agnostic Meta-Learning (MAML) strategy, training a base model on varied gait analysis tasks drawn from the Human Gait Database (HuGaDB). Each task adapts the model to a specific walking condition using a small support set of gait cycles; this process teaches the model an initialization from which it can quickly adapt to new subjects. The base model uses a temporal convolutional network (TCN) to capture temporal dependencies in sequential sensor data. We evaluated MetaGait on few-shot gait cycle generation and reconstruction. Quantitative results, measured by Mean Squared Error (MSE) and Dynamic Time Warping (DTW) distance, show that our model outperforms conventionally trained baselines in low-data scenarios (1-shot and 5-shot learning). Qualitative assessments confirm that MetaGait produces more natural, subject-specific gait patterns and achieves accurate reconstructions from sparse inputs. By reducing the data required for personalization, MetaGait offers a practical solution for applications in robotics and clinical gait analysis.
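The recipe summarized in the abstract (episodic MAML-style training of a TCN on per-condition gait tasks) can be sketched in a few lines of code. The sketch below is illustrative only: it uses PyTorch, a deliberately tiny TCN, and a first-order approximation of the MAML update (in the spirit of Nichol & Schulman) rather than the full second-order update; the names TinyTCN, inner_adapt, meta_train and task_sampler, and all shapes and hyper-parameters, are assumptions for exposition, not the authors' implementation.

```python
# Minimal first-order MAML-style sketch for few-shot gait adaptation (PyTorch).
# Everything here is illustrative; it is not the MetaGait codebase.
import copy
import torch
import torch.nn as nn

class TinyTCN(nn.Module):
    """Dilated 1-D conv stack mapping a gait-cycle window to its reconstruction."""
    def __init__(self, channels: int = 6, hidden: int = 32, levels: int = 3):
        super().__init__()
        layers, in_ch = [], channels
        for i in range(levels):
            d = 2 ** i  # exponentially growing dilation, as in standard TCNs
            layers += [nn.Conv1d(in_ch, hidden, kernel_size=3, padding=d, dilation=d),
                       nn.ReLU()]
            in_ch = hidden
        layers += [nn.Conv1d(hidden, channels, kernel_size=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):  # x: (batch, channels, time)
        return self.net(x)

def inner_adapt(model, support_x, support_y, lr=1e-2, steps=5):
    """Adapt a copy of the meta-model to one subject/condition from a few cycles."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(adapted(support_x), support_y).backward()
        opt.step()
    return adapted

def meta_train(model, task_sampler, meta_lr=1e-3, meta_iters=1000, k_shot=5):
    """Outer loop: first-order MAML update from query-set loss on adapted weights."""
    meta_opt = torch.optim.Adam(model.parameters(), lr=meta_lr)
    for _ in range(meta_iters):
        # task_sampler (hypothetical) returns one task's support and query tensors,
        # each shaped (k, channels, time).
        support_x, support_y, query_x, query_y = task_sampler(k_shot)
        adapted = inner_adapt(model, support_x, support_y)
        query_loss = nn.functional.mse_loss(adapted(query_x), query_y)
        # First-order approximation: the query gradient is taken w.r.t. the
        # adapted copy and applied to the meta-model's matching parameters.
        grads = torch.autograd.grad(query_loss, adapted.parameters())
        meta_opt.zero_grad()
        for p, g in zip(model.parameters(), grads):
            p.grad = g.clone()
        meta_opt.step()
    return model
```

At test time, the same inner_adapt routine would be run on the 1-shot or 5-shot support set of a new subject before generating or reconstructing that subject's gait cycles.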
Data availability
The datasets analysed during the current study are available in the Human Gait Database (HuGaDB) repository, https://www.kaggle.com/datasets/romanchereshnev/hugadb-human-gait-database
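For readers who want to work with the same recordings, a minimal loading sketch follows. It assumes the HuGaDB files are tab-separated text whose metadata lines begin with '#' and whose first non-comment line holds the sensor-column header; the file name and column layout shown are hypothetical and should be checked against the repository's documentation.

```python
# Minimal sketch for reading one HuGaDB recording (assumed tab-separated layout
# with '#'-prefixed metadata lines; verify against the repository description).
import pandas as pd

def load_hugadb_recording(path: str) -> pd.DataFrame:
    """Return one recording as a DataFrame of sensor columns over time."""
    return pd.read_csv(path, sep="\t", comment="#")

# Hypothetical usage:
# df = load_hugadb_recording("HuGaDB_v1_walking_01_00.txt")
# print(df.shape)  # (samples, sensor channels + activity label)
```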
References
Boulgouris, N. V., Plataniotis, K. N. & Hatzinakos, D. Gait recognition: A challenging signal processing technology for biometric identification. IEEE Signal Process. Mag. 22(6), 78–90. https://doi.org/10.1109/MSP.2005.1550191 (2005).
Hofmann, U. G. et al. Biomechanical and cognitive-motor determinants of human gait. Z. Gerontol. Geriatr. 47, 25–31. https://doi.org/10.1007/s00391-013-0553-6 (2014).
Baker, R. The use of gait analysis in the assessment of child development. Gait Posture 24(4), S6–S7. https://doi.org/10.1016/j.gaitpost.2006.09.009 (2006).
Murray, M. P., Drought, A. B. & Kory, R. C. Walking patterns of normal men. J. Bone Joint Surg. 46(2), 335–360 (1964).
Holden, D., Saito, J. & Komura, T. Phase-functioned neural networks for character control. ACM Trans. Graph. 36(4), 1–13. https://doi.org/10.1145/3072959.3073663 (2017).
Tucker, M. R., García-Cerezo, J. G., Villamil, G. P. & Diaz, C. A. M. H. Control of a lower-limb exoskeleton for human gait enhancement. Appl. Bion. Biomech. 2015, 1–12. https://doi.org/10.1155/2015/635089 (2015).
Gui, L. Y., Wang, Y. X., Liang, X. & Moura, J. M. Adversarial geometry-aware human motion prediction. In Computer Vision – ECCV 2018, Lecture Notes in Computer Science, vol. 11208, 23–842 (2018). https://doi.org/10.1007/978-3-030-01225-0_48
Martinez, J., Black, M. J. & Romero, J. On human motion prediction using recurrent neural networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 4674–4683. https://doi.org/10.1109/CVPR.2017.498 (2017).
Whittle, M. W. Gait analysis: An introduction (2014).
Chiu, H. K., Adeli, E., Wang, B., Huang, D. A., & Niebles, J. C. Action-agnostic human pose forecasting. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV) 1423–1432. https://doi.org/10.1109/WACV.2019.00156 (2019).
Chereshnev, R. & Kertész-Farkas, A. HuGaDB: Human gait database for activity recognition from wearable inertial sensor networks. CoRR. https://doi.org/10.48550/arXiv.1705.08506. https://arxiv.org/abs/1705.08506 (2017).
Hodgins, J. K., Wooten, W. L., Brogan, D. C. & O’Brien, J. F. Animating human athletics. SIGGRAPH ’95 https://doi.org/10.1145/218380.218446 (1995).
Quiñonero-Candela, J., Rasmussen, C. E. & Williams, C. K. I. Gaussian processes for dynamical systems. J. Mach. Learn. Res. 6, 1439–1469 (2005).
Fragkiadaki, K., Levine, S., Felsen, P., & Malik, J. Recurrent network models for human dynamics. 2015 IEEE International Conference on Computer Vision (ICCV) 4346–4354 (2015). https://doi.org/10.1109/ICCV.2015.494
Goodfellow, I. J. et al. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 27, 2672–2680. https://doi.org/10.48550/arXiv.1406.2661 (2014).
Harvey, I., Gordon, T., & Ventura, J. Robust motion in-betweening. ArXiv (2020). https://doi.org/10.48550/arXiv.2005.01183. https://arxiv.org/abs/2005.01183
Kingma, D. P. & Welling, M. Auto-encoding variational Bayes. CoRR. http://arxiv.org/abs/1312.6114. https://doi.org/10.48550/arXiv.1312.6114 (2013).
Zhang, J., Fablet, R., Grangier, D., Peirache, M. & Boutin, F. Generating and reconstructing human motion with conditional variational autoencoders. In 2018 26th European Signal Processing Conference (EUSIPCO) 201–205 (2018). https://doi.org/10.23919/EUSIPCO.2018.8553256
Holden, D., Saito, J. & Komura, T. Deep learning of locomotion skills. ACM Trans. Graph. 35(4), 5980. https://doi.org/10.1145/2897824.2925980 (2016).
Starke, S., Zhao, Y., Komura, T. & Ziemke, K. S. Neural state machine for character-scene interactions. ACM Trans. Graph. 38(4), 23026. https://doi.org/10.1145/3306346.3323026 (2019).
Glardon, V., Gauthier, M., Monnin, J., Beyeler, A. & Bleuler, H. Low-dimensional representation of human gait for the reconstruction of missing marker data. Comput. Methods Biomech. Biomed. Eng. 7(4), 199–210. https://doi.org/10.1080/10255840410001715610 (2004).
Tautges, J., Ziemke, T., Cholidis, K. S. & Kompass, K. S. Motion reconstruction with Kalman filter and EM. KI - Künstliche Intelligenz 25, 253–257. https://doi.org/10.1007/s13218-011-0103-y (2011).
Chai, J. & Hodgins, J. K. Performance animation from low-dimensional control signals. ACM Trans. Graph. 24(3), 686–696. https://doi.org/10.1145/1073204.1073248 (2005).
Dabral, R., Gundavarapu, N. B., Mitra, R., Habib, A. H. & Abhishek, A. P. Learning to reconstruct missing markers in human motion capture. ArXiv (2018). https://doi.org/10.48550/arXiv.1806.01255. https://arxiv.org/abs/1806.01255.
Mall, A. K., Kumar, P. & Singh, S. K. Missing data imputation in gait signals using deep learning. Biomed. Signal Process. Control 43, 250–258. https://doi.org/10.1016/j.bspc.2018.03.003 (2018).
Schmidhuber, J. Evolutionary principles in self-referential learning, or on learning how to learn: The meta-meta-... hook. Diploma thesis, Technische Universität München, Germany (1987). https://www.semanticscholar.org/paper/Evolutionary-principles-in-self-referential-or-Schmidhuber/f1a5105a013997232223aa12d0bf764724b7ce6f.
Thrun, S. & Pratt, L. Learning to learn: Introduction and overview. Learning to Learn (1998). https://doi.org/10.1007/978-1-4615-5529-2_1
Finn, C., Abbeel, P. & Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. Proceedings of the 34th International Conference on Machine Learning, PMLR 70, 1126–1135. https://doi.org/10.48550/arXiv.1703.03400. http://proceedings.mlr.press/v70/finn17a.html (2017).
Snell, J., Swersky, K. & Zemel, R. S. Prototypical networks for few-shot learning. Adv. Neural Inf. Process. Syst. 30, 05175. https://doi.org/10.48550/arXiv.1703.05175 (2017).
Koch, G. R., Zemel, R. S. & Salakhutdinov, R. Siamese neural networks for one-shot image recognition. ICML Deep Learning Workshop. https://www.cs.cmu.edu/~rsalakhu/papers/oneshot.pdf (2015).
Ravi, S. & Larochelle, H. Optimization as a model for few-shot learning. International Conference on Learning Representations (ICLR) (2017). https://openreview.net/forum?id=rJY0-Kcll.
Duan, Y. et al. RL²: Fast reinforcement learning via slow reinforcement learning. ArXiv (2016). https://doi.org/10.48550/arXiv.1611.02779. https://arxiv.org/abs/1611.02779
Yin, W. Meta-learning for few-shot natural language processing: A survey. ArXiv (2020). https://doi.org/10.48550/arXiv.2007.09604.
Zhang, Y. Z., Sadeghi, F. & Levine, S. Learning task-relevant representations for generalization in reinforcement learning. ArXiv (2020). https://doi.org/10.48550/arXiv.2010.04639.
Yoon, J. H., Yoon, S., Kim, V. G., Ravela, S., Durand, F. & Hwang, S. J. Style-less neural style transfer. ArXiv (2020). https://doi.org/10.48550/arXiv.2009.09424.
Bai, S., Kolter, J. Z. & Koltun, V. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. ArXiv (2018). https://doi.org/10.48550/arXiv.1803.01271. https://arxiv.org/abs/1803.01271
Nichol, A. & Schulman, J. On first-order meta-learning algorithms. ArXiv (2018). https://doi.org/10.48550/arXiv.1803.02999
Acknowledgements
We thank the creators of the HuGaDB dataset for making their data publicly available.
Funding
Open access funding provided by Manipal University Jaipur. No specific funding was received for this study.
Author information
Contributions
Ram Kumar Yadav conceived the idea, designed the methodology, conducted the conceptual experiments and wrote the manuscript. Avishek Nandi, Akhilesh Kumar Sharma and Lalit Garg provided supervision, contributed to the discussion and revised the manuscript. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Consent for publication
All authors consent to the publication of this manuscript.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Yadav, R.K., Nandi, A., Sharma, A.K. et al. A meta learning framework for few shot personalized gait cycle generation and reconstruction. Sci Rep (2026). https://doi.org/10.1038/s41598-026-35121-4
DOI: https://doi.org/10.1038/s41598-026-35121-4


