Knowledge-grounded large language model for personalized sports training plan generation
  • Article
  • Open access
  • Published: 31 January 2026

  • Zhongliang He1,
  • Jiacheng Wang1,
  • Binggang Zhang1 &
  • Yang Li1

Scientific Reports (2026)

We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Computational biology and bioinformatics
  • Mathematics and computing

Abstract

The growing demand for scientifically grounded, highly personalized fitness plans exposes the limitations of traditional recommender systems, which rely on template-oriented methods and struggle to cope with complex, dynamic user data. To address this gap, this work develops LLM-SPTRec, a novel framework for intelligent sports training plan generation that augments a Large Language Model (LLM) with a domain-specific knowledge graph. The model integrates multi-source heterogeneous user data and improves the personalization and scientific validity of its recommendations by grounding the LLM's generative process in an expert-elicited Sports Science Knowledge Graph (SSKG). Empirical results on a real-world dataset show that LLM-SPTRec surpasses traditional baselines (including collaborative filtering, sequential models, and general-purpose LLMs) on measures of plan coherence, goal relevance, and predicted user satisfaction. By bridging big data analysis and expert knowledge, these findings offer a new paradigm for intelligent health and demonstrate that knowledge-grounded LLMs can generate safe, effective, and scientifically sound personal health recommendations.
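The abstract describes grounding the LLM's generative process in a Sports Science Knowledge Graph. A minimal Python sketch of that retrieve-then-prompt pattern follows, using a toy triple store; every identifier here (`SSKG`, `retrieve_facts`, `build_prompt`) and the example triples are illustrative assumptions, not the authors' implementation.

```python
# Sketch of knowledge-grounded prompt assembly: retrieve expert facts that
# match a user profile, then inject them as constraints into the generation
# prompt. All names and graph content are hypothetical.

# Toy Sports Science Knowledge Graph as (subject, relation, object) triples.
SSKG = [
    ("beginner", "recommended_frequency", "3 sessions/week"),
    ("beginner", "recommended_intensity", "60-70% max heart rate"),
    ("weight_loss", "recommended_modality", "mixed aerobic and resistance"),
    ("knee_injury", "contraindicated_exercise", "deep squats"),
]

def retrieve_facts(profile):
    """Return triples whose subject matches any attribute value in the profile."""
    keys = set(profile.values())
    return [t for t in SSKG if t[0] in keys]

def build_prompt(profile, facts):
    """Assemble a generation prompt grounded in the retrieved expert knowledge."""
    lines = [f"- {s}: {r.replace('_', ' ')} = {o}" for s, r, o in facts]
    return (
        f"Generate a weekly training plan for a user with profile {profile}.\n"
        "Obey these sports-science constraints:\n" + "\n".join(lines)
    )

profile = {"level": "beginner", "goal": "weight_loss", "history": "knee_injury"}
prompt = build_prompt(profile, retrieve_facts(profile))
```

In a full system, `prompt` would be sent to the LLM; the point of the pattern is that safety-critical facts (e.g. contraindicated exercises) reach the model as explicit constraints rather than being left to its parametric knowledge.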

Data availability

The datasets generated and/or analysed during the current study are not publicly available due to data privacy restrictions but are available from the corresponding author on reasonable request.


Disclaimer

LLM-SPTRec is a decision-support framework intended for educational and fitness enhancement purposes. It is not a replacement for clinical diagnosis or individualized medical advice from qualified healthcare professionals.

Author information

Authors and Affiliations

  1. Physical Education Department, China Agricultural University, Beijing, 100193, China

    Zhongliang He, Jiacheng Wang, Binggang Zhang & Yang Li

Contributions

Zhongliang He: Writing, Methodology, Formal analysis, Supervision, Validation. Jiacheng Wang: Formal analysis, Supervision, Validation. Binggang Zhang: Supervision, Validation. Yang Li: Supervision, Validation.

Corresponding author

Correspondence to Binggang Zhang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

About this article

Cite this article

He, Z., Wang, J., Zhang, B. et al. Knowledge-grounded large language model for personalized sports training plan generation. Sci Rep (2026). https://doi.org/10.1038/s41598-026-37075-z

  • Received: 08 September 2025

  • Accepted: 19 January 2026

  • Published: 31 January 2026

  • DOI: https://doi.org/10.1038/s41598-026-37075-z

Keywords

  • Large language models
  • Recommender systems
  • Personalized health
  • Knowledge graph
  • Sports science
  • Dynamic user profiling