Abstract
The growing demand for scientifically grounded, highly personalized fitness plans exposes the limitations of traditional recommender systems, which rely on template-oriented methods and struggle to cope with complex, dynamic user data. To address this gap, this work augments a Large Language Model (LLM) with a domain-specific knowledge graph to develop LLM-SPTRec, a novel framework for intelligent sports training plan generation. The model integrates multi-source heterogeneous user data and improves both the personalization and the scientific validity of its recommendations by grounding the LLM’s generative process in an expert-elicited Sports Science Knowledge Graph (SSKG). Empirical results on a real-world dataset show that LLM-SPTRec surpasses traditional baselines, including collaborative filtering, sequential models, and general-purpose LLMs, on measures of plan coherence, goal relevance, and predicted user satisfaction. By bridging big data analysis and expert knowledge, these findings offer a new paradigm for intelligent health and, more broadly for applied AI, demonstrate that knowledge-grounded LLMs can generate safe, effective, and scientifically sound personal health recommendations.
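The core mechanism the abstract describes, grounding an LLM's plan generation in facts retrieved from a knowledge graph, can be illustrated with a minimal sketch. The SSKG triples, user-profile fields, and prompt wording below are hypothetical placeholders for illustration only, not the paper's actual schema or pipeline.

```python
# Illustrative sketch of knowledge-grounded prompt assembly.
# All entities, relations, and prompt text are hypothetical examples,
# not the schema used by LLM-SPTRec.

from dataclasses import dataclass


@dataclass
class UserProfile:
    goal: str
    fitness_level: str
    constraints: list


# Toy SSKG as (subject, relation, object) triples.
SSKG = [
    ("fat_loss", "recommended_modality", "zone-2 aerobic training"),
    ("fat_loss", "recommended_frequency", "4-5 sessions/week"),
    ("beginner", "max_weekly_increase", "10% training volume"),
    ("knee_injury", "contraindicated", "high-impact plyometrics"),
]


def retrieve_facts(profile: UserProfile) -> list:
    """Select triples whose subject matches the user's goal, level, or constraints."""
    keys = {profile.goal, profile.fitness_level, *profile.constraints}
    return [t for t in SSKG if t[0] in keys]


def build_prompt(profile: UserProfile) -> str:
    """Ground the generation request in the retrieved expert facts."""
    facts = "\n".join(
        f"- {s} {r.replace('_', ' ')}: {o}" for s, r, o in retrieve_facts(profile)
    )
    return (
        f"Generate a weekly training plan for a {profile.fitness_level} "
        f"user whose goal is {profile.goal}.\n"
        f"Respect these sports-science constraints:\n{facts}"
    )


user = UserProfile(goal="fat_loss", fitness_level="beginner",
                   constraints=["knee_injury"])
print(build_prompt(user))
```

The design point is that the LLM never free-generates from the user profile alone: every prompt carries the expert-elicited constraints retrieved for that specific user, which is what gives the generated plan its claimed scientific grounding.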
Data availability
The datasets generated and/or analysed during the current study are not publicly available due to data privacy restrictions but are available from the corresponding author on reasonable request.
Disclaimer
LLM-SPTRec is a decision-support framework intended for educational and fitness enhancement purposes. It is not a replacement for clinical diagnosis or individualized medical advice from qualified healthcare professionals.
Author information
Authors and Affiliations
Contributions
Zhongliang He: Writing, Methodology, Formal analysis, Supervision, Validation. Jiacheng Wang: Formal analysis, Supervision, Validation. Binggang Zhang: Supervision, Validation. Yang Li: Supervision, Validation.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
He, Z., Wang, J., Zhang, B. et al. Knowledge-grounded large language model for personalized sports training plan generation. Sci Rep (2026). https://doi.org/10.1038/s41598-026-37075-z
Received:
Accepted:
Published:
DOI: https://doi.org/10.1038/s41598-026-37075-z