An explainable real-time sensor-graph transformer for dance recognition

  • Article
  • Open access
  • Published: 14 January 2026

  • Jinying Han1,2,
  • Shan Wang1 &
  • Jiayin Gao2,3 

Scientific Reports (2026)

We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Engineering
  • Mathematics and computing

Abstract

The automatic interpretation of dance motion sequences from inertial and pressure-based sensors requires models that maintain strict temporal fidelity, preserve orientation invariance, and produce verifiable reasoning suitable for real-time deployment on constrained hardware. Existing approaches frequently lose accuracy when confronted with heterogeneous choreography, variable sensor placement, or performer-specific kinematics, and they offer limited access to the decision-relevant evidence underlying their classifications. This study introduces an explainable sensor-driven dance recognition framework built from three components: an Adaptive Sensor Normalisation module that performs quaternion-based orientation correction with drift-aware Kalman refinement; a Multi-Scale Motion Feature Extractor that applies tempo-conditioned dilation schedules to capture both micro-step transitions and phrase-level rhythmic structure; and a Spatio-Temporal Graph Attention Core that integrates edge-weighted graph convolutions with dual spatial–temporal attention to quantify sensor saliency and temporal concentration. A final Explainable Decision and Feedback Layer links prototype-anchored latent representations with gradient-resolved saliency vectors to expose class-specific motion determinants. The system is optimized for edge-class execution through kernel-level compression and causal attention windows operating on a 512-sample sliding segment. Experiments on three inertial datasets show classification accuracy of up to 94.2%, movement-quality estimation accuracy of 92.8%, and per-frame latency below 8.5 ms, with performance remaining stable under tempo variation, sensor drift, and partial channel loss.
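The abstract describes the architecture at a high level only; the two sketches below are illustrative reconstructions under stated assumptions, not the authors' implementation. The first shows quaternion-based orientation normalisation for a single inertial sample, with a lightweight complementary-filter correction standing in for the paper's drift-aware Kalman refinement; every function name, gain, and convention here is ours.

```python
# Minimal sketch of quaternion-based orientation normalisation with a
# drift-correction step. A complementary filter stands in for the paper's
# drift-aware Kalman refinement; conventions and gains are assumptions.
import numpy as np

def quat_mul(p, q):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def rotate(q, v):
    """Rotate 3-vector v by unit quaternion q."""
    qv = np.concatenate(([0.0], v))
    return quat_mul(quat_mul(q, qv), quat_conj(q))[1:]

def normalise_step(acc, gyro, q, dt=0.01, alpha=0.02):
    """One filter step; q maps the sensor (body) frame to the world frame."""
    # Predict: integrate the body-frame angular rate, q <- q + 0.5*dt*(q * omega).
    q = q + 0.5 * dt * quat_mul(q, np.concatenate(([0.0], gyro)))
    q /= np.linalg.norm(q)
    # Correct: compare gravity as predicted in the body frame against the
    # normalised accelerometer reading and nudge q toward agreement.
    g_pred = rotate(quat_conj(q), np.array([0.0, 0.0, 1.0]))
    g_meas = acc / (np.linalg.norm(acc) + 1e-8)
    err = np.cross(g_pred, g_meas)                   # small-angle tilt error
    dq = np.concatenate(([1.0], 0.5 * alpha * err))
    q = quat_mul(q, dq / np.linalg.norm(dq))
    # Orientation-invariant feature: acceleration expressed in the world frame.
    return rotate(q, acc), q

q = np.array([1.0, 0.0, 0.0, 0.0])                   # identity orientation
world_acc, q = normalise_step(np.array([0.1, 0.0, 9.8]),
                              np.array([0.0, 0.02, 0.0]), q)
```

The second sketch illustrates the multi-scale idea with parallel causal dilated 1-D convolutions over one 512-sample sliding segment, in PyTorch. A fixed dilation set approximates the tempo-conditioned schedule, whose conditioning mechanism the abstract does not specify; channel counts are likewise assumptions.

```python
# Parallel causal dilated convolutions as a stand-in for the Multi-Scale
# Motion Feature Extractor. Small dilations catch micro-step transitions,
# large ones span phrase-level structure within the 512-sample window.
import torch
import torch.nn as nn

class MultiScaleMotionExtractor(nn.Module):
    def __init__(self, in_ch=9, out_ch=32, dilations=(1, 4, 16, 64)):
        super().__init__()
        self.branches = nn.ModuleList([
            # Pad both sides by (kernel-1)*d, then trim the right in
            # forward() so each output depends only on past samples.
            nn.Conv1d(in_ch, out_ch, kernel_size=3, dilation=d, padding=2 * d)
            for d in dilations
        ])

    def forward(self, x):                            # x: (batch, in_ch, 512)
        feats = [conv(x)[..., : x.size(-1)] for conv in self.branches]
        return torch.cat(feats, dim=1)               # (batch, 4*out_ch, 512)

window = torch.randn(1, 9, 512)                      # one sliding segment
print(MultiScaleMotionExtractor()(window).shape)     # torch.Size([1, 128, 512])
```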

Data availability

The datasets supporting the results of this manuscript are publicly available:

  • ImperialDance: https://github.com/YunZhongNikki/ImperialDance-Dataset?tab=readme-ov-file
  • CMU-MoCap: https://mocap.cs.cmu.edu/
  • AIST++: https://google.github.io/aistplusplus_dataset/factsfigures.html
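For orientation, a minimal sketch (ours, not the authors') of slicing a loaded inertial sequence into the 512-sample sliding segments the model consumes; the hop length is an assumption, since the abstract fixes only the window size.

```python
# Illustrative segmentation of a (samples x channels) inertial sequence
# into 512-sample sliding windows; hop=256 (50% overlap) is an assumption.
import numpy as np

def sliding_segments(seq, win=512, hop=256):
    n = 1 + max(0, (len(seq) - win) // hop)
    return np.stack([seq[i * hop : i * hop + win] for i in range(n)])

seq = np.random.randn(48_000, 9)       # e.g. 8 min of 9-channel IMU at 100 Hz
print(sliding_segments(seq).shape)     # (186, 512, 9)
```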

Funding

This research was funded by the Special Research Topic of Cultural Exchange of the Ministry of Education (Grant No. CCIPE-YXSJ-20240060), the Key Topics of Open Online Course Guidance for Undergraduate Universities in Guangdong Province (Grant No. 2022ZXKC361), and the Guangzhou Musicians Association projects “Music Culture Research” and “Primary and Secondary School Music Education Reform” (Grant No. 24GZYX003).

Author information

Authors and Affiliations

  1. College of Music and Dance, Guangzhou University, Guangzhou, 510006, China

    Jinying Han & Shan Wang

  2. Department of Integrated Education, Anyang University, Anyang, 430714, South Korea

    Jinying Han & Jiayin Gao

  3. School of Music, Northeast Normal University, Changchun, 130117, China

    Jiayin Gao


Contributions

Jinying Han (J.H.) designed the framework, developed the methodology, performed the data analysis, and prepared the main manuscript text. Shan Wang (S.W.) assisted with supervision, coordinated data collection, prepared the experimental setup, and implemented the X-DanceNet framework. Jiayin Gao (J.G.) contributed to the conceptualization of the study and supported the theoretical formulation. All authors reviewed and edited the manuscript.

Corresponding author

Correspondence to Shan Wang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

About this article

Cite this article

Han, J., Wang, S. & Gao, J. An explainable real-time sensor-graph transformer for dance recognition. Sci Rep (2026). https://doi.org/10.1038/s41598-025-34691-z

  • Received: 01 October 2025

  • Accepted: 30 December 2025

  • Published: 14 January 2026

  • DOI: https://doi.org/10.1038/s41598-025-34691-z

Keywords

  • Dance recognition
  • Dance movement
  • Pose estimation
  • Real-time detection
  • Sensor-graph transformer
  • Multi-scale convolutional extractor
  • Saliency heatmaps