Learning and embodiment are intertwined and mutually reinforcing. Research should aim not only for learning to enhance embodiment but also, more importantly, for embodiment to facilitate learning. Achieving genuine synergy between the two remains an open challenge.
Acknowledgements
The authors acknowledge the project Intelligent Perception and Learning for Robots funded by the National Natural Science Fund for Distinguished Young Scholars under grant no. 62025304. The views and opinions expressed are solely those of the authors and do not necessarily reflect those of the National Natural Science Fund.
Ethics declarations
Competing interests
The authors declare no competing interests.
Cite this article
Liu, H., Guo, D. & Huang, K. Learning for embodiment and embodiment for learning. Nat Rev Electr Eng 2, 651–653 (2025). https://doi.org/10.1038/s44287-025-00203-4