
Scientific Reports
Bayesian reinforcement learning for adaptive control of energy recuperation in hydraulic excavator arms
  • Article
  • Open access
  • Published: 25 January 2026


  • Peng Hu1,
  • Tao Wen1,
  • Daqing Zhang2,
  • Haifei Chen1 &
  • Jun Gong3 

Scientific Reports (2026)

  • 678 Accesses

  • 1 Altmetric


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Energy science and technology
  • Engineering
  • Mathematics and computing

Abstract

Hydraulic excavators are among the most energy-intensive machines in construction and mining, with conventional hydraulic systems often operating under fixed pressure and flow settings that lead to significant energy loss. Improving energy efficiency while ensuring safety and adaptability under uncertain operating conditions remains a critical challenge. This study proposes a novel adaptive control framework that integrates Bayesian inference with reinforcement learning (RL) to enhance energy recuperation in hydraulic excavator arms. The framework explicitly models system dynamics, including hydraulic cylinders, pumps, valves, and accumulators, while accounting for uncertainties from soil resistance, temperature-dependent viscosity, component wear, and sensor noise. A Bayesian particle filter is employed to continuously estimate latent states such as soil resistance multipliers and accumulator pre-charge offsets, enabling belief-space reinforcement learning to make informed control decisions. The learned control policy adjusts pump pressure and valve commands in real time, while a safety-projection layer enforces strict operational constraints (5–35 MPa hydraulic pressure, 12–28 MPa accumulator window, valve rate limits, and section-level relief protections).
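The numeric operating windows quoted in the abstract make the safety-projection idea concrete. The sketch below is a minimal illustration, not the authors' implementation: the valve rate limit value, the normalized [0, 1] valve-command range, and all function and variable names are assumptions introduced here for clarity.

```python
# Minimal sketch of a safety-projection layer for RL actions, using the
# pressure windows stated in the abstract. The valve rate limit value and
# all names are illustrative assumptions, not the paper's implementation.

P_MIN, P_MAX = 5.0, 35.0       # hydraulic pressure window, MPa (from abstract)
ACC_MIN, ACC_MAX = 12.0, 28.0  # accumulator pressure window, MPa (from abstract)
VALVE_RATE_LIMIT = 0.2         # max change in valve command per step (assumed)

def clamp(x: float, lo: float, hi: float) -> float:
    """Project a scalar onto the interval [lo, hi]."""
    return max(lo, min(hi, x))

def project_action(pump_p: float, acc_p: float,
                   valve_cmd: float, prev_valve_cmd: float):
    """Project a raw policy action onto the safe operating set."""
    p_safe = clamp(pump_p, P_MIN, P_MAX)
    acc_safe = clamp(acc_p, ACC_MIN, ACC_MAX)
    # Rate-limit the valve command relative to the previous step,
    # then keep it inside the assumed normalized [0, 1] spool range.
    step = clamp(valve_cmd - prev_valve_cmd, -VALVE_RATE_LIMIT, VALVE_RATE_LIMIT)
    v_safe = clamp(prev_valve_cmd + step, 0.0, 1.0)
    return p_safe, acc_safe, v_safe

# An overly aggressive action is projected back into the safe set:
# pressures clip to 35.0 and 28.0 MPa, valve motion is rate-limited.
print(project_action(40.0, 30.0, 1.0, 0.5))
```

A projection of this form keeps every commanded setpoint inside the constraint set regardless of what the learned policy outputs, which is what lets the RL component explore without violating the relief and pressure limits.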


Data availability

Code and co-simulation assets, together with configuration files, uncertainty trajectories, evaluation logs, and figure scripts, are archived on Zenodo: https://zenodo.org/records/17072083 (DOI: 10.5281/zenodo.17072083) and https://zenodo.org/records/17375877 (DOI: 10.5281/zenodo.17375877).


Funding

This research was funded by the National Natural Science Foundation of China (62303493), the Furong Plan Young Talent Project (2025RC3177), the National Key Research and Development Program (2022YFD2202103), and the Hunan Provincial Natural Science Foundation (2024JJ6720).

Author information

Authors and Affiliations

  1. School of Mechanical and Electrical Engineering, Central South University of Forestry & Technology, Changsha, 410004, Hunan, China

    Peng Hu, Tao Wen & Haifei Chen

  2. The National Enterprise Research and Development Center, Sunward Intelligent Equipment Co., Ltd, Changsha, China

    Daqing Zhang

  3. Engineering Research Center of Advanced Mining Equipment Ministry of Education, Hunan University of Science and Technology, Xiangtan, China

    Jun Gong


Contributions

Peng Hu: conceptualization, methodology, software, experiments, writing—original draft. Tao Wen: system modeling, experiment design, writing—review & editing. Daqing Zhang: industrial input, parameter validation, writing—review & editing. Haifei Chen: data processing, evaluation/visualization, writing—review & editing. Jun Gong: supervision, safety/constraint design, writing—review & editing. All authors approved the final manuscript.

Corresponding author

Correspondence to Peng Hu.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Hu, P., Wen, T., Zhang, D. et al. Bayesian reinforcement learning for adaptive control of energy recuperation in hydraulic excavator arms. Sci Rep (2026). https://doi.org/10.1038/s41598-026-35391-y


  • Received: 04 June 2025

  • Accepted: 05 January 2026

  • Published: 25 January 2026

  • DOI: https://doi.org/10.1038/s41598-026-35391-y


Keywords

  • Bayesian reinforcement learning
  • Adaptive control
  • Hydraulic excavator
  • Energy recuperation
  • Hydraulic accumulator
  • Belief-state estimation
  • POMDP
  • Uncertainty modeling