Machine learning-facilitated real-time acoustic trapping in time-varying multi-medium environments toward magnetic resonance imaging-guided microbubble manipulation
  • Article
  • Open access
  • Published: 07 February 2026

  • Mengjie Wu (ORCID: orcid.org/0000-0002-2372-8673)¹,
  • Xiaohan Li (ORCID: orcid.org/0000-0002-9737-2008)² &
  • Tianquan Tang (ORCID: orcid.org/0000-0002-5343-7455)¹,³

Communications Engineering, Article number: (2026)


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note that errors affecting the content may be present, and all legal disclaimers apply.

Subjects

  • Acoustics
  • Mechanical engineering

Abstract

Magnetic resonance imaging-guided acoustic trapping is expected to enable manipulation of drug carriers (e.g., microbubbles) within the body, potentially improving carrier concentration at tumor sites and thereby enhancing targeted therapy outcomes. However, accurate trap generation remains challenging due to complex wave propagation through multiple tissue materials. Moreover, respiration-induced tissue motion imposes stringent requirements on computational efficiency for rapid phase updates. Here we propose a machine learning-based model and a closed-loop control scheme to modulate phase patterns rapidly. The model delivers precise time-of-flight prediction (mean error ≤ 0.24 μs) within 26 ms for 196 transducer elements. In proof-of-concept experiments, computer vision feedback permits fast (about 15 frames per second) position adjustment of a trapped polystyrene ball (Ø2.7 mm). This control scheme mitigates the ball's spatial drift induced by time-varying multi-medium environments. These robotic manipulation experiments support our model's potential for future magnetic resonance imaging-guided targeted therapy.
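To make the workflow concrete, the sketch below reconstructs the closed loop the abstract describes: a camera frame is searched for the trapped ball, a learned model predicts per-element time of flight, and the predictions are converted into drive phases for the 196-element array. This is a minimal illustration, not the authors' released implementation (see the Code availability section below for that): the network architecture, the feature encoding, the 1 MHz drive frequency, the camera parameters, and the helpers `detect_ball` and `send_phases` are all assumptions made for the example.

```python
# Illustrative reconstruction only. The authors' implementation is at
# https://github.com/mengjwu/acoustictrap3D; the architecture, feature
# encoding, drive frequency, and hardware interface below are assumptions.
import numpy as np
import cv2
import torch
import torch.nn as nn

N_ELEMENTS = 196      # transducer elements (from the abstract)
DRIVE_FREQ = 1.0e6    # Hz; assumed operating frequency, not stated here


class ToFNet(nn.Module):
    """Hypothetical regressor: environment/tracking features -> per-element
    time of flight in seconds."""

    def __init__(self, n_features: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, N_ELEMENTS),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def phases_from_tof(tof: np.ndarray, freq: float = DRIVE_FREQ) -> np.ndarray:
    # Delay-law focusing: driving element i with phase -2*pi*f*tof_i makes
    # all wavefronts arrive at the trap point in phase.
    return (-2.0 * np.pi * freq * tof) % (2.0 * np.pi)


def detect_ball(frame: np.ndarray):
    # Hough-circle fit on a blurred grayscale frame; returns the ball
    # centre (x, y) in pixels, or None if no circle is found.
    gray = cv2.medianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=30, minRadius=5,
                               maxRadius=40)
    if circles is None:
        return None
    x, y, _r = circles[0, 0]
    return np.array([x, y], dtype=float)


def send_phases(phases: np.ndarray) -> None:
    # Stub for the (hypothetical) driver that writes one phase per element
    # to the phased-array electronics.
    pass


# Closed loop: camera -> ball detection -> ToF prediction -> phase update.
model = ToFNet().eval()
target = np.array([320.0, 240.0])  # desired ball position in pixels (example)
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    pos = detect_ball(frame)
    if pos is None:
        continue
    # Hypothetical encoding: tracking error padded to the 8 features the
    # toy network expects (a real model would also encode the medium state).
    features = torch.from_numpy(np.r_[target - pos, np.zeros(6)]).float()
    with torch.no_grad():
        tof = model(features).numpy()
    send_phases(phases_from_tof(tof))
cap.release()
```

At the reported rate of about 15 frames per second, each loop iteration has roughly 66 ms of budget, which is consistent with the abstract's 26 ms prediction time for 196 elements plus detection and hardware-update latency.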


Data availability

The detailed data sets are available at a public GitHub repository (https://github.com/mengjwu/acoustictrap3D).

Code availability

The source code is available at a public GitHub repository (https://github.com/mengjwu/acoustictrap3D).


Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (Grant No. 12504533), the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2023A1515110927), and the Guangdong University Featured Innovation Project (Grant No. 2025KTSCX129), and in part by the Shaanxi Provincial Department of Education (Grant No. 24JS024). We sincerely thank Prof. Huang Lixi for his valuable suggestions on the experiments. We are also grateful to Prof. Kwok Ka-Wai and Dr. Dai Jing for their early insights that contributed to the development of the idea.

Author information

Authors and Affiliations

  1. Department of Mechanical Engineering, The University of Hong Kong, Hong Kong SAR, China

    Mengjie Wu & Tianquan Tang

  2. School of Information and Control Engineering, Xi’an University of Architecture and Technology, Xi’an, China

    Xiaohan Li

  3. School of Mechatronic Engineering and Automation, Foshan University, Foshan, China

    Tianquan Tang


Contributions

M. Wu conceptualized the study, designed the learning model and the data analysis, developed the stereo vision algorithm, designed the chamber and the printed circuit board, conducted the COMSOL study and the manipulation experiments, and drafted and edited the manuscript. X. Li designed the stereo vision algorithm, recommended camera modules, and revised the manuscript. T. Tang conceptualized the study, designed the chamber, advised on the experiments, and edited the manuscript. T. Tang is the corresponding author (tianquan@connect.hku.hk).

Corresponding author

Correspondence to Tianquan Tang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Communications Engineering thanks Gordon Dobie and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Primary Handling Editors: Liangfei Tian and Philip Coatsworth. A peer review file is available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Transparent Peer Review file

Description of Additional Supplementary Files

Supplementary Movie 1

Supplementary Movie 2

Supplementary Movie 3

Supplementary Movie 4

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Wu, M., Li, X. & Tang, T. Machine learning-facilitated real-time acoustic trapping in time-varying multi-medium environments toward magnetic resonance imaging-guided microbubble manipulation. Commun Eng (2026). https://doi.org/10.1038/s44172-026-00600-z


  • Received: 22 July 2025

  • Accepted: 26 January 2026

  • Published: 07 February 2026

  • DOI: https://doi.org/10.1038/s44172-026-00600-z

