A framework of large language model commander agent for spatial reasoning in combat simulation
  • Article
  • Open access
  • Published: 13 March 2026

  • Yi-bo Chen1,
  • Yang Ping1,
  • Shuhang Zhou2 &
  • Caleb Jojo3 

Scientific Reports (2026)


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Engineering
  • Mathematics and computing

Abstract

Large language models (LLMs) demonstrate strong reasoning and planning capabilities in static textual contexts, yet they struggle significantly with dynamic decision-making tasks involving spatial elements, such as point selection in military simulations. These limitations arise from their reduced capacity to integrate real-time geographic data and adapt to spatial conditions, which can lead to crucial errors in positioning decisions. Such deficiencies may result in missed opportunities for tactical advantage, increased vulnerability, and diminished overall effectiveness in combat scenarios. To mitigate these issues, this paper presents the Geo-Commander framework, an innovative multi-task agent for combat simulation that integrates the ReAct reasoning mechanism with spatial encoding. The framework's Geo-Choice module employs hexagonal grid encoding for preliminary location screening, enabling the agent to establish spatial constraints early in the decision-making process. Its ReAct chain incorporates detailed geographic insights into the reasoning loop, yielding interpretable point-selection decisions. We validate the framework through experiments that reveal substantial performance improvements in both static point selection and real-time dynamic command tasks within a tank detachment combat simulation environment. Results indicate that Geo-Commander consistently surpasses control groups across various metrics, including selection quality, win rate, and overall combat effectiveness. These results highlight the framework's potential to meet the demands of dynamic combat environments, confirming the feasibility of integrating spatial reasoning within LLM frameworks and opening avenues for advances in multi-agent geospatial intelligence systems and battlefield decision-making support.
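The hexagonal grid screening described for the Geo-Choice module can be illustrated with a minimal sketch. The axial-coordinate scheme, function names, and range threshold below are illustrative assumptions for expository purposes, not the paper's actual encoding or implementation:

```python
# Minimal sketch of hexagonal-grid location screening (illustrative only;
# the axial-coordinate encoding and the screening criterion are assumptions,
# not the Geo-Choice module's actual implementation).

def hex_distance(a, b):
    """Grid distance between two hexes in axial coordinates (q, r),
    using the standard cube-coordinate hex metric."""
    aq, ar = a
    bq, br = b
    return (abs(aq - bq) + abs(aq + ar - bq - br) + abs(ar - br)) // 2

def screen_candidates(candidates, target, max_range):
    """Preliminary screening: keep only candidate hexes within
    max_range grid steps of the target hex."""
    return [h for h in candidates if hex_distance(h, target) <= max_range]

# Example: screen three candidate positions against a target at the origin.
kept = screen_candidates([(0, 0), (3, -1), (10, 0)], target=(0, 0), max_range=4)
print(kept)  # [(0, 0), (3, -1)]
```

Restricting the candidate set to a fixed grid radius before any LLM reasoning is one way such a module could bound the spatial search space early, which is consistent with the "preliminary location screening" role described above.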

Data availability

The data that support the findings of this study are openly available in ScienceDB at https://doi.org/10.57760/sciencedb.32513, reference number 24.

References

  1. Yao, S. et al. ReAct: Synergizing reasoning and acting in language models. In: 11th International Conference on Learning Representations (ICLR, 2023).

  2. Shinn, N. et al. Reflexion: Language agents with verbal reinforcement learning. In: 37th Conference on Neural Information Processing Systems (NeurIPS, 2023).

  3. Wang, G. et al. Voyager: An open-ended embodied agent with large language models. arXiv (2023).

  4. Jafarnejad, S. MapLLM - A blueprint for improving geospatial reasoning in LLMs. (2025).

  5. Cheng, A. et al. SpatialRGPT: Grounded spatial reasoning in vision-language models. In: 38th Conference on Neural Information Processing Systems (NeurIPS, 2024).

  6. Headquarters, Department of the Army. FM 3-34.230: Topographic Operations. (2000).

  7. Goecks, V. G. & Waytowich, N. COA-GPT: Generative pre-trained transformers for accelerated course of action development in military operations. In: 2024 International Conference on Military Communication and Information Systems (ICMCIS) (2024).

  8. Cao, X. et al. MAPLM: A real-world large-scale vision-language benchmark for map and traffic scene understanding. In: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024).

  9. Zhang, Y. et al. GeoGPT: Understanding and processing geospatial tasks through an autonomous GPT. arXiv (2023).

  10. Chen, B. et al. SpatialVLM: Endowing vision-language models with spatial reasoning capabilities. In: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024).

  11. Huang, C. et al. Visual language maps for robot navigation. In: 2023 IEEE International Conference on Robotics and Automation (ICRA) (2023).

  12. Silver, D. et al. Mastering the game of Go with deep neural networks and tree search. Nature 529 (7587), 484–489 (2016).

  13. Wu, D. J. Accelerating self-play learning in Go. arXiv (2019).

  14. Birch, C. P. D., Oom, S. P. & Beecham, J. A. Rectangular and hexagonal grids used for observation, experiment and simulation in ecology. Ecol. Model. 206 (3–4), 347–359 (2007).

  15. Tang, F., Zhang, X., You, X. et al. Design method of tactical level hexagonal wargame map. J. Syst. Simul. 31 (5), 869–878 (2019).

  16. Srivastava, V. et al. MapIQ: Benchmarking multimodal large language models for map question answering. (2025).

  17. Rai, P. S. et al. Light-MLLMAD: A lightweight multimodal large language model for one-shot industrial visual anomaly detection. (2025).

  18. Ruan, J. et al. MME-SCI: A comprehensive and challenging science benchmark for multimodal large language models. (2025).

  19. Headquarters, Department of the Army. FM 3-21.8: The Infantry Rifle Platoon and Squad. (2007).

  20. Headquarters, Department of the Army. FM 3-06.11: Camouflage, Concealment, and Cover. (2002).

  21. Headquarters, Department of the Army. FM 3-05.222 / TC 23-14: Sniper Training and Employment. (2003).

  22. Zhou, Z. & Zhao, H. Design and implementation of B/S architecture-based wargaming system. J. Equip. Acad. 27 (2), 68–72 (2016).

  23. Yin, Q. et al. Intelligent decision-making technologies and challenges in wargaming. Acta Automatica Sinica 49 (5), 913–928 (2023).

  24. Chen, Y. Experimental data of Geo-Commander [DS/OL]. Science Data Bank (2025). https://doi.org/10.57760/sciencedb.32513

Acknowledgements

We would like to express our sincere gratitude to the military experts who contributed their professional knowledge and time to this study. Specifically, we thank the Army Commander, the Army Staff Officer, and the two wargaming experts for their in-depth analysis of the combat simulation scenarios and for their crucial role in developing the grid point quality rating table through discussion. Their expertise ensured the tactical relevance and validity of our experimental evaluation metrics.

Author information

Authors and Affiliations

  1. PLA Academy of Military Science, Beijing, 100091, China

    Yi-bo Chen & Yang Ping

  2. North University of China, Taiyuan, 030051, China

    Shuhang Zhou

  3. Northwestern University, Evanston, IL, 60208, USA

    Caleb Jojo


Contributions

Conceptualization, Yi-bo Chen; methodology, Caleb Jojo; software, Yi-bo Chen; validation, Shuhang Zhou; formal analysis, Shuhang Zhou; investigation, Caleb Jojo; resources, Yang Ping; data curation, Shuhang Zhou; writing—original draft preparation, Yi-bo Chen; writing—review and editing, Yang Ping; visualization, Caleb Jojo; supervision, Yang Ping; project administration, Yang Ping; funding acquisition, Yang Ping. All authors agree to be accountable for all aspects of the work.

Corresponding author

Correspondence to Yi-bo Chen.

Ethics declarations

Competing interests

The authors declare no competing interests.

Informed Consent

All participants (the four military experts) employed in this study were adults. Before their participation, all individuals were fully informed about the purpose of the study, the procedures involved, and how their input would be used. Written informed consent was obtained from all participants before their involvement in the study.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

Reprints and permissions

About this article


Cite this article

Chen, Yb., Ping, Y., Zhou, S. et al. A framework of large language model commander agent for spatial reasoning in combat simulation. Sci Rep (2026). https://doi.org/10.1038/s41598-026-43365-3

  • Received: 08 December 2025

  • Accepted: 04 March 2026

  • Published: 13 March 2026

  • DOI: https://doi.org/10.1038/s41598-026-43365-3


Keywords

  • VLLM agent
  • ReAct
  • Spatial reasoning
  • Combat simulation
  • Commander agent