A conversational multi-agent AI system for automated plant phenotyping
  • Article
  • Open access
  • Published: 03 April 2026

  • Feng Chen [1], ORCID: orcid.org/0000-0003-2915-599X
  • Ilias Stogiannidis [1], ORCID: orcid.org/0009-0005-5803-1138
  • Andrew Wood [1], ORCID: orcid.org/0000-0003-1343-0774
  • Danilo Bueno [1], ORCID: orcid.org/0009-0008-3036-7010
  • Dominic Williams [2], ORCID: orcid.org/0000-0001-8908-6696
  • Fraser Macfarlane [2], ORCID: orcid.org/0000-0002-7411-1446
  • Bruce D. Grieve [3], ORCID: orcid.org/0000-0002-5130-3592
  • Darren Wells [4], ORCID: orcid.org/0000-0002-4246-4909
  • Jonathan A. Atkinson [4], ORCID: orcid.org/0000-0003-2815-0812
  • Malcolm J. Hawkesford [5], ORCID: orcid.org/0000-0001-8759-3969
  • Stephen A. Rolfe [6], ORCID: orcid.org/0000-0003-2141-4707
  • Tracy Lawson [7,8,9], ORCID: orcid.org/0000-0002-4073-7221
  • Tony Pridmore [10]
  • Sotirios A. Tsaftaris [1,11], ORCID: orcid.org/0000-0002-8795-9294
  • Mario Valerio Giuffrida [10], ORCID: orcid.org/0000-0002-5232-677X

(Bracketed numbers refer to the affiliations listed under Author information.)

Nature Communications, Article number: (2026)

We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Information technology
  • Machine learning
  • Natural variation in plants
  • Optical imaging

Abstract

Plant phenotyping increasingly relies on (semi-)automated image-based analysis workflows to improve its accuracy and scalability. However, many existing solutions remain overly complex, are difficult to reimplement and maintain, and pose high barriers for users without substantial computational expertise. To address these challenges, we introduce PhenoAssistant: a pioneering AI-driven system that streamlines plant phenotyping via intuitive natural language interaction. PhenoAssistant leverages a large language model to orchestrate a curated toolkit supporting tasks including automated phenotype extraction, data visualisation and automated model training. We validate PhenoAssistant through several representative case studies and a set of evaluation tasks. By lowering technical hurdles, PhenoAssistant underscores the promise of AI-driven methodologies in democratising AI adoption in plant biology.
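The orchestration pattern the abstract describes (a language model translating natural-language requests into calls against a curated toolkit) can be illustrated with a minimal sketch. Note this is a hypothetical illustration of the general tool-dispatch idea, not PhenoAssistant's actual API: the tool names, signatures, and returned values below are invented for the example.

```python
from typing import Any, Callable, Dict

# Registry mapping tool names to callables, as a planning LLM would see them.
TOOLS: Dict[str, Callable[..., Any]] = {}

def tool(name: str) -> Callable:
    """Register a function in the toolkit under a given name."""
    def register(fn: Callable[..., Any]) -> Callable[..., Any]:
        TOOLS[name] = fn
        return fn
    return register

@tool("count_leaves")
def count_leaves(image_path: str) -> int:
    # Stand-in for a leaf instance-segmentation model (hypothetical).
    return 7

@tool("plot_growth")
def plot_growth(counts: list) -> str:
    # Stand-in for a chart-rendering step (hypothetical).
    return f"plot of {len(counts)} observations"

def dispatch(call: Dict[str, Any]) -> Any:
    """Execute a structured tool call produced by the planning LLM."""
    return TOOLS[call["name"]](**call["arguments"])

# The LLM might translate "how many leaves are in plant_001.png?" into:
call = {"name": "count_leaves", "arguments": {"image_path": "plant_001.png"}}
print(dispatch(call))  # → 7
```

In a real system of this kind, the registry is serialised into the model's tool-calling interface and the dispatcher validates arguments before execution; the sketch keeps only the routing step.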

Data availability

The Arabidopsis thaliana data used in case study 1 have been deposited in the Zenodo repository [https://doi.org/10.5281/zenodo.18940282]76. The data used for training and evaluating the computer vision model used in case study 1 are publicly available from the CVPPP2017 Leaf Segmentation Challenge dataset (A1 and A4 subsets) at CodaLab [https://codalab.lisn.upsaclay.fr/competitions/8970]. The potato data used in case study 2 are publicly available in the Zenodo repository [https://doi.org/10.5281/zenodo.7938231]77. The winter wheat data used in case study 3 are publicly available from the CVPPA@ICCV'23: image classification of nutrient deficiencies in winter wheat and winter rye dataset (WW2020 subset) at CodaLab [https://codalab.lisn.upsaclay.fr/competitions/13833]. Source data are provided with this paper.

Code availability

The code for this research, together with the chat logs and generated outputs of the case studies and evaluations, is available on GitHub [https://github.com/vios-s/PhenoAssistant/]78.

References

  1. Fiorani, F. & Schurr, U. Future scenarios for plant phenotyping. Annu. Rev. Plant Biol. 64, 267–291 (2013).

  2. United Nations. Population. https://www.un.org/en/global-issues/population.

  3. Lesk, C., Rowhani, P. & Ramankutty, N. Influence of extreme weather disasters on global crop production. Nature 529, 84–87 (2016).

  4. Murphy, K. M., Ludwig, E., Gutierrez, J. & Gehan, M. A. Deep learning in image-based plant phenotyping. Annu. Rev. Plant Biol. 75, 771–795 (2024).

  5. Coppens, F., Wuyts, N., Inzé, D. & Dhondt, S. Unlocking the potential of plant phenotyping data through integration and data-driven approaches. Curr. Opin. Syst. Biol. 4, 58–63 (2017).

  6. Achiam, J. et al. GPT-4 Technical Report. Preprint at https://arxiv.org/abs/2303.08774 (2024).

  7. Touvron, H. et al. LLaMA: open and efficient foundation language models. Preprint at https://arxiv.org/abs/2302.13971 (2023).

  8. Shen, J., Tenenholtz, N., Hall, J. B., Alvarez-Melis, D. & Fusi, N. Tag-LLM: repurposing general-purpose LLMs for specialized domains. In Proc. 41st International Conference on Machine Learning Vol. 235, 44759–44773 (JMLR.org, 2024).

  9. Yildiz, O. & Peterka, T. Do large language models speak scientific workflows? In Proc. SC ’25 Workshops of the International Conference for High Performance Computing, Networking, Storage and Analysis, 2225–2233 (Association for Computing Machinery, 2025).

  10. Sado, F., Loo, C. K., Liew, W. S., Kerzel, M. & Wermter, S. Explainable goal-driven agents and robots—a comprehensive review. ACM Comput. Surv. 55, 1–41 (2023).

  11. Acharya, D. B., Kuppan, K. & Divya, B. Agentic AI: autonomous intelligence for complex goals—a comprehensive survey. IEEE Access 13, 18912–18936 (2025).

  12. Sapkota, R., Roumeliotis, K. I. & Karkee, M. AI agents vs. agentic AI: a conceptual taxonomy, applications and challenges. Inf. Fusion 126, 103599 (2026).

  13. Hughes, L. et al. AI agents and agentic systems: a multi-expert analysis. J. Comput. Inf. Syst. 65, 1–29 (2025).

  14. Borghoff, U. M., Bottoni, P. & Pareschi, R. Human-artificial interaction in the age of agentic AI: a system-theoretical approach. Front. Hum. Dyn. 7, 1579166 (2025).

  15. M. Bran, A. et al. Augmenting large language models with chemistry tools. Nat. Mach. Intell. 6, 525–535 (2024).

  16. Boiko, D. A., MacKnight, R., Kline, B. & Gomes, G. Autonomous chemical research with large language models. Nature 624, 570–578 (2023).

  17. Kang, Y. & Kim, J. ChatMOF: an artificial intelligence system for predicting and generating metal-organic frameworks using large language models. Nat. Commun. 15, 4705 (2024).

  18. Ghafarollahi, A. & Buehler, M. J. Automating alloy design and discovery with physics-aware multimodal multiagent AI. Proc. Natl. Acad. Sci. USA 122, e2414074122 (2025).

  19. Lei, W. et al. Chatbot: a community-driven AI assistant for integrative computational bioimaging. Nat. Methods 21, 1368–1370 (2024).

  20. Royer, L. A. Omega—harnessing the power of large language models for bioimage analysis. Nat. Methods 21, 1371–1373 (2024).

  21. Tu, T. et al. Towards conversational diagnostic artificial intelligence. Nature 642, 442–450 (2025).

  22. Singhal, K. et al. Toward expert-level medical question answering with large language models. Nat. Med. 31, 943–950 (2025).

  23. Gottweis, J. et al. Towards an AI co-scientist. Preprint at https://arxiv.org/abs/2502.18864 (2025).

  24. Yang, X., Gao, J., Xue, W. & Alexandersson, E. PLLaMA: an open-source large language model for plant science. Preprint at https://arxiv.org/abs/2401.01600 (2024).

  25. Awais, M., Salem Abdulla Alharthi, A. H., Kumar, A., Cholakkal, H. & Anwer, R. M. AgroGPT: efficient agricultural vision-language model with expert tuning. In Proc. 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 5687–5696, https://doi.org/10.1109/WACV61041.2025.00555 (2025).

  26. Yang, S. et al. ShizishanGPT: an agricultural large language model integrating tools and resources. In Proc. International Conference on Web Information Systems Engineering, 284–298 (Springer, 2024).

  27. Zhang, Y. et al. IPM-AgriGPT: a large language model for pest and disease management with a G-EA framework and agricultural contextual reasoning. Mathematics 13, 566 (2025).

  28. Ravindran, D. J. S., Skarga-Bandurova, I., V, S., Awais, M. & S, M. AgroLLM: connecting farmers and agricultural practices through large language models for enhanced knowledge transfer and practical application. AgriEngineering 8, 38 (2026).

  29. Arshad, M. A. et al. Leveraging vision language models for specialized agricultural tasks. In Proc. 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 6320–6329, https://doi.org/10.1109/WACV61041.2025.00616 (2025).

  30. Zhao, X. et al. Implementation of large language models and agricultural knowledge graphs for efficient plant disease detection. Agriculture 14, 1359 (2024).

  31. Roumeliotis, K. I., Sapkota, R., Karkee, M. & Tselikas, N. D. Agentic AI with orchestrator-agent trust: a modular visual classification framework with trust-aware orchestration and rag-based reasoning. IEEE Access 14, 26965–26982 (2026).

  32. Sapkota, R., Roumeliotis, K. I. & Karkee, M. UAVs meet agentic AI: a multidomain survey of autonomous aerial intelligence and agentic UAVs. Preprint at https://arxiv.org/abs/2506.08045 (2025).

  33. Benegas, G., Batra, S. S. & Song, Y. S. DNA language models are powerful predictors of genome-wide variant effects. Proc. Natl. Acad. Sci. USA 120, e2311219120 (2023).

  34. Mendoza-Revilla, J. et al. A foundational large language model for edible plant genomes. Commun. Biol. 7, 835 (2024).

  35. Zhang, R. et al. PlantGPT: an Arabidopsis-based intelligent agent that answers questions about plant functional genomics. Adv. Sci. 12, e03926 (2025).

  36. Team, G. et al. Gemini: a family of highly capable multimodal models. Preprint at https://doi.org/10.48550/arXiv.2312.11805 (2025).

  37. Liu, H., Li, C., Wu, Q. & Lee, Y. J. Visual instruction tuning. Adv. Neural Inf. Process. Syst. 36, 34892–34916 (2023).

  38. Minervini, M., Giuffrida, M. V., Perata, P. & Tsaftaris, S. A. Phenotiki: an open software and hardware platform for affordable and easy image-based phenotyping of rosette-shaped plants. Plant J. 90, 204–216 (2017).

  39. Williams, D., Macfarlane, F. & Britten, A. Leaf only SAM: a segment anything pipeline for zero-shot automated leaf segmentation. Smart Agric. Technol. 8, 100515 (2024).

  40. Hu, E. J. et al. Lora: low-rank adaptation of large language models. In Proc. International Conference on Learning Representations (OpenReview.net, 2022).

  41. Yi, J. et al. Deep learning for non-invasive diagnosis of nutrient deficiencies in sugar beet using RGB images. Sensors 20, 5893 (2020).

  42. Yi, J. et al. Non-invasive diagnosis of nutrient deficiencies in winter wheat and winter rye using UAV-based RGB images. Comput. Electron. Agric. 239, 110865 (2025).

  43. DeepSeek-AI et al. DeepSeek LLM: scaling open-source language models with longtermism. Preprint at https://doi.org/10.48550/arXiv.2401.02954 (2024).

  44. Guo, D. et al. DeepSeek-R1 incentivizes reasoning in LLMs through reinforcement learning. Nature 645, 633–638 (2025).

  45. Gu, J. et al. A survey on LLM-as-a-Judge. The Innovation 101253 https://doi.org/10.48550/arXiv.2411.15594 (2026).

  46. Golechha, S. & Garriga-Alonso, A. Among Us: a sandbox for measuring and detecting agentic deception. In Proc. the Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS, 2025).

  47. Sapkota, R., Roumeliotis, K. I. & Karkee, M. Vibe coding vs. agentic coding: fundamentals and practical implications of agentic AI. Preprint at https://arxiv.org/abs/2505.19443 (2025).

  48. Averly, R. & Chao, W.-L. Unified out-of-distribution detection: a model-specific perspective. In Proc. 2023 IEEE/CVF International Conference on Computer Vision (ICCV) 1453–1463, https://doi.org/10.1109/ICCV51070.2023.00140 (2023).

  49. Miyai, A. et al. Generalized out-of-distribution detection and beyond in vision language model era: a survey. Transactions on Machine Learning Research https://openreview.net/forum?id=FO3IA4lUEY (2025).

  50. Tonmoy, M. R., Hossain, M. M., Dey, N. & Mridha, M. MobilePlantViT: a mobile-friendly hybrid ViT for generalized plant disease image classification. Preprint at https://arxiv.org/abs/2503.16628 (2025).

  51. Han, B. et al. FoMo4Wheat: toward reliable crop vision foundation models with globally curated data. Preprint at https://doi.org/10.48550/arXiv.2509.06907 (2025).

  52. Hugging Face. Hugging Face–The AI community building the future https://huggingface.co/.

  53. Kaggle. Kaggle: Your Machine Learning and Data Science Community https://www.kaggle.com/.

  54. Model Context Protocol. What is the Model Context Protocol (MCP)? https://modelcontextprotocol.io/docs/getting-started/intro.

  55. Geerling, W., Mateer, G. D., Wooten, J. & Damodaran, N. ChatGPT has aced the test of understanding in college economics: now what? Am. Econ. 68, 233–245 (2023).

  56. Chen, Y., Liu, T. X., Shan, Y. & Zhong, S. The emergence of economic rationality of GPT. Proc. Natl. Acad. Sci. USA 120, e2316205120 (2023).

  57. Shah, D., Osiński, B., Ichter, B. & Levine, S. LM-Nav: robotic navigation with large pre-trained models of language, vision, and action. In Proc. 6th Conference on Robot Learning, 492–504 (PMLR, 2023).

  58. Cui, C., Ma, Y., Cao, X., Ye, W. & Wang, Z. Drive as you speak: enabling human-like interaction with large language models in autonomous vehicles. In Proc. 2024 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW) 902–909, https://doi.org/10.1109/WACVW60836.2024.00101 (2024).

  59. Gao, C. et al. Large language models empowered agent-based modeling and simulation: a survey and perspectives. Humanit. Soc. Sci. Commun. 11, 1259 (2024).

  60. Han, K., Kuang, K., Zhao, Z., Ye, J. & Wu, F. Causal agent based on large language model. Preprint at https://arxiv.org/abs/2408.06849 (2024).

  61. Wang, X. et al. Causal-copilot: an autonomous causal analysis agent. Preprint at https://arxiv.org/abs/2504.13263 (2025).

  62. Chi, H. et al. Unveiling causal reasoning in large language models: reality or mirage? Adv. Neural Inf. Process. Syst. 37, 96640–96670 (2024).

  63. Qian, C. et al. ChatDev: communicative agents for software development. In Proc. 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (eds. Ku, L.-W., Martins, A. & Srikumar, V.) 15174–15186 (Association for Computational Linguistics, 2024).

  64. Yang, Y. et al. AgentNet: decentralized evolutionary coordination for LLM-based multi-agent systems. In Proc. the Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS, 2025).

  65. Wu, Q. et al. AutoGen: enabling next-gen LLM applications via multi-agent conversations. In Proc. First Conference on Language Modeling (OpenReview.net, 2024).

  66. CrewAI. CrewAI: the leading multi-agent platform https://www.crewai.com/.

  67. Li, H. et al. Advancing collaborative debates with role differentiation through multi-agent reinforcement learning. In Proc. 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (eds. Che, W., Nabende, J., Shutova, E. & Pilehvar, M. T.) 22655–22666 (Association for Computational Linguistics, 2025).

  68. Lu, S., Shao, J., Luo, B. & Lin, T. MorphAgent: empowering agents through self-evolving profiles and decentralized collaboration. Preprint at https://doi.org/10.48550/arXiv.2410.15048 (2025).

  69. Raza, S., Sapkota, R., Karkee, M. & Emmanouilidis, C. TRiSM for agentic AI: a review of trust, risk, and security management in LLM-based agentic multi-agent systems. AI Open 7, 71–95 (2026).

  70. Google Cloud. Announcing the Agent2Agent Protocol (A2A) https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/.

  71. Cheng, B., Misra, I., Schwing, A. G., Kirillov, A. & Girdhar, R. Masked-attention mask transformer for universal image segmentation. In Proc. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 1280–1289, https://doi.org/10.1109/CVPR52688.2022.00135 (2022).

  72. Chen, F., Tsaftaris, S. A. & Giuffrida, M. V. GMT: Guided mask transformer for leaf instance segmentation. In Proc. 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 1217–1226, https://doi.org/10.1109/WACV61041.2025.00126 (2025).

  73. Minervini, M., Fischbach, A., Scharr, H. & Tsaftaris, S. A. Finely-grained annotated datasets for image-based plant phenotyping. Pattern Recognit. Lett. 81, 80–89 (2016).

  74. Chen, F., Giuffrida, M. V. & Tsaftaris, S. A. Adapting vision foundation models for plant phenotyping. In Proc. 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) 604–613, https://doi.org/10.1109/ICCVW60793.2023.00067 (2023).

  75. Oquab, M. et al. DINOv2: Learning Robust Visual Features without Supervision. Transactions on Machine Learning Research https://openreview.net/forum?id=a68SUt6zFt (2024).

  76. Chen, F., Tsaftaris, S. A. & Giuffrida, M. V. Arabidopsis thaliana data for a conversational multi-agent AI system for automated plant phenotyping [Data set]. https://doi.org/10.5281/zenodo.18940283 (2026).

  77. Williams, D., Macfarlane, F. & Britten, A. Potato leaf data set [Data set]. https://doi.org/10.5281/zenodo.7938231 (2023).

  78. Chen, F. vios-s/PhenoAssistant: PhenoAssistant-nc-release (nc-release). https://doi.org/10.5281/zenodo.18334981 (2026).

Acknowledgements

This research was funded by the Biotechnology and Biological Sciences Research Council (BBSRC) through PhenomUK-RI: The UK Plant and Crop Phenotyping Infrastructure (grant no. BB/Y512333/1). F.C. acknowledges support from the Engineering and Physical Sciences Research Council (EPSRC) through Real-time Digital Twin Assisted Surgery (grant no. EP/X033686/1). M.J.H. acknowledges support from the Biotechnology and Biological Sciences Research Council (BBSRC) through Delivering Sustainable Wheat (grant no. BB/X011003/1). S.A.T. acknowledges support of the UKRI AI programme, and the Engineering and Physical Sciences Research Council (EPSRC), for CHAI-EPSRC Causality in Healthcare AI Hub (grant no. EP/Y028856/1). F.C., S.A.T., and M.V.G. acknowledge support from the Microsoft Accelerating Foundation Models Research (AFMR) for Agricultural Foundation Models via Domain-Specific Pre-Training. We thank Jingyu Sun for exploring foundation models and Yuyang Xue for technical support during the early stages of this research.

Author information

Authors and Affiliations

  1. Institute for Imaging, Data and Communications (IDCOM), School of Engineering, University of Edinburgh, Edinburgh, UK

    Feng Chen, Ilias Stogiannidis, Andrew Wood, Danilo Bueno & Sotirios A. Tsaftaris

  2. James Hutton Institute, Dundee, UK

    Dominic Williams & Fraser Macfarlane

  3. Department of Electrical and Electronic Engineering, University of Manchester, Manchester, UK

    Bruce D. Grieve

  4. School of Biosciences, University of Nottingham, Nottingham, UK

    Darren Wells & Jonathan A. Atkinson

  5. Rothamsted Research, Harpenden, UK

    Malcolm J. Hawkesford

  6. School of Biosciences, University of Sheffield, Sheffield, UK

    Stephen A. Rolfe

  7. Department of Plant Biology and Department of Crop Sciences, University of Illinois Urbana-Champaign, Urbana, IL, USA

    Tracy Lawson

  8. Institute for Genomic Biology, University of Illinois Urbana-Champaign, Urbana, IL, USA

    Tracy Lawson

  9. School of Life Sciences, University of Essex, Colchester, UK

    Tracy Lawson

  10. School of Computer Science, University of Nottingham, Nottingham, UK

    Tony Pridmore & Mario Valerio Giuffrida

  11. Causality in Healthcare AI Hub (CHAI), Edinburgh, UK

    Sotirios A. Tsaftaris


Contributions

F.C. contributed to study conceptualisation, model development, the case studies, model evaluation, funding acquisition, and manuscript preparation and revision. I.S. contributed to model development and evaluation. A.W. and D.B. provided technical support for computational resources. D. Williams and F.M. provided advice and insights for case study 2. B.G., D. Wells, J.A.A., M.J.H., S.A.R., T.L. and T.P. contributed to manuscript revision and funding acquisition. M.V.G. and S.A.T. contributed to study conceptualisation, funding acquisition, manuscript preparation and revision, and project supervision. All authors contributed to the manuscript and approved the submission.

Corresponding authors

Correspondence to Feng Chen, Sotirios A. Tsaftaris or Mario Valerio Giuffrida.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. A peer review file is available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information (PDF)

Peer Review File (PDF)

Reporting Summary (PDF)

Source data

Source Data (XLSX)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Chen, F., Stogiannidis, I., Wood, A. et al. A conversational multi-agent AI system for automated plant phenotyping. Nat Commun (2026). https://doi.org/10.1038/s41467-026-71090-y

Download citation

  • Received: 01 May 2025

  • Accepted: 12 March 2026

  • Published: 03 April 2026

  • DOI: https://doi.org/10.1038/s41467-026-71090-y
