Scientific Reports
A comprehensive evaluation of lightweight deep learning models for tomato disease classification on edge computing environments
  • Article
  • Open access
  • Published: 05 March 2026


  • Trong-Minh Hoang1,
  • Van-Hau Bui1,2,
  • Van-Son Nguyen3,
  • Duc-Thang Doan1,
  • Hoang-Anh Dang3 &
  • Anh-Thu Pham1

Scientific Reports, Article number: (2026)

  • 926 Accesses

  • 1 Citation


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Computational biology and bioinformatics
  • Engineering
  • Mathematics and computing
  • Plant sciences

Abstract

To advance agricultural automation, deep learning applications for early and accurate disease detection in tomato plants have been developed extensively. However, resource-constrained agricultural edge environments impose a fundamental trade-off between computational efficiency and diagnostic accuracy. This paper proposes an evaluation framework for seven architectures, representing standard, efficient, and hybrid CNN designs, to assess their deployment potential. The seven architectures (ShuffleNetV2, MobileNetV3-Small, SqueezeNet, MobilePlantViT, DenseNet121, ResNet50, and VGG16) are examined thoroughly in terms of diagnostic performance, computational efficiency, and explainability. Three significant findings are derived from experiments on a subset of tomato diseases in the PlantVillage dataset. First, the MobilePlantViT architecture strikes the best balance between efficiency and performance. Second, we propose the perturbation stability score (PSS) metric to quantitatively assess the stability of explanations produced by XAI methods (Grad-CAM, SHAP, and LIME) and to identify the best option for edge devices. Third, we perform CPU inference measurements to better reflect real deployment scenarios and find that the hybrid design effectively leverages parallel computing. Based on these findings, MobilePlantViT is the recommended architecture for applications that must run on resource-constrained edge devices while maintaining high diagnostic accuracy (above 99.5%).
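The paper's exact PSS definition is not reproduced on this page. As an illustration only, a generic perturbation-stability metric might look like the following sketch: the explanation of an input is compared against explanations of slightly noised copies of that input, and high average similarity indicates a stable explanation. The function names, the Gaussian perturbation, and the cosine-similarity choice here are assumptions for the sketch, not the authors' method.

```python
import math
import random

def cosine_similarity(a, b):
    """Cosine similarity between two flattened saliency maps."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def perturbation_stability_score(explain_fn, image, n_trials=10,
                                 noise_std=0.05, seed=0):
    """Hypothetical PSS-style metric: average similarity between the
    explanation of the original input and explanations of perturbed
    copies. Values near 1.0 indicate a stable explanation."""
    rng = random.Random(seed)
    baseline = explain_fn(image)  # saliency map for the clean input
    scores = []
    for _ in range(n_trials):
        # Add small Gaussian noise to every pixel (flattened image).
        perturbed = [px + rng.gauss(0.0, noise_std) for px in image]
        scores.append(cosine_similarity(baseline, explain_fn(perturbed)))
    return sum(scores) / n_trials
```

For a real model, `explain_fn` would wrap a Grad-CAM, SHAP, or LIME call that returns a flattened attribution map for one image.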

Data availability

The datasets generated and/or analysed during the current study were obtained from two publicly available datasets (PlantVillage and PlantDoc) to create subsets of tomato data. We enhanced the quality of the PlantDoc dataset by collaborating with experts to identify and crop regions containing disease-specific symptoms, while eliminating irrelevant image content. For long-term preservation and ease of access, we have stored copies of the datasets in the published repository. The datasets are available at the following links: https://www.kaggle.com/datasets/cthngon/tomato-plantvillage-datasets, https://www.kaggle.com/datasets/cthngon/tomato-only.
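The abstract mentions CPU inference measurements on edge-class hardware. As a hedged illustration only (the authors' exact measurement protocol is not given on this page), a minimal single-image latency harness could look like the sketch below; the warm-up count, run count, and reported percentiles are assumptions.

```python
import statistics
import time

def benchmark_cpu_latency(infer_fn, sample, warmup=5, runs=30):
    """Time repeated single-sample inference calls on the CPU and
    report median and 95th-percentile latency in milliseconds."""
    for _ in range(warmup):          # warm caches/allocators before timing
        infer_fn(sample)
    times_ms = []
    for _ in range(runs):
        t0 = time.perf_counter()
        infer_fn(sample)
        times_ms.append((time.perf_counter() - t0) * 1000.0)
    times_ms.sort()
    return {
        "median_ms": statistics.median(times_ms),
        "p95_ms": times_ms[int(0.95 * (runs - 1))],
    }
```

In practice `infer_fn` would wrap a forward pass of an exported model (e.g. an ONNX Runtime session restricted to the CPU execution provider); here any callable can be timed.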


Acknowledgements

This work was supported by the project B2026-MHN01.01 and Posts and Telecommunications Institute of Technology (PTIT).

Funding

This work was supported by the project B2026-MHN01.01 and PTIT.

Author information

Authors and Affiliations

  1. Posts and Telecommunications Institute of Technology, Hanoi, Vietnam

    Trong-Minh Hoang, Van-Hau Bui, Duc-Thang Doan & Anh-Thu Pham

  2. University of Economics-Technology for Industries, Hanoi, Vietnam

    Van-Hau Bui

  3. Hanoi Open University, Hanoi, Vietnam

    Van-Son Nguyen & Hoang-Anh Dang


Contributions

T.M. Hoang proposed the main concept. T.M. Hoang and A.T. Pham conceived the experiments and revised the manuscript. The experiments were conducted by V.H. Bui, V.S. Nguyen, and D.T. Doan, while H.A. Dang analyzed the results. All authors reviewed the manuscript.

Corresponding author

Correspondence to Anh-Thu Pham.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Hoang, TM., Bui, VH., Nguyen, VS. et al. A comprehensive evaluation of lightweight deep learning models for tomato disease classification on edge computing environments. Sci Rep (2026). https://doi.org/10.1038/s41598-026-42439-6


  • Received: 28 November 2025

  • Accepted: 25 February 2026

  • Published: 05 March 2026

  • DOI: https://doi.org/10.1038/s41598-026-42439-6


Keywords

  • Edge deployment
  • Explainable AI
  • Grad-CAM
  • Hybrid vision transformer
  • Lightweight deep learning
  • MobilePlantViT
  • Model benchmarking
  • Plant disease detection
  • PlantVillage dataset
  • Precision agriculture
  • Tomato leaf disease