Abstract
The application of machine learning (ML) in geology has gained significant momentum over the past decade. Because visual interpretation underpins geological tasks such as lithological classification and trace fossil identification, automating these processes with ML offers substantial benefits to both researchers and industry professionals, including reduced human error and increased efficiency. Although ML models commonly yield highly accurate results, they typically operate as “black boxes,” and the opacity of their decision-making can undermine user trust. Explainable Artificial Intelligence (XAI) has emerged as a promising means of addressing this issue and improving interpretability. XAI techniques provide visual explanations, typically in the form of heat maps, that indicate the regions of an image the model prioritized in reaching its classification. Despite its growing relevance, the application of XAI within the geosciences remains in its infancy. This study evaluates the effectiveness of XAI for bioturbation intensity classification using a pre-trained deep-learning model that assigns core and outcrop images to three categories: (1) unbioturbated, (2) moderately bioturbated, and (3) intensely bioturbated. The XAI visualizations align closely with the interpretations of an experienced ichnologist. Our findings highlight the utility of XAI in improving the transparency and reliability of ML-based geological interpretation, and in enabling consistent, rapid assessment of images that increases efficiency. Beyond building confidence in model outputs, XAI may also serve as a valuable educational tool, particularly in specialized domains such as ichnology, where expert knowledge is scarce in both academic and industry settings.
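The article does not reproduce its code, but heat-map explanations of the kind described above are commonly generated with gradient-based saliency methods such as Grad-CAM. The sketch below illustrates that general approach under stated assumptions: the ResNet-18 backbone, the hooked layer, and the three class labels are illustrative stand-ins, not the authors' published implementation.

```python
# Minimal Grad-CAM sketch (PyTorch). The backbone, hooked layer, and class
# labels are illustrative assumptions -- not the authors' published model.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

CLASSES = ["unbioturbated", "moderately bioturbated", "intensely bioturbated"]

# Hypothetical stand-in for the paper's pre-trained classifier.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))
model.eval()

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["maps"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["maps"] = grad_output[0].detach()

# Grad-CAM conventionally targets the final convolutional block.
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def grad_cam(image_path):
    """Return a [0, 1] heat map (224 x 224) for the predicted class."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    logits = model(x)
    cls = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, cls].backward()  # gradient of the predicted-class score
    # Weight each feature map by its spatially averaged gradient, sum, ReLU.
    weights = gradients["maps"].mean(dim=(2, 3), keepdim=True)  # (1, C, 1, 1)
    cam = F.relu((weights * activations["maps"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False).squeeze()
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    print("Predicted:", CLASSES[cls])
    return cam  # overlay on the core/outcrop photo to inspect highlighted areas

# Usage (hypothetical file name): heat = grad_cam("core_photo.jpg")
```

Overlaying the returned map on the input photograph is what allows an ichnologist to check whether the regions the model emphasized correspond to actual burrow fabrics.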
Data availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Acknowledgements
The authors would like to acknowledge the support received from Saudi Data and AI Authority (SDAIA) and King Fahd University of Petroleum and Minerals (KFUPM) under SDAIA-KFUPM Joint Research Center for Artificial Intelligence Grant no. JRC-AI-RG-03. We also thank the two anonymous reviewers whose comments helped to improve the quality of this manuscript.
Funding
This research was supported by the Saudi Data and AI Authority (SDAIA) and King Fahd University of Petroleum and Minerals (KFUPM) under the SDAIA-KFUPM Joint Research Center for Artificial Intelligence, Grant no. JRC-AI-RG-03.
Author information
Contributions
Korhan Ayranci: Conceptualization, Data curation, Formal analysis, Writing - original draft. Isa Yildirim: Methodology, Data curation, Formal analysis, Writing - review & editing. Umut Yildirim: Methodology, Data curation, Formal analysis, Writing - review & editing. Umair bin Waheed: Conceptualization, Data curation, Formal analysis, Writing - original draft. James A. MacEachern: Data curation, Formal analysis, Writing - original draft.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Ayranci, K., Yildirim, I.E., Yildirim, E.U. et al. Opening the black box: explainable AI for automated bioturbation analysis in cores and outcrops. Sci Rep (2026). https://doi.org/10.1038/s41598-026-40747-5


