Scientific Reports
Deep neural network-based coupling model of inter-organizational knowledge flow and agent collaborative decision-making
  • Article
  • Open access
  • Published: 02 February 2026

  • Menglin Li1,
  • Wenwen Yu2 &
  • Yiming Li1 

Scientific Reports (2026)

This is an unedited version of the manuscript, provided to give early access to its findings. The manuscript will undergo further editing before final publication; errors affecting the content may be present, and all legal disclaimers apply.

Subjects

  • Engineering
  • Mathematics and computing

Abstract

Inter-organizational knowledge flow and agent collaborative decision-making constitute mutually interdependent processes critical for organizational performance in complex environments. This study proposes a novel deep neural network-based framework that explicitly models the bidirectional coupling mechanism between knowledge propagation dynamics and multi-agent coordination. The architecture integrates graph attention networks for knowledge transfer modeling with multi-agent reinforcement learning for decision coordination, establishing coupling interfaces that enable dynamic adaptation between these subsystems. The model incorporates temporal decay mechanisms, attention-based knowledge path optimization, and closed-loop feedback that propagates decision outcomes back to reshape knowledge transfer patterns. Experimental validation on synthetic and real-world datasets demonstrates substantial performance improvements of 8–24% over state-of-the-art baselines across knowledge transfer accuracy, decision success rates, and coordination efficiency metrics. Deployment in a supply chain coordination scenario achieved 18.5% cost reduction, 71% stockout frequency decrease, and 42.7% inventory turnover improvement. The coupling quality correlation coefficient reached 0.812, confirming strong interdependencies between knowledge evolution and decision outcomes. This work advances theoretical understanding of organizational knowledge systems while providing practical tools for enhancing inter-organizational collaboration.
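The coupling mechanism the abstract describes — attention-weighted knowledge propagation over an organizational graph, temporal decay, and closed-loop feedback from decision outcomes back into the tie structure — can be illustrated in miniature. The toy NumPy sketch below is not the authors' PyTorch/GAT/MARL implementation; the function names, the decay rule, and the pairwise success signal are all assumptions made for illustration.

```python
import numpy as np

def propagate_knowledge(K, A, W, decay=0.9):
    """One illustrative step of attention-weighted knowledge flow.

    K     : (n, d) array, knowledge state of each organization
    A     : (n, n) weighted adjacency matrix of collaboration ties (0 = no tie)
    W     : (d, d) hypothetical attention projection matrix
    decay : temporal retention factor in (0, 1]
    """
    n = A.shape[0]
    A_hat = A + np.eye(n)                        # self-loops: each org keeps its own knowledge
    scores = (K @ W) @ K.T                       # pairwise knowledge compatibility
    scores = np.where(A_hat > 0, scores, -np.inf)
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn = e / e.sum(axis=1, keepdims=True)      # softmax over each org's neighbourhood
    # temporal decay: retain a fraction of the old state, absorb the rest from neighbours
    return decay * K + (1.0 - decay) * (attn @ K)

def feedback_update(A, outcomes, lr=0.1):
    """Closed-loop feedback: strengthen ties whose joint decisions succeeded.

    outcomes : (n, n) array of pairwise decision success in [0, 1] (hypothetical signal);
    ties with success above 0.5 are reinforced, ties below 0.5 are weakened.
    """
    return np.clip(A + lr * (outcomes - 0.5) * (A > 0), 0.0, 1.0)
```

Note the design choice the self-loop encodes: an organization with no collaboration ties attends only to itself, so its knowledge state is unchanged by propagation, while the feedback step only ever adjusts existing ties rather than creating new ones.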

Data availability

The synthetic experimental datasets, model implementation code, baseline implementations, training scripts, and evaluation procedures generated during the current study are provided in Supplementary File S1 to enable full replication of reported results. The supplementary materials include: (a) synthetic dataset generation scripts with configurable parameters for organizational network size and knowledge characteristics, (b) complete PyTorch implementation of the proposed coupling model following the ODD protocol description, (c) implementations of all five baseline methods with identical preprocessing pipelines, (d) hyperparameter configuration files and training logs, and (e) sensitivity analysis scripts and visualization code. Real-world organizational data from the supply chain case study are subject to confidentiality agreements with participating enterprises and cannot be made publicly available; however, aggregated statistical summaries and anonymized network structure characteristics are included in the supplementary materials to facilitate understanding of real-world application contexts.
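The actual generation scripts live in Supplementary File S1 and are not reproduced here; the shape of such a generator — a configurable organization count, tie density, and knowledge-vector dimensionality — might look roughly like the following. All parameter names are assumptions for illustration, not the authors' actual interface.

```python
import numpy as np

def generate_synthetic_network(n_orgs=20, tie_prob=0.15, knowledge_dim=16, seed=0):
    """Hypothetical generator for a synthetic inter-organizational network.

    Returns an undirected 0/1 adjacency matrix with no self-ties, plus a matrix
    of per-organization knowledge vectors drawn from a standard normal.
    """
    rng = np.random.default_rng(seed)
    upper = rng.random((n_orgs, n_orgs)) < tie_prob
    A = np.triu(upper, k=1)                        # sample each pair of orgs once
    A = (A | A.T).astype(float)                    # symmetrize: collaboration is mutual
    K = rng.normal(size=(n_orgs, knowledge_dim))   # initial knowledge endowments
    return A, K
```

Fixing the seed makes runs reproducible, which is the property the supplementary scripts would need for the replication the data availability statement promises.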

Abbreviations

DNN: Deep neural networks
GNN: Graph neural networks
MARL: Multi-agent reinforcement learning
CNN: Convolutional neural networks
RNN: Recurrent neural networks
API: Application programming interface
ROI: Return on investment
GPU: Graphics processing unit
CUDA: Compute unified device architecture


Funding

No funding was received for this research.

Author information

Authors and Affiliations

  1. Department of Business Administration, Woosong University, Daejeon, 34605, Korea

    Menglin Li & Yiming Li

  2. Economics and Commerce Studies of North East Asia, the Graduate School, Pai Chai University, Daejeon, 35345, Korea

    Wenwen Yu

Authors

  1. Menglin Li
  2. Wenwen Yu
  3. Yiming Li

Contributions

ML conceptualized the research framework, designed the multi-agent reinforcement learning methodology, developed the MADQN-PER algorithm, conducted the computational experiments, performed data analysis, and drafted the original manuscript. WY contributed to the enterprise collaborative network modeling, participated in algorithm implementation, assisted with experimental design and validation, and contributed to manuscript revision. YL supervised the overall research project, provided critical insights on the theoretical framework, guided the experimental design, secured computational resources, reviewed and edited the manuscript, and coordinated the research activities. All authors have read and approved the final manuscript.

Corresponding author

Correspondence to Yiming Li.

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethics approval and consent to participate

This study involves computational modeling and simulation using synthetic datasets and anonymized enterprise collaboration records. No human subjects research or personally identifiable information was collected as part of this study. The anonymized organizational data used in the real-world application case was obtained with appropriate institutional permissions and data use agreements. The research complies with relevant data protection regulations and ethical guidelines for computational research.

Consent for publication

All authors have reviewed the manuscript and consent to its publication. No identifiable information regarding individuals or organizations has been included.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Li, M., Yu, W. & Li, Y. Deep neural network-based coupling model of inter-organizational knowledge flow and agent collaborative decision-making. Sci Rep (2026). https://doi.org/10.1038/s41598-026-37838-8


  • Received: 14 November 2025

  • Accepted: 27 January 2026

  • Published: 02 February 2026

  • DOI: https://doi.org/10.1038/s41598-026-37838-8


Keywords

  • Inter-organizational knowledge flow
  • Agent collaborative decision-making
  • Deep neural networks
  • Coupling mechanism
  • Graph neural networks
  • Multi-agent systems
