Abstract
This study investigates tort liability arising from the application risks of generative artificial intelligence (AI) in the financial industry and circular economy (CE), offering targeted management recommendations. The study is based on survey data collected from 60 companies, analyzed using structural equation modeling. The study first examines the frequency of risk events and legal disputes across companies of varying sizes. It identifies key associations between risk factors and legal liability, including the mediating effects of organizational and contextual elements. The analysis reveals that large CE enterprises experience higher rates of data breaches and technical failures, while smaller financial firms report more frequent legal disputes and data leaks. Data leakage shows a strong correlation with legal liability (coefficient = 0.72, p < 0.001). Erroneous decisions and technical failures also influence liability, with coefficients of −0.36 (p = 0.012) and 0.45 (p = 0.005), respectively. Additionally, the characteristics of technology implementation, the legal environment, and enterprise management practices significantly mediate the relationship between risk and liability, with mediating coefficients of 0.25, 0.18, and 0.32 (all p < 0.05). The findings underscore a direct link between risk factors and legal liability. Moreover, factors related to the application of generative AI partially mediate this relationship, indicating a strong statistical correlation. These insights are critical for companies aiming to strengthen risk management, ensure regulatory compliance, and protect their legal and financial interests.
Introduction
In today’s world, the circular economy (CE) and the financial industry play crucial roles in promoting sustainable development and economic growth. The CE focuses on efficient resource use and recycling, aiming to reduce consumption, minimize environmental pollution, and decouple economic progress from resource depletion. Meanwhile, the financial industry serves as a backbone of economic activity, offering vital services such as financing, risk management, and investment consulting (Kharitonova et al. 2022). However, both sectors face increasing challenges driven by global economic shifts and rapid technological advancement.
The development of the CE is closely tied to global resource and environmental pressures. Accelerated resource depletion and worsening pollution highlight the urgent need for sustainable resource management. While many countries have made strides in advancing CE initiatives—such as improving waste management and promoting resource regeneration—progress remains uneven. Developing nations, in particular, struggle with inefficient resource use and limited management technologies, which hinder the adoption of CE practices. As global population and consumption levels continue to rise, these pressures demand more effective and scalable recycling systems.
In the financial sector, globalization and digitalization are accelerating, and so is the complexity of financial risk. The rise in cross-border transactions has increased market volatility, while new financial products and technologies have introduced additional instability. The rapid growth of digital finance and financial technology (FinTech) has improved access and convenience for users, yet it also raises serious concerns around data privacy, cybersecurity, and regulatory compliance. Inconsistent financial regulations across countries further complicate cross-border operations, making global financial markets more vulnerable to disruption. Currently, green transition policies continue to drive the development of the CE, with low-carbon city pilot programs playing a particularly influential role in reshaping urban industrial structures and entrepreneurial activities (Li et al. 2023a). At the same time, in resource-intensive industries, the integration of big data and clean energy is increasingly recognized as a key strategy for improving ecological efficiency, creating a strong foundation for the application of artificial intelligence (AI) technologies (Li et al. 2023b). Moreover, the growing impact of climate change on corporate ESG (Environmental, Social, and Governance) performance has compelled firms to rely more heavily on intelligent technologies to enhance environmental and governance outcomes (Li et al. 2024a). In the financial sector, the role of intellectual property pledge financing in promoting corporate innovation has attracted scholarly attention, highlighting the complex interplay between financial system design and firms’ technological capabilities (Li et al. 2024b).
Globally, the CE and financial industries are increasingly shaped by climate change, energy shortages, and market volatility. In the CE sector, addressing climate change requires more efficient resource recycling and greater technological innovation to improve resource use. In the financial sector, growing market uncertainty and the widespread adoption of FinTech highlight the need for stronger regulatory frameworks and more robust risk management systems. Understanding these global challenges is essential for assessing the application risks of generative AI in both fields.
Despite some progress in developed regions, the global development of the CE remains in its early stages. Resource inefficiency and environmental degradation persist (Hacker 2023), underlining the need for more effective circular systems (Buckley et al. 2021). Challenges such as weak industrial coordination and fragmented circular value chains continue to hinder broader implementation. Similarly, the financial industry faces significant obstacles amid ongoing globalization and rapid digitalization (Buckley et al. 2021). Advances in technology—particularly the rise of FinTech—are transforming financial services at an unprecedented pace (Correia Loureiro et al. 2021). Innovations such as virtual currencies, blockchain, and AI are expanding the sector’s capabilities but also introducing new risks. Key concerns now include globalized financial risk, data security, and inconsistencies in cross-border regulation, all of which threaten the stability and sustainable growth of the financial industry (Feldman 2022; Gipson Rankin 2020).
As a key subset of AI, generative AI has attracted growing attention in the CE and financial sectors. In the CE, it is being applied to resource recycling, process optimization, and waste management. Through advanced data analysis and intelligent optimization, generative AI enables more precise and efficient resource use, improving the overall performance of CE value chains (Cheng and Liu 2023; Nielsen 2023). In the financial industry, generative AI is widely used in risk assessment, investment analysis, and transaction monitoring (Tao 2022). Advances in machine learning (ML), natural language processing (NLP), and related technologies have enhanced data accuracy and decision-making, boosting the efficiency and quality of financial services. However, the rapid adoption of generative AI in both sectors has also introduced new risks. Concerns around data security, algorithmic bias, and legal liability are becoming increasingly prominent, highlighting the need for further research and stronger regulatory frameworks.
As generative AI becomes more widely adopted in the CE and financial sectors, both enterprises and government agencies face increasingly complex challenges. One major concern is data security. In the CE, large volumes of environmental and resource usage data are collected and analyzed. In the financial industry, user transaction and financial data are heavily relied upon. However, frequent incidents of data breaches, tampering, and unauthorized access threaten both enterprise and individual privacy. These security failures not only undermine the stability of financial systems but can also trigger public alarm. As a result, safeguarding data security has become an urgent priority. Another pressing issue is algorithmic bias. Biases embedded in algorithm design or training data can distort AI-driven decision-making. In the CE, such deviations may lead to inefficient resource allocation or disruptions in production processes. In finance, biased algorithms can result in flawed investment strategies and inaccurate risk assessments. Ensuring fairness and accuracy in AI models is therefore essential for their reliable application. Legal liability also poses a significant challenge. The rapid deployment of generative AI has outpaced the development of corresponding legal frameworks. As a result, accountability for data breaches or erroneous AI decisions often remains unclear. This legal ambiguity makes enforcement difficult and increases the risk of disputes. Both the CE and financial sectors urgently require clearer legal guidelines and a more robust regulatory system to define tort liability and ensure responsible AI use. In this study, “tort liability” is defined in accordance with the relevant provisions of China’s Civil Code as the civil liability for damages arising from the violation of statutory obligations. Specifically, it includes liability for negligence, product liability, or—in certain special cases—strict liability, stemming from incidents such as data breaches, algorithmic bias, or system failures. Unlike contractual liability, tort liability focuses on unlawful harm occurring outside contractual relationships. Given the unpredictability and limited controllability of generative AI technologies, clearly defining such legal responsibilities is of significant practical importance.
This study investigates the risks associated with applying generative AI technology in the CE and the financial industry, with a particular focus on tort liability. It offers an innovative perspective by integrating two critical domains—CE and finance—that are often studied separately. While most existing research examines generative AI within a single sector, this study addresses a key interdisciplinary gap by exploring how AI-related risks intersect across both fields. Specifically, the study analyzes risks such as data breaches, algorithmic bias, and legal liability, and explains how these risks give rise to tort liability in different contexts. Its unique contribution lies in identifying emerging legal challenges posed by generative AI and providing theoretical support for policymakers and enterprises in managing risks and ensuring compliance.
The study begins by outlining the current development and main challenges in the CE and financial sectors, with a focus on resource utilization and risk management. It then reviews the application of generative AI in both industries and highlights its role in improving efficiency and service quality. The study methodology is described in detail, followed by an analysis of how AI-related risks lead to tort liability. The study also examines relevant legal cases from China to illustrate key points. In doing so, the study highlights the novelty of combining CE and financial industry perspectives in the context of AI risk and tort liability. It offers important insights for advancing the sustainable and responsible development of both sectors. Theoretically, it contributes to a deeper understanding of how emerging technologies influence legal accountability, by analyzing both direct risk–liability relationships and the mediating factors that shape them.
Literature review
Previous research has extensively examined the relationships between the CE, the financial industry, and AI, yielding valuable insights (Keuper et al. 2024). In the CE field, Garcia and Piccinelli (2023) proposed that China’s CE policies positively influenced the growth of related industries. However, enforcement gaps and weak oversight mechanisms remained major challenges. In the financial sector, Buocz et al. (2023) noted that emerging technologies created new opportunities for innovation in financial services, yet unresolved issues—such as regulatory shortcomings and data security risks—persisted. Regarding AI, Huang et al. (2023) highlighted the successful use of deep learning (DL) in enterprise-level data analysis and risk management. Van Loo (2020) identified several risk factors in the CE, including information asymmetry, incomplete resource recovery, and immature technology, all of which increased operational risks along the supply chain. Similarly, studies on the financial industry emphasized risks related to market volatility, complex financial products, and lagging regulatory responses (Jacob Alhadeff 2024). Chen (2024) examined the digital transformation of financial markets, showing that digital finance improved customer satisfaction but raised serious concerns over data privacy and cybersecurity—especially for small and medium-sized institutions. Adegbola et al. (2024) further pointed out that while quantum computing could significantly enhance cryptocurrency security, it also challenges the adequacy of current financial regulatory systems.
Cheng et al. (2023) highlighted that the intersection of the CE and the financial industry introduces new risks, such as financing difficulties for CE projects and conflicts of interest among financial institutions involved in CE activities. In the realm of generative AI, Sabnam and Rajagopal (2024) investigated the use of generative adversarial networks (GANs), finding that they significantly improved marketing effectiveness and customer profiling. However, they also pointed to persistent issues related to algorithmic bias and ethical concerns. Fosso Wamba et al. (2024) examined the application of generative AI in supply chain management. They observed that while generative AI enhances supply chain resilience by simulating market fluctuations, improvements are still needed in data quality and algorithmic fairness.
The existing literature has explored the legal liabilities associated with the risks of generative AI technology across various cases, sectors, and countries (Tzimas 2021; Grajzl and Murrell 2022). For instance, Dornis (2019) conducted a case study examining the implications of generative AI in medical malpractice within the healthcare sector. He highlighted that, although the technology can support medical diagnoses, its inherent uncertainty and lack of transparency in decision-making may create ambiguities around the attribution of medical responsibility. In the financial sector, Krippner (2023) identified potential legal risks related to data processing and privacy protection in the use of generative AI, underscoring the need for more robust regulatory frameworks. Similarly, Lussier (2022) argued that applying generative AI in areas such as product design and marketing could lead to potential rights infringements, calling for enhanced legal safeguards and oversight.
Although some existing studies have addressed the risks associated with generative AI, there is still no consensus on the legal liability framework, particularly in terms of defining tort liability types. Under the current framework of China’s Civil Code, tort liability generally follows the principle of fault-based responsibility. However, in cases involving AI technologies, such as data breaches or system failures, the traditional assumption of fault faces significant challenges. Some scholars, such as Almada and Petit (2025), have proposed introducing a “strict liability” mechanism for high-risk AI systems. This approach would hold parties responsible for damages regardless of fault. This idea is also reflected in the draft European Union (EU) AI Liability Directive, which reduces the evidentiary burden for the injured party by establishing a presumptive causal link. However, domestic research has not yet explored this issue in depth, and there is a lack of a systematic analysis connecting fault-based liability, strict liability, and AI-specific risks.
Moreover, international research on AI legal liability has gradually shifted from a “causal attribution” approach to an “algorithmic governance framework.” Kouroutakis (2024) pointed out that algorithmic harm typically does not result from the actions of a single party but arises from a chain of processes, such as data biases and model training flaws. Traditional tort theory struggles to address the attribution of responsibility in such multi-causal situations. Therefore, there is a need to draw on the EU AI Act’s definition of “high-risk systems” and, in combination with the principle of technological neutrality, develop a more adaptable legal framework. Most existing studies still focus on sectors such as healthcare and autonomous driving, with limited attention to the cross-sector risks of generative AI in the CE and financial industries. Specifically, there has been little exploration of “multi-party responsibility” or “composite liability types.” This study aims to fill this gap by proposing an AI tort liability analysis framework tailored to China’s context, based on empirical evidence from domestic enterprises.
Although numerous studies examine the development of the CE, AI technology, and the financial industry, research exploring the interconnections among these fields is still limited. Existing studies also have notable gaps. First, the relationship between the CE and the financial industry, particularly regarding risk factors, has not been thoroughly explored. Second, while some progress has been made in understanding the legal liabilities associated with the application risks of generative AI technology, further investigation is needed—particularly through case analyses and comparisons across different sectors and countries. This study aims to bridge these gaps by synthesizing the current literature and incorporating new case studies and research methodologies. The goal is to provide more comprehensive theoretical support and empirical insights to inform policy-making and practical applications in these fields.
Application research of AI technology in tort law
Research framework and theoretical basis
The research framework is built on relevant theoretical foundations from the CE, financial industry, and generative AI technology. By integrating these three domains, the framework provides a strong foundation for analyzing the risks associated with generative AI applications (Delfino 2023). The structure of this framework is presented in Fig. 1.
In Fig. 1, the CE theory suggests that economic growth should be decoupled from resource consumption by optimizing resource use and promoting recycling. This framework includes key concepts such as resource recycling, ecological protection, and industrial chain optimization. It draws on important theories like life cycle analysis and industrial ecology, which are essential for understanding sustainable resource management. The theoretical foundation of the financial industry encompasses frameworks related to financial markets, products, and regulation. Key theories in this area include modern financial theory, financial market efficiency theory, and financial risk management theory. Together, these theories provide valuable insights into financial market operations and risk management complexities. In the field of AI technology, core theories include ML, DL, and NLP. These theories support AI’s capabilities in decision-making and data analysis. The rise of generative AI introduces a new theoretical foundation for simulating human creativity and cognitive processes. Resource recycling, a central concept in CE theory, emphasizes efficient resource use and reuse to maximize utility while minimizing environmental impact. Life cycle analysis is used to assess the environmental impacts of products or services across their entire lifecycle, helping optimize resource efficiency and reduce pollution.
CE theory also emphasizes ecological protection and industrial chain optimization. Ecological protection focuses on maintaining a balance between economic activities and ecosystems, promoting pollution reduction and biodiversity conservation. Industrial chain optimization aims to integrate and coordinate the entire chain to enhance resource recycling and reduce consumption during production.
The theoretical framework of the financial industry addresses the dynamics of financial markets, products, regulatory oversight, and related components. Modern financial theory focuses on the operations of financial institutions and markets, emphasizing the design and innovation of financial products. Financial market efficiency theory examines how effectively market prices reflect available information. Financial risk management theory explores the various risks faced by financial markets and institutions, offering strategies for risk handling and mitigation.
The theoretical foundations of AI technology include ML, DL, NLP, and other related theories. ML enables intelligent processing of complex problems by allowing computer systems to learn from data and improve over time. DL, a subset of ML, facilitates efficient processing of large datasets and the recognition of complex patterns, simulating the structure of the human brain’s neural networks. NLP focuses on enabling systems to understand and process natural language, which is key for human-computer interaction (HCI) and information retrieval.
Generative AI technology, a recent advancement, provides a new theoretical basis for simulating human creativity and cognitive processes. By mimicking human creative thinking and learning, generative AI can generate and process various forms of data, such as images, audio, and text. This technology opens new opportunities for enhancing HCI and fostering artistic creation across diverse fields.
Research object and data collection method
This study focuses on enterprises that apply generative AI technology in the CE and financial sectors. The selection criteria are based on several key factors: (1) Enterprise Size: The sample includes small, medium, and large enterprises to ensure diversity; (2) Industry Sector: The study covers both the CE and financial sectors to capture a range of application scenarios; (3) Application of AI Technology: Only enterprises with extensive integration of generative AI in their operations are selected, ensuring sufficient understanding and practical use cases; (4) History of Risk Events: Preference is given to enterprises that have encountered risk incidents or legal disputes during AI technology application, allowing for a thorough analysis of associated risks and legal liabilities. To enhance transparency and reproducibility, a multi-stage sampling process is used. Initially, enterprises from the CE and financial sectors are screened for representativeness. Then, 60 enterprises are randomly selected for in-depth study, ensuring balanced representation of industry sectors and enterprise sizes to minimize selection bias. Given the relatively small sample size, future research should expand the sample to improve the generalizability of the findings.
The selected enterprises reflect the diversity of industries, sizes, and application contexts, providing a comprehensive view of the status and risks of generative AI in these sectors. A total of 60 enterprises were included, and details of some of these are presented in Table 1.
Table 1 presents a range of enterprises from the CE and financial industries, differing in size and AI application scenarios. All enterprises utilize AI technology, but their associated risks and legal disputes vary. In the CE sector, Enterprises 1 and 3 focus on waste recycling, utilization, and sustainable production optimization, operating as medium and small enterprises, respectively. Despite their efforts, they face challenges such as data leakage, environmental pollution, and resource wastage, leading to legal liabilities related to personal information protection and environmental incident reporting. In the financial industry, Enterprises 2 and 4 are large and medium-sized organizations focused on risk assessment, management, and investment decision support. However, flaws in their AI algorithms have caused inaccurate financial risk assessments, resulting in investment losses and legal disputes, including cases of financial fraud and investor claims. Enterprise 5, a large CE organization, has implemented waste reuse technology but faces legal risks related to AI robotics accidents and failure to meet data protection requirements. This has led to labor injury lawsuits and breaches of sensitive customer information. In summary, enterprises of varying sizes and sectors encounter unique risks and legal disputes linked to AI technology. These challenges highlight the need for improved risk management and legal compliance to ensure secure AI deployment and minimize legal liabilities.
To collect information on risk events and legal disputes related to the use of generative AI technology in enterprises, this study uses a questionnaire. The questionnaire design is based on the risk factors and legal liabilities identified in the preliminary literature review and covers several key dimensions: company background, AI application, risk events, legal disputes, response measures, and legal awareness. Each theme contains both closed and open-ended questions to gather quantitative and qualitative data. For instance, in the “AI Technology Application Status” section, questions include “In which areas does your enterprise primarily utilize generative AI?” (closed) and “What challenges have you faced during implementation?” (open-ended). The questionnaire is distributed to the management personnel of each participating enterprise via online survey tools (e.g., Google Forms), with each organization selecting five management representatives to ensure a diverse and representative sample. Throughout the process, the study team communicates with the enterprises via email and phone to facilitate distribution and collection. A 100% response rate was achieved, yielding 300 valid responses. An overview of the questionnaire contents is provided in Table 2.
Table 2 is organized into six themes: company background, AI technology application, risk events and legal disputes, problems and challenges, response measures and experience sharing, and legal awareness and regulatory compliance. These themes provide critical insights into the challenges and legal liabilities faced by enterprises in applying AI technologies. Company Background: This section collects basic information about the participating companies, such as their name, industry, and size. It offers a clear understanding of the sample enterprises and serves as a foundation for the analysis.
AI Technology Application: Here, the questionnaire captures how companies use AI technologies in the CE or financial sectors. It includes details on AI application areas, technical approaches, and implementation status. This helps identify the extent and methods of AI integration across various enterprises. Risk Events and Legal Disputes: This section documents the legal challenges and risk events companies encounter when deploying AI technology, such as data breaches, algorithmic biases, and privacy violations. It also includes the legal disputes that follow. These cases provide valuable references for analyzing legal risks related to AI applications. Problems and Challenges: The questionnaire identifies the main obstacles companies face when implementing AI, including technical difficulties, data security issues, and talent shortages. This helps researchers understand the practical challenges businesses encounter and informs future AI development. Response Measures and Experience Sharing: This section records the strategies companies have used to address risk incidents and legal disputes, along with the outcomes. It also encourages experience sharing and recommendations, fostering collaboration and learning among enterprises facing similar challenges. Legal Awareness and Regulatory Compliance: This theme assesses companies’ understanding of their legal liabilities and compliance with relevant laws and regulations. It provides insight into how well companies manage legal risks and adhere to regulations, offering valuable information for industry oversight.
The questionnaire clearly distinguishes between different types of legal liabilities arising from the use of generative AI technology, including contractual liability, tort liability, or both. For example, under the “Risk Events and Legal Disputes” section, the question “What types of legal liabilities do you think your company may face when using generative AI technology?” is asked, with the following options: (1) Contractual Liability (liabilities arising from breaches in technology service agreements or platform usage agreements); (2) Tort Liability (liabilities from violations of legal obligations that harm third-party rights, such as privacy or property rights); (3) Both. The questionnaire also examines the need for collaboration between technology scholars and legal experts to address legal issues related to AI applications. These questions aim to clarify how enterprises perceive legal liabilities while providing insights on how interdisciplinary collaboration can address these challenges.
The reliability and validity of the questionnaire are assessed using the metrics in Eqs. (1) and (2) (Spector-Bagdady 2021). Reliability is measured with Cronbach’s alpha:

$$\alpha =\frac{k}{k-1}\left(1-\frac{{\sum }_{i=1}^{k}{\sigma }_{i}^{2}}{{\sigma }_{\text{total}}^{2}}\right)\qquad (1)$$

\(k\) refers to the number of questions; \({\sigma }_{i}^{2}\) denotes the variance of each question; \({\sigma }_{\text{total}}^{2}\) represents the total variance across all questions.

Validity is assessed with the Kaiser–Meyer–Olkin (KMO) Measure of Sampling Adequacy:

$$\text{KMO}=\frac{{\sum }_{i\ne j}{r}_{ij}^{2}}{{\sum }_{i\ne j}{r}_{ij}^{2}+{\sum }_{i\ne j}{a}_{ij}^{2}}\qquad (2)$$

where \({r}_{ij}\) represents the correlation coefficient between items and \({a}_{ij}\) denotes the corresponding partial correlation coefficient. Data analysis for this study is conducted using the Statistical Package for the Social Sciences (SPSS) 26.0, and the questionnaire’s reliability and validity are deemed satisfactory.
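For illustration, both metrics can be computed directly from the raw response matrix. The following Python sketch is a minimal implementation of Eqs. (1) and (2) using only NumPy; the simulated Likert-style responses are hypothetical, not the study’s survey data:

```python
import numpy as np

def cronbach_alpha(X):
    """Eq. (1): X is an (n_respondents, k_items) score matrix."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)        # sigma_i^2 for each question
    total_var = X.sum(axis=1).var(ddof=1)    # sigma_total^2 of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def kmo(X):
    """Eq. (2): squared correlations relative to squared correlations
    plus squared partial correlations (off-diagonal elements only)."""
    corr = np.corrcoef(X, rowvar=False)
    inv = np.linalg.inv(corr)
    scale = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / scale                   # partial correlation matrix
    np.fill_diagonal(corr, 0.0)
    np.fill_diagonal(partial, 0.0)
    return (corr**2).sum() / ((corr**2).sum() + (partial**2).sum())

rng = np.random.default_rng(0)
base = rng.normal(size=(300, 1))                      # one shared factor
items = base + rng.normal(scale=0.8, size=(300, 10))  # 10 correlated items
print(cronbach_alpha(items), kmo(items))              # both should be high here
```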
Given that the sample size in this study consists of 60 enterprises, and considering that traditional covariance-based structural equation modeling (CB-SEM) typically requires a sample size of at least 100–200, this study uses Partial Least Squares Structural Equation Modeling (PLS-SEM) to overcome the limitation of a small sample size. PLS-SEM is particularly suited for small samples, non-normally distributed data, and exploratory research.
To enhance the robustness of parameter estimates, the study employs the Bootstrap resampling method (with 5000 resamples) to estimate the confidence intervals of the path coefficients. The 95% confidence intervals for all major paths do not include zero, indicating statistical significance. This method strengthens the reliability of the model estimation and further supports the credibility of the conclusions.
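The resampling logic can be sketched as follows. This minimal Python example bootstraps a single one-predictor path on simulated scores; it illustrates the 5000-resample percentile procedure only, not the study’s full PLS-SEM estimation:

```python
import numpy as np

def bootstrap_path_ci(x, y, n_boot=5000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the path coefficient Cov(x, y)/Var(x),
    mirroring the study's 5000-resample setup."""
    rng = np.random.default_rng(seed)
    n = len(y)
    coefs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample respondents with replacement
        xb, yb = x[idx], y[idx]
        coefs[b] = np.cov(xb, yb, ddof=1)[0, 1] / np.var(xb, ddof=1)
    return np.quantile(coefs, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(0)
x = rng.normal(size=300)                     # hypothetical standardized risk scores
y = 0.72 * x + rng.normal(scale=0.6, size=300)
lo, hi = bootstrap_path_ci(x, y)
print(lo, hi)   # an interval excluding zero indicates a significant path
```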
Research design
The questionnaire data were analyzed using content analysis and comparative analysis methods (Ananny 2022; Izzidien 2023). These methods are illustrated in Fig. 2. Content analysis is a qualitative approach that systematically examines text, images, or other data forms to identify patterns, themes, and trends. In this study, content analysis was applied to the responses to the open-ended questions in the questionnaire. Through organization, categorization, and refinement, relevant themes and keywords were extracted, highlighting both commonalities and differences. Comparative analysis is a mixed-method approach that assesses similarities and differences across groups or conditions to reveal their characteristics and patterns. In this study, comparative analysis was used to evaluate how generative AI technology is applied across different enterprises in the financial and CE sectors, with a focus on variations in risk events and legal disputes. Figure 2 summarizes the data collected through the questionnaire, illustrating how these two methods were used to analyze the data and provide insights into the legal liabilities of generative AI technology in the CE and financial sectors.
Figure 2 presents the two primary research methods used in this study, Content Analysis and Comparative Analysis, outlining the specific steps and procedures for each.

The Content Analysis Method includes the following steps: (1) Data Preprocessing: raw data are cleaned and standardized to ensure consistency and accuracy, laying the groundwork for reliable analysis. (2) Establishing a Coding System: data segments are categorized using codes, which help convert complex responses into a structured format suitable for analysis. (3) Data Analysis: the coded data are examined to identify recurring themes, trends, and underlying patterns.

The Comparative Analysis Method combines qualitative and quantitative approaches to evaluate similarities and differences across groups. It consists of: (1) Selecting Comparison Objects: enterprises or groups are selected to serve as the basis for comparison. (2) Defining Comparative Indicators: based on the study’s objectives, key variables, such as types of technology used and frequency of risk events, are established for comparison. (3) Data Analysis: data are analyzed in accordance with these indicators to assess differences between groups.

In both methods, a final Result Interpretation step synthesizes the results to draw meaningful conclusions. Content analysis helps identify patterns and thematic insights from the data, while comparative analysis highlights differences in how generative AI is applied across enterprises in the CE and financial sectors.
Following the methodology shown in Fig. 2, an SEM is employed to further analyze the data. Risk factors identified through content and comparative analysis—such as data leaks, decision errors, and technical failures (Burdon et al. 2024)—serve as independent variables. The dependent variable is legal liability, encompassing both contractual and tort responsibilities. The model also incorporates mediating and moderating factors that may influence the relationship between risk and liability. These include enterprise governance mechanisms, the regulatory environment, and characteristics of AI implementation (Bedford et al. 2023; Brian Elzweig 2023). The SEM captures both direct relationships between risk factors and legal liability and potential indirect effects through these influencing factors. Table 3 summarizes the specific analytical paths used in the model.
According to Table 3, the analysis identifies two main types of relationships: direct and indirect paths linking application risks to legal liability. The first category, direct relationship paths, includes the effects of data leakages, erroneous decisions, and technical failures on legal liability. Data leakage represents a major security breach that can expose user data, violate privacy, and lead to legal consequences; it is therefore hypothesized to have a direct and statistically significant impact on legal liability. Similarly, erroneous decisions and technical failures can result in operational errors or harm to third parties, which also contribute directly to legal liability. The second category, indirect relationship paths, involves the mediating roles of three contextual factors: (1) Technology Application Characteristics: these include aspects such as security, reliability, advancement, and quality. Technologies with higher safety and stability can reduce exposure to risks like data breaches or system failures, thereby indirectly lowering legal liability. (2) Legal Environment: this encompasses the regulatory framework, enforcement rigor, and existing laws. A robust legal environment can both shape enterprise behavior and influence how risks translate into legal responsibilities. (3) Enterprise Management Mechanisms: these refer to internal structures, risk management systems, and governance practices that affect how decisions are made and how effectively risks are mitigated, thus influencing both risk occurrence and potential liability. Together, these paths and hypotheses, as outlined in Table 3, form a comprehensive analytical framework for examining how generative AI-related risks may lead to legal liabilities. This framework allows for a deeper understanding of the factors that contribute to legal exposure and informs the development of targeted risk management and compliance strategies.
The study’s research model is designed to systematically test these relationships and hypotheses, focusing on the connection between generative AI application risks and tort liability. Statistical methods are applied to validate the model, and the path coefficients in the SEM are calculated based on Eq. (3) (Jaksic et al. 2023):

$${\beta }_{i}=\frac{\text{Cov}({X}_{i},Y)}{\text{Var}({X}_{i})}\qquad (3)$$

\({X}_{i}\) and \(Y\) represent the independent and dependent variables; \(\text{Cov}\) and \(\text{Var}\) refer to covariance and variance, respectively.
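As a quick numerical illustration (with simulated data, not the study’s scores), the coefficient in Eq. (3) coincides with the slope of a simple regression of \(Y\) on \({X}_{i}\):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=300)                        # e.g., standardized risk scores
y = 0.7 * x + rng.normal(scale=0.5, size=300)   # e.g., liability scores
beta = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)  # Eq. (3)
slope = np.polyfit(x, y, 1)[0]                  # OLS slope for comparison
print(round(beta, 3), round(slope, 3))          # the two estimates coincide
```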
To ensure the reliability and validity of the measurement model in the SEM, this study systematically verifies the measurement of latent variables. The study design includes three core latent variables: (1) Generative AI-Related Risk Factors (comprising data leakage, erroneous decisions, and technical failures), (2) Influencing Factors (including technology application characteristics, legal environment, and enterprise governance mechanisms), and (3) Legal Liability.
Each latent variable is measured using multiple observed indicators derived from specific questionnaire items. For example: “AI-Related Risk Factors” are assessed through sub-items of Question 8 (Q8A–Q8C), corresponding to data leakage, erroneous decisions, and technical failures, respectively. “Legal Liability” is measured through Question 10 (Q10A–Q10C), which captures responses regarding different types of legal responsibility. “Influencing Factors” are evaluated using responses from Questions 12–14 (focused on technical challenges and governance issues) and Questions 17–19 (assessing legal awareness and compliance implementation).
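For readers wishing to reproduce this specification, the measurement and structural models can be expressed in lavaan-style syntax, for instance with the Python package semopy. The sketch below uses the item labels given above; the data file name and the exact assignment of items to constructs are illustrative assumptions, not the study’s actual dataset:

```python
import pandas as pd
from semopy import Model

model_desc = """
# Measurement model: latent constructs and their questionnaire indicators
RiskFactors =~ Q8A + Q8B + Q8C
LegalLiability =~ Q10A + Q10B + Q10C
Influencing =~ Q12 + Q13 + Q14 + Q17 + Q18 + Q19

# Structural model: direct effect plus mediation through influencing factors
Influencing ~ RiskFactors
LegalLiability ~ RiskFactors + Influencing
"""

data = pd.read_csv("survey_item_scores.csv")  # placeholder: one column per item
model = Model(model_desc)
model.fit(data)
print(model.inspect())                        # path estimates, SEs, p-values
```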
For reliability testing, Cronbach’s α was used to assess the internal consistency of each latent construct. All α values exceeded 0.70, indicating strong reliability. In addition, Composite Reliability (CR) and Average Variance Extracted (AVE) were calculated. All constructs showed CR values above 0.75 and AVE values above 0.50, meeting the recommended thresholds. The reliability and validity results are summarized in Table 4:
The results presented in Table 4 indicate that each latent variable demonstrates satisfactory measurement reliability and convergent validity, meeting the fundamental requirements for subsequent structural model analysis.
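For reference, CR and AVE are simple functions of the standardized factor loadings. The sketch below uses illustrative loading values, not the estimates underlying Table 4:

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum l)^2 / ((sum l)^2 + sum(1 - l^2)) for standardized loadings,
    where 1 - l^2 is each indicator's error variance."""
    lam = np.asarray(loadings)
    errors = 1 - lam**2
    return lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings)
    return (lam**2).mean()

# Hypothetical loadings for the three risk indicators
lam = [0.78, 0.81, 0.74]
print(composite_reliability(lam))        # ~0.82, above the 0.75 threshold
print(average_variance_extracted(lam))   # ~0.60, above the 0.50 threshold
```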
To systematically analyze the qualitative data obtained from the open-ended survey responses, this study employed content analysis to code and categorize participants’ answers into thematic groups. The analysis followed several key steps. First, among the 300 collected questionnaires, responses to the open-ended questions—primarily Questions 9 (specific risk events), 15 (effectiveness of response measures), and 16 (lessons learned)—were selected for analysis. Second, an open coding method was applied, with two researchers independently conducting preliminary coding of the responses. After reaching consensus on the initial codes, several thematic labels were identified, including: “technological instability,” “abuse of data access rights,” “unclear legal frameworks,” “insufficient employee training,” “successful emergency response,” and “recommendations for compliance strategy improvement.” Third, based on the frequency of themes and semantic clustering, six major coding categories were developed, as shown in Table 5.
Finally, the coded results were statistically analyzed for frequency and cross-referenced with the quantitative data from the questionnaire, enhancing the overall robustness and interpretability of the findings.
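The final frequency step can be illustrated with a minimal tally over the coded theme labels; the response lists shown here are hypothetical examples, not actual survey answers:

```python
from collections import Counter

# Hypothetical coded open-ended responses: one list of theme labels per answer
coded_answers = [
    ["technological instability", "insufficient employee training"],
    ["unclear legal frameworks"],
    ["abuse of data access rights", "unclear legal frameworks"],
    ["successful emergency response"],
]
freq = Counter(label for answer in coded_answers for label in answer)
print(freq.most_common())  # theme frequencies to cross-reference with survey data
```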
Analysis of the application results of AI technology in tort law
Correlation results analysis
Based on the collected enterprise data, Fig. 3 illustrates the annual occurrence of risk events and legal disputes across enterprises of varying sizes, showing the average values by enterprise scale. This visual representation helps clarify the relationship between enterprise size and the frequency of legal and risk-related incidents, thereby supporting the development of targeted risk management strategies tailored to different organizational contexts.
Figure 3 displays the frequency of various risk events and legal disputes reported by enterprises of different sizes within the CE and financial sectors. The data highlight distinct patterns. In the CE sector, large enterprises report higher instances of data leakages and technical failures, while medium-sized enterprises face more erroneous decisions. In contrast, in the financial sector, small enterprises exhibit higher frequencies of legal disputes and data breaches, while medium-sized enterprises experience more technical failures. Specifically, large CE enterprises report the highest number of data leakages, totaling 15 incidents. This can be attributed to the extensive customer base and volume of business data they manage, increasing their exposure to data breaches. Similarly, small financial enterprises experience a relatively high number of data leakages (8 incidents), reflecting limited resources and weaker data protection measures, which make them more vulnerable to cyber threats. Erroneous decision-making is most prevalent among medium-sized CE enterprises, with 8 recorded incidents, likely due to underdeveloped management and decision-making systems, making these firms more susceptible to misapplications of generative AI. In comparison, large enterprises report fewer erroneous decisions, potentially due to more robust decision-support frameworks and mature governance structures. Medium-sized financial enterprises report the highest number of technical failures (8 incidents), which may stem from limited resources and the complexity of their AI systems—greater than in small firms but not as well-supported as in large firms. Conversely, large CE enterprises and small financial firms report fewer technical failures (9 and 5 incidents, respectively), suggesting disparities in system stability and technological resilience across different enterprise sizes. Finally, legal disputes are most common among small financial enterprises, with 9 incidents reported, indicating possible deficiencies in legal awareness and compliance mechanisms in managing AI-related risks. In contrast, large CE enterprises report fewer legal disputes, reflecting stronger legal advisory systems and more effective risk management practices.
Analysis of the direct association path of each risk factor on legal liability
To assess the fit of the SEM, several standard fit indices were applied, including the chi-square statistic (χ²), Root Mean Square Error of Approximation (RMSEA), Comparative Fit Index (CFI), and Normed Fit Index (NFI). The results indicate a good model fit: the chi-square value divided by degrees of freedom is 1.89 (below the recommended threshold of 3), RMSEA is 0.045 (below 0.06), CFI is 0.95, and NFI is 0.92 (both exceeding the 0.90 benchmark). Collectively, these indices confirm that the SEM demonstrates a strong overall fit, effectively capturing the relationship between risk factors and legal liability. This validates the model’s suitability for accurately analyzing the association pathways between risk variables and legal responsibility.
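The cutoff logic can be condensed into a small helper; the thresholds are exactly those cited above, while the function itself is merely an illustrative summary:

```python
def sem_fit_acceptable(chi2_df_ratio, rmsea, cfi, nfi):
    """Return True if all four fit indices meet the cutoffs used in the text:
    chi-square/df < 3, RMSEA < 0.06, CFI > 0.90, NFI > 0.90."""
    return chi2_df_ratio < 3 and rmsea < 0.06 and cfi > 0.90 and nfi > 0.90

# Direct-effects model reported above
print(sem_fit_acceptable(1.89, 0.045, 0.95, 0.92))  # True
```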
The SEM results detailing the direct effects of individual risk factors on legal liability are presented in Table 6.
The results in Table 6 are plotted in Fig. 4.
The results in Table 6 and Fig. 4 reveal that data leakage has a significant positive association with tort liability, with an association coefficient of 0.72 (p < 0.001). This indicates that data leakage incidents notably increase the legal liability faced by enterprises. In contrast, erroneous decisions show a negative association with tort liability, with a coefficient of −0.36 (p = 0.012). This may be due to internal resolution practices that prevent such decisions from escalating to formal legal action. Additionally, technical failures are positively associated with tort liability, with a coefficient of 0.45 (p = 0.005), suggesting that these failures significantly contribute to legal risks for enterprises. These findings highlight that different risk factors are associated with tort liability to varying degrees. Therefore, it is essential for enterprises to closely monitor these risks and implement effective management and decision-making strategies to minimize legal exposure.
Although erroneous decisions theoretically could lead to legal liability, the path analysis in this study shows a significant negative impact on legal liability, with a coefficient of −0.36. This unexpected result may be explained by several factors: First, some enterprises have implemented strict internal control processes and compliance review mechanisms during decision-making. As a result, even if erroneous decisions occur, they can be detected and corrected internally before triggering external legal liability. Second, enterprises may have downplayed the impact of erroneous decisions in their survey responses due to concerns about image protection or compliance, leading to reporting bias. Finally, the issues arising from erroneous decisions may manifest more in decreased operational efficiency rather than directly causing legal disputes, which may explain why their “explicit triggering effect” on legal liability is weaker than that of data leakage or technical failures.
Analysis of the mediating role of influencing factors in the relationship between risk and legal liability
Before examining the mediating role of influencing factors in the relationship between risk and tort liability, an SEM is used to validate the mediation model. The model fit indices are as follows: the chi-square value/df is 2.01 (indicating a good fit, as it is below the threshold of 3); the RMSEA is 0.048 (below the 0.06 threshold for a good fit); the CFI is 0.94; and the NFI is 0.91. The results of the mediation analysis are provided in Table 7.
These mediation results are visualized in Fig. 5:
As shown in Table 7 and Fig. 5, factors such as technology application characteristics, the legal environment, and enterprise management mechanisms mediate the relationship between risk and tort liability. The mediating coefficients are 0.25, 0.18, and 0.32, respectively, all statistically significant (with p-values less than 0.05). These findings suggest that these factors partially mediate the connection between risk and legal liability, indicating their significant role in this relationship. Specifically, the three factors are crucial in shaping the strength of the association between risk and tort liability. Changes in these factors can alter the intensity of this connection. Therefore, when managing risks and mitigating legal liabilities, it is important to consider these factors to effectively address potential risk events and legal disputes. Additionally, bootstrap sampling results further confirm the robustness of the mediating relationships, as the 95% confidence intervals for all mediating paths do not include zero, indicating the statistical significance of the mediation effects. Through model validation and significance testing of the mediating relationships, it is evident that technology application characteristics, the legal environment, and enterprise management mechanisms play a critical mediating role between risk and tort liability. This provides a theoretical foundation for enterprises managing the application of generative AI technology, showing that improving technology application, legal compliance, and enterprise management can help reduce the legal risks associated with generative AI.
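The bootstrap test of an indirect effect can be sketched as follows. This simplified Python example handles a single risk → mediator → liability chain with simulated data, whereas the study estimates all three mediators jointly within the full SEM:

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=5000, seed=2):
    """Percentile bootstrap CI for the indirect effect a*b, where a is the
    x -> m path and b is the m -> y path controlling for x."""
    rng = np.random.default_rng(seed)
    n = len(x)
    ab = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)            # resample with replacement
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.cov(xs, ms, ddof=1)[0, 1] / np.var(xs, ddof=1)  # x -> m
        design = np.column_stack([np.ones(n), ms, xs])
        coef, *_ = np.linalg.lstsq(design, ys, rcond=None)     # y on m and x
        ab[i] = a * coef[1]                                    # a * b
    return np.quantile(ab, [0.025, 0.975])

rng = np.random.default_rng(0)
x = rng.normal(size=300)
m = 0.5 * x + rng.normal(scale=0.8, size=300)
y = 0.4 * m + 0.3 * x + rng.normal(scale=0.7, size=300)
print(bootstrap_indirect_effect(x, m, y))  # CI excluding zero: significant mediation
```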
From a practical perspective, the role of these mediating variables suggests that enterprises can indirectly reduce legal risks by improving institutional and environmental factors. For example, for small and medium-sized financial enterprises, enhancing the soundness and transparency of the legal environment would significantly alleviate institutional pressures when facing tort liability. Specific measures could include improving client privacy protection agreements, introducing compliance risk control software, and strengthening employee training on laws such as the Personal Information Protection Law and the Cybersecurity Law. Similarly, increasing the transparency and auditability of technology applications can help avoid legal disputes arising from algorithmic “black boxes” or errors in automated decision-making. Regarding enterprise management mechanisms, it is recommended to strengthen internal accountability systems, establish AI-related “risk warning teams,” and improve the identification, response, and correction of risk events.
To further enhance the statistical robustness of the path analysis results, this study employed the Bootstrap resampling method (with 5000 resamples) to estimate the confidence intervals for the path coefficients in the structural equation model. The specific results are presented in Table 8:
As shown in Table 8, the 95% confidence intervals for all paths do not include zero, indicating that the relationships are statistically significant. This not only validates the reliability of the data analysis results but also enhances the interpretability of the study despite the relatively limited sample size.
Discussion
The results of this study reveal that in the CE sector, large enterprises are more vulnerable to data leakage and technical failures, while medium-sized enterprises experience a higher frequency of erroneous decision-making. In the financial sector, small enterprises are particularly prone to data leakage and legal disputes, whereas medium-sized enterprises face increased risks of technical failures. These findings highlight the varying degrees of risk exposure and legal disputes associated with the use of generative AI technology across different industries and enterprise sizes. They suggest that businesses must implement targeted risk management strategies to address their unique challenges. For large enterprises in the CE sector, data leakage is a significant risk. To mitigate this, these enterprises should enhance their information security management by adopting advanced encryption technologies and multi-layered access control systems to reduce the likelihood of breaches. Additionally, it is crucial to invest in data security training to ensure employees understand how to protect sensitive information and effectively respond to data breach incidents. To address technical failures, establishing emergency response teams that can swiftly resolve issues will minimize disruptions to operations. Medium-sized enterprises in the CE sector can reduce decision-making errors by utilizing AI-driven decision support systems that perform scenario analysis, helping to mitigate the biases inherent in human judgment. Strengthening inter-departmental collaboration and instituting regular review processes can also enhance the rigor and consistency of decision-making. In the financial sector, data leakage and legal disputes are particularly prevalent among small enterprises. Given their limited resources, small enterprises may benefit from consulting external information security experts to ensure compliance with security standards. Furthermore, increasing legal awareness around data usage and management is essential to prevent legal issues arising from non-compliance. Medium-sized financial enterprises are particularly susceptible to technical failures when deploying generative AI. To address this, these enterprises should conduct regular system maintenance, implement updates, and develop robust fault-tolerance mechanisms to ensure quick recovery and business continuity in the event of technical issues.
Further analysis in this study reveals that the characteristics of technology application, the legal environment, and enterprise management mechanisms play key mediating roles in the relationship between risk and legal liability. Firstly, the characteristics of technology application are statistically linked to the performance of generative AI technology within enterprises, which in turn influences the legal liabilities they face. The maturity and applicability of the technology determine system stability and security during operations. By selecting validated technological solutions and conducting regular security tests and maintenance, enterprises can significantly reduce the legal liability risks arising from technical failures. Additionally, maintaining strong partnerships with technology providers ensures continuous updates to technology, keeping pace with the evolving risk landscape. The legal environment also plays a significant mediating role in the relationship between risk and legal liability. Legal regulations regarding generative AI vary widely across countries and regions, particularly concerning data protection and privacy, and these differences significantly impact enterprises operating internationally. Enterprises should continuously monitor legal regulations, particularly those related to data privacy and AI ethics, to ensure compliance with the latest standards. Hiring legal consultants and conducting regular legal assessments can further mitigate the risks associated with non-compliance. Lastly, the enterprise management mechanism is another critical mediating factor. The effectiveness of an enterprise’s management system determines how well it can respond to risks related to generative AI technology. Establishing strong internal management frameworks, such as internal audits and risk management systems, helps enterprises quickly identify potential issues and implement corrective actions. Clear definitions of risk management responsibilities are essential within the organization, ensuring that every employee understands their duties and the legal liabilities they may incur during technology implementation. This proactive approach helps reduce the likelihood of risk exposure.
Although this study, through questionnaire surveys and structural equation modeling, has preliminarily revealed the relationship between risks and legal liability in the application of generative AI technology, many issues remain to be explored in greater depth given the rapidly evolving technological and legal landscape. Future research can be expanded in the following areas: First, longitudinal data or field research methods could be employed to track the evolution of risks and shifts in liability distribution throughout the entire AI system deployment process, providing insights into dynamic causal chains. Second, it is necessary to incorporate more dimensions of legal liability into the analytical framework—for instance, distinguishing between fault-based liability, product liability, and data compliance obligations—to better reflect actual legal cases and judicial reasoning. Third, since the current model is primarily based on data from Chinese enterprises, future studies could broaden the scope to include multinational samples, examining the mechanisms of risk governance for generative AI across different legal jurisdictions and identifying their similarities and differences. Moreover, future research could adopt interdisciplinary perspectives from law, ethics, and technology governance to construct a more comprehensive liability assessment model, thereby offering more practical risk control and governance recommendations for policymakers and enterprises. Overall, this study represents an initial step in the exploration of legal risks associated with generative AI and lays a theoretical and empirical foundation for subsequent systematic research, holding substantial academic value and practical significance.
Conclusion
This study employs a survey-based approach combined with SEM to investigate the tort liability arising from the application risks of generative AI technology in the CE and financial sectors. The findings reveal significant differences in risk events and legal disputes across enterprises of different sizes, with individual risk factors exhibiting differing degrees of association with legal liability. In addition, technology application characteristics, the legal environment, and enterprise management mechanisms play critical mediating roles in the relationship between risk and legal liability. These insights not only identify the key risks enterprises face when applying generative AI technology but also offer theoretical foundations and practical guidance for managing and mitigating those risks.

The study makes three contributions. First, it is the first to integrate the CE and financial sectors in analyzing the relationship between generative AI application risks and tort liability; this interdisciplinary approach addresses gaps in the existing literature on cross-industry risk management and provides new perspectives for evaluating generative AI risks across domains. Second, by employing SEM, it examines both the direct and indirect statistical relationships between risk factors and legal liability, supplying empirical evidence to support enterprise-level legal risk management. Third, it clarifies the mediating roles of technology application characteristics, the legal environment, and enterprise management mechanisms in generative AI applications, offering practical references for developing legally compliant management strategies.

Despite these contributions, the study has certain limitations. First, the sample is relatively small, comprising only 60 enterprises, and may not fully capture the diversity and complexity of the broader industry. Second, sample selection bias may exist: data were collected primarily from enterprises with experience in generative AI applications, which could skew the sample toward more technologically mature organizations and overlook those that have yet to adopt such technologies widely. Future research should increase the sample size and include a broader range of enterprise types and sizes to improve generalizability and reduce sampling bias. Moreover, the primary data collection method was a survey; although the response rate was high, the depth of qualitative insight is limited. Future research could incorporate case studies or fieldwork to validate and enrich the findings, enhancing both applicability and reliability.

Several directions are recommended for further exploration. First, future studies could examine differences in tort liability related to generative AI application risks across industries and enterprise sizes, clarifying the contextual risk characteristics of generative AI applications. Second, additional variables, such as the level of technological innovation, management standards, and organizational culture, could be introduced to provide a more comprehensive analysis of the factors shaping the relationship between risk and legal liability. Furthermore, qualitative approaches such as case studies or field investigations could reveal specific legal liability challenges in real-world scenarios, strengthening the empirical foundation of this study and enabling more targeted risk management recommendations. Lastly, given the rapid evolution of generative AI technology, future studies should examine how ongoing technological advances reshape legal risks and governance strategies.

The academic contribution of this study lies in its empirical analysis of the application risks and legal liabilities of generative AI in the CE and financial sectors, addressing a key research gap in this emerging field. For enterprises, the study offers actionable recommendations for risk management, helping them navigate tort liability, protect corporate interests, and improve risk response and compliance capabilities. For government agencies, the findings provide useful references for developing and enforcing relevant laws and policies, contributing to healthy industry development and the maintenance of market order. Overall, the study holds significant practical relevance and offers value for wider application.
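The indirect (mediated) effects discussed in the conclusion are conventionally quantified as the product of the risk-to-mediator and mediator-to-liability path coefficients, with confidence intervals obtained by bootstrapping. The sketch below illustrates that generic product-of-paths logic using the same hypothetical semopy setup as the earlier sketch; the variable names, data file, and model are illustrative assumptions, not the authors' published analysis.

```python
# Hedged sketch: nonparametric bootstrap of an indirect (mediated) effect.
# Variable names and the CSV file are hypothetical placeholders; this shows
# the generic a*b product-of-paths logic, not the study's actual code.
import numpy as np
import pandas as pd
from semopy import Model

desc = """
MgmtPractice ~ DataLeakage
LegalLiability ~ DataLeakage + MgmtPractice
"""

def indirect_effect(df: pd.DataFrame) -> float:
    """Fit the model and return the a*b indirect-effect estimate."""
    m = Model(desc)
    m.fit(df)
    est = m.inspect().set_index(["lval", "op", "rval"])["Estimate"]
    a = est[("MgmtPractice", "~", "DataLeakage")]     # risk -> mediator
    b = est[("LegalLiability", "~", "MgmtPractice")]  # mediator -> liability
    return a * b

df = pd.read_csv("survey_responses.csv")  # hypothetical data file
rng = np.random.default_rng(0)
boot = [indirect_effect(df.sample(frac=1.0, replace=True, random_state=int(s)))
        for s in rng.integers(0, 2**31 - 1, size=500)]
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"bootstrap 95% CI for the indirect effect: [{ci_low:.3f}, {ci_high:.3f}]")
```

A percentile bootstrap is shown for simplicity; bias-corrected intervals are a common alternative when the bootstrap distribution of the indirect effect is skewed.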
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
References
Adegbola MD, Adegbola AE, Amajuoyi P et al. (2024) Quantum computing and financial risk management: a theoretical review and implications. Comput Sci IT Res J 5:1210–1220. https://doi.org/10.51594/csitrj.v5i6.1194
Almada M, Petit N (2025) The EU AI Act: Between the rock of product safety and the hard place of fundamental rights. Common Market Law Rev, 85–120. https://doi.org/10.54648/cola2025004
Ananny M (2022) Seeing like an algorithmic error: what are algorithmic mistakes, why do they matter, how might they be public problems? Available: https://law.yale.edu/isp/publications/digital-public-sphere/healthy-digital-public-sphere/seeing-algorithmic-error-what-are-algorithmic-mistakes-why-do-they-matter-how-might-they-be-public
Bedford N, Bonython W, Taylor A (2023) Law as it is, and how it could be: law reform participation as authentic assessment and a pedagogical tool. Law Teacher, 58–73. https://doi.org/10.1080/03069400.2023.2288415
Elzweig B, Trautman LJ (2023) When does a Non-Fungible Token (NFT) become a security? Available: https://readingroom.law.gsu.edu/gsulr/vol39/iss2/8
Buckley RP, Zetzsche DA, Arner DW et al. (2021) Regulating artificial intelligence in finance: putting the human in the loop. Syd Law Rev 43:43–81. https://doi.org/10.3316/informit.676004215873948
Buocz T, Pfotenhauer S, Eisenberger I (2023) Regulatory sandboxes in the AI Act: reconciling innovation and safety? Law Innov Technol 15:1–33. https://doi.org/10.1080/17579961.2023.2245678
Burdon M, Cohen T, Buckley J et al. (2024) From object obfuscation to contextually-dependent identification: enhancing automated privacy protection in street-level image platforms (SLIPs). Inf Commun Technol Law, 1–24. https://doi.org/10.1080/13600834.2024.2321052
Chen J (2024) Fintech: digital transformation of financial services and financial regulation. Highlights Bus Econ Manag 30:38–45. https://doi.org/10.54097/512jkg86
Cheng L, Liu X (2023) From principles to practices: the intertextual interaction between AI ethical and legal discourses. Int J Leg Discourse 8:31–52. https://doi.org/10.1515/ijld-2023-2001
Cheng L, Xu M, Chang C-Y (2023) Exploring network content ecosystem evaluation model based on Chinese judicial discourse of digital platform. Int J Leg Discourse 8:199–224. https://doi.org/10.1515/ijld-2023-2010
Correia Loureiro SM, Guerreiro J, Tussyadiah I (2021) Artificial intelligence in business: State of the art and future research agenda. J Bus Res 129:911–926. https://doi.org/10.1016/j.jbusres.2020.11.001
Delfino RA (2023) Deepfakes on trial: a call to expand the trial judge’s gatekeeping role to protect legal proceedings from technological fakery. Hastings Law J 74:293–348. https://doi.org/10.2139/ssrn.4032094
Dornis T (2019) Artificial creativity: emergent works and the void in current IP doctrine. SSRN Electron J 22:1. https://doi.org/10.2139/ssrn.3451480
Feldman R, Stein K (2022) AI governance in the financial industry. Available: https://law.stanford.edu/publications/ai-governance-in-the-financial-industry/
Fosso Wamba S, Guthrie C, Queiroz MM et al. (2024) ChatGPT and generative artificial intelligence: an exploratory study of key benefits and challenges in operations and supply chain management. Int J Prod Res 62:5676–5696. https://doi.org/10.1080/00207543.2023.2294116
Garcia EV, Piccinelli M (2023) Preparing for the artificial intelligence revolution in nuclear cardiology. Nucl Med Mol Imaging 57:51–60. https://doi.org/10.1007/s13139-021-00733-3
Gipson Rankin S (2020) Technological tethereds: potential impact of untrustworthy artificial intelligence in criminal justice risk assessment instruments. SSRN Electron J 78:647. https://doi.org/10.2139/ssrn.3662761
Grajzl P, Murrell P (2022) Using topic-modeling in legal history, with an application to pre-industrial english case law on finance. Law Hist Rev 40:189–228. https://doi.org/10.1017/s0738248022000153
Hacker P (2023) The European AI liability directives - critique of a half-hearted approach and lessons for the future. Comput Law Security Rev 51:105871. https://doi.org/10.1016/j.clsr.2023.105871
Huang H, Mbanyele W, Wang F et al. (2023) Nudging corporate environmental responsibility through green finance? J Bus Res 167:114147. https://doi.org/10.1016/j.jbusres.2023.114147
Izzidien A (2023) Using the interest theory of rights and Hohfeldian taxonomy to address a gap in machine learning methods for legal document analysis. Humanit Soc Sci Commun 10:1–15. https://doi.org/10.1057/s41599-023-01693-z
Alhadeff J, Del Real M et al. (2024) Limits of algorithmic fair use. Available: https://digitalcommons.law.uw.edu/wjlta/vol19/iss1/1
Jaksic Z, Devi S, Jaksic O et al. (2023) Civil liability in the development and application of artificial intelligence and robotic systems: basic approaches. Biomimetics 8:278. https://doi.org/10.3390/biomimetics8030278
Keuper K, Bartek J, Maya-Mendoza A (2024) The nexus of nuclear envelope dynamics, circular economy and cancer cell pathophysiology. Eur J Cell Biol 103:151394–151394. https://doi.org/10.1016/j.ejcb.2024.151394
Kharitonova YS, Savina VS, Pagnini F (2022) Civil liability in the development and application of artificial intelligence and robotic systems: basic approaches. Вестник Пермского университета Юридические науки 58:683–708. https://doi.org/10.17072/1995-4190-2022-58-683-708
Kouroutakis A (2024) Rule of law in the AI era: addressing accountability, and the digital divide. Discov Artif Intell 4:115. https://doi.org/10.1007/s44163-024-00191-8
Krippner GR (2023) Unmasked: a history of the individualization of risk. Sociological Theory 41:83–104. https://doi.org/10.1177/07352751231169012
Li C, Liang F, Liang Y et al. (2023a) Low-carbon strategy, entrepreneurial activity, and industrial structure change: evidence from a quasi-natural experiment. J Clean Prod 427:139183. https://doi.org/10.1016/j.jclepro.2023.139183
Li C, Tang W, Liang F et al. (2024a) The impact of climate change on corporate ESG performance: the role of resource misallocation in enterprises. J Clean Prod 445:141263. https://doi.org/10.1016/j.jclepro.2024.141263
Li DD, Guan X, Tang TT et al. (2023b) The clean energy development path and sustainable development of the ecological environment driven by big data for mining projects. J Environ Manag 348:119426. https://doi.org/10.1016/j.jenvman.2023.119426
Li Y, Zhang Y, Hu J et al. (2024b) Insight into the nexus between intellectual property pledge financing and enterprise innovation: a systematic analysis with multidimensional perspectives. Int Rev Econ Financ 93:700–719. https://doi.org/10.1016/j.iref.2024.03.050
Lussier N (2022) Nonconsensual deepfakes: detecting and regulating this rising threat to privacy. Available: https://digitalcommons.law.uidaho.edu/idaho-law-review/vol58/iss2/6
Nielsen C (2023) How regulation affects business model innovation. J Bus Models 11:105–116. https://doi.org/10.54337/jbm.v11i3.8127
Sabnam S, Rajagopal S (2024) Application of generative adversarial networks in image, face reconstruction and medical imaging: challenges and the current progress. Comput Methods Biomech Biomed Eng 12:2330524. https://doi.org/10.1080/21681163.2024.2330524
Spector-Bagdady K (2021) Governing secondary research use of health data and specimens: the inequitable distribution of regulatory burden between federally funded and industry research. J Law Biosci 8:1–39. https://doi.org/10.1093/jlb/lsab008
Tao F (2022) A new harmonisation of art and technology: Philosophic interpretations of artificial intelligence art. Crit Arts South North Cultural Media Stud 36:110–125. https://doi.org/10.1080/02560046.2022.2112725
Tzimas T (2021) AI, issues of ownership, liability and the role of international law. Law Govern Technol Ser, 199–226. https://doi.org/10.1007/978-3-030-78585-7_9
Van Loo R (2020) The revival of respondeat superior and evolution of gatekeeper liability. Georget Law J 109:141–189. https://doi.org/10.3316/agispt.20201112039559
Author information
Contributions
QC: Writing – review & editing, Conceptualization, Data curation, Formal analysis. XH: Writing – original draft, Methodology, Project administration, Resources, Validation. All authors contributed to the work and approved the submitted version.
Ethics declarations
Competing interests
The authors declare no competing interests.
Ethics approval
This study was approved by the Academic Ethics Review Committee of East China Normal University, China, on March 15, 2024. The study did not involve animal experiments or human clinical trials.
Informed consent
In accordance with the ethical principles of the Declaration of Helsinki, informed consent was obtained from all participants prior to their involvement in the study on March 20, 2024. Participants' anonymity and confidentiality were guaranteed, and participation was completely voluntary. Before completing the questionnaire, participants were informed of the research purpose and advised that submitting their answers constituted informed consent; they could withdraw at any point while completing the questionnaire. No vulnerable individuals or minors were involved in the study.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Chen, Q., Hu, X. Tort liability for the application risk of generative artificial intelligence technology in the circular economy and financial industry: evidence from China. Humanit Soc Sci Commun 12, 1042 (2025). https://doi.org/10.1057/s41599-025-05419-1