Introduction

As artificial intelligence (AI) becomes increasingly integrated into various sectors of society, AI literacy has emerged as a critical skill for citizens to effectively engage in creative and professional work1. However, alongside the widespread adoption of AI technologies, several ethical concerns have arisen. Issues such as privacy breaches, data security risks, and the widening digital divide pose significant challenges to the effective and equitable implementation of AI2,3,4. Consequently, AI ethics has emerged as a central research area across sectors, particularly in addressing these challenges.

AI ethics is an interdisciplinary field of research and practice that examines the impacts of artificial intelligence on individuals, society, and institutions, and guides its design, development, and use through ethics principles to ensure legality, ethical compliance, and human well-being. Current research on AI ethics primarily revolves around three key areas: (1) principles that guide AI systems in processes such as data analysis, learning, and self-adjustment to complete tasks; (2) ethical guidelines for the development of AI technologies; (3) ethical responsibilities associated with the use and deployment of AI. First, AI ethics focuses on accountability, human agency, fairness, and social impact, ensuring that AI systems adhere to these principles during data analysis, learning, and self-adjustment processes5. Second, AI ethics involves guiding the development of AI technologies to ensure ethical interactions with both humans and other AIs, as well as ethical operations within society6. Third, the responsible use of AI emphasizes that entities utilizing these technologies must be accountable for their outcomes and ensure adherence to ethical principles throughout the development and deployment stages7,8. Despite diverse definitions and perspectives on AI ethics, all share a common objective: mitigating risks associated with AI use and ensuring its healthy development and appropriate integration into society9.

In the age of intelligence, ensuring the responsible use of AI technologies has become a critical priority for integrating AI into society. This urgency highlights the need for research and practice in the domain of AI ethics literacy, which enables individuals to effectively navigate the ethical challenges associated with AI technologies10. Meanwhile, learners’ ability to actively set goals, employ appropriate strategies, and monitor and regulate their cognition, motivation, and emotions so as to manage and optimize their learning toward desired outcomes has been studied in relation to their AI literacy. However, research on the relationship between AI ethics literacy and self-rated learning competence using AI remains scarce. Therefore, this study aims to construct a framework for AI ethics literacy, explore its constituent components, and examine whether AI ethics literacy can enhance students’ self-rated learning competence using AI.

The study addresses the following questions: (1) What are the constituent components of AI ethics literacy? (2) How do these constituent components relate to each other? and (3) Does AI ethics literacy enhance students’ self-rated learning competence using AI?

Literature review and hypotheses development

In this section, we review the relevant literature, summarize the previous research findings, and extract the theoretical foundation. Additionally, based on the analysis and synthesis of the literature, we propose a series of hypotheses and establish a theoretical model to guide subsequent research design and data analysis.

AI ethics literacy framework

UNESCO defined literacy as the ability to read and write. However, literacy has evolved into a multidimensional concept encompassing the ability to acquire, comprehend, process, and apply information11. In studies on digital literacy, knowledge, attitude, and competence were considered three key dimensions for enhancing digital literacy12,13. Similarly, in research on AI literacy, Ng et al.14 and Lee7 identified three core dimensions: knowing, using, and critically considering AI. Building upon these frameworks, we propose that AI ethics literacy can be categorized into three dimensions: AI ethics knowledge, AI ethics attitude, and AI ethics competence. The categorization reflects the stages of enhancing literacy and provides a structured approach to explore AI ethics literacy.

AI ethics knowledge

AI ethics knowledge (AIEK) refers to the comprehension and consideration of ethical principles and issues surrounding the development, deployment, and use of AI technologies. It encompasses awareness of key ethical principles such as privacy, fairness, accountability, and the broader social implications of AI systems15. In the educational context, AIEK denotes the comprehension and application of ethical principles in the use of AI-based educational tools during teaching and learning processes. It ensures that AI systems designed for educational purposes promote fairness, accountability, and respect for students’ privacy and rights16. Based on the existing literature surrounding AI ethics, this study preliminarily categorizes the principles of AI ethics into four core dimensions: responsibility and accountability, fairness and inclusivity, human-centricity, and privacy protection. Together, these dimensions constitute the foundational structure of AIEK.

AI ethics attitude

For students, AI ethics attitude (AIEA) refers to the recognition of both the positive and negative impacts of AI and the capacity to make informed and responsible decisions within human-AI interactions. AIEA involves confronting ethical risks with awareness, engaging proactively with AI technologies, and adhering to established ethical principles17,18,19. According to Ghotbi et al., cultural background and cognitive development play a more significant role in shaping these attitudes than individual viewpoints20. Li et al. further highlights the influence of social environment and public awareness of AI ethics on students’ attitudes21. A comprehensive educational framework is therefore essential to systematically develop students’ ethical awareness and judgment.

AI ethics competence

Ethical competence is defined as an individual’s capacity to identify, apply, and act in accordance with ethical principles in practical contexts, representing a fundamental component of moral agency22,23. Yang et al. defined AI ethics competence (AIEC) as the reasoned and consistent application of AI technologies across daily activities—including education, work, and personal life—while remaining attentive to ethical concerns and skillfully managing human-AI interactions17. As AI becomes increasingly pervasive in educational systems, fostering AIEC has become essential to promoting meaningful and effective AI learning experiences24,25.

Hypothesis formulation and modeling

This study investigates the constituent components of AI ethics literacy, their interrelationships, and the impact of AI ethics literacy on students’ self-rated learning competence using AI. Drawing on existing literacy models in the literature, the preliminary model is determined to consist of three dimensions: knowledge, attitude, and competence. Subsequently, this study examines the four preset ethical principles constituting AIEK, explores the relationships between students’ knowledge, attitude, and competence regarding AI ethics, and evaluates how these factors contribute to enhancing their self-rated learning competence using AI (SRLC).

In this model, students’ AIEK is measured by their level of understanding of AI ethics principles. After reviewing and organizing the literature related to AI ethics principles, we summarized the key content into the following four principles: responsibility and accountability (RA), fairness and inclusivity (FI), human-centricity (HC), and privacy protection (PP). We then conceptualize AIEK as a higher-order “reflective-formative” structure, with these four principles hypothesized as lower-order reflective constructs (Table 1)26. Rather than specifying relationships between multiple independent and dependent constructs in a path model, the higher-order formative model reduces the number of path model relationships, mitigates collinearity, simplifies interpretation, and enhances the model’s reliability and validity27,28,29. Higher-order constructs also help to overcome the bandwidth-fidelity dilemma, according to which there is a tradeoff “between variety of information and thoroughness of testing to obtain more certain information”. Each lower-order construct is defined by distinct measurement items, reflecting specific conceptual meanings.

Knowledge plays a fundamental role in shaping attitudes30,31. Well-defined knowledge of AI ethics fosters objective and positive attitudes20. When students recognize the ethical implications of AI technology, they are more likely to adopt a proactive approach to addressing these issues. Additionally, research suggests that integrating AI into mainstream culture facilitates its acceptance in daily life32. Based on these insights, the following hypothesis is proposed:

H1a AI ethics knowledge has a positive influence on AI ethics attitude.

Based on Bandura’s Social Cognitive Theory, human behavior is not unidirectionally controlled by the environment; instead, it results from the triadic reciprocal interaction among the personal factors (including cognition, emotion, self-efficacy, etc.), the environmental factors, and behavior33. Knowledge significantly influences competence development34. In domains requiring conscious application of knowledge, such as AI ethics, awareness of challenges and significance can motivate deeper engagement. Accordingly, the following hypothesis is proposed:

H1b AI ethics knowledge has a positive influence on AI ethics competence.

Table 1. Latent construct of AI ethics knowledge.

Based on the Theory of Reasoned Action, an individual’s behavior is determined by their behavioral intention, and attitude is a key factor influencing behavioral intention35. Meanwhile, the Technology Acceptance Model (TAM) also indicates that perceptions of usefulness and ease of use shape users’ willingness to adopt technology36,37. Students with positive and responsible attitudes toward AI are more likely to use AI tools ethically and effectively, enhancing their independent learning capabilities38. Positive attitudes also promote critical thinking and ethical judgment, which are vital for selecting reliable AI resources and avoiding biased information39. Students with positive attitudes toward AI ethics are more likely to actively participate in learning activities and address ethical issues in practical applications40. Based on these findings, the following hypothesis is proposed:

H2a AI ethics attitude has a positive influence on self-rated learning competence using AI.

H2b AI ethics attitude has a positive influence on AI ethics competence.

Finally, the relationship between AIEC and SRLC is explored. Previous studies demonstrate that ethical competence influences students’ acceptance and effective use of AI tools, ultimately enhancing their learning outcomes41. A systematic review by Crompton and Burke further highlights that ethically competent students perform better when using AI tools for independent learning42. Based on this evidence, the following hypothesis is proposed:

H3 AI ethics competence positively influences self-rated learning competence using AI.

The hypothetical model, shown in Fig. 1, posits that AIEK comprises four fundamental principles, which influence both attitudes and competence. Together, these elements shape students’ ability to engage in SRLC. The interplay between knowledge, attitudes, and competence establishes a foundation for the responsible and effective use of AI in educational settings.

Fig. 1
figure 1

The conceptual model.

Method

Instrument development

In this study, the survey instrument was derived from existing questionnaires. The measurement of AIEK referenced the scale developed by Kim and Ko, which encompassed dimensions including protection, fairness, and accountability43. The scale demonstrated high reliability and validity and had been optimized through confirmatory factor analysis. The AIEA questionnaire drew on existing surveys from Jang44 and Kwak et al.45, with modifications made to align with the context of university students. The questionnaire items for AIEC primarily measure students’ ability to mitigate AI ethical risks during AI usage. The items for self-rated learning competence using AI were sourced from the work of Zimmerman, whose dimensional framework has been widely cited in several studies46. The original dimension encompassed three aspects: “manage study time,” “adjust learning methods,” and “focus on and reflect upon the content.” We integrated AI-empowered learning elements into this dimension, forming the “self-rated learning competence using AI” dimension. While the scales were adapted from established instruments, this study integrates them within an original three-dimensional framework of AI ethics literacy, representing a progressive developmental model. Furthermore, it innovatively incorporates core AI ethical principles directly into the structure of AIEK, moving beyond conventional operationalizations.

The questionnaires were tailored to meet the objectives of this research. A preliminary test was conducted with 15 undergraduate students to gather their initial feedback, based on which revisions were made to enhance the clarity and comprehensibility of the questionnaire. After further evaluation, it was confirmed that the instrument was free from semantic errors and understandable to the respondents. The finalized questionnaire encompassed seven dimensions, each measured by three or more items on a five-point Likert scale ranging from ‘strongly disagree’ to ‘strongly agree’ (Table 2).

Table 2 Variable dimensions.

Data collection and samples

This study used convenience sampling to recruit participants from several universities in Hangzhou, China. The survey was distributed through Wenjuanxing, a widely used online survey platform in China, and the measurement is based on self-reported data. Before completing the questionnaire, all participants were informed of the purpose of the study and assured that all responses were anonymous. Data entered into the database did not include any personally identifiable information. After excluding incomplete or invalid questionnaires, we received 482 valid responses. Among the valid participants, 191 (39.63%) were male and 291 (60.37%) were female. Regarding educational background, 251 (52.07%) were vocational college students, 168 (34.85%) were undergraduate students, and 63 (13.07%) were graduate students. In terms of academic majors, engineering and technology constituted the largest group (46.27%), followed by social sciences (34.02%), business (12.86%), and arts (6.85%). Table 3 elaborates on the demographic information of the participants.

Table 3 Participants’ demographic information.

Data analysis

The analytical model developed in this study integrates complex higher-order formative and reflective constructs and incorporates predictive elements that extend beyond the capabilities of traditional statistical models. To handle this complexity effectively, SmartPLS3 was employed for statistical data analysis. Partial Least Squares Structural Equation Modeling (PLS-SEM) is particularly favored in the social sciences because of its robust capacity for prediction and its user-friendly approach to estimating statistical models. A distinguishing feature of PLS-SEM is its flexibility in handling complex model structures without stringent requirements for normally distributed data, which is often a prerequisite in other forms of structural equation modeling. This makes PLS-SEM highly suitable for exploratory studies where the primary interest lies in theory building and prediction rather than mere confirmation of existing theories50. Given these characteristics, PLS-SEM was selected as the optimal tool to conduct the intricate analyses required in our study.

Results

Common method bias

Common method bias (CMB) arises from reliance on a single data source and can bias results51,52. Given that the data in this study were derived from questionnaire samples, there was potential for CMB, which could negatively affect the validity of the constructs and their relationships53. Hence, this study first employed Harman’s single-factor test to assess the presence of CMB. The results indicated that the variance explained by the first factor was 27.8%, below the 40% threshold, suggesting, based on Harman’s test, that CMB was not a concern in this study. Second, according to Kock (2015), a VIF greater than 3.3 indicates pathological collinearity and may signal that a model is contaminated by CMB54. Therefore, if all VIFs in the inner model resulting from a full collinearity test are equal to or lower than 3.3, the model can be considered free of CMB. As shown in Table 4, all VIFs in the inner model were lower than 3.3. Thus, the model could be considered free of CMB.

Table 4 VIFs in the inner model.
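As an illustration of the two diagnostics above, the following minimal Python sketch computes the first-factor variance share (a PCA approximation of Harman’s single-factor test) and full-collinearity VIFs. It uses synthetic data, not the study’s responses, which were analyzed in SmartPLS3:

```python
import numpy as np

def harman_single_factor_share(X):
    """Variance share of the first unrotated component (a PCA
    approximation of Harman's single-factor test)."""
    corr = np.corrcoef(X, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending
    return eigvals[0] / eigvals.sum()

def full_collinearity_vifs(scores):
    """Full-collinearity VIFs: each column regressed on all the others;
    values above 3.3 suggest possible CMB contamination (Kock, 2015)."""
    n, k = scores.shape
    vifs = []
    for j in range(k):
        y = scores[:, j]
        others = np.delete(scores, j, axis=1)
        X = np.column_stack([np.ones(n), others])  # include intercept
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        r2 = 1.0 - resid.var() / y.var()
        vifs.append(1.0 / (1.0 - r2))
    return vifs

# Synthetic illustration: 482 respondents, 10 items sharing a mild
# common factor (all numbers here are made up)
rng = np.random.default_rng(0)
common = rng.normal(size=(482, 1))
items = rng.normal(size=(482, 10)) + 0.5 * common
share = harman_single_factor_share(items)
vifs = full_collinearity_vifs(items[:, :4])  # four stand-in score columns
print(f"first-factor share: {share:.1%}")  # concern if above 40%
print("max VIF:", round(max(vifs), 2))     # concern if above 3.3
```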

Measurement model evaluation

This study conceptualizes AIEK as a second-order construct formed by four first-order constructs (FI, HC, PP and RA), and therefore models the second-order construct as formative. The four first-order constructs are estimated using reflective measurement models. Following the disjoint two-stage approach, in the first stage, the four first-order constructs are estimated, and their latent variable scores are extracted. In the second stage, these latent variable scores are used as indicators to estimate the second-order construct AIEK.

In the first stage, all relevant lower-order constructs (both those forming the higher-order construct and those at even lower levels) are incorporated into the model to assess their reliability and validity. As demonstrated in Table 5, the Average Variance Extracted (AVE) values all exceed 0.5, indicating that each latent variable possesses satisfactory convergent validity55. Concurrently, the Cronbach’s alpha values range from 0.762 to 0.921, exceeding the critical threshold of 0.7, and the Composite Reliability (CR) values also surpass 0.7, indicating that the measurement model boasts commendable reliability56.

Table 5 Reliability analysis.
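For reference, the three indices can be computed directly from item scores and standardized outer loadings. The sketch below uses hypothetical loadings and simulated responses, not the study’s data:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings):
    """CR from standardized outer loadings."""
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def average_variance_extracted(loadings):
    """AVE: mean squared standardized loading."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

# Hypothetical standardized loadings for one reflective construct
lam = np.array([0.82, 0.79, 0.88, 0.75])

# Simulate item responses consistent with those loadings
rng = np.random.default_rng(0)
factor = rng.normal(size=482)
items = factor[:, None] * lam + rng.normal(size=(482, 4)) * np.sqrt(1 - lam ** 2)

alpha = cronbach_alpha(items)
cr = composite_reliability(lam)
ave = average_variance_extracted(lam)
print(f"alpha = {alpha:.3f}, CR = {cr:.3f}, AVE = {ave:.3f}")
# reference thresholds: alpha > 0.7, CR > 0.7, AVE > 0.5
```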

Finally, we employed three methods to validate the discriminant validity of the model. The first is the Fornell-Larcker criterion: as demonstrated in Table 6, the correlation coefficients between the latent variables are all less than the square root of the AVE value of each dimension in the measurement model57.

Table 6 The Fornell-Larcker criterion.
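The Fornell-Larcker check reduces to comparing each construct’s square root of AVE against its correlations with the other constructs. A minimal sketch with illustrative (not observed) values:

```python
import numpy as np

def fornell_larcker_ok(latent_corr, aves):
    """True if each construct's sqrt(AVE) exceeds its absolute
    correlations with every other construct."""
    root_ave = np.sqrt(np.asarray(aves, dtype=float))
    L = np.asarray(latent_corr, dtype=float)
    k = L.shape[0]
    return all(abs(L[i, j]) < root_ave[i]
               for i in range(k) for j in range(k) if i != j)

# Hypothetical three-construct example
corr = np.array([[1.00, 0.55, 0.48],
                 [0.55, 1.00, 0.60],
                 [0.48, 0.60, 1.00]])
aves = [0.62, 0.58, 0.66]
passed = fornell_larcker_ok(corr, aves)
print("discriminant validity (Fornell-Larcker):", passed)
```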

The second validation method involves the cross-loading criterion: as indicated in Table 7, each indicator loads more highly on its own construct than on any other construct, indicating good discriminant validity57. The third method is the Heterotrait-Monotrait Ratio (HTMT) criterion, for which we adopt a conservative threshold of 0.8558. As shown in Table 8, the correlations between most constructs were below 0.85, while the correlation between SRLC and AIEA exceeded this threshold. However, considering that Social Cognitive Theory emphasizes the interactive relationship between personal attitudes and behaviors, we argue that although these two constructs are conceptually distinct, a stronger association between them is theoretically expected. Thus, the relatively high HTMT value is consistent with our theoretical model. Based on the above evidence, we confirm that the measurement model demonstrates discriminant validity.

Table 7 Cross loading.
Table 8 The Heterotrait-Monotrait Ratio criterion.
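The HTMT ratio divides the average heterotrait (between-construct) item correlation by the geometric mean of the two constructs’ average monotrait (within-construct) correlations. A sketch with a toy item correlation matrix, illustrative only:

```python
import numpy as np

def htmt(item_corr, idx_a, idx_b):
    """HTMT: mean heterotrait correlation divided by the geometric mean
    of the two constructs' average monotrait correlations."""
    R = np.asarray(item_corr, dtype=float)
    hetero = R[np.ix_(idx_a, idx_b)].mean()
    def monotrait(idx):
        block = R[np.ix_(idx, idx)]
        iu = np.triu_indices(len(idx), k=1)  # upper triangle, no diagonal
        return block[iu].mean()
    return hetero / np.sqrt(monotrait(idx_a) * monotrait(idx_b))

# Toy item correlation matrix: items 0-1 belong to construct A,
# items 2-3 to construct B (values are made up)
R = np.array([[1.0, 0.6, 0.5, 0.5],
              [0.6, 1.0, 0.5, 0.5],
              [0.5, 0.5, 1.0, 0.7],
              [0.5, 0.5, 0.7, 1.0]])
ratio = htmt(R, [0, 1], [2, 3])
print(f"HTMT = {ratio:.3f}")  # compare against the 0.85 threshold
```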

Subsequently, the latent variable scores derived from the calculation of the lower-order measurement model were saved as individual variables in the dataset for use in constructing the higher-order model in the following stage.

Formative construct validation

In stage two, the latent variable scores obtained from the stage-one results were used to create and estimate the model. In this study, AIEK is the higher-order construct, comprising four lower-order constructs: RA, FI, HC, and PP. To validate the higher-order construct, we examined indicators such as VIF, outer weights, and outer loadings27. We assessed VIFs to examine collinearity, and all VIFs were below the recommended threshold of 3.3 (Table 9), indicating no collinearity issues59. The second-order construct details are shown in Table 10.

Table 9 Formative construct validation.
Table 10 Second-order construct outer weights and outer loadings.
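To make the stage-two estimation concrete, the sketch below derives outer weights (standardized regression weights) and outer loadings (bivariate correlations) for a formative higher-order construct from synthetic lower-order scores; the numeric contributions are hypothetical, not the study’s estimates:

```python
import numpy as np

def outer_weights_and_loadings(lv_scores, hoc_score):
    """Formative measurement: outer weights are the standardized
    multiple-regression weights of the lower-order scores on the
    higher-order score; outer loadings are their bivariate correlations."""
    Z = (lv_scores - lv_scores.mean(axis=0)) / lv_scores.std(axis=0, ddof=1)
    zy = (hoc_score - hoc_score.mean()) / hoc_score.std(ddof=1)
    weights, *_ = np.linalg.lstsq(Z, zy, rcond=None)  # centered, no intercept
    loadings = np.array([np.corrcoef(Z[:, j], zy)[0, 1]
                         for j in range(Z.shape[1])])
    return weights, loadings

# Synthetic stand-ins for four lower-order scores (e.g., RA, FI, HC, PP),
# sharing some common variance as ethical principles plausibly would
rng = np.random.default_rng(0)
lv = rng.normal(size=(482, 4)) + 0.6 * rng.normal(size=(482, 1))
true_w = np.array([0.25, 0.36, 0.45, 0.20])   # hypothetical contributions
hoc = lv @ true_w + 0.1 * rng.normal(size=482)
weights, loadings = outer_weights_and_loadings(lv, hoc)
print("outer weights :", np.round(weights, 3))
print("outer loadings:", np.round(loadings, 3))
```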

Structure model

To validate the structural model, we opted for the coefficient of determination (R²), predictive relevance (Q²), model fit, and testing of the proposed hypotheses for further analysis.

R2 and Q2

The R² statistic indicates the variance in each endogenous variable accounted for by its exogenous variable(s). Hair et al. (2013) suggested that R² values of 0.25, 0.50, and 0.75 for endogenous latent variables can be described as weak, moderate, and substantial, respectively60.

In accordance with the recommendations of Hair et al. (2011), this study employed bootstrapping with 5000 samples to test the significance of path coefficients and to calculate R²57. The results, as shown in Table 11, indicate that the explained variances for AIEA, AIEC, and SRLC were all greater than 0.5, reflecting at least moderate explanatory power. Another metric is Q², where values above zero suggest that the observed values are well reconstructed, indicating that the model has predictive relevance57.

Table 11 Predictive accuracy and predictive relevance.
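The bootstrapping logic can be illustrated for a single path using a standardized regression slope. This sketch uses synthetic data and fewer resamples than the 5000 used in SmartPLS3 for the study:

```python
import numpy as np

def bootstrap_path(x, y, n_boot=1000, seed=0):
    """Bootstrap a standardized simple-regression slope and return its
    mean, standard error, and 95% percentile confidence interval."""
    rng = np.random.default_rng(seed)
    zx = (x - x.mean()) / x.std(ddof=1)
    zy = (y - y.mean()) / y.std(ddof=1)
    n = len(x)
    betas = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # resample respondents with replacement
        betas[b] = np.polyfit(zx[idx], zy[idx], 1)[0]
    lo, hi = np.percentile(betas, [2.5, 97.5])
    return betas.mean(), betas.std(ddof=1), (lo, hi)

# Synthetic data with a genuine positive path (n matches the sample size)
rng = np.random.default_rng(1)
x = rng.normal(size=482)
y = 0.5 * x + rng.normal(size=482)
beta, se, ci = bootstrap_path(x, y)
print(f"beta = {beta:.3f}, SE = {se:.3f}, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
# the path is significant at the 5% level if the CI excludes zero
```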

Model fit

To verify the model fit, we calculated the standardized root mean residual (SRMR), d_ULS, d_G, and NFI values of the model. The results (Table 12) all met the reference standard, indicating a satisfactory model fit61.

Table 12 Indicators of model fit.

Hypothesis testing

In the hypothesis testing section, we calculated the path coefficients (β) and significance (p) values of the structural model to test the proposed hypotheses.

Given that the model constructed in this study demonstrated good reliability and validity, we utilized the bootstrapping method in SmartPLS3 to examine the structural model and calculate the path coefficients and their significance for the hypothesized paths. As shown in Fig. 2; Table 13, all paths were statistically significant. AIEK had a direct impact on AIEA and AIEC, with coefficients (β = 0.797, p < 0.001) and (β = 0.568, p < 0.001), respectively, supporting H1a and H1b. AIEA directly influenced SRLC and AIEC, with coefficients (β = 0.456, p < 0.001) and (β = 0.300, p < 0.001), respectively, supporting H2a and H2b. Furthermore, AIEC had a direct impact on SRLC (β = 0.416, p < 0.001), thereby supporting H3.

Table 13 Direct effect.
Fig. 2
figure 2

Results of structural model.

In our research model, two mediating paths were established through AIEA and AIEC, and the mediation effects were tested using the bootstrapping method. According to Zhao et al. (2010), significant direct and mediating effects indicate partial mediation, non-significant direct effects alongside significant mediating effects indicate full mediation, and significant direct effects with non-significant mediating effects suggest no mediation62. As shown in Table 14, AIEK significantly and indirectly affected SRLC through both paths (β = 0.364, p < 0.001; β = 0.236, p < 0.001), while AIEA (β = 0.456, p < 0.001) and AIEC (β = 0.416, p < 0.001) also significantly influenced SRLC, supporting H2a and H3. Next, while AIEA directly influenced SRLC, it also exerted an indirect effect on SRLC through AIEC (β = 0.125, p < 0.01), consistent with H2b. Finally, AIEK indirectly influenced AIEC through AIEA, exhibiting partial mediation. In summary, AIEA and AIEC mediated the relationship between AIEK and SRLC, indicating that AIEK influences SRLC through both AIEA and AIEC.

Table 14 Mediation effect.
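The mediation test follows the same resampling logic: the indirect effect a·b is bootstrapped, and a confidence interval excluding zero indicates a significant indirect effect in Zhao et al.’s sense. A sketch assuming a simple x → m → y chain with illustrative coefficients:

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=2000, seed=0):
    """Bootstrap the indirect effect a*b for the chain x -> m -> y,
    where b is m's slope on y controlling for x (standardized scores)."""
    rng = np.random.default_rng(seed)
    z = lambda v: (v - v.mean()) / v.std(ddof=1)
    zx, zm, zy = z(x), z(m), z(y)
    n = len(x)
    effects = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        a = np.polyfit(zx[idx], zm[idx], 1)[0]        # x -> m slope
        X = np.column_stack([np.ones(n), zm[idx], zx[idx]])
        b = np.linalg.lstsq(X, zy[idx], rcond=None)[0][1]  # m -> y | x
        effects[i] = a * b
    lo, hi = np.percentile(effects, [2.5, 97.5])
    return effects.mean(), (lo, hi)

# Synthetic chain mirroring, e.g., AIEK -> AIEA -> SRLC (values made up)
rng = np.random.default_rng(2)
x = rng.normal(size=482)
m = 0.6 * x + rng.normal(size=482)
y = 0.4 * m + 0.2 * x + rng.normal(size=482)
effect, (lo, hi) = bootstrap_indirect(x, m, y)
print(f"indirect effect = {effect:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
# a CI excluding zero, alongside a significant direct path,
# corresponds to partial mediation in Zhao et al.'s taxonomy
```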

Discussion

This research primarily aims to investigate the internal components of AI ethics literacy, the interrelationships among these components, and the influence of AI ethics literacy on students’ self-rated learning competence using AI. To validate our hypothesized model, we employed partial least squares structural equation modeling (PLS-SEM). The findings confirm the relationships between the components of AI ethics literacy and demonstrate their significant impact on students’ capacity for independent learning in AI-assisted contexts.

Components of AI ethics literacy

This study conceptualizes AI ethics literacy through literature review, identifying three fundamental components: knowledge, attitude, and competence. AIEK encompasses awareness and understanding of core AI ethics principles. AIEA represents individuals’ perspectives on ethical dimensions of AI technologies. AIEC reflects the practical ability to apply ethical principles when utilizing AI technologies.

A lower-order construct with a large and statistically significant outer weight in a formative higher-order construct is an important component of that higher-order construct. This study identifies FI (outer weight = 0.360, p < 0.001) and HC (outer weight = 0.445, p < 0.001) as important components of AIEK, which aligns with societal discussions on technological ethics63. FI was frequently cited by participants, suggesting their recognition of AI’s potential impact on social equity64,65. Similarly, the emphasis on HC reflects students’ concerns about safeguarding rights and interests in AI development, as well as their anxieties about human-machine relationships. In addition, students’ focus on PP indicates growing awareness of data privacy in AI-enhanced learning environments. This prioritization also exposes a limitation of current AI ethics education: it emphasizes personal security and social impact while neglecting the safety of AI applications themselves.

Unlike previous research on AI literacy, which often treats ethical considerations merely as one component of literacy, this study innovatively refines the conception of AI ethics literacy by focusing specifically on the knowledge, attitude, and competence underlying it. It diverges from earlier works that treated ethics as a monolithic construct by empirically differentiating and validating the roles of knowledge, attitude, and competence within ethical literacy. The findings build on existing conceptualizations of AI literacy by addressing ethical dimensions arising from AI technologies’ unique characteristics and impacts. The proposed multi-component model offers a potential framework for educational interventions that integrate knowledge, attitude, and competence relevant to responsible AI engagement.

Interrelationships among AI ethics literacy components

The effects of AIEK on AIEA and AIEC

Consistent with theories such as TAM and UTAUT, our findings highlight the role of ethical understanding as a facilitator of confidence in and engagement with AI tools. Specifically, they indicate a positive association between AIEK and AIEA (H1a), suggesting that students with a solid foundation in AIEK often develop mature AIEA. This accords with established educational theories suggesting that knowledge acquisition forms the cognitive foundation upon which attitudinal frameworks are constructed66. Students with a robust understanding of AI ethics principles demonstrate an enhanced ability to identify and evaluate ethical considerations when engaging with AI applications67. These results highlight the value of knowledge-based interventions in fostering thoughtful ethical perspectives, as students equipped with ethical frameworks may better navigate the complex ethical landscapes of emerging AI technologies6.

Similarly, AIEK exhibits a positive relationship with AIEC (H1b), indicating that knowledge accumulation is an important precursor to practical ethical competence development. The finding aligns with the understanding that knowledge provides a foundation for competence and innovation68. The observed knowledge-competence pathway reflects a challenge in higher education: students facing AI-related ethical dilemmas sometimes lack the frameworks to recognize issues or act appropriately. For instance, some students may overlook data privacy risks when using AI tools, underscoring the need for targeted educational approaches. The results suggest that universities can support the concurrent development of positive ethical attitude and practical competence by establishing systematic ethical knowledge bases through curricula.

The effects of AIEA on AIEC

Our analysis supports the hypothesized association between AIEA and AIEC, with results showing that AIEA significantly influences AIEC (H2b). The finding contributes to our understanding of how attitudinal factors shape competence development in AI ethics education. Students demonstrating positive ethical attitudes toward AI tend to exhibit greater self-efficacy in addressing ethical challenges, aligning with Kajiwara and Kawabata’s (2024) observations on the interplay between attitudes and capabilities9.

The attitude-competence pathway suggests that ethical attitudes may facilitate the translation of knowledge into practical application. Students with responsible attitudes toward AI ethics show increased sensitivity to ethical considerations in AI applications, which can enhance their ability to apply ethical principles in real-world contexts69. Additionally, positive ethical attitudes appear to serve as motivational drivers, encouraging active engagement with AI-related ethical issues and potentially fostering iterative learning processes.

Results above highlight the value of educational approaches that integrate attitudinal and knowledge-based components to promote ethical competence. Pedagogical strategies designed to cultivate thoughtful and critical attitudes toward AI technologies could help students develop the capacity to navigate ethical complexities in AI-mediated environments.

The effects of AIEA and AIEC on SRLC

Our findings indicate positive relationships between AI ethics dimensions and students’ SRLC. Consistent with prior research70, we observed a positive association between AIEA and SRLC. The relationship can be interpreted through the lens of technology acceptance theory, which posits that attitudes toward technology influence engagement and utilization patterns71. Students demonstrating positive attitudes toward AI ethics tend to exhibit higher levels of acceptance of AI educational applications, which may facilitate their willingness and ability to use AI tools for independent learning72.

Similarly, our results confirm that AIEC significantly predicts SRLC, suggesting that students who understand the ethical implications of AI technologies and can effectively address related challenges may develop greater confidence in using AI tools for learning. Ethical competence appears to foster autonomous and critical engagement with AI technologies, encouraging students to approach AI tools with reflective awareness and responsibility in their learning processes73.

Findings above highlight the importance of integrating ethical considerations into AI education. By nurturing both positive ethical attitudes and practical competencies, educational institutions can support students in developing effective strategies for SRLC, preparing them for increasingly AI-integrated educational environments.

Implications

This study examines how AI ethics literacy affects students’ self-directed learning capabilities in AI-supported environments, offering significant practical implications for educational stakeholders. Our findings provide actionable insights for systematically developing students’ AI ethics literacy and for supporting responsible engagement with AI technologies in educational contexts.

We introduce a model of AI ethics literacy that highlights the interconnected roles of knowledge, attitudes, and competencies. Our results clarify the relationships among these components, showing that ethical knowledge directly influences both ethical attitudes and competencies, while ethical attitudes positively affect ethical competencies in AI contexts. The rapid adoption of large language models (LLMs) in educational settings brings convenience to teaching and learning while simultaneously introducing risks that merit careful consideration, including hallucinated output, academic integrity issues, and the formation of knowledge bubbles. The development of rational human-AI collaborative models remains an area requiring further exploration. Fostering AI ethics literacy is therefore essential to support the ethically grounded adoption of LLMs in education. These findings underscore practical implications for educators, policymakers, and instructional designers.

To enhance students’ AIEK, educational institutions should guide learners in examining the societal, individual, and cultural impacts of AI technologies, including potential biases and discriminatory outcomes. Real-world scenarios can help students identify appropriate AI applications and recognize ethical dilemmas. For developing ethical attitudes, educators should cultivate empathy and responsibility, emphasizing respect for privacy, fairness, and human dignity. To build ethical competencies, curricula should incorporate training in ethical risk assessment and critical thinking skills, providing frameworks for ethical decision-making.

For educational administrators, our findings provide guidance for resource allocation and development of targeted training programs. Assessment of students’ AI ethics literacy enables personalized educational approaches, with advanced courses available for students demonstrating higher literacy levels. From a policy perspective, ensuring AI educational applications adhere to ethical principles—accountability, fairness, human-centricity, and privacy protection—remains paramount. Policies grounded in human-centered philosophy, strengthened accountability mechanisms, and cross-departmental cooperation can create ethical AI learning environments that enhance students’ independent learning competence.

These implications collectively underscore the importance of integrating ethical considerations into AI education frameworks to prepare students for responsible engagement with increasingly AI-integrated educational and professional landscapes.

Conclusion, limitations, and future research

This study investigates the core components of AI ethics literacy through a literature review, surveys, and empirical analysis. We hypothesize and verify three constituent components of AI ethics literacy, the relationships among them, and their impact on students’ self-rated learning competence using AI. Our findings confirm that AI ethics literacy significantly enhances students’ self-rated learning competence using AI. We also hypothesize and verify four AI ethics principles that influence the development of students’ AI ethics knowledge. These insights provide theoretical foundations for cultivating students’ AI ethics literacy and offer evidence-based guidance for enhancing AI-supported learning.

Although this study has achieved its intended aims, it also exhibits certain limitations. Firstly, the current study relies on a convenience sample of college students in Hangzhou, a city with advanced digital infrastructure and a highly tech-savvy population. This geographic and demographic focus limits the generalizability of the findings to broader populations. Secondly, the study establishes a theoretical framework for AI ethics literacy but lacks empirical validation of intervention strategies. While the PLS-SEM model identifies significant relationships among knowledge, attitude, competence, and self-rated learning competence using AI, translating these insights into practical educational tools requires rigorous testing.

Therefore, future research should develop a multi-method assessment approach that integrates self-perception scales with objective knowledge measures to enable a more robust and comprehensive evaluation of AI ethics literacy. Furthermore, studies should intentionally recruit participants from diverse cultural backgrounds and various educational tiers to enhance the generalizability of the findings. Universities should incorporate AI ethics literacy into relevant curricula, and empirical efforts should further examine the relationship between students’ AI ethical competence and other literacy capabilities across different domains.