Abstract
Artificial Intelligence in Education (AIED) is becoming increasingly influential in the educational sphere, offering significant benefits while presenting ethical risks. This study fills a crucial gap by systematically classifying and analyzing these risks. Using a combined approach of systematic review and grounded theory coding, ethical risks were categorized into three dimensions: technology, education, and society. In the technology dimension, risks include privacy invasion, data leakage, algorithmic bias, the black box algorithm, and algorithmic error. The education dimension risks involve student homogenized development, homogeneous teaching, the teaching profession crisis, deviation from educational goals, alienation of the teacher-student relationship, emotional disruption, and academic misconduct. Risks in the society dimension consist of exacerbating the digital divide, the absence of accountability, and conflict of interest. Based on an analysis of the types, potential triggers, and hazards associated with these risks, we propose strategies spanning three critical dimensions (technology, education, and society) from the perspectives of stakeholders to address these ethical risks. This study contributes a concise and precise analysis of the ethical risks associated with AIED, offering practical solutions for the responsible implementation of AIED.
Introduction
The application of artificial intelligence (AI) has surged in education (Bearman et al., 2023). The 2022 Horizon Report (Teaching and Learning Edition) identified two key technologies and practices that share a common grounding in AI-based technologies: AI for learning analytics and AI for learning tools (EDUCAUSE, 2022). Artificial intelligence in education (AIED) has played an important role in improving the quality of the education sector because its application can make it easier for teachers and students to carry out learning activities in many subjects (Prahani et al., 2022). At the same time, generative AI is trickling in and becoming a transformative resource that educators and students can draw on in teaching and learning (Lim et al., 2023). The 2023 Horizon Report (Teaching and Learning Edition) identified generative AI as one of the most disruptive technologies, with revolutionary potential extending beyond the classroom (EDUCAUSE, 2023).
However, AIED entails diverse and intricate ethical risks, carrying substantial potential for serious harm. It can introduce significant risks of unfairness (Sahlgren, 2023), such as algorithmic bias (Berendt et al., 2020; Köbis and Mehner, 2021; Holmes et al., 2021). Meanwhile, concerns are looming over transparency and accountability as AI becomes increasingly entrenched in decision-making processes (Slimi and Carballido, 2023). For example, exploiting personal data due to a lack of awareness can go unnoticed without accountability (Chaudhry and Kazim, 2022). Problems also persist with the opacity of AI systems (Köbis and Mehner, 2021), such as black box algorithms (Kitto and Knight, 2019). In addition, there are persistent issues concerning data and privacy. Akgun and Greenhow (2021) mentioned that privacy violations can occur as people are exposed to an intensive amount of personal data (their language, location, biographical data, and racial identity) on different online platforms. Lai et al. (2023) also argued that using intelligent technology to collect students’ learning data may cause safety and ethical problems due to data leakage. Moreover, several international organizations and countries have published reports offering insights into the ethics of AIED. UNESCO’s (2021) AI and education: Guidance for policymakers pointed out that there are issues centered on data and algorithms, pedagogical choices, inclusion, the ‘digital divide’, children’s right to privacy, liberty, and unhindered development, and on equity in terms of gender, disability, social and economic status, ethnic and cultural background, and geographic location. The European Commission’s (2022) Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for Educators identified four key considerations that underpin the ethical use of AI and data in teaching, learning, and assessment: human agency, fairness, humanity, and justified choice. The U.S. Department of Education (2023) mentioned that a central safety argument in the Department’s policies is the need for data privacy and security in the systems used by teachers, students, and others in educational institutions. These reports and policies indicate the presence of numerous ethical risks associated with AIED, emphasizing the urgent need for identification and prompt resolution.
Although previous research has explored the ethics of AIED, existing reviews have focused on specific dimensions or contexts of AI in education. For example, Salas-Pilco and Yang (2022) systematically reviewed AI in Latin American higher education, emphasizing its potential for improving teaching, learning, and administration but providing limited discussion of ethical risks. Similarly, Pierrès et al. (2024) conducted a scoping review on AI in higher education for students with disabilities, highlighting ethical concerns such as bias and privacy but narrowing their focus to a specific demographic. Gouseti et al. (2024) reviewed the ethics of AI in K-12 education, identifying challenges in operationalizing ethical frameworks for children but not addressing the broader application of AI in diverse educational settings. While previous studies have made valuable contributions in specific areas, they lack a holistic approach that identifies, categorizes, and addresses these risks across the entire spectrum of AIED. This systematic review categorizes the diverse ethical risks associated with AIED and proposes specific and targeted strategies for their management, specifically examining the application of AI in educational settings where AI tools are already in use by educators or learners. The study has significant theoretical and practical implications. Theoretically, it contributes to the conceptual understanding of the ethical risks of AIED, offering a more nuanced and comprehensive perspective on how these risks manifest and interrelate. Practically, it offers stakeholders, such as educators, students, and AI developers, valuable insights and actionable strategies to address these risks, thereby promoting responsible and ethical use of AI. The research questions for this study are as follows:
(a) What are the typical ethical risks, causative factors, and resultant detriments in the realm of AIED?

(b) How can identified ethical risks be systematically mitigated and addressed?
The remainder of the paper is structured as follows. The methods section describes our search strategies, inclusion and exclusion criteria, and coding approach. The results section presents the data from the reviewed studies to provide insights into the ethical risks of AIED. The discussion section analyzes the origins and links of the ethical risks and provides governance strategies across three dimensions (technology, education, and society) to address them. The conclusion section summarizes the study, acknowledges its limitations, and provides recommendations for future research.
Methods
This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Moher et al., 2010), and the research process comprised literature search, literature screening, content-coding analysis, presentation of results, and discussion of issues.
Search strategies
This study comprehensively reviewed the literature from January 2019 to June 2024. The papers were searched across five prominent databases: ScienceDirect, Wiley Online Library, Springer, Web of Science, and China National Knowledge Infrastructure (CNKI). The search terms were formulated using Boolean logic as follows: (“artificial intelligence” OR AI OR AIED) AND (education OR learning OR teaching) AND (“ethical risks” OR “ethical principles” OR ethics). The subject categories were restricted to Education and Computer Science. Only research articles were selected, excluding other types of publications. In addition, where a database supported such functionality, searches were performed in titles and abstracts using the Boolean statement above to improve accuracy. The preliminary search produced the following outcomes: Web of Science yielded 378 records, from which 52 pertinent results were identified through manual screening; ScienceDirect yielded 87 records, and 17 relevant records were retained after manual screening; Wiley Online Library yielded 283 records, resulting in 13 relevant records after manual screening; Springer produced 352 records, of which 13 met the criteria after manual screening; and CNKI yielded 135 records, from which 33 papers were obtained through manual screening.
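For transparency, the Boolean statement above can be composed programmatically and then adapted to each database’s field tags and syntax. The following Python sketch is illustrative only: the term groups are taken from the search strategy described here, while the helper function and any use with a particular database API are our own assumptions.

```python
# Minimal sketch: composing the study's Boolean search string.
# Term groups come from the Methods section; everything else is illustrative.
ai_terms = ['"artificial intelligence"', "AI", "AIED"]
context_terms = ["education", "learning", "teaching"]
ethics_terms = ['"ethical risks"', '"ethical principles"', "ethics"]

def boolean_query(*term_groups):
    """OR the terms within each group, then AND the groups together."""
    return " AND ".join("(" + " OR ".join(group) + ")" for group in term_groups)

query = boolean_query(ai_terms, context_terms, ethics_terms)
print(query)
# ("artificial intelligence" OR AI OR AIED) AND (education OR learning OR
# teaching) AND ("ethical risks" OR "ethical principles" OR ethics)
```

In practice, the same string would be pasted into each database’s advanced search, restricted to titles and abstracts where supported.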
Screening and inter-rater reliability
The inclusion and exclusion criteria are detailed in Table 1. The papers obtained from the above databases were screened to remove duplicates, check availability, exclude ineligible article types, and assess fit with the topic, culminating in a total of 75 articles deemed suitable for inclusion. The stepwise screening procedures are graphically presented in Fig. 1.
Coding
This study adopted the coding method of grounded theory, which can be formally divided into open coding, axial coding, and selective coding (Wolfswinkel et al., 2013). During the open coding phase, the researchers examined the 75 papers in the sample without any personal presuppositions or biases. Researchers extracted content about the ethical risks of AIED, after which the extracted content was conceptualized into preliminary concepts. In the axial coding phase, the researchers analyzed these concepts, considering their attributes and relationships, and classified them into different categories, such as data, teaching, and governance. During the selective coding phase, the researchers analyzed the preliminary categories, taking into account their attributes and relationships, and consolidated them into main categories so that these categories could accurately cover the types of ethical risks of AIED. The codification system is shown in Table 2.
The methodology aimed to systematically identify and categorize the ethical risks of AIED, exploring their causes and hazards.
Results
Distribution of studies
We analyzed the papers in the sample, detailing their distribution across journals and years.
Distribution of journals
The sample comprised 42 journals, among which International Journal of Artificial Intelligence in Education (n = 5), E-education Research (n = 5), and China Educational Technology (n = 5) published the most articles. Table 3 shows the journals to which the literature in the sample is attributed.
Distribution of years
We found an overall upward trend in the number of studies (see Fig. 2). Specifically, the trend peaked in 2023, with a notable surge of 25 papers published that year.
Results of coding
During the coding process, we quantified the frequency of ethical risks identified in the literature, as depicted in Fig. 3.
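As a hedged illustration of how such a tally can be produced, the Python sketch below counts risk labels across a toy set of coded papers; the labels and data are invented for demonstration and do not reproduce the study’s actual coding sheet.

```python
# Hypothetical sketch of a frequency tally like the one behind Fig. 3.
from collections import Counter

coded_papers = [  # each entry: the set of risk codes extracted from one paper
    {"privacy invasion", "algorithmic bias"},
    {"data leakage", "algorithmic bias", "academic misconduct"},
    {"privacy invasion", "black box algorithm"},
]

risk_counts = Counter(risk for paper in coded_papers for risk in paper)
for risk, n in risk_counts.most_common():
    print(f"{risk}: {n}")
```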
The ethical risks of AIED from the technology dimension
The analysis identified two perspectives within the technology dimension that are susceptible to ethical risks: data and algorithms. The data perspective includes privacy invasion and data leakage, and the algorithm perspective includes algorithmic bias, the black box algorithm, and algorithmic error.
Privacy invasion
The results indicated 46 occurrences of privacy invasion-related statements across the 75 papers. Privacy invasion refers to the unauthorized disclosure of personal, sensitive, or confidential information, which can lead to the leakage of private data of teachers and students (Nguyen et al., 2023; Hu et al., 2022; Wang et al., 2022; Zhao et al., 2022). In addition, it can threaten the security of students’ personal information (Zhang et al., 2021) and even lead to law violations (Zhao et al., 2020). The privacy of educational subjects can be violated during data collection, storage, and sharing (Guan et al., 2023), which may be caused by the following factors. (a) Personal data exposure: people are exposed to an intensive amount of personal data (their language, location, biographical data, and racial identity) on different online platforms (Akgun and Greenhow, 2021). (b) Personal data collection: large amounts of student data need to be collected in the process of using AI (Masters, 2023), but student data collection, storage, and use are all carried out autonomously by intelligent education systems (Luo et al., 2023).
Data leakage
Among the examined papers, 32 instances of statements concerning data leakage were identified. Data leakage refers to the unauthorized exposure of student, teacher, or institutional data. It can raise safety and ethical problems (Lai et al., 2023) concerning consent, privacy, ownership, data choices, data provenance, and proxies (Holmes et al., 2023). Notably, it poses threats to the security of both teacher and student information, along with property security concerns (Shen and Wang, 2019). Moreover, it compromises human sovereignty and dignity by exposing individual identities, privacy, sense of belonging, and reputation (Feng et al., 2020). The occurrence of data leakage can be attributed to three primary reasons. (a) Data insecurity: data is vulnerable to hacking and manipulation (Holmes et al., 2021; Dakakni and Safa, 2023). (b) Excessive data collection and abuse of data (Masters, 2023; Borenstein and Howard, 2021): data stored in the cloud raises concerns among users about potential breaches, unintended applications, or misuse of their personal information and records of tool usage (Qin et al., 2020; Mogavi et al., 2024). (c) Inadequate data management (Hu et al., 2022): data is not adequately safeguarded or is utilized without proper authorization (Kayalı et al., 2023).
Algorithmic bias
The results indicated 53 occurrences of algorithmic bias-related statements in the 75 papers. Algorithmic bias comprises the biases that have long existed in society, affecting groups such as underprivileged and marginalized communities, as manifested in algorithms’ outcomes (Kordzadeh and Ghasemaghaei, 2021). Algorithmic bias presents a range of detrimental impacts. (a) Decisions made by AI systems, such as admissions or grading, can significantly impact students’ lives (Slimi and Carballido, 2023). (b) Algorithmic bias may deepen prejudice and discrimination among educational administrators and create inappropriate stereotypes (Kitto and Knight, 2019), which can affect instructional management and provide teachers with incorrect guidelines (Wang et al., 2023b). The causes contributing to algorithmic bias are outlined as follows. (a) The bias of algorithmic designers and executives (Lai et al., 2023): algorithms are not intelligent by themselves but are trained by humans (Anagnostou et al., 2022). (b) Society’s ever-evolving historical and institutionalized systemic biases (Akgun and Greenhow, 2021). (c) Bias due to data: the discriminatory nature of the data itself may lead to algorithmic bias. As underscored by Du et al. (2019), algorithms trained on inaccurate or biased historical datasets inevitably replicate these biases, thereby producing skewed results. Additionally, insufficient data can lead to algorithmic bias; Chaudhry and Kazim (2022) argued that discrimination against certain groups stems from data deficiencies.
Black box algorithm
Among the 75 papers examined, 29 contained statements pertinent to the black box algorithm. The black box algorithm refers to AI algorithms that are often uninterpretable or opaque (Berendt et al., 2020; Schiff, 2022; Slimi and Carballido, 2023; Masters, 2023), which poses many challenges for stakeholders (Nguyen et al., 2023). An example illustrating this challenge was presented by Luo et al. (2023), who argued that educators lack insight into the generation process of algorithms governing personalized learning and teaching content. Consequently, educators face uncertainties regarding the correctness of decisions based on these algorithms, their alignment with student needs, and their potential negative impacts on student growth. The primary reason for the black-box nature of these algorithms lies in the inherent difficulty of understanding their decision-making mechanisms and the challenge of tracing and explaining their operations (Deng and Li, 2020). In particular, since developers often struggle to fully comprehend the decision-making processes of AI systems, they cannot clearly explain the diagnoses and suggestions provided by AI to teachers and students (Qin et al., 2020).
Algorithmic error
Within the scope of the 75 papers scrutinized, 23 instances contained statements addressing algorithmic error. Algorithmic error refers to mistakes or inaccuracies in the outputs produced by algorithms, which can result in adverse consequences, including erroneous reports on students’ learning analyses and inaccurate predictions of academic levels (Shen and Wang, 2019). AI may also misjudge a learner’s emotions and provide incorrect answers based on their statements and facial expressions (Jafari, 2024). The primary cause of algorithmic errors lies in unstable technologies (Mishra et al., 2024). Certain technical risks in the design, computation, and execution of algorithms can inadvertently produce deceptive or erroneous information due to training data limitations or inherent biases (Zhao et al., 2021; Mogavi et al., 2024; Yang et al., 2024). While technological limitations are the root cause, insufficient human oversight can further exacerbate these errors by failing to identify and mitigate issues in real time (Miao, 2022; Celik, 2023).
The ethical risks of AIED from the education dimension
The findings suggest that ethical risks within the education dimension predominantly emerge across four perspectives: learning, teaching, teacher-student relationship, and academic research. The learning perspective encompasses concerns related to student homogenized development. From a teaching perspective, ethical risks are observed in homogeneous teaching, the teaching profession crisis, and deviation from educational goals. The teacher-student relationship perspective highlights ethical risks associated with the alienation of teacher-student relationships and emotional disruption. From an academic research perspective, the ethical risk is academic misconduct.
Student homogenized development
Within the 75 papers analyzed, 32 instances pertained to student homogenized development. Despite the putative benefits of so-called personalization through algorithms, the unintended consequence can be homogenization rather than enabling students to develop their potential or to self-actualize (Holmes et al., 2023). Student homogenized development means that although AI systems can filter resources accurately and deliver personalized recommendations, they also, to a certain extent, lead to a high degree of homogenization in the teaching resources students acquire; highly similar resources inadvertently block the transmission of heterogeneous thoughts and ideas (Zhao et al., 2020). Ultimately, an irrational reliance on AI inhibits the individualization of student growth (Bu, 2022). Student homogenized development yields adverse effects by encroaching upon students’ freedom, limiting their access to diverse learning opportunities, and impeding their ability to engage fully with pertinent educational content (Feng et al., 2020). Furthermore, Vetter et al. (2024) raised concerns that using AI text generators can limit students’ critical thinking. This phenomenon is attributed to the following reasons. (a) Identical learning content: when diverse individuals encounter similar learning materials and scenarios, their developmental trajectories tend to converge, leading to homogenization (Chen and Zhang, 2023). (b) Information cocoon: an information cocoon forms when, in the process of networked information dissemination, users attend only to the areas they are interested in or choose. In the long run, users become locked into a closed information space; students are exposed only to content recommended by personalized algorithms, gradually lose their learning autonomy, and have their development restricted (Feng et al., 2020).
Teaching profession crisis
Within the 75 papers reviewed, 12 instances addressed the teaching profession crisis. The teaching profession crisis refers to the challenges and dilemmas that educators face due to the rapid advancement of AI and its widespread incorporation into teaching practices. The term has two primary implications. First, teachers may experience a role crisis at the level of professional self-identity (Zhao et al., 2022). Second, the misuse of intelligent teaching devices may replace teachers’ functions and threaten teachers’ jobs (Miao, 2022). The teaching profession crisis hints at the looming possibility of an upheaval wherein teaching roles face displacement or substitution and may even become obsolete at a certain point (Xie, 2023; Shamsuddinova et al., 2024). Adams et al. (2023) indicated that using AI in schools can lead to increased teacher workloads, the need for ongoing professional development, additional preparation time, possible diminishment of meaningful relationships with students, and technological unemployment due to AI. The main reason is the over-prescription of automated solutions (Chounta et al., 2022).
Homogeneous teaching
Within the examined literature comprising 75 papers, 10 instances addressed homogeneous teaching. This term encapsulates the potential limitation on teachers’ autonomy when employing AI for teaching purposes (Miao, 2022). The concern is the possible compromise of the teachers’ teaching artistry, subjectivity, and creativity (Zhao et al., 2022). The reasons contributing to homogeneous teaching are delineated as follows. (a) AI over-monitoring: the over-monitoring of AI-empowered classroom feedback ignores the autonomy and self-control of teachers and students, inhibits teachers’ creativity, and leads to the ceding of teachers’ educational autonomy (Luo et al., 2022; Bhimdiwala et al., 2022). (b) Algorithmic control: there exists a substantial probability of algorithms assuming a controlling role within educational settings. The absence of regulations governing algorithms can potentially erode teachers’ educational freedom (Luo et al., 2023).
Deviation from educational goals
Among the pool of 75 papers scrutinized, 5 instances addressed the concept of deviation from educational goals. This phenomenon implies a potential alteration in the fundamental essence of education and teaching (Gao and Yang, 2022), resulting in a departure from the principles of ‘human-centered education’ (Bu, 2022). For instance, using AI to monitor student behavior or performance can engender a surveillance culture that curtails students’ capacity to take risks and make mistakes, which are essential for their growth and development (Mouta et al., 2023). The primary causative factor contributing to this deviation appears to be the ardent pursuit and abuse of AI technology (Gao and Yang, 2022; Guo et al., 2024).
Alienation of teacher-student relationship
Within the analyzed corpus of 75 papers, 28 instances delineated the concept of alienation in the teacher-student relationship. Alienation of the teacher-student relationship means that, in the conventional ecology of education, teachers and students relate to each other directly as educators and learners, while technology serves as a supplemental tool; as AI technology develops, “human-like AI” may destroy this relationship (Bu, 2022). Wogu et al. (2018) supported this argument by referring to Marx’s theory of alienation: as workers interact with assembly line machines, they are likely to become alienated from their human nature. The perils associated with the alienation of the teacher-student relationship are multifaceted. (a) The teacher’s position of intellectual authority is constantly being dissolved, and teaching is alienated into the unidirectional transmission of knowledge (Feng, 2023). (b) Nonverbal intimacy between teachers and students is diminished by AI, leading to a weakened sense of social presence and reduced interpersonal interaction (Lai et al., 2023; Zhang and Zhang, 2024). (c) Concerns raised by Liu et al. (2019) suggest that this alienation may provoke and intensify the decline of humanistic values, teaching environments, and perceptions. The primary driver behind the alienation of the teacher-student relationship is the fortification of the human-machine relationship. As the relationship between humans and machines intensifies, interpersonal connections among humans may exhibit a localized trend of fading, eroding the geographical, relational, and emotional bonds between teachers and students (Zhao et al., 2020; Chen and Zhang, 2023). Over-automation can hinder teachers’ flexibility and interrupt the natural flow of teacher-student interaction, as AI tools such as automatic writing evaluation may interfere with the teacher-student relationship and reduce teachers to a functional role (Bhimdiwala et al., 2022; Adams et al., 2023).
Emotional disruption
Within the compilation of 75 papers, 16 instances addressed the concept of emotional disruption. Emotional disruption refers to changes in students’ emotional and psychological states—manifesting as negative emotions, anxiety, stress, or frustration—that occur as a result of using AI tools. This disruption is anticipated to adversely impact students’ emotional and cognitive development (Luo et al., 2023), potentially diluting the spiritual interactions and individual collisions integral to educational activities (Feng, 2023). Additionally, it may harm students’ mental health and social competencies (Bu, 2022). The reasons contributing to emotional disruption are threefold. (a) Technological isolation: teachers and students sitting at Internet terminals can easily fall into ‘pseudo-participation’ (Feng et al., 2020). (b) Alienation of the teacher-student relationship: estrangement within this relationship continually suppresses and sidelines regular emotional exchanges. This suppression can result in emotional imbalances and elevate the risks associated with excessive emotional labor and masking (Luo et al., 2022). When the teacher-student relationship becomes estranged, teachers may grow weary of over-committing emotionally, and students may be unwilling to show their emotions, which in turn affects the quality of teaching and authentic emotional connection. (c) Overuse of AI: AI may fail to provide emotional support to students; thus, overuse of tutor robots can compromise social-emotional development (Adams et al., 2023; Šedlbauer et al., 2024).
Academic misconduct
Results indicate 12 occurrences of academic misconduct-related statements across the 75 papers. Academic misconduct refers to students plagiarizing AI-generated content in assignments and exams, and to researchers inadvertently committing misconduct such as plagiarism when using AI for writing assistance, which ultimately results in academic distrust and imbalance (Feng, 2023; Wang et al., 2023b). The reasons for academic misconduct may be as follows. (a) Skill deficiencies: weaker students are more likely to use AI to help them with essay writing and plagiarism (Sweeney, 2023). (b) Academic pressure: many students may seek assistance from external services to meet deadlines (Sweeney, 2023).
The ethical risks of AIED from the societal dimension
Results indicate that ethical risks from the society dimension primarily manifest across two perspectives—development and governance. The development perspective encompasses the issue of exacerbating the digital divide, while the governance perspective includes concerns related to the absence of accountability and conflict of interest.
Exacerbating the digital divide
The analysis revealed 15 instances discussing the exacerbation of the digital divide within the pool of 75 papers. The digital divide means that, in the process of digitalization, regions, industries, and individuals differ in their degrees of digital device ownership and digital technology application, creating information gaps among them and further aggravating the polarization between rich and poor. It exacerbates existing educational inequities (Zhang and Zhang, 2024). On the one hand, lack of access to the Internet creates a disparity between students with and without access to new forms of information technology, leading to unequal learning outcomes and opportunities (Lutz, 2019; Park et al., 2022). On the other hand, the ability to utilize skills and knowledge acquired from AI interactions can confer competitive advantages, exacerbating existing educational, professional, and social inequalities (Dakakni and Safa, 2023). The exacerbation of the digital divide can be attributed to two principal factors. (a) Equipment conditions: external factors such as media usage, limited access to smart devices, and a constrained market for specialized educational applications were cited by Gao and Yang (2022) as contributors to the widening of the digital divide. (b) Information monopoly: Luo et al. (2023) argued that major Internet education platforms, by leveraging AI to collect vast amounts of data from students and exercising technological monopolies, are not only creating data barriers and ‘information cocoons’ but also exacerbating the digital divide. This situation raises concerns about AI limiting students’ growth by reducing their control over personal data and reinforcing educational inequalities.
Absence of accountability
Within the reviewed literature comprising 75 papers, 13 instances deliberated on the absence of accountability. Absence of accountability refers to the high ethical risk of AIED in terms of technical norms and the relative lack of legal and professional accountability mechanisms (Zhao et al., 2020), which is highly detrimental to the trustworthiness of AI (Shen and Wang, 2019). The absence of accountability can be attributed to three primary reasons. (a) Poorly formulated policies: Du et al. (2019) underscored the deficiency in well-structured policies delineating responsibility, the lack of clarity in core responsibilities, and the deficiency in supervisory obligations. (b) Lack of awareness: Chaudhry and Kazim (2022) argued that the inadvertent exploitation of personal data due to a lack of awareness may occur without any subsequent accountability. (c) Autonomy of AI: due to the autonomy of AI, there may be instances where it is unclear who is responsible (Jang et al., 2022).
Conflict of interest
Within the reviewed literature encompassing 75 papers, 5 instances delved into conflict of interest. This conflict arises when commercial interests are set against the public interest in education (Zhao et al., 2020). Nemorin et al. (2023) underscored the ethical concerns surrounding the commercial mining of students’ emotional lives, especially when the extraction of value does not align with students’ best interests. When the primary motivation for adopting AI systems is perceived as profit-driven, users will assume that these businesses are indifferent to their interests (Qin et al., 2020). Additionally, Nam and Bai (2023) posited that the introduction of ChatGPT has the potential to precipitate numerous conflicts of interest and crises within STEM research and higher education development, particularly in areas such as research ethics, authorship, intellectual property rights, and ownership.
Discussion and implications
The potential ethical risks of AIED
The primary aim of this review was to identify and delineate the ethical risks inherent in AIED as evidenced in the pertinent literature. The findings illuminate a diverse spectrum of risks within AIED across three primary dimensions: technology, education, and society. The ethical risks encompass a range of concerns, including privacy invasion, data leakage, algorithmic bias, algorithmic error, black box algorithm, student homogenized development, homogeneous teaching, teaching profession crisis, deviation from educational goals, alienation of teacher-student relationship, emotional disruption, academic misconduct, exacerbating the digital divide, absence of accountability, and conflict of interest. Our analysis also uncovered intricate interplays between these dimensions, suggesting that risks in one area can influence or exacerbate issues in others, as illustrated in Fig. 4.
In the technology dimension, ethical risks primarily emerge from the complexities and biases in AI algorithms and data handling. The intricate nature of AI algorithms often results in a “black box” phenomenon, obscuring the understanding of their workings and leading to potential errors. Biased datasets contribute significantly to machine learning biases, as identified by Badran et al. (2023), while Saltz and Dewar (2019) highlight issues arising from data misinterpretation or misuse. If biases are present in the data used to develop AI systems, the outputs of these systems are likely to exhibit discriminatory tendencies against specific groups defined by, for example, gender, race, or minority status (Jang et al., 2022), and may lead to incorrect or inappropriate output (Dwivedi et al., 2023). Risks related to data security, such as vulnerabilities in storage and transmission, are exemplified by Meena and Sharma’s (2023) findings on wireless data transmission risks. Furthermore, as less of the data collected concerns learning and more concerns personal information, it becomes more likely to be misused by organizations (Pammer-Schindler and Rosé, 2022). Such data abuse, leading to privacy breaches through data leakage, may pose significant threats to individual rights. Collectively, these technological risks compromise algorithmic fairness, security, and privacy, underlining the need for vigilant and ethical AI implementation in education.
Technology risks may transition into education risks, notably through information cocoons and AI technology dependence. Personalized algorithmic recommendations can foster information cocoons, narrowing the scope of learning content and constraining personalized development opportunities. AI technology dependence also emerges as a critical concern, leading to over-reliance on AI tools for learning and research. First, this dependence can lead students to rely excessively on technologies such as photo search and generative AI Q&A, potentially hindering their critical thinking. Second, the growing prevalence of AI may pose future threats to educators’ careers. Teachers will increasingly rely on AI to make decisions and become less critical and morally engaged (Mouta et al., 2023), creating estrangement between teachers and students and impeding emotional exchanges. Lastly, reliance on generative AI for academic research can inadvertently result in academic misconduct, affecting academic integrity. The culmination of these risks contributes to student homogenized development, homogeneous teaching, the teaching profession crisis, deviation from educational goals, and alienation of the teacher-student relationship, which can in turn bring about the alienation of values, academic misconduct, and academic fairness issues.
Risks in technology and education can also map onto society, with the digital divide being exacerbated. The exacerbation of the digital divide reflects inequalities in digital utilization (Liao et al., 2022). Despite the significant convenience that AI brings to educators and students, the lack of universal access and discrepancies in AI literacy across regions contribute to the widening digital gap. Moreover, corporate profit motives can disregard user rights, particularly in data collection, where data may be misused or traded without regard for user interests. In addition, the absence of a robust regulatory framework creates management loopholes, making it challenging to establish accountability when issues arise. The consequences of these risks are far-reaching, with conflicts of interest undermining the public interest in education and the absence of accountability posing challenges to social governance. Consequently, the widening digital divide threatens societal fairness and inclusivity, highlighting a critical area for policy intervention and equitable technology deployment.
Solutions to tackle the potential ethical risks of AIED
Based on the ethical risks of AIED identified in 75 papers, this study conducted a comprehensive analysis of the types, causes, and potential hazards of ethical risks from three dimensions—technology, education, and society. We proposed relevant response strategies from the perspectives of stakeholders such as schools, teachers, and students in the above three dimensions, as presented in Fig. 5. These strategies can provide valuable practical implications for these stakeholders, guiding them in applying AI more ethically and effectively while mitigating potential risks in their respective roles and contexts.
Strategies in the technology dimension
Addressing the ethical risks of AIED in the technology dimension necessitates a comprehensive approach that focuses on two primary components: data life cycle control and algorithmic ethical principles guidance.
The data life cycle encompasses data collection, storage, and processing (Michener and Jones, 2012), and controlling the data life cycle is essential for preventing data leakage and privacy invasion. Governments play a crucial role in establishing and enforcing data protection legislation. The UK Department for Education (2023) required that personal and special category data be protected under data protection legislation and prohibited intellectual property from being used to train generative AI models without proper consent or a copyright exemption. Schools and corporations handle the widest range and volume of data, so they should ensure informed consent for data collection from teachers and students and comply with data storage standards. Data storage should follow three criteria, depending on the sensitivity of the data: “unconditional sharing”, “conditional sharing”, and “no sharing”. For example, for highly sensitive personal information, such as students’ ID numbers, home addresses, and health records, “no sharing” should be enforced and strict confidentiality maintained. Specifically, schools and corporations can improve their data storage measures in several areas, including organizational, personnel, physical, and technical controls, following ISO (2024). In addition, they need to conduct regular, detailed audits of all data processing activities to eliminate data abuse (Deng and Li, 2020). Developers should ensure the security of data transmission through the comprehensive implementation of data desensitization, which reduces the identifiability of sensitive data to protect personal privacy. Teachers should secure informed consent from students and parents when using AI for teaching and keep personal student data collected by AI strictly confidential if they use the data for analysis. Students and parents should develop awareness of privacy protection, such as reducing the unnecessary provision of personal information, to safeguard their rights.
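To make the tiered-sharing and desensitization ideas concrete, the following Python sketch shows one possible implementation. The field names, sensitivity tiers, and salted-hash pseudonymization are our own hypothetical choices, not prescriptions drawn from ISO (2024) or any cited guideline.

```python
# Illustrative sketch: tiered sharing rules plus simple desensitization
# of a student record before storage or transmission. All field names
# and tiers are hypothetical examples.
import hashlib

SHARING_POLICY = {
    "no sharing": {"id_number", "home_address", "health_record"},
    "conditional sharing": {"grades", "attendance"},
    "unconditional sharing": {"anonymized_usage_stats"},
}

def pseudonymize(value: str, salt: str) -> str:
    """Replace a direct identifier with a truncated salted hash (one-way)."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def desensitize(record: dict, salt: str) -> dict:
    """Drop 'no sharing' fields and pseudonymize the student identifier."""
    blocked = SHARING_POLICY["no sharing"]
    safe = {k: v for k, v in record.items() if k not in blocked}
    if "student_id" in safe:
        safe["student_id"] = pseudonymize(safe["student_id"], salt)
    return safe

record = {"student_id": "S1024", "id_number": "XXXXXX", "grades": [88, 92]}
print(desensitize(record, salt="per-deployment-secret"))
# id_number is dropped; student_id is replaced by an opaque hash
```

A real deployment would pair such field-level controls with encryption in transit and the organizational, personnel, physical, and technical controls noted above.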
In facing risks such as black box algorithms, algorithmic bias, and algorithmic error, the three fundamental algorithmic principles of transparency and explainability, fairness, and security should be adopted to ensure robust and sustainable algorithms. Governments should focus on developing and improving legislation specifically tailored to AI algorithms in educational contexts. Businesses should organize checks of algorithms by interdisciplinary experts to promptly correct any bias in the algorithmic logic of AI (Feng, 2023). Developers are urged to ensure openness in algorithmic procedures and operations, offering comprehensible insights to educators and learners through visual and straightforward indicators and supporting the autonomy and decision-making of teachers and students (The Institute for Ethical AI in Education, 2020; Luo et al., 2022). Developers should also improve the accuracy of their algorithms, for example, by ensuring the reliability and comprehensiveness of AI training data to avoid algorithmic errors and biases and by modifying how models make predictions to improve fairness in AI systems (Sahlgren, 2023). Teachers need to review the fairness of AI output results to avoid potential biases, while students should take the initiative to report any unfair treatment by AI.
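As one concrete form a fairness review might take, the sketch below computes a demographic parity gap, that is, the difference in positive-decision rates across groups, over invented admission decisions. The data, group labels, and choice of metric are illustrative assumptions; demographic parity is only one of several fairness criteria a reviewer could apply.

```python
# Minimal sketch of a fairness check on AI decisions (hypothetical data).
def selection_rate(decisions):
    """Share of positive decisions (1 = admitted, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in positive-decision rates across groups; 0 = parity."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

admissions = {
    "group_a": [1, 1, 0, 1, 0, 1],
    "group_b": [0, 1, 0, 0, 0, 1],
}
gap = demographic_parity_gap(admissions)
print(f"demographic parity gap: {gap:.2f}")  # large gaps warrant human review
```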
Strategies in the education dimension
The educational dimension encompasses strategies for innovating AI-enhanced teaching and learning methods, reshaping human-AI relationships in education, and maintaining academic standards.
To deal with student homogenized development, the teaching profession crisis, homogeneous teaching, and deviation from educational goals, innovation in AI-enhanced teaching and learning methods should be implemented. Educational departments should establish comprehensive guidelines for AIED to offer guidance for schools, teachers, and students on the responsible use of AI (UNESCO, 2021). Schools must prioritize AI literacy training, particularly in developing ethical literacy to optimize the use of AI (OECD, 2023). AI ethical literacy should be fostered in a multidisciplinary or interdisciplinary manner, incorporating diverse fields such as philosophy, accountability, inequality, and environmental sustainability to foster a comprehensive understanding of AI ethics (Javed et al., 2022). The OECD (2021) advised leveraging AI technologies to support personalized education and enhance teaching outcomes and student learning experiences. Thus, developers need to build AI that can support the personalized development of students and teachers by empowering them to make informed choices aligned with their individual needs and personality traits (Deng and Li, 2020). Teachers should ensure that the use of AI is aligned with educational goals by following the principle of proportionality proposed by Hermerén (2012), which concerns whether AI is pedagogically relevant, is the optimal teaching tool, and is used without overdoing it. Teachers are also responsible for using AI tools to tailor instruction to individual student needs and should encourage students to think creatively and analyze critically so that they do not stop thinking through issues themselves. In addition, teachers should adhere to the nature of the teaching profession when integrating AI into subject teaching and guide the compliant application of intelligent technology by constructing for themselves the role of technology leader (Zhao et al., 2022). Maintaining this critical human element in education can avoid exacerbating the teaching profession crisis, ensuring that AI complements teachers rather than replaces them. Students should provide feedback on the effectiveness of AI-personalized learning experiences while regularly discussing their learning journey with teachers. Parents can set “AI-free” homework time during which children solve problems without technological assistance.
To address the alienation of the teacher-student relationship and emotional disruption, it is imperative to reshape teacher-student and human-AI relationships. Developers should strike a balance between AI and the preservation of teacher-student autonomy, recognizing teachers’ pivotal role in the educational process and incorporating human-centric factors into systems (Lee et al., 2023). Developers should also focus on the comprehensive integration of emotional data, leveraging multimodal emotion recognition technology to facilitate teacher-student emotional interactions (Zhao et al., 2021). Teachers should draw on the special advantages of human beings, such as humanistic care, emotional care, and earnest communication, to promote students’ emotional development (Liu et al., 2019). They also need to reasonably allocate roles between humans and AI and clarify the division of labor between each role so as to maintain a human-AI collaboration mechanism that is AI-assisted and human-centered. For example, teachers can deploy AI collaborative learning tools that encourage peer interaction, group projects, and collective problem-solving, promoting peer-to-peer interaction and enriching students’ social and emotional development while enhancing their communication and teamwork skills. Students should emphasize face-to-face interaction and participation in social activities and avoid becoming addicted to interactions with AI, which can harm their physical and mental health.
To maintain academic integrity and prevent misconduct, we recommend academic standardization. Schools should define the scope of reasonable AI use in academics, clarifying the scenarios in which students are allowed to use AI, such as assisting in literature searches and grammar checking, versus prohibited behaviors, such as ghostwriting papers and generating code, and include them in academic integrity regulations. Teachers bear the responsibility of clearly delineating permissible AI usage within assignments, and they can employ AI detection software while recognizing both its utility and limitations in identifying potential academic misconduct. Teachers can also reform student assessment by increasing process assessment and using non-textual forms to avoid answers that can be generated by AI. Students are expected to adhere to guidelines, utilizing AI in a manner that is both reasonable and compliant with academic standards while conscientiously avoiding plagiarism.
Strategies in the society dimension
In the society dimension, the solutions include closing the digital use, design, and access divide, establishing accountability mechanisms, and maintaining the public interest in education.
To mitigate the exacerbation of the digital divide, governments should conduct regular needs assessments and broaden access to resources and funds in remote areas to enhance AI accessibility (Global Smart Education Network (GSENet), 2024). Businesses can provide discounted or free services to underserved schools (Khowaja et al., 2024). Schools should provide AI tool training and high-quality resources for teachers and students to build digital skills and understanding of digital technology (The Danish government, 2022). Schools should also evaluate the potential effectiveness of AI tools before purchase, including their data practices, to prevent information monopolies (The U.S. Office of Educational Technology, 2024).
To improve accountability mechanisms, governments should release laws, regulations, and responsibility lists built around ethical norms (Hu et al., 2022) and work with other stakeholders to create AI audits, review guidelines, and improve disclosure standards (The U.S. National Telecommunications and Information Administration (NTIA), 2024). Schools should set up internal AI ethics committees to monitor and deal with the ethical issues of AIED. Each committee should be composed of teachers, students, and administrators so that the accountability mechanisms operate openly and transparently. Teachers and students are encouraged to participate actively in oversight.
To balance the interests of stakeholders, governments should introduce regulations to ensure trustworthy, public interest-oriented, and human-centric development and use of AI systems (UNESCO, 2022). Governments also need to introduce commercial technology norms for AI to limit the commercial exploitation of educational data and encourage private sector companies to share the data they collect with all stakeholders, as appropriate, for research, innovation, or public benefit (Zhao et al., 2020; UNESCO, 2022). Businesses that provide generative AI products should recognize and protect the intellectual property of the owners of the content used by their models (UNESCO, 2023). Teachers and students should disclose any conflict of interest in research involving the use of AI, especially when commercial interests are involved, to protect intellectual property rights.
Conclusion, limitation, and future study
Employing systematic review and grounded theory coding, this study systematically categorized the ethical risks of AIED into three dimensions: technology, education, and society. In response to these categorized risks, we offered targeted strategies for mitigating them across these dimensions, guiding more reliable and responsible AIED in the future.
There are limitations to this study. First, the study of the ethical risks of AIED is an emerging field, and the restricted sample size might impact the reliability of the results. The dynamic evolution of AIED might introduce new ethical risks beyond this study’s scope and timeline. Second, although we have identified various types of risks and their potential consequences, the lack of empirical evidence poses challenges in assessing the impact of these risks. Third, our proposed strategies, while focusing on key stakeholder groups, may not fully address the nuanced needs and contexts of all relevant parties within AIED.
To address these limitations and advance the field of AIED ethics, we propose the following areas for future research. First, strengthening AI ethics research. Future research should develop comprehensive methodologies for assessing the severity and potential impact of different ethical risks in AIED, which could involve creating standardized risk assessment tools and considering factors such as the likelihood of occurrence and potential harm to stakeholders. Moreover, future research should focus on establishing a comprehensive AI ethics normative index system and promoting the development of AI application guidelines to ensure ethical AIED. Second, pursuing empirical validation. On the one hand, the strategies offer a foundation for addressing the ethical risks, but each proposed strategy warrants further in-depth investigation and empirical validation. The strategies should be applied in various schools, regions, and educational environments to assess their suitability and effectiveness. Researchers should conduct social experiments to thoroughly investigate the needs of AIED stakeholders, analyze practical issues in implementation, and explore specific problem-solving strategies. On the other hand, priority should be given to meta-analyses and longitudinal studies that synthesize existing empirical evidence and provide a more robust understanding of the long-term implications of ethical risks and mitigation strategies. Finally, promoting international collaboration and establishing global standards. Future work could focus on creating a global AIED ethics platform that establishes and maintains a global repository of AIED ethical case studies and best practices to facilitate knowledge sharing and collaborative problem-solving across the international AIED community.
Data availability
All data generated or analyzed during this study were included in this published article.
References
Adams C, Pente P, Lemermeyer G, Rockwell G (2023) Ethical principles for artificial intelligence in K-12 education. Comput Educ Artif Intell 4:100131. https://doi.org/10.1016/j.caeai.2023.100131
Akgun S, Greenhow C (2021) Artificial intelligence in education: addressing ethical challenges in K-12 settings. AI Ethics 2(3):431–440. https://doi.org/10.1007/s43681-021-00096-7
Anagnostou M, Karvounidou O, Katritzidaki C, Kechagia C, Melidou K, Mpeza E, Peristeras V (2022) Characteristics and challenges in the industries towards responsible AI: a systematic literature review. Ethics Inf Technol 24(3):37. https://doi.org/10.1007/s10676-022-09634-1
Badran K, Côté PO, Kolopanis A, Bouchoucha R, Collante A, Costa DE, Khomh F (2023) Can ensembling preprocessing algorithms lead to better machine learning fairness? Computer 56(4):71–79. https://doi.org/10.1109/MC.2022.3220707
Bearman M, Ryan J, Ajjawi R (2023) Discourses of artificial intelligence in higher education: a critical literature review. High Educ 86(2):369–385. https://doi.org/10.1007/s10734-022-00937-2
Berendt B, Littlejohn A, Blakemore M (2020) AI in education: learner choice and fundamental rights. Learn Media Technol 45(3):312–324. https://doi.org/10.1080/17439884.2020.1786399
Bhimdiwala A, Neri RC, Gomez LM (2022) Advancing the design and implementation of artificial intelligence in education through continuous improvement. Int J Artif Intell E 32:756–782. https://doi.org/10.1007/s40593-021-00278-8
Borenstein J, Howard A (2021) Emerging challenges in AI and the need for AI ethics education. AI Ethics 1:61–65. https://doi.org/10.1007/s43681-020-00002-7
Bu Q (2022) Ethical risks in integrating artificial intelligence into education and potential countermeasures. Sci Insights 41(1):561–566. https://doi.org/10.15354/si.22.re067
Celik I (2023) Towards Intelligent-TPACK: an empirical study on teachers’ professional knowledge to ethically integrate artificial intelligence (AI)-based tools into education. Comput Hum Behav 138:107468. https://doi.org/10.1016/j.chb.2022.107468
Chaudhry MA, Kazim E (2022) Artificial Intelligence in Education (AIEd): a high-level academic and industry note 2021. AI Ethics 2(1):157–165. https://doi.org/10.1007/s43681-021-00074-z
Chen Q, Zhang L (2023) Ethical thinking on educational artificial intelligence: phenomenon analysis and vision construction: based on the analysis perspective of “human-machine collaboration”. J Dist Educ 41(3):104–112. https://doi.org/10.15881/j.cnki.cn33-1304/g4.2023.03.010
Chounta IA, Bardone E, Raudsep A, Pedaste M (2022) Exploring teachers’ perceptions of Artificial Intelligence as a tool to support their practice in Estonian K-12 education. Int J Artif Intell E 32(3):725–755. https://doi.org/10.1007/s40593-021-00243-5
Dakakni D, Safa N (2023) Artificial intelligence in the L2 classroom: Implications and challenges on ethics and equity in higher education: a 21st century Pandora’s box. Comput Educ Artif Intell 5:100179. https://doi.org/10.1016/j.caeai.2023.100179
Deng G, Li M (2020) Research on ethical issues and ethical principles of educational artificial intelligence. e-Educ Res 41(6):39–45. https://doi.org/10.13811/j.cnki.eer.2020.06.006
Du J, Huang R, Li Z, Zhou W, Tian Y (2019) Connotation and construction principles of artificial intelligence ethics in the era of intelligent education. e-Educ Res 40(7):21–29. https://doi.org/10.13811/j.cnki.eer.2019.07.003
Dwivedi YK, Kshetri N, Hughes L, Slade EL, Jeyaraj A, Kar AK, Baabdullah AM, Koohang A, Raghavan V, Ahuja M, Albanna H (2023) “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int J Inf Manag 71:102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642
EDUCAUSE (2022) EDUCAUSE Horizon Report | Teaching and Learning Edition. Retrieved February 16, 2024 from https://library.educause.edu/resources/2022/4/2022-educause-horizon-report-teaching-and-learning-edition
EDUCAUSE (2023) EDUCAUSE Horizon Report | Teaching and Learning Edition. Retrieved February 16, 2024 from https://library.educause.edu/resources/2023/5/2023-educause-horizon-report-teaching-and-learning-edition
European Commission (2022) Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for Educators. Retrieved February 16, 2024 from https://op.europa.eu/en/publication-detail/-/publication/d81a0d54-5348-11ed-92ed-01aa75ed71a1#_publicationDetails_PublicationDetailsPortlet_pa
Feng R, Sun J, Sun F (2020) Ethical risk and rational choice of artificial intelligence in the application of education. J Dist Educ 38(3):47–54. https://doi.org/10.15881/j.cnki.cn33-1304/g4.2020.03.005
Feng Y (2023) The application value, potential ethical risks, and governance paths of ChatGPT in the education field. Ideol Theor Educ (4):26–32. https://doi.org/10.16075/j.cnki.cn31-1220/g4.2023.04.013
Gao S, Yang D (2022) Research on the ethical risks of artificial intelligence educational applications and their countermeasures. High Educ Explor (1):45–50
Global Smart Education Network (GSENet) (2024) Global understanding of smart education in the context of digital transformation. Retrieved February 27, 2025 from https://cit.bnu.edu.cn/docs/2024-09/fa33299274b54209b1fc5344b4bfdfdf.pdf
Gouseti A, James F, Fallin L, Burden K (2024) The ethics of using AI in K-12 education: a systematic literature review. Technol Pedagog Educ 1–22. https://doi.org/10.1080/1475939X.2024.2428601
Guan X, Feng X, Islam AY (2023) The dilemma and countermeasures of educational data ethics in the age of intelligence. Humanit Soc Sci Commun 10(1):1–14. https://doi.org/10.1057/s41599-023-01633-x
Guo H, Jiang N, Jiang H, Liu Z, Deng H (2024) The ethical risks of AI-driven educational reform and the ways to de-risk them. China Educ Technol (4):25–31
Hermerén G (2012) The principle of proportionality revisited: interpretations and applications. Med Health Care Philos 15:373–382. https://doi.org/10.1007/s11019-011-9360-x
Holmes W, Iniesto F, Anastopoulou S, Boticario JG (2023) Stakeholder perspectives on the ethics of AI in distance-based higher education. Int Rev Res Open Dis 24(2):96–117. https://doi.org/10.19173/irrodl.v24i2.6089
Holmes W, Porayska-Pomsta K, Holstein K, Sutherland E, Baker T, Shum SB, Santos OC, Rodrigo MT, Cukurova M, Bittencourt II, Koedinger KR (2021) Ethics of AI in education: towards a community-wide framework. Int J Artif Intell E 32:504–526. https://doi.org/10.1007/s40593-021-00239-1
Hu X, Huang J, Lin Z, Huang M (2022) AI enabled classroom teaching evaluation: ethical review and risk resolution. Mod Dist Educ Res 34(2):21–28+36
ISO (2024) Information technology—security techniques—storage security. Retrieved June 19, 2024 from https://iso.org/standard/80194.html
Jafari E (2024) Artificial intelligence and learning environment: Human considerations. J Comput Assist Lear 40(5):2135–2149. https://doi.org/10.1111/jcal.13011
Jang Y, Choi S, Kim H (2022) Development and validation of an instrument to measure undergraduate students’ attitudes toward the ethics of artificial intelligence (AT-EAI) and analysis of its difference by gender and experience of AI education. Educ Inf Technol 27(8):11635–11667. https://doi.org/10.1007/s10639-022-11086-5
Javed RT, Nasir O, Borit M, Vanhée L, Zea E, Gupta S, Vinuesa R, Qadir J (2022) Get out of the BAG! Silos in AI ethics education: unsupervised topic modeling analysis of global AI curricula. J Artif Intell Res 73:933–965. https://doi.org/10.1613/jair.1.13550
Kayalı B, Yavuz M, Balat Ş, Çalışan M (2023) Investigation of student experiences with ChatGPT-supported online learning applications in higher education. Australas J Educ Tec 39(5):20–39. https://doi.org/10.14742/ajet.8915
Khowaja SA, Khuwaja P, Dev K, Wang W, Nkenyereye L (2024) ChatGPT needs SPADE (sustainability, privacy, digital divide, and ethics) evaluation: a review. Cogn Comput 16(5):2528–2550. https://doi.org/10.1007/s12559-024-10285-1
Kitto K, Knight S (2019) Practical ethics for building learning analytics. Brit J Educ Technol 50(6):2855–2870. https://doi.org/10.1111/bjet.12868
Köbis L, Mehner C (2021) Ethical questions raised by AI-supported mentoring in higher education. Front Artif Intell 4:624050. https://doi.org/10.3389/frai.2021.624050
Kordzadeh N, Ghasemaghaei M (2021) Algorithmic bias: review, synthesis, and future research directions. Eur J Inf Syst 31(3):388–409. https://doi.org/10.1080/0960085X.2021.1927212
Lai T, Zeng X, Xu B, Xie C, Liu Y, Wang Z, Lu H, Fu S (2023) The application of artificial intelligence technology in education influences Chinese adolescent’s emotional perception. Curr Psychol 43(6):5309–5317. https://doi.org/10.1007/s12144-023-04727-6
Lee AVY, Luco AC, Tan SC (2023) A human-centric automated essay scoring and feedback system for the development of ethical reasoning. Educ Technol Soc 26(1):147–159. https://doi.org/10.30191/ETS.202301_26(1).0011
Liao SC, Chou TC, Huang CH (2022) Revisiting the development trajectory of the digital divide: a main path analysis approach. Technol Forecast Soc 179:121607. https://doi.org/10.1016/j.techfore.2022.121607
Lim WM, Gunasekara A, Pallant JL, Pallant JI, Pechenkina E (2023) Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. Int J Manag Educ -Oxf 21(2):100790. https://doi.org/10.1016/j.ijme.2023.100790
Liu D, Liu J, Xu D (2019) Risk prediction and reflection on distress: the impact and transformation of artificial intelligence on the development of education: philosophical and ethical reflections. High Educ Explor (7):18–23
Luo J, Wang L, Liu L (2022) AI enabled classroom teaching evaluation: ethical review and risk resolution. Mod Dist Educ Res 34(2):29–36
Luo S, Tan A, Zhong Y (2023) Ethical risks and avoidance in educational application of artificial intelligence. Educ Sci China 6(2):79–88. https://doi.org/10.13527/j.cnki.educ.sci.china.2023.02.004
Lutz C (2019) Digital inequalities in the age of artificial intelligence and big data. Hum Behav Emerg Tech 1(2):141–148. https://doi.org/10.1002/hbe2.140
Masters K (2023) Ethical use of artificial intelligence in health professions education: AMEE Guide No. 158. Med Teach 45(6):574–584. https://doi.org/10.1080/0142159X.2023.2186203
Meena U, Sharma P (2023) An improved blockchain based encryption scheme for secure routing in wireless sensor network using machine learning technique. T Emerg Telecommun T 34(3):e4713. https://doi.org/10.1002/ett.4713
Miao F (2022) Ethics of AI in education: analysis and governance—educational overview of recommendation on ethics of AI. China Educ Technol (6):22–36
Michener WK, Jones MB (2012) Ecoinformatics: supporting ecology as a data-intensive science. Trends Ecol Evol 27(2):85–93. https://doi.org/10.1016/j.tree.2011.11.016
Mishra P, Oster N, Henriksen D (2024) Generative AI, teacher knowledge and educational research: bridging short-and long-term perspectives. Techtrends 68:205–210. https://doi.org/10.1007/s11528-024-00938-1
Mogavi RH, Deng C, Kim JJ, Zhou P, Kwon YD, Metwally AH, Tlili A, Bassanelli S, Bucchiarone A, Gujar S, Nacke LE (2024) ChatGPT in education: a blessing or a curse? A qualitative study exploring early adopters’ utilization and perceptions. Comput Hum Behav Artif Hum 2(1):100027. https://doi.org/10.1016/j.chbah.2023.100027
Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group (2010) Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Int J Surg 8(5):336–341. https://doi.org/10.1016/j.ijsu.2010.02.007
Mouta A, Torrecilla-Sánchez EM, Pinto-Llorente AM (2023) Design of a future scenarios toolkit for an ethical implementation of artificial intelligence in education. Educ Inf Technol 1–26. https://doi.org/10.1007/s10639-023-12229-y
Nam BH, Bai Q (2023) ChatGPT and its ethical implications for STEM research and higher education: a media discourse analysis. Int J STEM Educ 10(1):66. https://doi.org/10.1186/s40594-023-00452-5
Nemorin S, Vlachidis A, Ayerakwa HM, Andriotis P (2023) AI hyped? A horizon scan of discourse on artificial intelligence in education (AIED) and development. Learn Media Technol 48(1):38–51. https://doi.org/10.1080/17439884.2022.2095568
Nguyen A, Ngo HN, Hong Y, Dang B, Nguyen BP (2023) Ethical principles for artificial intelligence in education. Educ Inf Technol 28(4):4221–4241. https://doi.org/10.1007/s10639-022-11316-w
OECD (2021) OECD Digital Education Outlook 2021: pushing the frontiers with artificial intelligence, blockchain and robots. Retrieved June 10, 2024 from https://oecd-ilibrary.org/sites/f54ea644-en/index.html?itemId=/content/component/f54ea644-en
OECD (2023) OECD Digital Education Outlook 2023: towards an effective digital education ecosystem. Retrieved June 10, 2024 from https://doi.org/10.1787/c74f03de-en
Pammer-Schindler V, Rosé C (2022) Data-related ethics issues in technologies for informal professional learning. Int J Artif Intell E 32(3):609–635. https://doi.org/10.1007/s40593-021-00259-x
Park YJ, Lee H, Jones-Jang SM, Oh YW (2022) Digital assistants: inequalities and social context of access, use, and perceptual understanding. Poetics 93:101689. https://doi.org/10.1016/j.poetic.2022.101689
Pierrès O, Christen M, Schmitt-Koopmann F, Darvishy A (2024) Could the use of AI in higher education hinder students with disabilities? A scoping review. IEEE Access 12:27810–27828. https://doi.org/10.1109/ACCESS.2024.3365368
Prahani BK, Rizki IA, Jatmiko B, Suprapto N, Amelia T (2022) Artificial intelligence in education research during the last ten years: a review and bibliometric study. Int J Emerg Technol 17(8):169–188. https://doi.org/10.3991/ijet.v17i08.29833
Qin F, Li K, Yan J (2020) Understanding user trust in artificial intelligence‐based educational systems: evidence from China. Brit J Educ Technol 51(5):1693–1710. https://doi.org/10.1111/bjet.12994
Sahlgren O (2023) The politics and reciprocal (re)configuration of accountability and fairness in data-driven education. Learn Media Technol 48(1):95–108. https://doi.org/10.1080/17439884.2021.1986065
Salas-Pilco SZ, Yang Y (2022) Artificial intelligence applications in Latin American higher education: a systematic review. Int J Educ Technol H 19(1):21. https://doi.org/10.1186/s41239-022-00326-w
Saltz JS, Dewar N (2019) Data science ethical considerations: a systematic literature review and proposed project framework. Ethics Inf Technol 21:197–208. https://doi.org/10.1007/s10676-019-09502-5
Schiff D (2022) Education for AI, not AI for education: the role of education and ethics in national AI policy strategies. Int J Artif Intell E 32(3):527–563. https://doi.org/10.1007/s40593-021-00270-2
Šedlbauer J, Činčera J, Slavík M, Hartlová A (2024) Students’ reflections on their experience with ChatGPT. J Comput Assist Lear 1–9. https://doi.org/10.1111/jcal.12967
Shamsuddinova S, Heryani P, Naval MA (2024) Evolution to revolution: Critical exploration of educators’ perceptions of the impact of Artificial Intelligence (AI) on the teaching and learning process in the GCC region. Int J Educ Res 125:102326. https://doi.org/10.1016/j.ijer.2024.102326
Shen Y, Wang Q (2019) Ethical arguments of AI in education: an analysis of the EU’s ethics guidelines for trustworthy AI from an educational perspective. Peking Univ Educ Rev 17(4):18–34
Slimi Z, Carballido BV (2023) Navigating the ethical challenges of artificial intelligence in higher education: an analysis of seven global AI ethics policies. TEM J 12(2):590–602. https://doi.org/10.18421/TEM122-02
Sweeney S (2023) Who wrote this? Essay mills and assessment–considerations regarding contract cheating and AI in higher education. Int J Manag Educ -Oxf 21(2):100818
The Danish government (2022) National Strategy for Digitalisation. Retrieved February 27, 2025 from https://en.digst.dk/strategy/the-national-strategy-for-digitalisation
The Institute for Ethical AI in Education (2020) Towards a shared vision of ethical AI in education. Retrieved June 10, 2024 from https://www.buckingham.ac.uk/wp-content/uploads/2021/03/Interim-Report-The-Institute-for-Ethical-AI-in-Educations-Interim-Report-Towards-a-Shared-Vision-of-Ethical-AI-in-Education-1.pdf
The U.S. Department of Education (2023) Artificial intelligence and the future of teaching and learning. Retrieved February 16, 2024 from https://tech.ed.gov/ai-future-of-teaching-and-learning
The U.S. National Telecommunications and Information Administration (2024) Artificial intelligence accountability policy report. Retrieved June 21, 2024 from https://www.ntia.gov/sites/default/files/publications/ntia-ai-report-final.pdf
The U.S. Office of Educational Technology (2024) National Education Technology Plan. Retrieved June 22, 2024 from https://tech.ed.gov/netp/
The UK Department for Education (2023) Generative artificial intelligence (AI) in education. Retrieved June 10, 2024 from https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education/generative-artificial-intelligence-ai-in-education
UNESCO (2021) AI and education: Guidance for policymakers. Retrieved February 16, 2024 from https://doi.org/10.54675/PCSP7350
UNESCO (2022) Recommendation on the ethics of artificial intelligence. Retrieved June 23, 2024 from https://unesco.org/en/articles/recommendation-ethics-artificial-intelligence
UNESCO (2023) Guidance for generative AI in education and research. Retrieved February 27, 2025 from https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research
Vetter MA, Lucia B, Jiang J, Othman M (2024) Towards a framework for local interrogation of AI ethics: a case study on text generators, academic integrity, and composing with ChatGPT. Comput Compos 71:102831. https://doi.org/10.1016/j.compcom.2024.102831
Wang X, Li L, Tan SC, Yang L, Lei J (2023a) Preparing for AI-enhanced education: conceptualizing and empirically examining teachers’ AI readiness. Comput Hum Behav 146:107798. https://doi.org/10.1016/j.chb.2023.107798
Wang Y, Wang D, Liu C (2022) From technology for good to human good: core principles for constructing an ethical code of educational artificial intelligence. Open Educ Res 28(5):68–78. https://doi.org/10.13966/j.cnki.kfjyyj.2022.05.008
Wang Y, Wang D, Liang W, Liu C (2023b) Ethical risks and avoidance approaches of ChatGPT in educational application. Open Educ Res 29(2):26–35. https://doi.org/10.13966/j.cnki.kfjyyj.2023.02.004
Wogu IA, Misra S, Olu-Owolabi EF, Assibong PA, Udoh OD, Ogiri SO, Damasevicius R (2018) Artificial intelligence, artificial teachers and the fate of learners in the 21st century education sector: Implications for theory and practice. Int J Pure Appl Mat 119(16):2245–2259
Wolfswinkel JF, Furtmueller E, Wilderom CP (2013) Using grounded theory as a method for rigorously reviewing literature. Eur J Inf Syst 22(1):45–55. https://doi.org/10.1057/ejis.2011.51
Xie J (2023) The integration and innovation of artificial intelligence and education: ethical connotation and realization path. Dist Educ China (2):1–8. https://doi.org/10.13541/j.cnki.chinade.2023.02.001
Yang Z, Wu JG, Xie H (2024) Taming Frankenstein’s monster: ethical considerations relating to generative artificial intelligence in education. Asia Pac J Educ 1–4. https://doi.org/10.1080/02188791.2023.2300137
Zhang J, Zhang Z (2024) AI in teacher education: unlocking new dimensions in teaching support, inclusive learning, and digital literacy. J Comput Assist Lear 1–15. https://doi.org/10.1111/jcal.12988
Zhang L, Liu X, Chang J (2021) Ethical issues of artificial intelligence education and its regulations. e-Educ Res 42(8):5–11. https://doi.org/10.13811/j.cnki.eer.2021.08.001
Zhao L, Jiang B, Li K (2020) The dilemma and governance path of artificial intelligence ethics in education. Contemp Educ Sci (5):3–7
Zhao L, Wu X, Zhao K (2022) Responsibility ethics: an appeal of the times for educational AI risk governance. e-Educ Res 43(6):32–38. https://doi.org/10.13811/j.cnki.eer.2022.06.005
Zhao L, Zhang L, Dai R (2021) Ethics of artificial intelligence in education: fundamental dimensions and risk mitigation. Mod Dist Educ (5):73–80. https://doi.org/10.13927/j.cnki.yuan.20210831.005
Acknowledgements
This research was funded by the 2024 Zhejiang Social Science Key Project: Ethical Risks and Precautionary Measures for New Generation AI in Education (No: 24QNYC14ZD) and the 2023 Zhejiang Social Science Key Project: Ethical Risks and Solutions of AI in Education for Infants and Preschool Children (No: 23SYS10ZD).
Author information
Contributions
HTZ, YS: literature collection; writing throughout the full process; two rounds of literature screening. JFY: research design, planning, and administration; language proofreading and editing; revision of the full article.
Ethics declarations
Competing interests
The authors declare no competing interests.
Ethical approval
Ethical approval was not required as the study did not involve human participants.
Informed consent
Informed consent was not required as the study did not involve human participants.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Zhu, H., Sun, Y. & Yang, J. Towards responsible artificial intelligence in education: a systematic review on identifying and mitigating ethical risks. Humanit Soc Sci Commun 12, 1111 (2025). https://doi.org/10.1057/s41599-025-05252-6