Diana Adela Martin (Ethics SIG co-chair)

It is generally acknowledged that the challenges we face…
The potential uses of advanced digital tools in education range from algorithmic grading and individualized feedback to attention measurement with eye-tracking and the use of virtual or augmented reality in classrooms. Moreover, questions arise about whether and how to use software that generates text, such as ChatGPT, or that can solve homework tasks, such as GitHub Copilot. The epistemic challenge facing contemporary education is to find creative ways of harnessing these new technologies for student learning.
In parallel with these exciting practical challenges, researchers, software developers and content creators need to take the ethical risks of AI in education into account. From a legislative perspective, certain applications of AI in education, such as algorithmic grading, are considered high-risk, because grades and admission decisions can have a large impact on people's lives. Bias is one of the main concerns: predictive algorithms that learn from biased data mirror or even reinforce that bias (Akgun and Greenhow, 2022). For example, a flawed algorithm could rate female applicants as less suitable simply because fewer women are currently enrolled at certain universities.
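To make this mechanism concrete, here is a minimal, fully synthetic sketch in Python using scikit-learn (the data, the features and the 8-point penalty are all invented for illustration): a classifier trained on historically skewed admission labels reproduces that skew for two otherwise identical applicants.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic, hypothetical data: a test score and a binary gender flag.
score = rng.normal(70, 10, n)
is_female = rng.integers(0, 2, n)

# Historical admissions were skewed: for the same score, female
# applicants were admitted less often. The labels encode that bias.
admitted = (score + rng.normal(0, 5, n) - 8 * is_female) > 70

X = np.column_stack([score, is_female])
model = LogisticRegression(max_iter=1000).fit(X, admitted)

# Two applicants with identical scores, differing only in the gender flag.
same_score = [[75, 0], [75, 1]]
print(model.predict_proba(same_score)[:, 1])
# The model assigns the second applicant a lower admission probability,
# mirroring the bias in the training labels rather than any real aptitude gap.
```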
A related ethical risk of AI in education concerns privacy. AI can provide children with personalized and stimulating materials: gamification can motivate children to exercise and compete for badges or records, while virtual reality can recreate historic events and geographic locations. However, records of individual learning success can be (mis)used to predict success on the job, and courses failed during adolescence could be scrutinized by future employers. It is therefore essential to protect the data of every user, especially minors.
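One common mitigation, sketched below with hypothetical field names, is to pseudonymize learner records before they are analyzed or shared: the student identifier is replaced by a salted hash, and fields the analysis does not need are dropped (data minimization).

```python
import hashlib
import secrets

# A per-deployment salt; without it, identifiers could be re-identified
# by hashing guesses (a dictionary attack).
SALT = secrets.token_hex(16)

def pseudonymize(record: dict) -> dict:
    """Replace the student identifier with a salted hash and keep only
    the fields needed for the learning analysis (hypothetical fields)."""
    token = hashlib.sha256((SALT + record["student_id"]).encode()).hexdigest()
    return {
        "student_token": token[:16],
        "exercise": record["exercise"],
        "score": record["score"],
        # Deliberately dropped: name and other identifying fields.
    }

# Hypothetical example record.
raw = {"student_id": "s-1042", "name": "Jane Doe",
       "exercise": "fractions-3", "score": 0.8}
print(pseudonymize(raw))
```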
When training data for AI is generated, it is also morally challenging to define who has the right to consent if the user is a minor. The general terms and conditions of many applications targeting children are a cautionary tale: TikTok, for example, was recently fined more than ten million euros for misusing children's data.
Together with Aleksandra Vučković (University of Belgrade), I explored the consequences of using AI in education in the context of global justice. AI can help disadvantaged populations: automated translation tools can improve access to information, and AI-based solutions can reach students in remote locations. However, we must ensure that everybody can access these technologies and that they are not monopolized by a few companies. Furthermore, we need to ensure that our moral and epistemic values are passed on as well; principles such as inclusion and justice have to be considered when AI applications are developed and used. Therefore, when creating responsible AI-based education for the future, it is important to be open to revising and updating existing standards. This can be supported by a human-in-the-loop approach and by constantly keeping in mind the needs of the users – students and teachers. After all, this is a process in which not only students, but also teachers and education experts learn together.
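As a minimal illustration of the human-in-the-loop idea (the threshold, names and messages below are hypothetical), automated feedback could be released directly only when the model is confident, with every other case routed to a teacher for review.

```python
from dataclasses import dataclass

@dataclass
class Grade:
    score: float        # the model's suggested grade
    confidence: float   # the model's self-reported confidence, in [0, 1]

# Hypothetical threshold; in practice it would be set together with
# teachers and revisited as standards are updated.
REVIEW_THRESHOLD = 0.9

def route(submission_id: str, grade: Grade) -> str:
    """Human-in-the-loop routing: release only high-confidence grades
    automatically; send everything else to a teacher's review queue."""
    if grade.confidence >= REVIEW_THRESHOLD:
        return f"{submission_id}: auto-released grade {grade.score:.1f}"
    return f"{submission_id}: queued for teacher review"

print(route("essay-17", Grade(score=8.5, confidence=0.95)))
print(route("essay-18", Grade(score=4.0, confidence=0.62)))
```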
To address these challenges, together with my colleagues from the Hector Research Institute of Education Sciences and Psychology at the University of Tübingen and the Leibniz Knowledge Media Research Center, I am working on a systematic review of research on AI in education. More information can be found on my website.