Following a two-day working conference on artificial intelligence (AI) in education at the Council of Europe in Strasbourg on the 8th and 9th of October, I wanted to capture my reflections and open them up for discussion.
The aim of this third working conference was to introduce the Council of Europe’s Compass for AI in Education, a new strategic tool structured around four components: Literacy, Practice, Evaluation, and Regulation and Governance. The meeting brought together around 100 experts from research, policy, practice and private industry, as well as student and parent representatives, to provide feedback on the Compass and accompanying legislation in a series of workshops. We were invited to present findings from our EdTech Evidence Board project as part of a panel that also included representatives from UNICEF and eduCheck Digital, a German initiative.
Education in times of AI – Societal and ethical implications
Rights-based issues
A lot of our discussions, particularly on day one, centred on the impact AI already has, and may continue to have, on society and thus on education. These discussions did not focus on the day-to-day practice of teaching but rather on the societal impact of AI in terms of democracy, human rights and the rule of law. The strong focus on human and children’s rights, in particular the right to a quality education, struck me as particularly pertinent. Experts like Jen Persson, who was present and who spoke at our last Annual Lecture (you can watch a recording here), rightly keep reminding us to consider children’s rights in any decision relating to EdTech and AI.
In addition to their right to a quality education, we also need to consider children’s rights to their future selves and to data protection. The use of data by large language models (LLMs) is a complex issue. LLMs need user data to learn and develop, but in most cases it is unclear where this data actually ends up and who has access to it. The output generated by an LLM can only ever be as good as the data it is based on, so there is an argument for feeding the models rich, diverse data to improve them further. Restricting their access to user data from specific geographical contexts, e.g. Europe, through legislation could therefore have the unintended consequence of amplifying existing biases towards English-language, US-based content, marginalising linguistic and cultural minorities even further. So how can we ensure that diverse user data is included while also protecting users’ rights?
The ethical implications of AI-driven decision-making were also discussed in detail. What happens when an AI system, rather than a lawyer, teacher or doctor, is used to make a high-stakes decision based purely on the (incomplete) data available to it? Some argue that such decisions could be fairer because they do not involve human bias, but AI systems are, of course, inherently biased themselves, and purely rational decision-making can ignore important ethical dimensions. Issues of accountability also come into play: whose fault is it if an AI system makes the wrong decision? Take the example of AI-supported assessment. Would it be ethical to use an AI system to mark all of a student’s work and for a teacher only to act upon the summarised results without ever reading the student’s original work? If not, what percentage of AI-marked assessments would we find acceptable? Or is it a question of stakes? Would AI-marked work be acceptable for coursework but not for high-stakes assessments, given the far-reaching implications the latter have for students’ educational careers and life outcomes? We need to develop clear frameworks, weighing up the potential impact of AI use on both students and teachers, to ensure that nobody is (inadvertently) disadvantaged in the process.
Future jobs
The potential implications of AI for the job market were also considered. Participants noted that AI systems can already complete many tasks that would typically be carried out by interns or people in entry-level positions. What does this mean for the job market? Do we need to amend school- and university-leaving certifications so that they better prepare students for more highly qualified jobs and for those that AI systems cannot complete? Is it realistic to assume that one could move straight into a mid-level management position without having been through entry-level positions? If not, how can we ensure that entry-level positions do not disappear? These questions have significant implications for education, as they potentially influence what and how students need to learn to be prepared for an AI-dominated job market.
Environmental impact
The environmental impact of AI use, particularly its water and energy consumption, was also discussed. Given the substantial strain AI puts on resources, should we use it just because we can? What are the real advantages of using AI over more traditional approaches to searching and summarising, and are they worth the additional burden on an already-strained environment? It is important that these questions are tackled at policy level and discussed in schools, and that we keep them in mind as educators and individuals whenever we intend to use AI.
AI in education
AI literacy
The concept of AI literacy was discussed at length, as it forms one key pillar of the proposed Compass. What do we mean by AI literacy? What do students need to know to become competent and critical users of AI? Do they need to understand the technological details of LLMs and the code they are built with? Or is a broad-brush understanding enough? How important is it for students to understand the data that LLMs use to inform their output? A concrete example presented at the conference illustrates why data literacy needs to be a crucial component of AI literacy.
The example of an AI system used in healthcare was discussed as part of a panel on day two. This system can apparently make very reliable predictions relating to the treatment of Type 2 diabetes. The caveat, however, is that it is trained entirely on data from South Korean and Japanese populations, which likely differ markedly from populations in other parts of the world, notably in lifestyle and nutrition, a strong contributor to Type 2 diabetes. This may not be a problem as long as the doctors using the system are aware of the data it is based on. After all, they may well draw on research articles based on populations different from their own to inform their decision-making, particularly when data from their own contexts are lacking, but they combine this evidence with their own clinical understanding to inform their decisions.
This is why human-centred decision-making is important: AI should only ever be considered a tool that supports, but never replaces, human judgement.
The wider implications for professionals, their agency and their sense of professionalism were also discussed. What is the role of the teacher in an age of AI, and how can it be protected? The role of human interaction in teaching and learning was central to these debates, and it became clear that we need to (re)focus on social constructivist theories of learning, complementing evidence from the cognitive and neurosciences, to ensure that human interaction remains at the heart of education.
Crowded curricula
When discussing how students can best be prepared for a future with AI, curriculum reform was mentioned repeatedly. Participants explored whether and how teaching needed to change to better prepare students to engage critically with AI and to develop AI literacy skills. An important caveat raised by practitioners in the room was curriculum over-crowding: if we decide to add AI literacy as a separate or overarching topic, which subject, skill or piece of knowledge are we willing to let go of to make room for it? In a world of crowded curricula, and in light of findings from the cognitive sciences highlighting the need for spaced practice and sufficient room to revisit content, it is essential that we reflect as much on what needs to go as on what needs to be added. This seems particularly relevant in light of the imminent publication of results from the Curriculum and Assessment Review in England.
Teacher development
The important role of teachers in preparing students for a world in and with AI was explored in detail, but it was stressed that teachers need the necessary knowledge and skills to support students appropriately. Adequate resources therefore need to be invested in teacher development. The online course we have developed with Chiltern Learning Trust for the Department for Education, on effective and ethical uses of AI in education, as well as our special issue of Impact, are important steps towards upskilling teachers in this area.
Strength in diversity
It quickly became clear that we all operate in very diverse environments: while we all agree on the importance of AI and the potential threat it poses to education, approaches to legislation and classroom use could not be more different. Some countries have restricted the use of AI to upper secondary education; others have few, if any, restrictions in place. No matter where we stand as individuals or countries, it is essential that we reflect critically on the impact of AI on education and society. I hope this piece can contribute in a small way to informing these discussions.