There’s lots of excitement about AI at the moment – as well as plenty of fear. But how much of this is hype? And what can we practically do to make the best use of AI and other digital technologies in key areas such as education, whilst ensuring we don’t lose sight of core values such as fairness, equality and privacy?
This is a topic that I’ve been thinking about a lot recently, as I look forward to participating in an open online panel discussion at the Oxford Forum on how we retain integrity and quality in education with AI and digital technologies developing at a rapid pace.
There is no denying that AI will play an increasing role in schools, but many of the more extreme claims being made simply go too far — we don’t yet know how this is going to pan out.
But we can and should start by drawing lessons from the existing use of technology in the classroom. Just as with previous innovations, AI and other digital technologies cannot be ignored, but should be moulded to serve the interests of students, schools and instructors. Teachers need to be equipped and empowered to make the best use of AI, safely and fairly, to benefit pupils’ education and prepare them for our increasingly AI-enabled world.
Concern about new technology in education isn’t unusual. Previous generations worried about learners using pocket calculators or having access to the free, open Internet via Google.
We now know that, with the right tools and skills in place, these technologies can bring real benefits, with wide-ranging implications for how and what we learn. Today we take calculators for granted and accept that students need not perform complex mental arithmetic; instead they can focus on understanding how to apply calculations to solve problems. Access to Google and other search engines means we no longer need to memorise the dates of reigning kings and queens. But we do need to know how to find accurate sources of information online and develop the critical skills to assess them.
That said, AI comes with new risks. Given its impact on children’s life chances, education should be considered a high-risk environment and needs protections put in place to ensure fundamental rights and key interests are respected. There are major ethical and logistical challenges in rolling out AI technologies in schools including concerns about potential bias, privacy infringements, accuracy and efficacy. In this light, as my colleagues from the Oxford Internet Institute wrote earlier this month, we need a proactive regulatory approach now to make sure we can safely take advantage of these emerging technologies across society, and do so in a consistent and fair way.
As my colleague Rebecca Eynon and co-authors have suggested, we need to classify the different uses of AI and assess the risks and opportunities involved. For example, there are risks that large language models (such as ChatGPT) could perpetuate or even amplify existing inequalities. At the OII, our students and faculty are exploring how these LLMs could be made more inclusive, for example by allowing personalisation to reflect different users’ values or needs.
Embracing a new reality
Being aware of potential risks does not mean we should shy away from technological innovation. We must accept that children are growing up in a world in which AI will be increasingly prevalent and prepare them accordingly. If we fail to teach our students how to learn and work with AI, we risk widening the digital divide between those who have access to and are able to learn about AI at home, and those who may not have a reliable internet connection at all. Experiences of remote learning during the pandemic should have reminded us how wide such disparities remain, and how harmful these can be.
So, how can teachers and learners be supported to get the most from AI whilst ensuring fairness, quality and integrity? The most important factor is that these new tools are adopted deliberately and used reflectively. The language and imagery used in the marketing of educational technology products can make them sound as if they will solve every educational problem, but the reality is that few such silver bullets exist. With careful use, though, there are some promising areas for exploration.
Given continued frustration with teacher workloads, there is clearly an appetite for AI tools that take on time-consuming tasks, such as marking homework, reviewing performance or checking for plagiarism. Google’s Future of Education report highlighted AI’s potential to save teachers time at a moment when teachers are in short supply, ideally freeing more time for coaching, mentoring and advising students. This only works, of course, if schools resist the temptation to respond by increasing class sizes.
The potential for personalisation of resources is also clearly interesting. Where apps or online programmes are used to complete tests or assignments, learning technologies can provide instant feedback to help learners identify the areas they need to focus on. Here it will be vital that AI-based educational tools are designed together with educational experts if high-quality outcomes are to be ensured.
Finally, students themselves must learn how to use innovative AI tools such as ChatGPT, and how to do so responsibly and critically. Just as calculators changed the way we do maths, large language models may change the way we write and demonstrate knowledge. Schools and universities have a central role to play in setting these new norms and expectations.
AI is one tool in our toolset. We need it to improve education by building on our existing experience, and to do that we need to take time to reflect on and regulate its use, rather than rushing into something we don’t yet fully understand.
Hear more from Professor Nash at the Oxford Forum
Professor Nash will join a panel of expert practitioners to explore the question ‘If AI and digital technology is the key to a sustainable education system, how do we retain integrity and quality?’
Sign up to join us at the Oxford Forum, a free online event on 25 April 2023.
With thanks to the team at OUP for their contributions to this article.