16. Opportunities, guidelines and guardrails for effective and equitable use of AI in education

These opportunities, guidelines and guardrails for effective and equitable use of AI in education (Guidelines) represent positions on the development and use of Artificial Intelligence (AI) and digital education developed by the OECD Secretariat and Education International. The Guidelines aim to help educational jurisdictions and organisations representing teachers and educators alike in navigating fast-moving developments in AI. A first version was presented to education ministers and teacher union leaders at the International Summit of the Teaching Profession 2023. The Guidelines build on the OECD Council Recommendations on Artificial Intelligence (2019) and on Broadband Connectivity (2021). They also build on the ten principles on Effective and Equitable Educational Recovery developed by the OECD Secretariat and Education International in 2021.

One of the legacies of the COVID-19 pandemic is the increased use of, and attention given to, digital technology in education. Most school systems used remote online teaching and learning at some point during the health crisis, and teachers, learners and families have realised the potential of digital technology for teaching and learning, as well as its limitations. The massive shift to digital learning also exposed persistent inequalities in access to technology and connectivity, as well as the crucial role of schools as social, in-person places that contribute not only to learning but also to the wellbeing of students.

The irruption of digital technology continued in 2022-23 with the sudden visibility of generative AI applications (for example those based on large language models such as ChatGPT or SAGE). These advances have made the power of AI visible to the public and raise fundamental questions about the tasks and skills where human and machine activity complement and/or substitute for each other. How does AI enhance human capacity? Does it lead to cognitive off-loading, with AI performing at or even beyond existing human skill levels? Does it lead to human skill attrition when this off-loading occurs and these skills are exercised less? For educators, AI challenges a number of educational activities, such as traditional models of homework assignments and assessment. This general-purpose technology has the potential to lead to another “industrial revolution” and will not leave education models untouched.

Both sets of experiences have pointed to the growing importance of the digital transformation of education (as of our societies more broadly) and cast light on the opportunities and challenges of embedding digital technology in education – and of dealing with general-purpose digital technology such as generative AI, which can be disruptive for teaching staff.

The use of digital technologies in education holds significant promise. When applied to education, technologies such as artificial intelligence, machine learning and robots have the potential to improve the quality and equity of learning, free up teachers’ time to focus on their teaching, and provide students with new routes to learning. These educational objectives may become a reality with the support of technology, provided that teachers and learners are given the right conditions to use such technologies.

First, AI-powered adaptive learning tools can help track learners’ progress and pinpoint where learners need help and where they excel. They may support teachers in providing more personalised teaching and learning in the classroom and allow learners to work with more autonomy and engagement under the supervision of their teachers. Some may help students remain engaged in their learning. Social robots may support teachers and other educators as tutors, peers or instructors. Classroom analytics that provide feedback to teachers in real time about the management of their class, or give them feedback about their teaching after class, may also help them to improve their teaching and their students’ learning. Simulators, virtual and augmented reality may allow learners, especially those in vocational education and training programmes, to develop practice-oriented skills in a safe environment which mimics the workplace.

Second, AI-enabled technologies can support inclusive education and equity. AI-based accessibility tools using techniques such as speech-to-text and auto-captioning can help visually or hearing-impaired learners to participate better in classroom activities. Other learning difficulties, such as dyslexia, dyscalculia and dysgraphia, could also be detected sooner and addressed with a mix of technology and human interventions. A number of countries have made (or are in the process of making) such applications available to schools and higher education institutions to support students with special or specific needs. In countries with advanced information systems, early warning systems powered by AI could help identify students at risk of dropping out, who often come from disadvantaged backgrounds, and support teachers and administrators in designing appropriate interventions, as the sketch below illustrates. Where students have access to the Internet and devices, technology may have the potential to make learning resources and knowledge accessible to broader audiences, including in lower income countries, and to help develop social and collaborative skills among students and teachers.
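
To make the early-warning idea concrete, the following minimal sketch shows one way such a system might score drop-out risk from administrative records. Everything here is an assumption for illustration – the features, the training data, the threshold and the model choice are hypothetical – and, consistent with the cautions later in this document, a flag should prompt human review rather than any automated decision.

```python
# Minimal sketch of an early-warning indicator: a logistic model flags
# students whose predicted drop-out risk exceeds a review threshold.
# All field names, values and the threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [absence_rate, grade_average, lateness_rate]
X_train = np.array([[0.02, 85, 0.01], [0.30, 52, 0.25],
                    [0.05, 78, 0.03], [0.22, 60, 0.18]])
y_train = np.array([0, 1, 0, 1])  # 1 = student left school early

model = LogisticRegression().fit(X_train, y_train)

current_students = {"S-101": [0.04, 80, 0.02], "S-102": [0.28, 55, 0.20]}
for student_id, features in current_students.items():
    risk = model.predict_proba([features])[0][1]  # probability of class 1
    if risk > 0.5:  # threshold chosen for illustration only
        print(f"{student_id}: flag for human review (risk={risk:.2f})")
```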

The effective use of AI tools in education depends on having trained and qualified teachers, who have the confidence and the autonomy to choose both the digital tools and how they are applied in the classroom. Some technology applications are currently designed to support the teaching profession. If implemented effectively, they may allow teachers to personalise their teaching, to receive feedback on it, and to delegate some of their administrative tasks or make them less time-consuming. They may also help to remove burdensome tasks and free up teachers’ time for instructional design and activities.

Besides serving individual learners and educators, technology can help build communities of learners and make learning more collaborative, providing new means to enhance goal orientation, motivation, persistence, and the development of effective learning strategies. Similarly, technology can build communities in which educators share and enrich educational resources and practices and collaborate on professional growth and the institutionalisation of professional practice. It can also help system leaders and governments develop and share best practice around curriculum design, policy and pedagogy, and allow certain types of research to be done at an unprecedented pace.

Generative AI applications (such as those based on large language models) present opportunities, unknowns and risks. They can support teachers in generating draft lesson plans and provide opportunities to develop students’ critical thinking in the classroom. These applications can support a shift in pedagogical models from having students learn answers towards supporting them in asking the right questions, navigating ambiguity and competing claims, and distinguishing fact from opinion. As the technology continues to develop, it may not only become a powerful learning tool for students and a convenient aid for teachers, but also contribute to the enrichment of dedicated technology solutions such as adaptive learning systems or customised AI answers to students’ questions based on their learning journey. All this depends on teachers and learners having the capacity to review and adjust what AI creates.

There are also risks to the use of digital technologies in education. One risk is that inequalities can result from unequal access to technology; from tools that prove more effective for advantaged students; from weaker usage by students and educators intimidated by technology; from disparities in the capacity of educators and learners to make full use of their potential; and from the challenge of assuring the quality of digital resources.

Concerns also include the privacy, security and use of learners’ and teachers’ personal information and data, as well as excessive time spent on technology-based activities, especially for young children. The use of algorithms to make automated decisions on learning interventions (e.g. identifying potential early school leavers), progression or admission could, as with similar human decisions, carry the biases of developers, of society and of past datasets vis-à-vis certain student groups, resulting in different forms of discrimination – perhaps amplified and made systematic compared with their occurrence in traditional, fully human-driven education. Another challenge is that the identification of at-risk students could prompt unethical human behaviour, for example expelling students at risk of dropping out or stigmatising students with special needs rather than supporting them.

The effectiveness of many AI-enabled tools – and whether they contribute to improving learning outcomes or decreasing educators’ workload – is not always well established. AI-enabled tools may be more suitable for some subjects than others, given that not all forms of knowledge or processes of learning are easily transferable to a digital format. This could lead to a prioritisation of forms of learning that are easily digitised, jeopardising the breadth of the curriculum and the quality of education. The increased use of technology could also lead to the atrophy of human skills and agency and an increased dependency on the availability of AI and other technology, including for skills that are essential for success and wellbeing. This is particularly the case for generative AI, which entails further risks, such as the reliability and traceability of information and the risk of cultural bias, and raises new challenges for traditional assessments.

The pandemic has highlighted the importance of teacher-pupil relationships and the social dimension of schooling. Too much time spent on technology can lead to the social isolation of students (and adults), which can have a negative impact on mental health as well as learning outcomes, especially for younger learners. In some cases, AI-enabled tools could add to the workload of teachers rather than serve as an aid, especially when tools are not designed for and in collaboration with the teaching profession.

There are also new risks for teachers in terms of access to technology, wellbeing and professional development opportunities, as well as regarding the use of teacher data. One of these risks lies in the unethical use of data collected about teachers’ performance in the classroom.

Addressing these risks requires a coordinated effort across all education levels and all policy areas: investing in digital infrastructure and equipment for education institutions, learners and teachers; developing sound regulations on issues such as data protection and privacy and cyber security; strengthening educators’ digital competences; integrating digital technologies meaningfully into curricula; and assuring quality.

As with many new technologies, the rise of AI and other technologies highlights the importance of using and developing them in an ethical way, based on human and labour rights. This is particularly important for education, where the development of cognitive, social and emotional skills is vital for a whole-child education.

The guidelines proposed below aim to support governments, teacher unions, teachers and other educators to engage in a constructive dialogue to harness the opportunities offered by AI and other advanced technologies and mitigate their risks for educational goals that are shared by the education community: equity, quality and efficiency.

1. Equitable access to affordable, high-quality connectivity. Educational jurisdictions should create digital learning infrastructures at a system level that are accessible to all learners and educators in and outside of school. This strategic physical infrastructure should allow for a quick and equitable shift to remote learning if necessary.

The pandemic exposed existing inequalities in the quality of access to the Internet and to digital devices, including in the most affluent countries and jurisdictions. Good-quality connectivity and access to the Internet are a pre-requisite for the equitable, widespread use of advanced technology for learning, and governments should ensure that a comprehensive and reliable physical infrastructure is available to all schools and learners, both at home and at school. This will contribute to equity in the availability, quality and affordability of devices and connectivity.

Digital transformation can exacerbate existing inequity if access to the Internet – and thus to learning tools and resources – is unevenly distributed among learners. Some solutions that were explored during the pandemic could be continued, such as providing specific educational websites or learning resources at zero cost when accessed through mobile Internet, or lending equipment to families who need it. This is particularly important in preparing for another crisis, whatever its nature, which could lead to a return to remote education – or for evolving the current schooling model.

At the same time, the uneven distribution of connectivity or equipment should not, in itself, be a reason not to reap the benefits of technology where possible. As was the case during the health crisis, innovative solutions can be designed to make digital technology accessible to a majority of teachers and learners, including developing tools and platforms that can work with intermittent access to the Internet or with unstable or low connectivity. While providing access to effective connectivity for all members of the population is becoming a high-priority responsibility for governments, AI-enabled tools can still be used where this is not yet a reality.

2. Equitable access to and equitable use of digital learning resources. Educational jurisdictions should make available a set of quality digital learning resources to teachers and students, accessible in school and at home. Teachers should be able to use them at their professional discretion within the context of school and jurisdiction policies. Jurisdictions should provide guidance about usage expectations, in consultation with teachers and other education stakeholders, so that all learners and educators have adequate opportunities to develop their digital skills. This soft infrastructure made up of digital learning resources and tools could provide the right conditions for a quick and equitable shift to remote learning if necessary.

Beyond connectivity and devices, governments should ensure that teachers and learners have access to high-quality digital learning resources to support their teaching and learning in and out of class. Making digital learning platforms and resources easily usable on mobile devices may enhance access and use. The pandemic led many countries to expand their platforms of digital learning resources or to extend their licences with education publishers. There should be a strong emphasis on user-friendly access to digital resources and on the provision of a variety of resources that allow teachers to select those that correspond to their teaching preferences (and learners to their learning preferences).

In the case of learners, the provision of adaptive learning systems that can be used in school or at home should be considered, as this provides a means to alleviate the loss of learning opportunities at home. As examples and evidence accumulate that specific digital solutions can support learners with special and specific needs, these should be mainstreamed across all digital learning resource platforms.

In the case of teachers, short videos, simulations or other materials that can easily be integrated into lesson plans and learning scenarios could be made available. Other digital tools that could help them design their lessons or easily generate learning materials and examples should also be considered.

For learning resources that are (still) relatively expensive, for example augmented or virtual reality tools, pooling their use across schools could be an option.

In addition to learning resources, digital tools that support teachers in their administrative tasks could free them up to design their lessons, to teach, and to support students in their academic learning and socio-emotional development.

While equity in access to decent-quality learning resources must be a key objective, variation and inequity can also arise from differing use across classrooms and schools. While respecting teachers’ pedagogical autonomy, jurisdictions should provide clear guidance about the types of digital competences learners should develop, and how. Typically, these should be developed across all subjects rather than solely through a separate “technology” or “computer science” subject. Curricula and other forms of guidance for teachers could be reviewed and designed in partnership with teachers and their representative organisations. Providing training on the use of generative AI may become an important step to avoid new equity gaps based on differing abilities or confidence in using such applications.

3. Teacher agency and professional learning. The critical and pedagogical use of up-to-date digital learning resources should become an integral part of teachers’, school principals’ and other educators’ professional competences, fostered in initial education and through continuous professional learning opportunities and professional collaboration. Recognising the importance of teacher agency, efficacy and leadership is key to allowing teachers to make critical use of digital learning resources and design rich learning scenarios with their students.

The rapid pace of development of AI-enabled technologies raises new challenges for all professionals, and this is also true for teachers and other education practitioners. Jurisdictions should recognise that the effective use of AI in education depends on a trained and qualified workforce that is trusted and supported to apply AI-enabled tools as and when they augment teaching and enhance the relational and social experience of learning.

While most initial teacher education programmes include some introduction to digital tools for learning, the use of, and critical engagement with, digital resources in teaching should be mainstreamed across all subjects in initial teacher education programmes, so that student teachers feel at ease with the use of digital tools in the learning scenarios they offer their future students. Teachers’ AI literacy should be cultivated, so that they understand AI techniques, can critically assess AI outputs and recommendations, and can use AI creatively in their teaching.

While initial education is important, learning on the job is what makes a good teacher a great one, and continuous professional learning for teachers should include the use of technology in teaching and learning. Sustained, relevant, accessible and timely training options should be offered to teachers, but other solutions could also be explored. Teachers who focus on developing advanced digital skills, such as “teacher coaches” or “technology champions” within schools or at the regional level, represent an approach that seems to have been effective in different jurisdictions. At the school level, such teachers can support their peers interested in expanding their use of technology in their teaching, either within specific schools or within a school network or group. It is important to ensure opportunities for professional collaboration and peer learning as well as mentoring schemes. At the regional level, some expert teachers could also curate and disseminate ideas about the effective use of technology in education to their peers. Working conditions need to be fostered whereby teachers are enabled to establish professional learning networks and teacher leadership across schools to evaluate the quality of AI applications and to advise on which applications would be useful for teachers in the future. Ultimately, teachers should have the pedagogical space to make choices around EdTech in the classroom.

Consistent, high-quality professional learning and development is vital for all teachers if they are to use information and communication technology confidently and effectively. Teachers should be able to decide on the form of professional learning they receive. Many teacher unions provide such professional learning opportunities, and those that do not should be supported to do so.

The greatest risks of technology may come from an uncritical use of digital resources. Teachers need the time, professional development and working conditions to be able to design and combine digital resources to use in and out of class. While they do not need to become data scientists, they must be at ease with quantitative information and dashboards – as well as other forms of information generated by AI and other technology. In turn, dashboards and information provided by digital technology should become more teacher- and user-friendly.

Digital technology is itself a powerful tool to support collaboration and peer learning among teachers, thanks to dedicated public (and private) platforms. Jurisdictions should provide such open-source, teacher-curated platforms and enable teachers, within the school day, to share materials and ideas across schools and to comment on existing ones. Teacher contributions should receive some form of recognition. Indeed, learning through professional learning communities is usually most effective for teachers and other professionals.

4. Student and teacher wellbeing. The use and development of AI-enabled technology should put learners’ and teachers’ wellbeing and mental health at the forefront, including by keeping a good balance between digital and non-digital activities. Ethical guidelines on digital communications which recognise that learning is a relational and social experience involving human-to-human interactions should be created in partnership with teachers and their organisations.

While digital technology has the potential to improve teaching and learning, for example by diversifying learning scenarios for students or by making education more aligned with contemporary society, the excessive use of digital technology and the expanded possibilities for the diffusion of unethical content present risks for the wellbeing of learners and teachers. The pandemic has highlighted the fact that education is a relational activity. Indeed, studies show that a large majority of students and teachers prefer in-person teaching and learning and appreciate the social and emotional interactions offered by in-person schooling.

Beyond a certain point, the use of digital devices correlates with lower learning outcomes. While the evidence on this correlation is not conclusive, it is reasonable to limit time spent on digital technology, not least to ensure that future generations can still enjoy activities that human beings have valued for centuries and that will help them value and carry human heritage and culture into the far future. While exposure to technology has become a part of contemporary society that education systems cannot ignore, learning activities that do not involve digital technology should remain an important part of children’s development and students’ formal education. Calibrating the right approach and technology use to the right learners will be key. Teachers should not be expected to be constantly in front of their computer screens analysing data or responding to management or parental requests.

Yet digital technology also has the potential to support students’ and teachers’ wellbeing, for example by identifying at-risk students or teachers who may require emotional or clinical support, in tandem with robust teacher and student wellbeing policies and programmes. Specific tools could be designed to help detect bullying (and cyberbullying). AI may also help address student wellbeing through data analytics connected to digital tools and human services related to socio-emotional learning. This could help give feedback to teachers about how they respond to students’ socio-emotional needs both in and outside the class. More generally, ensuring safe and conducive learning environments requires a pro-active approach to AI literacy for both teachers and students, making the understanding of its evolving strengths and limitations a fundamental part of modern education.

5. Co-creation of AI-enabled digital learning tools. Jurisdictions should encourage the involvement of teachers, students and other end users as co-designers in the research and development process of technology to help ensure the usefulness and uptake of AI-enabled digital tools. An innovation-friendly ecosystem that makes innovation and continuous improvement part of the culture should allow technology developers to experiment and pilot tools with the support of teachers and learners.

Technology can sometimes be “in advance” of what stakeholders find appropriate, and sometimes “irrelevant”. Citizens should have a say in the use of solutions designed to support them.

While some technology companies have emerged that focus on education, most solutions deployed in education in the past have been derivatives of applications developed for other sectors of society. General-purpose tools such as generative AI show that technology can be extremely powerful despite not necessarily being “educational” in purpose.

Education technology companies have technological competences that many teachers typically do not have, which is why a constructive dialogue with them is necessary and desirable. For education technology companies to develop tools that are genuinely useful for teachers, teachers need to be involved in the design process and in the piloting, monitoring and evaluation of these tools. Such co-design with the teaching profession helps ensure that tools are pedagogically sound and culturally relevant. This may require building further capacity within the teaching profession. Since co-design is likely to increase the uptake of those tools, it is also aligned with developers’ economic incentives. Teachers also need to feel that they are protected from being used to test inappropriate products, and jurisdictions should elaborate clear rules in that respect.

Government-funded institutional programmes could involve government, university researchers, industry, teachers and other education stakeholders in defining which types of tools should be prioritised and in researching their effective use within schools. Some governments have already developed such programmes. These practice-engaged research and development programmes should go beyond the functionality of technology to analyse how technology is used in context and its impact on both equity and quality. They should also work on the social and legal adjustments that would be required for the widespread adoption of the solutions they propose. Co-creation should remain the principle even when it is challenging to involve end users, for example students with special needs.

One important, positive side effect of these programmes would be to help understand and shape the social context in which AI-enabled education technologies would best be used (the classroom, home, etc.), and facilitate social negotiation and acceptance of these tools by the teaching profession and society.

6. Research and co-creation of evidence through disciplined innovation. Jurisdictions should foster research about the effective use of digital tools in education, including practice-engaged research projects that allow teachers to innovate in their classrooms and to co-design the uses of technology with researchers who evaluate and document the conditions under which technology works, and for whom. Researcher-led projects can cast light on the most effective uses of AI-enabled technology. In principle, digital transformation enables quicker feedback and improvement loops than in the past, which education systems should benefit from through an active focus on research.

The pandemic has shown that education systems can be innovative when needed and has also shown the extent to which teachers and school leaders are able to develop their own “micro-innovations”. Beyond decisions related to the expected use of digital tools in the classroom, educational jurisdictions should work to establish opportunities for teachers to co-design new pedagogically sound and culturally relevant classroom tools. Teacher unions could contribute to that process.

Co-creation is essential for education systems to have useful digital tools for teaching and learning, but teachers can also contribute to the generation of research evidence by collaborating with researchers on their effective uses. Innovation can only take place in a climate of trust, in which those innovating can learn from failure without receiving a punitive response; failure and risk-taking should be tolerated within reasonable expectations. Teachers should thus be able to propose research evaluations of their own practices or ideas.

Research projects about uses envisioned and developed by university researchers are also valuable and should be encouraged. Jurisdictions should support researchers in carrying out such research and should share the findings of projects in which teachers evaluate how digital technology can be effectively integrated into their teaching and benefit their students’ learning and socio-emotional development. Education jurisdictions should thus encourage and make it possible for teachers to take part in research projects that make these uses more visible to the teaching profession. In most cases, the focus of the research should not be the technology in itself, but the use of technology, how it benefits learners or teachers, and under what conditions.

The use of digital platforms can allow for research designs that generate much quicker results and improved solutions than past “analogue” methods. This type of research should be encouraged, on the condition that its results are publicly shared for the benefit of the whole education community.

Research into the safety, efficacy, and equity implications of AI-enabled tools in education should be emphasised, for example on their impact on cognition and child development.

7. Ethics, safety and data protection. Data protection policies should ensure that the collection of data contributes to securing effectiveness and equity in education while protecting students’ and teachers’ privacy. Educational jurisdictions should provide schools and teachers with clear guidance about data protection, and possibly pre-negotiated contracts or guidelines, when they resort to commercial solutions. They should ensure that safety and possible algorithmic bias are tested and addressed in their policies. Clear ethical guidelines should also be developed. The ethical use of data about teachers should be negotiated with teachers and their representatives as part of bargaining agreements.

The use of AI-enabled technology raises new concerns about data protection and privacy. Many jurisdictions have strong data protection regulations in place, which apply to education and to access to student and teacher data. Some have specific education data protection policies. In particular, access to administrative data tends to be strongly regulated. Data protection implies strong underlying cybersecurity. The rise of generative AI also raises new forms of data protection issues where children’s data are not protected by law from reuse and resale. Where this is not already the case, data protection policies should also extend to biometric data.

Jurisdictions should also consider regulations and policies related to the safety, effectiveness, and possible bias of digital tools before they are introduced in education systems. New classes of concerns beyond data protection, privacy or safety have appeared with AI-enabled digital tools as they allow for stronger pattern recognition and greater automation than other technologies. This implies that policies should go beyond privacy and safety and include some institutional mechanisms to monitor the effects of using AI and other digital tools on equity and quality in education.

Privacy and data protection must be balanced against other important educational objectives, such as equity or effectiveness, which may require the collection of personal data, including sensitive data. For example, while it is preferable, when possible, to avoid demographic characteristics as key parameters in AI algorithms, the ability to identify and address algorithmic bias, and thus improve fairness, depends on the collection of personal data. Algorithmic bias is one of the new risks related to the emergence of AI in education: some groups may be discriminated against based on past data or on the training of the algorithm, or tools may simply work better for some groups than for others. Countries should therefore ensure that new digital tools are tested to avoid possible biases, as the sketch below illustrates. Even in the absence of bias, as AI effectiveness is largely based on detecting “profiles”, the risk of human stigmatisation of students (or teachers) placed in different categories should be addressed. There is a general need for monitoring and evaluation of the effectiveness of digital tools on a variety of dimensions.
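
The point that detecting bias may require personal data can be illustrated with a short sketch: comparing a tool’s error rates across groups is only possible if group membership is recorded somewhere. The records, groups and metric below are invented for illustration and do not refer to any actual tool.

```python
# Minimal sketch of a bias audit on hypothetical records: checking whether
# a tool's error rates differ across demographic groups requires the
# demographic attribute itself to be available in the data.
records = [
    # (group, tool_flagged_at_risk, actually_left_early) - all invented
    ("A", True, True), ("A", False, True), ("A", True, False), ("A", False, False),
    ("B", False, True), ("B", False, True), ("B", True, True), ("B", False, False),
]

for group in sorted({r[0] for r in records}):
    at_risk = [r for r in records if r[0] == group and r[2]]
    missed = [r for r in at_risk if not r[1]]  # at-risk students the tool missed
    fnr = len(missed) / len(at_risk)  # false-negative rate for this group
    print(f"group {group}: false-negative rate = {fnr:.2f}")

# A large gap between the groups' false-negative rates would signal that the
# tool systematically under-serves one group - invisible without group data.
```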

In many instances, when it comes to the use of digital tools in school, school boards, school principals or even teachers are left with the responsibility of interpreting or implementing regulations regarding personal data protection and other policies with limited guidance, for example when contracting for digital tools. Jurisdictions should provide guidance and support to school principals and teachers to implement these rules in a way that does not place additional burdens on teachers.

The regulatory and ethical use of teacher data by their employers should be defined and negotiated with teacher unions and other relevant stakeholders. As a matter of principle, an ethical use of data collected about teachers should support the quality, effectiveness and fairness of their teaching and the learning of their students, regardless of their personal characteristics.

8. Transparency, explainability and negotiation. When using digital tools based on advanced technology that are high stakes for students, teachers or educational establishments, such as digital forms of evaluation and assessment, educational jurisdictions should be transparent about the objectives and the processes by which algorithms reach their recommendations. The uses of high-stakes digital tools must be discussed and negotiated with all educational stakeholders.

One challenge of AI-enabled technologies is that most people do not understand how they work and what can be expected of them. Generative AI is a striking example as it is currently difficult to fully explain the details of how it operates.

Transparency is essential for uses of education technology, particularly in cases that are high stakes, as is verification of the accuracy of their performance for all sub-groups of the target populations in education. Explaining how these tools work to teachers, students and families is important, and information, education and training about them should be provided. At the very least, the criteria or factors that tools take into account should be explained when describing their objectives and functioning, as in the simple sketch below. Policy makers should balance the expected effectiveness of tools against their explainability or transparency.
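
To illustrate what “explaining the criteria or factors” can mean in practice, here is a deliberately simple sketch of a fully transparent scoring function that can report its own criteria and their contributions. The criteria, weights and scores are hypothetical; real high-stakes tools are far more complex, which is precisely why effectiveness must be weighed against explainability.

```python
# Minimal sketch of the transparency principle: a hypothetical admission
# scoring tool that can state the criteria and weights behind each score.
WEIGHTS = {"grade_average": 0.6, "portfolio": 0.3, "interview": 0.1}  # illustrative

def score(applicant: dict) -> tuple[float, dict]:
    """Return the overall score plus each criterion's contribution."""
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    return sum(contributions.values()), contributions

total, breakdown = score({"grade_average": 82, "portfolio": 70, "interview": 90})
print(f"score = {total:.1f}")
for criterion, value in breakdown.items():
    print(f"  {criterion}: weight {WEIGHTS[criterion]}, contribution {value:.1f}")
```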

In any event, jurisdictions should always be transparent about the objectives, functioning and possible limitations of the digital tools they (or their schools) use, notably when they have high stakes for individuals. It should be standard practice that there is constant, constructive dialogue between jurisdictions, teacher unions and other stakeholders about the introduction of broad-impact digital tools. This matches democratic values while making their trustworthy use easier.

9. Human support and human alternatives. As AI-enabled digital tools allow for increased automation of parts of educational processes, from administration through to teaching and learning, jurisdictions should ensure that learners, teachers and other education stakeholders can receive timely human support when they face a problem and, when appropriate, a human alternative to the AI-enabled tool.

As the use of AI-enabled tools by administrators, learners, teachers and other educators grows, human beings may increasingly rely on decision processes based on AI diagnoses and suggestions as the default position. In some cases, automated processes could make it more difficult to ask questions and to gain a better understanding of the rationales behind decisions that affect education stakeholders. In this context, it is important for jurisdictions (or other relevant bodies) to be able to provide human help in a timely manner when education stakeholders have a problem, for example when they believe that an automated process has led to a mistake (as in the case of an assessment or an admission process), or when they need advice about how the system will use the information they input (for example in a school application process). AI tools are more trustworthy if they are framed with the “human-in-the-loop” idea.

It is not always possible or desirable to allow people to “opt out” of the use of digital tools. For example, the use of data to contribute to the improvement of education, particularly for disadvantaged groups, relies on comprehensive participation in data gathering. It is also not practical for families to individually opt out of digital solutions chosen by educational institutions to support their children’s learning. This does not mean that human alternatives should not continue to be considered. For example, evaluations that are high stakes for learners or teachers require a human alternative. While the pandemic showed that AI-enabled remote proctoring can help students take exams or tests remotely when in-person exams are very difficult to offer, its continued use should include an alternative human proctoring option, given that students from different households have very different levels of connectivity, living space and examination conditions at home. Jurisdictions should thus consider whether human alternatives to AI-enabled technology should be provided, when appropriate.
