13. Multi-stakeholder collaboration and co-creation: towards responsible application of AI in education

Inge Molenaar
National Education Lab AI, Radboud University Nijmegen
Peter Sleegers
BMC Consultancy

There are few moments in history when artificial intelligence (AI) in education has been more in the news than in November 2022, after the release of ChatGPT (based on GPT-3.5) (Kasneci et al., 2023[1]). Schools raised many questions, and the use of generative AI in education became the subject of ample public debate. This contributed to a broad awareness that AI may impact our lives substantially and will lead to changes in work, social interaction and schooling, as predicted by policymakers in numerous reports, for example from the OECD (2019[2]) and NESTA.1 How profound this impact will be is a hard question to answer. AI will clearly take over specific tasks from humans and increasingly play a role in our day-to-day activities. Harvard professor Chris Dede suggests that AI is especially good at reckoning: fast data analysis and pattern recognition. The opportunities provided by AI systems lie in the fast collection of data, diagnostics and enactment, enabling the personalisation of education (Holmes et al., 2018[3]; Molenaar, 2022[4]; OECD, 2021[5]). According to Dieterle, Dede and Walker (2022[6]), humans are good at judgement. Their added value lies in social-emotional considerations, connecting new elements, problem-solving abilities and social reasoning. These human skills are beyond AI's current abilities and will become increasingly important. This raises the question of what we should educate for and what it means for the way we use AI in education.

Research and development around AI have only slowly reached educational institutions. Innovation in AI in education is concentrated in separate communities and takes place in isolated worlds. Scientific dialogue and exchange take place in different scientific societies, such as the International Artificial Intelligence in Education Society (IAIED, https://iaied.org/about), the Society for Learning Analytics Research (SOLAR, https://www.solaresearch.org/) and the European Association of Technology Enhanced Learning (ECTEL, https://ea-tel.eu/). Schools and educational professionals interact in national and international communities of practice, such as Kennisnet (Netherlands) and European Schoolnet (http://www.eun.org/) (EU countries) or the International Society for Technology in Education (ISTE, https://iste.org/) (United States). Companies also mostly move in their own network communities: for example, publishers of educational materials collaborate in the International Publishers Association (IPA, https://www.internationalpublishers.org/our-work/educational-publishing/educational-publishers-forum), while start-ups and scale-ups find each other in the European Edtech Alliance (https://www.edtecheurope.org/) and similar associations around the world. Moreover, educational professionals and institutions often lack the expertise to formulate their needs for AI technologies, despite recent efforts to support teachers and other education stakeholders in understanding how to use AI (e.g. the European Commission supports several European and national projects such as the AI4T project).2

As a result, scientific researchers mainly focus on fundamental research and are insufficiently able to put the intelligent algorithms they develop to meaningful use in the educational context. Entrepreneurs often face difficulties connecting with educational professionals and often lack hands-on educational and pedagogical expertise. In addition, the current use of AI in educational practice and scientific research on AI in education do not reflect the anticipated, longer-term movement toward higher levels of automation between humans and artificial intelligence, levels in which AI controls most or all tasks (Molenaar, Forthcoming[7]). More targeted research is needed to fulfil future expectations about AI in education and to advance the transformative development of the educational sector. Hence, the quality and speed of innovation are currently reduced by structural problems at different system levels.

This system-level fragmentation limits the possibilities for the technology to be developed in ways that benefit the education community and align with its goals and values, while upholding an interdisciplinary perspective on the responsible use of AI in education. Therefore, this chapter argues that cooperation and co-creation between different stakeholders in innovative networks are needed to address the challenges of AI in education and to develop and evaluate meaningful AI technologies for education. This requires an innovation ecosystem around AI that encourages the co-evolution of knowledge and innovation and creates a common language for different stakeholders around AI in education.

The chapter first outlines how innovation benefits from multi-stakeholder collaboration and a dynamic approach to innovation. These elements are connected by a common language that promotes and progresses the dialogue about AI in education. We then present different examples of labs and expert centres from OECD countries using partnership models, viewed through the lens of multi-stakeholder collaboration and dynamic approaches to innovation. Comparing these initiatives provides insights into how these partnerships are formed and how they can contribute to the ecosystem needed to support innovation and the responsible use of AI in education. The chapter then provides some recommendations for possible supportive future policy initiatives.

As described above, current innovation in AI in education is concentrated in separate communities and takes place in isolated worlds. Moreover, the linear model of innovation prevails (Carayannis and Campbell, 2009[8]). In this model, innovation starts with basic university research that is later converted into applied research by university-related institutions and finally transformed into experimental development and commercial market applications by firms. To address the challenges and threats of AI for education, more complex, non-linear models of innovation and dynamic processes of knowledge creation, diffusion and use are needed (Carayannis and Grigoroudis, 2016[9]; Carayannis et al., 2018[10]). In contrast to the traditional linear model of innovation, non-linear models underscore a more parallel coupling of basic research, applied research and experimental development. They emphasise the collective interaction and exchange of knowledge in hybrid innovation networks that tie together interactions between universities, commercial and academic firms, and governments.

This innovation ecosystem, often referred to as the Triple Helix model, places a strong focus on cooperation in innovation and on the process of co-creation among actors from three different sub-systems: universities/science (educational sub-system), industries/firms (economic sub-system) and governments (political sub-system). The equal role of each of these categories of actors in the development of new innovative products, services and solutions is stressed. Carayannis and Campbell (2009[8]) proposed adding a fourth sub-system, civil society, to the Triple Helix model. This Quadruple Helix model bridges the social ecology with knowledge production and innovation and puts innovation users (civil society) at its heart: they own and drive the innovation processes. In the case of education, this implies engaging with teachers and school principals, but also with students and families as appropriate.

The current departmentalisation of stakeholders does not help to drive innovation in AI in education. It is an international challenge to use the Triple Helix approach as an operational strategy to facilitate cooperation and co-creation between different stakeholders in innovative networks around AI in education. Different partnership models of innovation labs and expertise centres for co-creation and multi-stakeholder contribution have been established and are evolving around the globe. Governments may facilitate innovation labs and expert centres to foster cooperation and co-creation in hybrid innovative networks and contribute to the responsible, social and ethical application of AI in education. By encouraging the integration of different modes of knowledge creation, diffusion and use of AI in education, these innovation ecosystems will help to develop and evaluate meaningful AI arrangements that are beneficial to the education community.

Cooperation and co-creation between diverse interdisciplinary scientists, educational professionals, and entrepreneurs could lead to transformative practices, driven from a pedagogical-didactical perspective, in the development and uses of AI applications in education (see Figure 13.1).

It is important to include educational scientists and practitioners with a strong understanding of learning processes, AI scientists with an interdisciplinary interest in applying AI in education, and philosophers and lawyers with a special interest in privacy and the ethical aspects of these innovations.

The first step in the co-creation process is to develop a shared conceptual understanding of the educational setting in which the technology will be applied. Next, the functioning of AI in this context and the consequent roles of learner, teacher and AI can be detailed. In an iterative design cycle (McKenney and Reeves, 2013[11]), developing a new technology can advance current technical and pedagogical-didactical knowledge, creating mutual understanding and further conceptualisation of the AI technology. The technological elements can then be developed, the pedagogical-didactical approach specified, and supporting materials for teachers and learners created. User experiences in the classroom will further illuminate how technical and pedagogical-didactical innovations reinforce each other. We envision these new technological developments materialising in collaboration between researchers and educational professionals in iterative cycles. The close integration of development and application in classroom practice may help mature the AI, tune the actors' roles, and develop the interface needed to support reciprocal interaction. The developed AI technologies carry implications for practice and must operate within boundaries set by educational norms and standards. Designing these technologies in partnership will result in new reciprocal human-AI relationships within the field of education, which is likely to benefit innovation in education and enhance the understanding of hybrid human-AI relations in this context.

The final step is to validate these new AI technologies at scale, to assess their value for education and to understand how they contribute to advancing learning and teaching. In this validation process, the developed technologies are tested in multiple classrooms and schools to assess their applicability and suitability in multiple contexts. At the same time, the effects of the technologies on learning outcomes or other learning metrics are assessed. The learning technologies are instrumented in this phase, which means that researchers can execute randomised controlled trials (RCTs) or undertake other meaningful forms of research to generate robust evidence in close collaboration with schools. This learning engineering infrastructure can also support ongoing experimentation at scale with learning technologies (Koedinger, Corbett and Perfetti, 2012[12]). Such long-term partnerships between research and educational institutions can both enhance the technologies and advance our theoretical understanding of how AI technologies can support learning and teaching.
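To illustrate how an instrumented platform enables such experimentation at scale, the sketch below (in Python) randomly assigns classrooms to an AI-supported or a control condition and computes a simple difference in mean post-test scores. The class identifiers, score distributions and effect size are invented for illustration; a real study would add balanced stratification and cluster-robust inference.

    # Minimal sketch of a cluster-randomised experiment run on an instrumented platform.
    # All identifiers and numbers are illustrative, not drawn from any real study.
    import random
    from statistics import mean

    random.seed(42)
    classrooms = [f"class_{i:02d}" for i in range(20)]

    # Cluster randomisation: half of the classrooms receive the AI-supported condition.
    random.shuffle(classrooms)
    treatment = set(classrooms[:10])

    # Placeholder post-test scores as the instrumented platform might log them.
    scores = {c: random.gauss(70 + (5 if c in treatment else 0), 8) for c in classrooms}

    treated = [scores[c] for c in classrooms if c in treatment]
    control = [scores[c] for c in classrooms if c not in treatment]
    print(f"Estimated effect: {mean(treated) - mean(control):.1f} points")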

Cooperation and co-creation between diverse scientists, educational professionals, and entrepreneurs can contribute to developing a common language about AI in education. Such a common language may help articulate a shared perspective on the role of AI in education and the development of hybrid human-AI arrangements.

There are many ways in which AI can contribute to education, but there is also an important difference from other application domains (Selwyn, 2019[13]; Holmes et al., 2018[3]). AI in education aims to optimise human learning and teaching in a system where human and artificial intelligences are combined in a meaningful manner (Molenaar, 2022[4]). In other application domains, AI often aims at replacing humans. For example, AI will ultimately take over the human driver's role in self-driving cars. The augmentation perspective on optimising human learning and teaching comes from the notion that human intelligence and artificial intelligence have different strengths. Artificial intelligence is good at quickly collecting, analysing, and interpreting large amounts of data; humans are still better at judging, social interaction, creativity and problem-solving (Dieterle, Dede and Walker, 2022[6]).

These strengths underlie the augmentation perspective on artificial intelligence (AI) in education. This perspective emphasises the role of AI in facilitating and strengthening student learning together with the teacher. It differs from a replacement perspective that implicitly assumes that AI alone can optimise student learning. The augmentation perspective aligns well with the notion of hybrid intelligence, conceived as the meaningful combination of human and artificial intelligence (Akata et al., 2020[14]). In hybrid intelligence, humans and AI are considered to be team players who perform and solve tasks in collaboration. Hybrid intelligent systems aim to coordinate and supplement artificial and human intelligence so that they are stronger together than each separately. This entails that the roles of teachers, students and AI must be articulated when developing intelligent innovations. In order to understand this interaction, more insight is needed into two interrelated characteristics: 1) the functioning of AI and 2) the division of roles between AI, teacher and student.

To gain more insight into these two characteristics, we discuss Molenaar's detect-diagnose-act framework (the functioning of AI), followed by the six levels of automation model (the division of roles between AI, teacher and student) (Molenaar, 2022[4]). These models are the core elements of a common language that can help different stakeholders discuss use cases of AI in education. Teachers and educational professionals can relate to this relatively simple explanation of AI, and it helps them understand different applications in the educational domain. Scientists from different disciplines can compare use cases and discuss implications from a shared understanding of the role of AI in education. Finally, the general description of technological solutions helps companies position their products in the EdTech market.

The detect-diagnose-act framework distinguishes three mechanisms that underlie the functioning of AI: detect, diagnose and act (Figure 13.2).

In “detect” mode, the data that the AI uses to understand the student's learning or the teacher's teaching are made explicit. For example, many adaptive learning technologies use students' answers to measure their understanding of a domain. In “diagnose” mode, the constructs the AI analyses to understand the learning or teaching process are outlined: these can be a student's knowledge, skills or emotions. For example, the AI diagnoses a student's vocabulary in a foreign language or knowledge about fractions. Finally, the “act” mode describes how the measured construct is translated into a pedagogical-didactical (or other kind of) action. The system can translate the diagnosis into information, for example presented in a dashboard, or the diagnosis can lead to activities carried out by the AI, for example when the technology selects feedback for a student. This model supports a basic understanding of how AI functions across multiple types of applications and helps stakeholders to promote and progress the discussion of the application of AI in specific technologies.
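To make the three mechanisms concrete, the following minimal sketch (in Python) walks an answer log through detect, diagnose and act. The event fields, the mastery heuristic and the threshold are illustrative assumptions rather than a description of any particular adaptive learning product.

    # Illustrative sketch of the detect-diagnose-act framework (Molenaar, 2022).
    from dataclasses import dataclass

    @dataclass
    class LearnerEvent:
        item_id: str      # e.g. a fractions exercise
        correct: bool     # the student's answer: the "detect" signal

    def detect(events):
        """Detect: gather the raw data the AI reasons over (here, answer logs)."""
        return [e.correct for e in events]

    def diagnose(correct_flags, window=5):
        """Diagnose: estimate a construct, here a naive mastery score over recent answers."""
        recent = correct_flags[-window:]
        return sum(recent) / len(recent) if recent else 0.0

    def act(mastery, threshold=0.8):
        """Act: translate the diagnosis into a pedagogical action or a dashboard message."""
        if mastery >= threshold:
            return "advance to the next topic"
        return "offer additional practice and feedback on fractions"

    events = [LearnerEvent("frac-01", True), LearnerEvent("frac-02", False),
              LearnerEvent("frac-03", True), LearnerEvent("frac-04", True)]
    print(act(diagnose(detect(events))))  # -> offer additional practice and feedback on fractions

The same skeleton also accommodates the richer detect and diagnose signals discussed below, such as physiological data or self-regulated learning constructs.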

Current AI solutions for education often use log data to detect, mostly diagnose students’ knowledge in specific domains, and consequently act by providing feedback, adjusting tasks or selecting the next topic to learn. Current technology frontiers lie in including multiple data streams in the “detect” phase, such as physiological or contextual measures; including broader constructs of the learners in the “diagnose” phase, for example emotional development or self-regulated learning skills (Molenaar et al., 2023[16]); and, finally, moving toward more diverse action patterns, including interacting with learners on multiple levels.

Second, the division of roles between AI, teacher and student needs to be articulated. This can be done using the “six levels of automation” model (see Figure 13.3), which distinguishes six different levels of automation depending on the degree of control held by the teacher, the student and/or the system (Molenaar, Forthcoming[7]).

The model starts with full teacher control and ends with full automation, that is, AI control. It helps understand various combinations of automation between humans and artificial intelligence. For example, when teachers are assisted with data and classifications made by AI, control remains with the teacher, but the teacher's behaviour is influenced by the information from the AI. On the other hand, when students use generative AI applications for their homework assignments, the answer is generated by the AI without human oversight (unless students revise it or only use it as an input to their own work).
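A minimal sketch of the six-level progression helps clarify who holds control at each level. Only the endpoints (full teacher control and full automation) are named in the chapter; the intermediate labels and the control assignments in the code below are assumptions added for illustration.

    # Illustrative sketch of the six levels of automation (Molenaar, Forthcoming).
    # Intermediate level names and control assignments are assumptions for illustration.
    from enum import IntEnum

    class AutomationLevel(IntEnum):
        FULL_TEACHER_CONTROL = 0    # no AI involvement
        TEACHER_ASSISTANCE = 1      # AI supplies data and classifications, teacher decides
        PARTIAL_AUTOMATION = 2      # AI acts on some tasks under close teacher supervision
        CONDITIONAL_AUTOMATION = 3  # AI acts, teacher intervenes when flagged
        HIGH_AUTOMATION = 4         # AI handles most tasks, occasional human oversight
        FULL_AUTOMATION = 5         # AI control, e.g. unreviewed generative AI output

    def control_holder(level: AutomationLevel) -> str:
        """Who holds primary control over the pedagogical decision at this level?"""
        if level == AutomationLevel.FULL_TEACHER_CONTROL:
            return "teacher"
        if level <= AutomationLevel.CONDITIONAL_AUTOMATION:
            return "teacher, informed or assisted by AI"
        return "AI, with at most occasional human oversight"

    print(control_holder(AutomationLevel.TEACHER_ASSISTANCE))  # teacher, informed or assisted by AI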

This model helps us understand the relations between AI, teachers and learners, specify requirements for the technology, and design the interface needed to support teachers in using the technology (Molenaar, 2022[4]). It can also help teachers integrate the technology into pedagogical approaches to develop new teaching practices (Cijvat et al., 2022[17]; Van Schoors et al., 2023[18]). Additionally, the model can promote the dialogue between different stakeholders about AI's current and future uses in education and help build a shared perspective on the development of hybrid human-AI arrangements. It helps us both to understand how AI can support rather than replace human tasks and work, and to reflect on when and how humans should intervene when AI is involved.

In this section, we describe eleven examples of innovation labs and expert centres. These labs are located in Europe, the United States and Australia (see Table 13.1). They represent different partnership models aimed at creating multi-stakeholder collaboration and co-creation in the development of AI in education. Their full description is available in Annex 13.A.3

This selection presents an overview of how different innovation labs and expert centres aim to make AI in education more useful to education stakeholders. Different combinations of stakeholders and innovation approaches are followed in different countries. Most of the initiatives focus on all levels of formal education (SMART@EDUCATION, Collider, TüCeDE, EDUCATE), except NOLAI and Engage, which focus on primary and secondary education, and AI-ALOE, which is directed at vocational education, higher education and professional development. With respect to funding, most initiatives are funded by (local) governments and universities; some are co-funded by industry. The amount of funding differs, ranging from EUR 1 million to EUR 90 million. Most initiatives have mid-term (5-year) mandates or objectives, with some longer-term (10-year) examples as well.

In most initiatives, the quadruple helix actors (universities, industry and educational institutions as end-users) are included, and governments are part of the supporting system. The role of the actors involved clearly differs across the labs. Universities are central, industry collaborates, and end-users in educational institutions often have an implementation role (Engage, AI-ALOE, AI Playground).

The role of end-users is more central in the examples of NOLAI and IMEC Smart Education, where innovation is initiated based on end-user questions and co-created with science and industry. In two labs, EDUCATE and the EdTech Collider, industry is more central, with universities as a facilitating actor. TüCeDE is the only initiative in which the delivery of scientific insights to educational institutions is central. Finally, a number of labs (CELLA, CIRCLS, GRAIL) focus on scientific collaboration to support the international exchange of scientific knowledge.

The governance models used in these labs reflect the dynamics among the partners involved. Universities mostly lead, with EDUCATE as the exception, being a spin-off company from a university. Some initiatives include independent research institutes (TüCeDE), national innovative companies (SMART@EDUCATION), non-profit organisations (Engage and AI-ALOE) or foundations (Collider, CIRCLS and CELLA) as partners. Business development organisations lead industry-dominant labs and are partners only in other initiatives (NOLAI, IMEC Smart Education and AI-ALOE). Surprisingly, school boards (or education practitioners) are only represented in the governance model of NOLAI; in the other labs, schools are collaboration partners but are not represented directly in the governance.

Although each initiative has a unique focus and areas of expertise, different approaches to innovation seem to underlie the different labs.

A first approach reflects and adapts the linear approach to innovation: scientific research is translated and transformed into product development and market applications. For example, new products are developed that leverage AI research, such as narrative-centred learning environments and advanced chatbots in Engage. Another example is the Cognitive Support in Manufacturing Operations (COSMO) project in IMEC Smart Education, which develops data-driven techniques to create content for assembly training applications in augmented and virtual reality (AR and VR). The aim is to adapt training content to the operator's skill level. In these examples, linear innovation is accelerated and enriched with industry collaboration and the inclusion of end-user educational professionals.

More non-linear innovation models are also represented in these labs. For example, in AI Playground, scientific understanding of children's learning is combined with new industry developments in AI processing of video observations to detect learner behaviour. This allows for new approaches to science education and to teaching computational thinking. In NOLAI, questions from educational professionals are the starting point and thus central in the innovation process. These questions are translated into future use cases envisioned on the basis of new scientific insights and novel industry developments. For example, the “happy readers” project combined primary school teachers' request for more insight into how students' technical reading skills develop over time with what university and industry partners know about the current affordances of technology: new insights from reading research, together with novel automated speech recognition algorithms developed by a start-up company, can allow for new approaches to reading education.

A second approach is more geared towards industry development and aims to help start-ups improve their products (and product propositions) with scientific insights and to enhance the ecosystem for companies to thrive and scale up. Business development activities in these initiatives are diverse: from supporting prototype development, optimising products in multiple schools and diversifying to new sectors in education, to validating the effectiveness of products to support the evidence-based development of EdTech tools and resources in schools. In labs focusing on business development, such as EDUCATE and the Collider, co-creation starts in industry, in collaboration with testbeds in schools (Collider) and by stimulating EdTech companies to collaborate with researchers (EDUCATE). For example, the Swiss EdTech Collider helps companies set up testbeds in schools to support product development and enhance the dialogue with practitioners. EDUCATE also helps companies articulate a theory of change and provides a strategy dashboard to drive product development. This approach has been tested in several calls for projects with the aim of fostering collaboration between EdTech companies, academics, teachers and pupils, for example within the European Institute of Innovation and Technology's “Community AI” at the European Union level or within some countries, for example the AI innovation partnership for learning French and mathematics in France.4 All these actions support EdTech companies in maturing, growing their market by making their products more relevant to education practitioners, and increasing the integration of new scientific insights into product development. Again, non-linear innovation modes are developed, but the focal point is on the industry side.

A third approach aims to apply scientific insights to improve teacher professional development and school innovation. The pathway from university to innovation in educational institutions is central here. For example, TüCeDE aims to improve teachers' understanding of technology and of how it can enhance learning and teaching. By implementing different EdTech tools in schools, successful and responsible usage of AI in classrooms is developed. This example represents a relatively linear innovation mode, from science to educational institutions, which are often overlooked although they are essential to ensuring innovation in public education.

Finally, a fourth approach focuses on improving collaboration and orchestrating international research (CELLA, CIRCLS and GRAIL). These initiatives support the exchange of scientific knowledge and methods to spur scientific research in this domain. For example, in the Center for Living and Learning with AI (CELLA), five research groups are executing a combined research agenda to develop an understanding of how to support young learners in their skill development for an AI era. A unique feature is the international comparative study that investigates young learners' self-regulated learning in the context of an AI-empowered learning technology, which attempts to identify the skillset learners may require to learn with AI around the globe. Industry and educational institutions can then use this knowledge to develop new products and transform educational practices. This example highlights a more traditional ecosystem to spur scientific understanding and innovation.

Despite the different approaches, the initiatives mostly develop innovative technical products with AI and support transformative innovative practices in schools. Other outputs are scientific publications describing design processes, the algorithms developed, and evaluation studies, including randomised controlled trials (RCTs). Most initiatives also create professional publications for educational professionals discussing AI in education, current developments in AI, and its effects on learning and teaching.

The different initiatives represent diverse approaches to supporting the good use of AI in education, involving different stakeholders in central positions and different innovation dynamics. More linear approaches are combined with the non-linear processes of innovation present in all initiatives, but the starting point and the central actor driving these processes differ across initiatives: from scientific research leading to product development with industry, to driving innovations in educational institutions that spur the professional development of educational professionals. Universities play an important role, alongside several non-profit organisations and foundations. However, schools and educational professionals are rarely involved in the leadership of these initiatives. Involving and engaging teachers and educational professionals from the start of the co-creation process may be important to bridge the gap between science and practice and to adhere to schools' needs while developing and evaluating meaningful AI arrangements in education.

Knowledge development is central in most of the initiatives, with a focus on knowledge transfer. Interdisciplinary scientific research mainly drives and supports co-creation processes aimed at developing meaningful AI arrangements in education. In order to develop state-of-the-art prototypes, these initiatives ensure a sound basis from a pedagogical-didactical perspective and integrate knowledge from the learning sciences, while strong technological development is realised by combining computer science, artificial intelligence research, learning engineering and learning analytics. User studies and design-based research ensure practical usability. The involvement of educational scientists and teacher trainers focuses on proper implementation strategies in schools and other educational institutions. Involving and engaging teachers and schools right from the start of the co-creation process is important to implement meaningful AI arrangements that effectively suit education professionals' needs.

The described initiatives clearly show that a multi-stakeholder approach driving and supporting interdisciplinary collaboration and co-creation is being used to develop meaningful applications of AI in education. The collaboration of multiple stakeholders can not only facilitate an orchestrated dialogue, using a common language to discuss use cases of AI in education, but can also ensure that the technology is developed and used in ways that benefit the educational community. The initiatives show promising institutionalised ways to break through the current departmentalisation of stakeholders. The challenge is to find sustained institutional ways to engage teachers and schools, scientists, industry and governments in this orchestrated dialogue, based on constructive collaboration and co-construction, and to experiment with the different models of effective partnership needed to drive AI's integration in the future of education.

In this chapter, we showed that developing and using sound AI applications in education is a complex endeavour that requires interdisciplinary, multi-stakeholder actors to collaborate in non-linear, dynamic approaches to innovation. This approach will help to overcome system-level failures related to the departmentalisation of different actors and to the, too often, linear approaches to innovation that ignore the end-user perspective in educational innovation. These system failures stand in the way of digital innovation in the education sector, which is problematic in the AI era. The development of hybrid human-AI technologies in the educational field can improve human learning and teaching and support the needed upskilling of humans. The need for an upward movement towards higher levels of automation in education is acknowledged by international stakeholders, but the current innovation ecosystem is insufficiently equipped to realise this task.

We argue that the advancement of AI in education can only happen in collaboration with end-users to ensure uptake and the future scaling of new technologies in schools. The emerging understanding of the responsible use of AI in education needs an interdisciplinary scientific perspective, including learning scientists, educational psychologists, teaching professionals, computer scientists, AI scientists, philosophers and embedded ethicists, as well as lawyers. A highly interdisciplinary scientific field combining these disciplines could be envisioned to fully understand upskilling in the AI era and how AI in education can facilitate the transition to teaching young learners the skills for the future.

We discussed how multi-stakeholder approaches address these challenges and improve the development of AI in education in a promising way. Critical components are a rich and diverse ecosystem, a common language, and the active involvement of governments to drive upward progression. Different innovation labs and expert centres already explore this idea, and the study of their different approaches will help countries to move forward. These initiatives address the needs of different stakeholders and make different choices as to their key focus, depending on the actor(s) in the main driving position of their innovation ecosystem. All these initiatives highlight the need for (and possibility of) capacity building in this domain to help educational institutions, industry and science make collaborative progress. Boundary crossing in multiple forms is essential to move forward and to structurally rethink how we learn as humans, how we teach for upskilling, and how AI can affect education in a positive way.

References

[14] Akata, Z.; D. Balliet; M. De Rijke; F. Dignum; V. Dignum; G. Eiben and M. Welling (2020), “A research agenda for hybrid intelligence: augmenting human intellect with collaborative, adaptive, responsible, and explainable artificial intelligence”, Computer, Vol. 53/8, pp. 18-28.

[8] Carayannis, E. and D. Campbell (2009), “’Mode 3’ and ’Quadruple Helix’: Toward a 21st century fractal innovation ecosystem”, International Journal of Technology Management, Vol. 46/3/4, pp. 201-234, https://doi.org/10.1504/IJTM.2009.023374.

[10] Carayannis, E.; E. Grigoroudis; D. Campbell; D. Meissner and D. Stamati (2018), “’Mode 3’ universities and academic firms: thinking beyond the box trans-disciplinarity and nonlinear innovation dynamics within coopetitive entrepreneurial ecosystems”, International Journal of Technology Management, Vol. 77/1-3, pp. 145-185.

[17] Cijvat, I.; E. Denessen; P. Sleegers and I. Molenaar (2022), “Wat leraren doen: de inzet van adaptieve leermiddelen in het basisonderwijs” [What teachers do: The use of adaptive learning materials in primary education], Pedagogische Studiën, Vol. 100.

[6] Dieterle, E., C. Dede and M. Walker (2022), “The cyclical ethical effects of using artificial intelligence in education”, AI & Soc, https://doi.org/10.1007/s00146-022-01497-w.

[9] Carayannis, E. and E. Grigoroudis (2016), “Quadruple Innovation Helix and Smart Specialization: Knowledge Production and National Competitiveness”, Foresight and STI Governance, Vol. 10/1, pp. 31-42, https://doi.org/10.17323/1995-459x.2016.1.31.42.

[3] Holmes, W.; S. Anastopoulou; H. Schaumburg and M. Mavrikis (2018), Technology-enhanced Personalised Learning: Untangling the Evidence.

[1] Kasneci, E.; K. Seßler; S. Küchemann; M. Bannert; D. Dementieva; F. Fischer; U. Gasser; G. Groh; S. Günnemann; E. Hüllermeier and S. Krusche (2023), “ChatGPT for good? On opportunities and challenges of large language models for education”, Learning and Individual Differences, Vol. 103.

[12] Koedinger, K., A. Corbett and C. Perfetti (2012), “The Knowledge‐Learning‐Instruction framework: Bridging the science‐practice chasm to enhance robust student learning”, Cognitive science, Vol. 36/5, pp. 757-798.

[11] McKenney, S. and T. Reeves (2013), “Systematic review of design-based research progress: Is a little knowledge a dangerous thing?”, Educational researcher, Vol. 42/2, pp. 97-100.

[4] Molenaar, I. (2022), “Towards hybrid human-AI learning technologies”, European Journal of Education, Vol. 57/4, pp. 632-645, https://doi.org/10.1111/ejed.12527.

[15] Molenaar, I. (2021), “Personalisation of learning: Towards hybrid human-AI learning technologies”, in OECD Digital Education Outlook 2021, Pushing the Frontiers with Artificial Intelligence, Blockchain and Robots, OECD Publishing, https://doi.org/10.1787/589b283f-en.

[7] Molenaar, I. (Forthcoming), “Current status and future visions on Hybrid Human-AI learning technologies”, British Journal of Educational Psychology.

[16] Molenaar, I.; S. de Mooij; R. Azevedo; M. Bannert; S. Järvelä and D. Gašević (2023), “Measuring self-regulated learning and the role of AI: Five years of research using multimodal multichannel data”, Computers in Human Behavior, Vol. 139, https://doi.org/10.1016/j.chb.2022.107540.

[5] OECD (2021), OECD Digital Education Outlook 2021: Pushing the Frontiers with Artificial Intelligence, Blockchain and Robots, OECD Publishing, Paris, https://doi.org/10.1787/589b283f-en.

[2] OECD (2019), Going Digital: Shaping Policies, Improving Lives, OECD Publishing, Paris, https://doi.org/10.1787/9789264312012-en.

[13] Selwyn, N. (2019), Should Robots Replace Teachers? AI and the Future of Education, John Wiley & Sons, Inc.

[18] Van Schoors, R.; J. Elen; A. Raes and F. Depaepe (2023), “Tinkering the Teacher–Technology Nexus: The Case of Teacher- and Technology-Driven Personalisation”, Education Sciences, Vol. 13/4, p. 349.

Strategic partners are four knowledge institutions (Radboud University, Utrecht University, Maastricht University and HAN University of Applied Sciences), three school boards (Lucas Education, Klasse and Quadraam) and two business development centres (Oost.nl and Brightlands). NOLAI is located at the Faculty of Social Sciences of Radboud University.

The Dutch National Education Lab for Artificial Intelligence (NOLAI) aims to develop innovative intelligent technologies that improve the quality of primary and secondary education. The goal is to develop innovative prototypes that use artificial intelligence and develop new knowledge on responsible use of AI in education.

Between 30 and 40 people will be working at NOLAI, and each year 10 to 20 co-creation projects are funded. The institute has initial funding of EUR 80 million from the Dutch National Growth Fund for a duration of 10 years and a follow-up reservation of EUR 63 million for business development and the upscaling of evidence-based prototypes.

  • articulation of needs with educational institutions

  • co-creation of innovative use of AI in education

  • business development of evidence-based prototypes

  • knowledge transfer and development to schools and business

  • primary and secondary education

NOLAI has two main programmes: the co-creation programme and the scientific programme. The co-creation programme develops innovative prototypes and applications of AI in co-creation with schools, scientists and businesses. Each year, 10 to 20 new innovations using AI in education will be supported in this programme. The scientific programme develops knowledge on the pedagogical-didactical, technical, infrastructural and ethical aspects of the responsible use of AI in education. A team of professors, post-docs and PhD candidates develops interdisciplinary knowledge on AI in education.

NOLAI’s main activities are to investigate the state of play and to develop state-of-the-art applications. State of play: in dialogue with schools, we explore what their needs are for using AI to improve education; with scientists, we map current knowledge and prototype development; and with businesses, we explore current applications of AI products and ambitions for the future. In relation to the state of the art, we work on co-creation, developing innovative prototypes and applications of AI in education, and in our scientific programme we develop pedagogical and ethical knowledge and new technical innovations to support state-of-the-art applications of AI in education.

Yearly reports on the development of the field, prototypes developed in co-creation projects, new demonstrations of how to apply AI in pedagogical arrangements, evidence of the effectiveness of AI in education, new pedagogical and ethical knowledge, technical developments, and publications as outcomes of interdisciplinary research.

An example of a co-creation project is the development of a visualisation of student data collected across different learning management systems, adaptive learning technologies and summative assessments. Teachers asked for an approach to integrate the data in a meaningful way to support the differentiation and personalisation of learning for students. This project is a collaboration between three schools, an adaptive learning technology company, an assessment company, and pedagogical and AI scientists. The iterative design approach ensures the connection between educational practice, science and business development.

EPFL University, Foundations, Governmental Entities, Corporate Partners, Schools (private/public), Educational Institutions, Teacher Training Centres (list on website, https://www.edtech-collider.ch/partners/).

The aim is to create a marketplace for the EdTech start-up community and to enhance the visibility of EdTech companies by creating a unique ecosystem and network in and around education and EdTech that fosters encounters with potential partner organisations, customers, investors and research, as well as enables synergies among start-ups.

NPO Association, founded in 2017, 1 FTE, 90+ EdTech members (status on 1 January 2023).

  • All levels of education: from K-12 to corporate learning and training to lifelong learning

  • De-fragmenting the EdTech market

  • Linking start-ups and labs

  • Facilitating pilot tests in schools (testbed programme)

  • Sharing challenges and difficulties (e.g., divergent data protection schemes)

While many accelerators aim to boost the development of start-ups in a short, limited period, the Swiss EdTech Collider is a membership-based, long-term effort to nurture the Swiss EdTech ecosystem by connecting the stakeholders: EdTech companies, customers and decision makers in the private and public education systems, chief learning officers, investors, researchers, governmental organisations, and other EdTech initiatives and hubs.

The programme supports EdTech start-ups at all life-cycle stages, from early stage to well-established, by creating “collision” opportunities and matching them with potential customers, partners and investors; by facilitating the pilot testing of EdTech products in schools (Swiss National EdTech Testbed Program, https://www.edtech-collider.ch/testbed/); by enabling translational research in the learning sciences through supporting master's theses of university students in EdTech companies; and by connecting start-ups with subject-matter experts (e.g. legal, financial) based on their individual needs.

More than 130 start-ups have been involved since 2017, with some leaving or disappearing during that time. On average, about 12 new start-ups join the Swiss EdTech Collider every year as new members after a selective application process, leading currently to between 90 and 100 member start-ups (as of 1 January 2023). Some start-ups have merged, acquired each other or been acquired by third parties. Ten master's theses in the learning sciences have been successfully conducted in start-ups since 2019. The Swiss EdTech Collider has also become a partner or member of various educational initiatives and systems (DIH EU, EPFL LEARN with the EPFL ML4ED Lab, EETN, EEA, BeLEARN) and has established itself as the EdTech reference in Switzerland.

Dynamilis (https://dynamilis.com/en/), an ML-based app for improving handwriting: when children write on paper, the only way to assess their handwriting is the static shape of the produced letters. When writing on a tablet, one may also analyse the dynamics of the finger movements that generated the handwritten words. The analysis relies on these dynamic data: the speed of the pen, its pressure on the tablet, as well as its angles with respect to the tablet surface (tilt). As these data points are collected 240 times per second, for several minutes, and data has been collected from more than 10 000 pupils, Dynamilis uses ML to parse these massive datasets. The algorithms extract properties of the hand movements that the human eye could not perceive, such as the second derivative of the pressure of the pen. The solution then uses dimensionality reduction methods to compute the position of every child's production in a 3D space. In this space, it can measure how far a particular child is from a child with satisfactory handwriting skills. This method takes into account the age, the gender and the laterality (left- vs right-handed) of the child. The results of the analysis lead to proposing games, developed in consultation with therapists, that are specific to the child's personal weaknesses. Teachers, parents and professionals can then follow the progress of a child.
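The following sketch (in Python, using NumPy and scikit-learn) illustrates the kind of pipeline this description suggests; it is not Dynamilis' actual implementation. Summary features are derived from pen dynamics sampled at 240 Hz, productions are projected into a 3D space, and a child's distance to a reference group with satisfactory handwriting is measured. The feature choices and all data are invented.

    # Illustrative handwriting-dynamics pipeline; feature names and data are invented.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)

    def features(speed, pressure, tilt):
        """Summarise one writing sample, including the second derivative of pen pressure."""
        d2_pressure = np.diff(pressure, n=2)
        return np.array([speed.mean(), speed.std(), pressure.mean(), pressure.std(),
                         np.abs(d2_pressure).mean(), tilt.mean(), tilt.std()])

    def sample(n=240 * 60):  # one minute of writing at 240 Hz
        return features(rng.normal(2.0, 0.5, n), rng.normal(1.0, 0.2, n),
                        rng.normal(45.0, 5.0, n))

    # Reference productions from children with satisfactory handwriting (simulated here).
    reference = np.stack([sample() for _ in range(200)])
    pca = PCA(n_components=3).fit(reference)        # the 3D space of productions
    centre = pca.transform(reference).mean(axis=0)

    child = pca.transform(sample().reshape(1, -1))[0]
    print("Distance to satisfactory handwriting:", np.linalg.norm(child - centre))

In a full system, this distance would additionally be conditioned on age, gender and laterality, as the description notes, before games are proposed.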

Tübingen has a rich ecosystem on digital education and artificial intelligence. Within the center, strategic university partners and networks of digital education (e.g., Hector Research Institute of Education Sciences and Psychology [HIB], LEAD Graduate School & Research Network, Tübingen School of Education [TüSE]), educational technologies (e.g., Intelligent Computer-Assisted Language Learning, Department of Computer Science, Cluster of Excellence Machine Learning for Science), and independent research institutes (e.g., Leibniz-Institut für Wissensmedien [IWM], German Institute for Adult Education [DIE]) are actively involved.

We aim to strengthen the transfer expertise in digital education. The centre is conceptualised as a research and transfer hub to support the participating research institutes in transferring their knowledge into practice, and to prepare teachers for teaching and learning with available and cutting-edge technologies.

More than 30 researchers from different fields such as educational research, psychology, computational linguistics, subject-matter didactics, digital humanities, computer science, and medicine are working together. The centre is based on initial funding of EUR 1.35 million (duration 5 years) from the Vector Foundation, and on the same amount of internal university funding. Additionally, TüCeDE has been successful in securing third-party funding for establishing federal competence centres (EUR 9.5 million) to advance the transfer strategy.

  • primary, secondary and higher education

  • knowledge transfer and development

  • professionalisation of educational sector

TüCeDE has three working areas. In the area Transfer and Professional Development, transfer and professionalisation strategies are implemented. In the area Technology-Enhanced Learning, evidence-based instructional concepts are developed to support students' subject-specific learning. In the area Innovative Technologies, cutting-edge technologies are developed and researched with the aim of scaling meaningful learning environments (e.g. by using big data, artificial intelligence, intelligent tutoring systems, sensors, VR).

TüCeDE focusses on transfer and research activities. We establish measures of transfer (e.g., a clearing house on digital education, implementation of professional development programmes), as well as support for research, in close cooperation with renowned local stakeholders such as LEAD, HIB, and IWM. A main emphasis in these activities lies in the co-design with educational practice (i.e., teachers, educational administrators).

Scalable prototypes for transfer and professionalisation, evidence regarding the effectiveness of educational technologies, and publications as outcomes of interdisciplinary research.

In the context of interdisciplinary research (e.g., via LEAD or TüSE), scalable prototypes have been developed and evaluated. Examples comprise intelligent tutoring systems, interfaces to assess students’ attention, or subject-specific teaching units for adaptive teaching. These prototypes are used as good-practice examples, a crucial brick of our professional development programmes and transfer measures. Additionally, these transfer measures are enriched with meta-analytical evidence via our clearing house.

Swedish EdTest, ISTE, The DXtera Institute, Unthinkable.

To increase the effectiveness of evidence generation and application by members of the EdTech ecosystem, and to help them leverage data and AI. We do this in two ways: firstly, by training EdTech SMEs, investors and educators across K-12, adult education and higher education to develop and/or use evidence-informed products that benefit human learning; and secondly, by providing consultancy to help organisations use AI and data science to understand human learning behaviours.

EDUCATE Ventures has been operating in the EdTech space for 5 years, and currently has fifteen team members, equivalent to 7 FTE staff members. EVR reported a revenue of GBP 500 000 for the most recently reported financial year.

  • EdTech companies and Investors

  • Educators and trainers

  • Training and development departments

  • Educational and training institutions/Businesses

  • Learners

  • Design of evidence and data structures, processes and readiness within organisations: Theory of Change and Logic Modelling

  • Articulation of organisational education/training needs

  • Ethics of AI in education

  • Knowledge transfer and development

  • AI and data science

EVR provides two main services:

Modular training about data, evidence, and AI. We use our expertise in educational research to support, train and mentor the EdTech community in the development of research skills, ensuring products made for teaching and learning really do work.

Capacity building and bespoke consultancy to enable organisations to better leverage AI and data for educational benefit. We are the Artisans of AI: we use AI to identify, evidence and visualise the complex human learning behaviours that build human intelligence.

The EDUCATE Accelerator Programme

We accelerate a research mindset in EdTech SMEs, supporting them in developing evidence-informed products to benefit human learning. Our training has been delivered in various forms to over 350 EdTech companies and comprises a series of workshops and dedicated mentoring sessions, supporting EdTechs in developing their Theory of Change and Logic Model, as well as in conducting a literature review and identifying the relevant research methodology for measuring their educational outcomes. Our research team are all either PhD recipients or doctoral candidates who specialise in various aspects of technology-enhanced teaching and learning.

Consultancies

We help organisations gather and use data to understand human learning behaviour. Initially, we work with clients to arrive at their idea of a “Golden Thread”, a pedagogical imperative that underlies their desired goals. This is usually followed by a series of rapid evidence reviews, the creation of an ontology, a review of their data sources, and recommendations for becoming AI-ready. In some cases, this is demonstrated through an MVP dashboard.

Logic Models and Theories of Change developed by all graduates of the accelerator programme, operating as a strategic dashboard for the EdTech company. A number of graduates have received an EdWard Level 1, having produced a rigorously reviewed research proposal. MVPs developed as part of the bespoke consultancies. Ontologies of abstract pedagogical concepts (such as metacognition and self-efficacy) and related research publications.

The EDUCATE Accelerator Programme (see above): five iterations of the programme saw graduates produce Logic Models. Our most financially successful alumni include Busuu, Century Tech, MyTutor, Kano, Yoto and Pobble.

The key partners are research groups from the research institutions KU Leuven, University of Ghent and Vrije Universiteit Brussel that are affiliated with IMEC, Flanders’ research and innovation hub for nano-electronics and digital technologies.

IMEC Smart Education is a strategic research and innovation programme that aims to develop state-of-the-art technologies in order to address grand challenges in education and training. Through co-creation with industry and schools, it also aims to incubate novel solutions and services, and accelerate their adoption in the market, as such contributing in a sustainable way to the digital transformation of education and training.

The programme started in 2017 and is set to continue until at least 2026. It employs approximately 40 researchers on an annual basis on a seed funding of EUR 1 million yearly, leveraging towards external funding.

The target group comprises learners and teachers from all levels of compulsory education and higher education as well as from corporate training and lifelong learning.

  • strategic basic research

  • co-creation of educational technologies, with a strong focus on AI

  • articulation of needs with educational institutions

  • business development

  • knowledge transfer and development

The programme focuses on research on smart technologies (such as sensors, algorithms, and adaptive learning platforms) that are grounded in artificial intelligence and facilitate interaction and collaboration in the learning process, laying the foundation for tailor-made learning solutions. The development and evaluation of these technologies is driven by theories in the learning sciences. To realise its ambitions, the programme brings together IMEC researchers from a wide range of scientific disciplines and domains, such as instructional psychology & technology, statistics, machine learning and artificial intelligence, language technology, engineering sciences, neurosciences and social sciences. It has also forged a strategic alliance with the local industry through the creation of EdTech Station, the Belgian alliance of EdTech companies.

Efficacy studies, research on new methodologies for computational data analysis, technology development, creation and maintenance of research infrastructure, co-creation of prototypes.

Scientific knowledge on learning effectiveness, new methods for data analysis, demonstrators, prototypes of new EdTech solutions.

Increasing product diversification in the manufacturing industry requires rapid up- and reskilling, necessitating flexible training solutions. The COSMO project (Cognitive Support in Manufacturing Operations) researched and developed data-driven techniques for creating content for assembly training applications in AR and VR, and for adapting training content to the skill level of the individual operator. The project demonstrated that work instructions for digital training platforms can be generated more efficiently by analysing recordings of expert operators through computer vision and natural language processing. It also showed that adaptive work instructions, tailoring the level of detail in work instructions to the skill level of the operator, reduced the mental effort of operators and sped up assembly times. The project involved two education technology companies, two manufacturing companies, five research groups and four secondary schools from vocational and special needs education. https://vimeo.com/624275559.
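A minimal sketch (in Python) of the adaptive work instructions described here: the level of detail shown to an operator is tailored to a skill level estimated from logged performance. The skill thresholds and instruction texts are invented for illustration and do not reflect COSMO's actual models.

    # Illustrative adaptive work instructions; thresholds and texts are invented.
    INSTRUCTIONS = {
        "novice":  ["Pick part A from bin 3.", "Align part A with slot B, arrow facing up.",
                    "Insert and press until you hear a click.", "Visually check the seal."],
        "skilled": ["Mount part A in slot B.", "Verify the seal."],
        "expert":  ["Assemble A-B."],
    }

    def estimate_skill(error_rate, mean_cycle_time_s):
        """Very naive skill estimate from logged operator performance."""
        if error_rate < 0.02 and mean_cycle_time_s < 30:
            return "expert"
        if error_rate < 0.10 and mean_cycle_time_s < 60:
            return "skilled"
        return "novice"

    def work_instructions(error_rate, mean_cycle_time_s):
        return INSTRUCTIONS[estimate_skill(error_rate, mean_cycle_time_s)]

    print(work_instructions(error_rate=0.05, mean_cycle_time_s=45))  # skilled-level detail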

The lead is Digital Promise, with partners at EDC, SRI International and the University of Pittsburgh. Digital Promise, EDC and SRI are non-profit organisations.

CIRCLS is a hub that interconnects separately funded National Science Foundation research projects, each of which investigates emerging technology for teaching and learning. Recently, research and development topics related to AI are featured in the portfolio of projects.

CIRCLS has existed for approximately 10 years, although it has previously operated under different names. The CIRCLS organisation itself is funded for USD 3 million and will continue at least until January 2024. Approximately 50 separately funded research projects participate in CIRCLS, and individual projects have grants of between USD 100 000 and USD 1 million. The overall extent of CIRCLS is indicated by the size of its mailing list.

  • CIRCLS builds a research community among computer scientists, learning scientists and others who are exploring how advanced technologies can help teachers and learners.

  • CIRCLS intentionally serves emerging scholars, seeking to involve them in this community.

  • CIRCLS also engages educators in the work.

  • Synthesising the work of individual research projects

  • Envisioning next research questions and research projects

  • Engaging the community in tackling issues at the community level, for example, how to conduct research in partnerships and how to advance equity

CIRCLS builds and hosts a community that investigates emerging technologies for teaching and learning. These technologies can include AI, augmented and virtual reality, machine learning, learning analytics, simulations, visualisations, games, wearable technologies, robots, and accessibility technologies, among others. Learning sciences theories and approaches may include design-based research, computer-supported collaborative learning, embodied cognition, analysis of discourse and argumentation, learning analytics, educational data mining and more. Throughout the community, there is a strong emphasis on equity in student learning.

CIRCLS has four kinds of main activities: (1) To build the community, we reach out to those who receive awards, to educators, to emerging scholars and to others who could enrich our community; we also broker relationships among those in the community. (2) To enable collective work, we host participatory activities like working groups to define new research questions or approaches; we also host a major convening approximately every two years. (3) To map and synthesise the work, we analyse the portfolio of projects and also involve teams in writing community reports. (4) To disseminate insights, we produce a newsletter, publish a series of Rapid Community Reports, publish resources, and more. Two important special activities are our mentoring series for emerging scholars and CIRCLS Educators, a group of researchers and educational practitioners.

The main outcomes of CIRCLS activities are (a) enabling a shift from only individual research projects to working together across projects; (b) involving a greater diversity of people in the work; (c) understanding and disseminating what the work is accomplishing; and (d) helping the community to envision and propose the next level of research questions and projects.

A characteristic “product” of CIRCLS is a synthesis of the insights and directions across a broad community. Our convenings are not typical “principal investigator meetings” but are instead interactive events with a purpose and an outcome. The most recent convening, CIRCLS’21, brought approximately 350 researchers and others together to determine how the emphasis of the work could evolve from “broadening” to “empowering” – a difference between simply reaching learners and giving them tools and approaches that power their future learning journeys. The structure of the convening followed a storyline leading up to a townhall in which teams shared insights for the future of this field. The insights were captured in a graphical report available here: https://circls.org/circls21report.

The lead is North Carolina State University. Strategic partners are Digital Promise (a non-profit organisation), the University of North Carolina, Vanderbilt University and Indiana University.

For thousands of years, story has been the primary medium for human learning, but as universal education became important, pedagogical approaches used story less. People have become less engaged in learning, and efforts to increase learning outcomes in the general population are stalling. The main goal of this institute is to deepen engagement and advance learning by creating a new class of narrative-centered learning environments in which students can collaboratively engage with customised plots, synthesised characters, and realistic forms of interaction.

Engage AI is one of four AI Institutes in education, each funded by the National Science Foundation with USD 20 million over five years. Approximately 30-40 researchers will be involved each year.

  • primary and secondary education

  • education in museums and other informal institutes

  • foundational research

  • natural language processing

  • computer vision

  • machine learning

  • use-inspired research

  • narrative-centered learning

  • embodied conversational agents

  • multimodal learning analytics

  • communications among researchers, educators and the public

To accomplish the goal of revitalising story-based learning, the Engage AI Institute conducts research on narrative-centered learning technologies, embodied conversational agents, and multimodal learning analytics to create deeply engaging learning experiences. We aim for the AI-driven learning environments to be built on advances in natural language processing, computer vision, and machine learning. We are focused on creating AI-driven narrative-centered learning environments to support collaborative inquiry learning in both formal and informal learning settings. Advances in core AI technologies drive new levels of interactivity and multimodal engagement, as well as support the creation of powerful predictive models of student learning.

The Institute will advance foundational AI techniques needed in education. It will investigate prototype narrative-centered learning environments. It will involve teachers, students and others in the work. It will advance ways to monitor and adapt learning via multiple types of input, which could include visual input, audio input, and interactions with the technology. Ethics and equity are core concerns and will be interwoven in every project from start to end. The “Nexus” activity, led by Digital Promise, will create meaningful two-way engagement between the people in the Institute and people in industry, education, research and policy outside the Institute.

Prototype AI-enabled narrative learning environments will be developed and evaluated each year. Advances in foundational AI will result in tools and techniques that will be shared. Advances in multimodal learning analytics will also be shared with the research community. Advances in approaches to integrating ethics and equity into the work will be disseminated. The Nexus activity will share knowledge and invite participation with a broad public.

A typical product will be a game-like, immersive learning environment for K-12 students. Students may be asked to explore a phenomenon or solve a problem by navigating a virtual space, such as an island, and talking with the people they encounter on the island. They will also discover and interact with resources that can help them in their quest. The AI elements will allow for an evolving, customised plot and for new characters whose appearance is synthesised and whose behaviours and speech are generated. The AI elements will also allow the story to adapt to the participating students, with the aim of maximising both engagement and learning.
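The short sketch below illustrates, under simplifying assumptions, what balancing predicted engagement and predicted learning when choosing the next plot event could look like. It is not the Institute's actual method; the candidate events, student features and weights are hypothetical.

```python
# Minimal sketch of adaptive plot selection in the spirit described above,
# not the Engage AI Institute's actual approach. All names, features and
# weights are invented for illustration.

def predicted_engagement(event: dict, student: dict) -> float:
    # Toy proxy: students who prefer dialogue engage more with character scenes.
    return event["dialogue_weight"] * student["prefers_dialogue"] + event["novelty"]


def predicted_learning(event: dict, student: dict) -> float:
    # Toy proxy: favour events targeting concepts the student has not yet mastered.
    return 1.0 if event["concept"] not in student["mastered_concepts"] else 0.2


def choose_next_event(candidates: list[dict], student: dict) -> dict:
    """Pick the plot event that best balances engagement and learning."""
    return max(
        candidates,
        key=lambda e: 0.5 * predicted_engagement(e, student)
        + 0.5 * predicted_learning(e, student),
    )


student = {"prefers_dialogue": 0.8, "mastered_concepts": {"water_cycle"}}
candidates = [
    {"name": "meet_the_ranger", "concept": "ecosystems", "dialogue_weight": 1.0, "novelty": 0.3},
    {"name": "water_sampling", "concept": "water_cycle", "dialogue_weight": 0.2, "novelty": 0.6},
]
print(choose_next_event(candidates, student)["name"])
```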

Strategic partners are academic (Georgia Institute of Technology, Georgia State University, Vanderbilt University, Harvard University, Technical College System of Georgia, University of North Carolina at Greensboro), non-profit (1EdTech), and corporations (Boeing, Wiley, IBM, Accenture).

The primary objectives of AI-ALOE are 1) expanding access to quality jobs and improving workforce reskilling and upskilling by applying the affordances of AI to transform online education for adult learners; 2) advancing foundational AI, particularly in human-AI interaction including personalisation, machine teaching, mutual theory of mind, and data visualisation; and 3) developing ethical, inclusive, user-centred design-based research and responsible AI.

About 75 people work on AI-ALOE initiatives, about 50 of whom receive direct support. AI-ALOE is funded by the US National Science Foundation with USD 20 million from 2021 to 2026.

  • Vocational/technical education

  • Higher and continuing education

  • Employee training

  • Informal adult learning for occupational skills

  • Development of scalable AI-based assistants that enable both instructors and adult learners to personalise teaching to their needs

  • Creating and improving feedback loops among instructors, learners, and AI-based agents

  • AI-powered enhancement of online cognitive engagement, teacher presence, and social interaction

  • Large language models and generative AI

  • Participatory design and ethical development for inclusion of diverse learners

  • Capacity building of skilled designers and researchers in AI and education

Current AI-ALOE technologies include:

  1. Apprentice Tutors: An Intelligent Tutoring System Providing Personalized and Adaptive Support for Math Problem Solving and Skill Learning

  2. Jill Watson: A Virtual Teaching Assistant That Empowers Teachers to Support and Engage All Students (a minimal illustrative sketch follows this list)

  3. SAMI (Social Agent Mediated Interactions): An AI Agent Connecting Learners and Building Community in Online Learning

  4. SMART (Student Mental Model Analyzer for Research and Teaching): An AI-Powered System for Formative Assessment and Feedback

  5. VERA (Virtual Experimental Research Assistant): A Virtual Laboratory Supporting Inquiry-Based Learning in Ecology
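As a purely illustrative example of the kind of technology listed above (for instance, a virtual teaching assistant answering routine course questions), the sketch below retrieves the closest stored FAQ answer using TF-IDF similarity. It is not the AI-ALOE or Jill Watson implementation; the FAQ entries are invented, and the project itself builds on large language models and more sophisticated techniques.

```python
# Hedged illustration only: a bare-bones retrieval step of the kind a virtual
# teaching assistant could build on. The FAQ entries are invented, and TF-IDF
# retrieval is a deliberately simple stand-in for the project's methods.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "When is assignment 2 due?": "Assignment 2 is due on Friday at 23:59.",
    "Where can I find the lecture recordings?": "Recordings are posted under 'Modules' after each class.",
    "How is the final grade calculated?": "The final grade is 60% assignments and 40% exam.",
}

questions = list(faq.keys())
vectorizer = TfidfVectorizer().fit(questions)
question_matrix = vectorizer.transform(questions)


def answer(student_question: str) -> str:
    """Return the stored answer whose FAQ question is most similar to the query."""
    similarities = cosine_similarity(vectorizer.transform([student_question]), question_matrix)
    best_match = similarities.argmax()
    return faq[questions[best_match]]


print(answer("what is the due date for assignment 2"))
```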

Over the next four years, AI-ALOE will expand its technological innovations to aid online adult learning across a range of contexts: academic, organisational, and informal/lifelong.

AI-ALOE identifies challenges and opportunities in online adult learning; creates foundational improvements in AI to improve learning outcomes at the episodic, course, programme and career levels; and studies the impact of those innovations across a range of learners, organisations, and contexts, with particular emphasis on aiding workforce access for marginalised groups.

GRAILE's partner institutions are the University of South Australia, University of Texas Arlington, Arizona State University, Monash University, The University of Queensland, Texas A&M University, and Southern Methodist University.

The main goal of GRAILE is to advance research on AI in education and to help education providers with the adoption of AI to support education systems. GRAILE works to influence policy and to develop senior leadership in areas where AI intersects with learning in K-12, higher education and corporate settings. GRAILE provides guidance and best practices on developing a vision for AI in education and implementing it.

GRAILE is a network organisation designed for organisations interested in how AI intersects with education systems. GRAILE members are part of a deeply connected network of AI-minded institutions, seeking to advance knowledge and leadership capability building to drive organisational change and support the digital transformation within their institutions.

  • articulation of AI needs with educational and corporate institutions

  • capacity building and professional and leadership development

  • developing and demonstrating AI-integrated infrastructure to support education systems, learning processes and outcomes

  • facilitation of organisational change and business development

  • policy and best practice development for implementation of AI in education systems

  • co-creation of research and innovation

  • research translation, knowledge transfer and dissemination

  • primary and secondary education

  • vocational education

  • higher education

  • corporate

GRAILE’s agenda is focused on two main areas of work. One is to advance research and innovation on AI in education systems to help develop and evaluate an education system where AI capability becomes an integrated part of its ecology. Here the focus is on making sense of AI and on testing and piloting AI infrastructure, tools and solutions to support education processes and outcomes. The second body of work concentrates on research translation, capability and leadership development to help drive organisational change and policies to support digital transformation and adoption of AI.

There is a need for research and knowledge translation, which requires an information ecosystem that targets both the knowledge needs of senior administrators and the practical translation of research into classroom settings. GRAILE provides its members with a platform where they can participate in research and pilot studies, short courses, webinars, debates, and conference events. Besides these large-scale public events, GRAILE offers tailor-made engagement programmes where member organisations can benefit from direct support and capability programmes suited to their needs.

GRAILE produces annual reports on the state of AI and its impact on education systems and organisational change. It offers regular podcasts on key emerging trends and developments in AI in education, yearly leadership retreats with leading experts for GRAILE members, and short courses and webinars for professional development. GRAILE publishes research outcomes in academic and popular journals.

An example of a GRAILE event is the yearly Empowering Learners for the Age of AI conference (ELAI). This is an open, global online conference held over two days across all time zones, with key events hosted by conference nodes within Australasia, Europe and America. With over 1 500 registrations, this conference has been welcomed as a much-needed platform to support community building and sharing on how AI is transforming education systems. In December 2023, ELAI will use a hybrid format combining in-person and online participation, hosted by Arizona State University.

Our future society will be dominated by AI, advanced technology and automation, and our next generation of citizens and workers needs to be AI savvy to continue to make a difference. This future starts now, and we need an AI curriculum in our schools to help prepare students to learn to develop AI, to interact with non-human actors and to engage critically in ethical discussions, so that they can co-own their future and be inventive in society. In the AI Playground, students can develop AI, experience through play how AI works, engage in ethical discussions about the impact of their algorithms and understand what it takes to develop AI.

The AI Playground offers a cloud-based learning environment for schools to help teachers offer AI learning experiences to their students.

The AI Playground provides a safe place to:

  • develop critical and computational thinking skills, teamwork skills, and design thinking skills.

  • learn to utilise and shape AI to solve complex problems.

  • learn to critically and ethically evaluate AI.

  • develop networked learning skills ready to operate in hybrid human-AI partnerships.

  • develop fluid agency to operate in a world co-populated with non-human actors.

  • access lesson plans and curriculum resources for teachers

  • access teacher capacity building and professional development resources

  • access a teacher online network

  • primary and secondary education

  • vocational education

  • teacher training

The AI Playground offers a social space for exploration that is safe, playful and inspiring. Our mission is to create an AI playground where students can take ownership over AI, play with it and develop AI to follow their imagination. The AI Playground is a digital learning environment that helps educators to bring AI into the classroom, and it offers students a way to learn together with AI to solve complex problems that humans struggle to solve on their own. This is done by providing students with challenges, i.e. lesson plans, that encourage this kind of ownership, play and creation.

The AI Playground provides schools with access to learning with AI in a game-based environment built around a set of challenges. These challenges enable students to actively explore AI in the real world, understand how it works and even develop AI to solve complex problems. Further, the AI Playground provides teaching materials and resources to help teachers implement these challenges in their classroom, along with guidance on how to effectively use the AI Playground as a world of imagination with problems ready to be solved. The AI Playground also provides access to a growing online community of teachers where ideas and resources can be shared to collectively grow the implementation of the AI Playground in schools.

In the AI Playground the students will team with AI to collaboratively solve complex problems based on challenges. This way students will have hands-on experience of AI capability and develop skills to create and train AI to solve problems and learning tasks they are confronted with.

An example of a lesson plan in the AI Playground is the Mars rover challenge (Annex Figure 13.A.2). Here the students will work with AI to learn about Mars exploration. The students will work in groups, and the first problem to solve is how to build a Mars rover, to learn about rover design and the use of sensors to collect data about the Mars environment. The students will use Lego to physically build the rover in the classroom, a process that will be supported with AI based on computer vision within the AI Playground. The students can team up with AI to explore and identify the Lego pieces that are required for the rover design. By putting Lego pieces on their desk, the AI will recognise them and provide feedback about the pieces it has identified (colour, size, name, etc.). By interacting with the AI Playground, the students learn to filter through the Lego pieces and select all the bricks they need for the rover component they are currently working on. The picture below on the left illustrates this process.

On the right side of this illustration, the students see the camera output from the AI Playground. This is where computer vision is used in real time to identify Lego pieces. In the cards presented in the lower section of the illustration, the students receive feedback about the Lego pieces that have been picked up by the AI. This is the phase where students learn whether these pieces are required for the component they are working on. They will also receive more detail about each of these bricks and how many of them they need. On the left side of the same illustration, the students can see a 3D model of the rover component they are working on. The students can use this 3D model to zoom in and out and rotate the component they are trying to build. This is a complex problem to solve, and the AI Playground can help the students in real time. Together with AI, the students can locate where each brick needs to go. Each Lego brick (one or multiple pieces) that has been placed on the desk will be highlighted in the 3D model, so the students can work together with AI to match the pieces and support the building process. Over time, the AI will learn from the students' design pathways and the order in which they are building their rover, meaning that the AI can start to make recommendations about what might be the next brick the students can use.
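As a rough illustration of the kind of real-time brick recognition described above, the sketch below detects red brick-like regions in a single camera frame by simple colour thresholding with OpenCV and reports their approximate size. It is a stand-in under stated assumptions, not the AI Playground's computer-vision pipeline; the colour range and size threshold are hypothetical and would need calibration.

```python
# Minimal, illustrative sketch: detect red brick-like regions by colour
# thresholding and report their approximate size. Not the AI Playground's
# actual computer-vision models.

import cv2
import numpy as np

# Hypothetical HSV range for red Lego bricks (values would need calibration).
LOWER_RED = np.array([0, 120, 70])
UPPER_RED = np.array([10, 255, 255])


def detect_red_bricks(frame: np.ndarray) -> list[dict]:
    """Return colour/size feedback for red brick-like regions in one camera frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_RED, UPPER_RED)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    feedback = []
    for contour in contours:
        if cv2.contourArea(contour) < 500:  # ignore small noise regions
            continue
        x, y, w, h = cv2.boundingRect(contour)
        feedback.append({"colour": "red", "width_px": w, "height_px": h})
    return feedback


camera = cv2.VideoCapture(0)  # desk camera
ok, frame = camera.read()
if ok:
    for brick in detect_red_bricks(frame):
        print(f"Detected a {brick['colour']} piece, roughly {brick['width_px']}x{brick['height_px']} px")
camera.release()
```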

As soon as they have finished their rover component, the students can present it to the AI Playground for a final inspection to see whether this part has been successfully completed. Once the entire rover has been built and the sensors have been programmed and tested, the students can travel to Mars and drive their digital Lego Mars rover twin in a simulated Mars environment (see the second illustration, on the right). Now they can drive around, explore existing Mars rovers and learn about their sensors and capability. They can, for example, trace back what the NASA Perseverance rover has been doing. Furthermore, the students can use their sensors and start collecting Mars data for analysis by themselves. Once they have completed their Mars mission, they can travel back to Earth.

Co-led by Sanna Järvelä from the University of Oulu (Finland) and Inge Molenaar from Radboud University (the Netherlands), the center's partners include Maria Bannert from the Technical University of Munich (Germany), Dragan Gašević from Monash University (Australia), and Roger Azevedo from the University of Central Florida (United States).

CELLA is set up as a global collaborative research center focused on equipping young learners to learn, live, and work in the age of AI. The aim is to develop research-based AI-driven learning technologies that promote children’s learning skills and ensure their well-being.

The center is supported by a grant of CHF 2 million (EUR 1.9 million) from the Jacobs Foundation for five years. The team consists of five Principal Investigators supervising five PhD students and two postdoctoral researchers coordinating the team and research activities.

  • primary and secondary education

  • articulation of needs with educational institutions

  • co-creation of innovation

  • knowledge transfer and development

Our center will bundle the knowledge and skills from the teams in different fields to design, validate, and implement new AI-driven learning technologies to support learners of different ages to improve their self-regulated learning (SRL), that is, how they go about learning. Moreover, we develop and test practices and inclusive design principles that boost the agency of learners to make informed instructional decisions about their learning while working with AI.

CELLA’s main activities are divided into four global research phases. In the first phase, all sites focus on detecting similar SRL processes in a secondary education context using the same AI-driven system. In the second phase, each site investigates these SRL processes in more open and diverse educational arrangements. Phase 3 involves designing and implementing personalised support for SRL, and phase 4 focuses on the long-term development and support of SRL. The research phases are based on collaboration among researchers, schools, and edtech companies.

The findings from the four research phases are disseminated through various channels, including research publications, public blogs, and conferences such as EARLI and its special interest groups (e.g., SIG27). Each PhD student writes their dissertation on their CELLA research work.

In the first study, which is currently running, we measure the self-regulated learning (SRL) action processes of 12-15-year-old students during essay writing using the trace data of a digital learning environment (N = 250). This is done in classroom settings by all five teams at secondary schools in the different countries, and the experiment is integrated into existing lessons. The findings serve to validate an AI-driven system that detects SRL, that is, how students go about their learning.
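To illustrate what deriving SRL action processes from trace data could involve, the sketch below applies a simple rule-based coding of logged events to SRL phases and counts them per student. The event names and the mapping to phases are invented for illustration; CELLA's actual coding scheme and detection models are more sophisticated than this.

```python
# Hypothetical sketch of rule-based SRL coding of trace data from a digital
# learning environment. Event names and the mapping to SRL phases are invented.

from collections import Counter

# Hypothetical mapping from logged events to SRL phases.
EVENT_TO_SRL_PHASE = {
    "open_task_instructions": "orientation",
    "create_outline": "planning",
    "set_timer": "planning",
    "check_rubric": "monitoring",
    "reread_own_text": "monitoring",
    "revise_paragraph": "evaluation",
}


def code_trace(events: list[str]) -> Counter:
    """Count the SRL phases observed in one student's ordered event log."""
    return Counter(EVENT_TO_SRL_PHASE[e] for e in events if e in EVENT_TO_SRL_PHASE)


student_log = [
    "open_task_instructions", "create_outline", "type_text",
    "check_rubric", "type_text", "reread_own_text", "revise_paragraph",
]
print(code_trace(student_log))
# e.g. Counter({'monitoring': 2, 'orientation': 1, 'planning': 1, 'evaluation': 1})
```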

Notes

1. https://www.nesta.org.uk/project/future-work-and-skills/

2. See https://www.ai4t.eu/.

3. The examples are selected based on the personal network of the first author and connections thereof to give a good overview of the ecosystem around AI and education. This is not intended to be a comprehensive overview.

4. See https://ai.eitcommunity.eu/#page-top and https://primabord.eduscol.education.fr/IMG/pdf/p2ia_francais_mathematiques_cp_ce1_ce2_web.pdf.

5. circls.org

6. engageAI.org

7. https://aialoe.org

Legal and rights

This document, as well as any data and map included herein, are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area. Extracts from publications may be subject to additional disclaimers, which are set out in the complete version of the publication, available at the link provided.

© OECD 2023

The use of this work, whether digital or print, is governed by the Terms and Conditions to be found at https://www.oecd.org/termsandconditions.