Chapter 5. Strengthening system processes to evaluate national education performance

System evaluation in Georgia is built upon a strong foundation of data collection. Despite having rich data, however, the use of evidence in policy-making is not systematic, which leads to an unstable environment in which policies can be created or eliminated quickly without a rigorously evaluated rationale. Several factors contribute to this environment, including the absence of a dedicated research and evaluation body within the ministry and the lack of tools that can be used to analyse Georgia’s educational data. Establishing such a body and developing such tools would embed policy-making structures that are underpinned by evidence review and guided by continuous monitoring of data. An important component of system evaluation, currently missing in Georgia, is regular national assessments that collect information about student outcomes at various stages of learning. Establishing these would strengthen the evidence base that informs decision-making.


Introduction

System evaluation is central to improving educational performance. Evaluating an education system holds the government and other stakeholders accountable for meeting national goals and provides the information needed to develop effective policies. In Georgia, system evaluation has seen significant development over recent years, especially in the areas of data collection and management.

Despite these advancements, however, some elements of system evaluation are still lacking. In particular, Georgia does not have a strong culture of using evidence to inform policy-making and there are few tools that can be used to analyse the rich data that are centrally collected. As a result, decisions are sometimes made without being based on relevant evidence. Furthermore, though national assessments are administered, the funding that supports these activities is being phased out and, afterwards, Georgia will not have a regular, external measure of student outcomes. In a context where educational inequity is worsening, it is problematic that these processes, which would help to systematically identify and address equity gaps, are not in place.

This review recommends that Georgia create a research and evaluation unit whose explicit purpose is to analyse data and embed the use of evidence in decision-making. More data analysis tools need to be created to aid stakeholders at all levels in making sense of the available data. These tools include analytical functions built into the E-School platform as well as a digital monitoring dashboard. Finally, Georgia should develop a national assessment strategy so that external measures of student learning can be continuously collected and used to help guide school-level instruction and system-level strategic planning.

Key features of effective system evaluation

System evaluation refers to the processes that countries use to monitor and evaluate the performance of their education systems (OECD, 2013[1]). A strong evaluation system serves two main functions: to hold the education system and the actors within it accountable for achieving their stated objectives and, by generating and using evaluation information in the policy-making process, to improve policies and ultimately education outcomes. System evaluation has gained increasing importance in recent decades across the public sector, in part because of growing pressure on governments to demonstrate the results of public investment and improve efficiency and effectiveness (Schick, 2003[2]).

In the education sector, countries use information from a range of sources to monitor and evaluate quality and track progress towards national objectives (see Figure 5.1). As well as collecting rich data, education systems also require “feedback loops” so that information is fed back into the policy-making process (OECD, 2017[3]). This ensures goals and policies are informed by evidence, helping to create an open and continuous cycle of systemic learning. At the same time, in order to provide public accountability, governments need to set clear responsibilities – to determine which actors should be accountable and for what – and make information available in timely and relevant forms for public debate and scrutiny. All of this constitutes a significant task, which is why effective system evaluation requires central government to work across wider networks (Burns and Köster, 2016[4]). In many OECD countries, independent government agencies like national audit offices and evaluation agencies, as well as the research community and sub-national governments, play a key role in generating and exploiting available information.

A national vision and goals provide standards for system evaluation

Like other aspects of evaluation, system evaluation must be anchored in a national vision and/or goals, which provide the standards against which performance can be evaluated. In many countries, these are set out in an education strategy that spans several years. An important complement to a national vision and goals are targets and indicators. Indicators are the quantitative or qualitative variables that help to monitor progress (World Bank, 2004[5]). Indicator frameworks combine inputs like government spending, outputs like teacher recruitment, client outcomes like student learning, and societal outcomes like trust in government. While client and societal outcomes are notoriously difficult to measure, they are a feature of frameworks in most OECD countries because they measure the long-term results that a system is trying to achieve (OECD, 2009[6]). Goals also need to balance the outcomes a system wants to achieve with indicators for the internal processes and capacity throughout the system that are needed to achieve these outcomes (Kaplan and Norton, 1992[7]).
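To make the idea of a balanced indicator framework concrete, the sketch below shows how such a framework might combine inputs, outputs, client outcomes and societal outcomes. It is a minimal illustration only: the indicator names, baselines and targets are invented for this example and are not actual Georgian or OECD indicators.

```python
# Illustrative indicator framework combining inputs, outputs and outcomes.
# All names and values are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    category: str   # "input", "output", "client outcome" or "societal outcome"
    baseline: float
    target: float

framework = [
    Indicator("Education spending (% of GDP)", "input", 3.5, 4.0),
    Indicator("New teachers recruited per year", "output", 800, 1000),
    Indicator("Students reaching minimum proficiency (%)", "client outcome", 62, 75),
    Indicator("Public trust in the school system (%)", "societal outcome", 55, 65),
]

for ind in framework:
    print(f"{ind.category}: {ind.name} (baseline {ind.baseline}, target {ind.target})")
```

Pairing each indicator with a baseline and a target in this way is what allows progress towards goals to be monitored over time.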

Reporting against national goals supports accountability

Public reporting of progress against national goals enables the public to hold the government accountable. However, the public frequently lacks the time and information to undertake this role, and their motivation tends to be driven by individual rather than national concerns (House of Commons, 2011[8]). This means that objective and expert bodies like national auditing bodies, parliamentary committees and the research community play a vital role in digesting government reporting and helping to hold the government to account.

An important vehicle for public reporting is an annual report on the education system (OECD, 2013[1]). In many OECD countries, such a report is now complemented by open data. If open data is to support accountability and transparency, it must be useful and accessible. Many OECD countries use simple infographics to present complex information in a format that the public can understand. Open data should also be provided in a re-usable form (e.g. downloadable files that can be processed in different ways), so that the wider evaluation community, such as researchers and non-governmental bodies, can analyse the data to generate new insights (OECD, 2018[9]).

National goals are a strong lever for governments to direct the education system

Governments can use national goals to give coherent direction to education reform, from the different units within central government to sub-national governance bodies and individual schools. For this to happen, goals should be specific (i.e. including timelines, actions and responsible persons), measurable (i.e. including performance indicators and targets), ambitious but feasible and, above all, relevant to the education system and society at large. Having a clear sense of direction is particularly important in the education sector, given its scale, the multiplicity of actors and the difficulty of retaining focus over the long-term process of achieving change. In an education system that is well aligned, national goals are embedded centrally in key reference frameworks, encouraging all actors to work towards their achievement. For example, national goals that all students reach minimum achievement standards or that teaching and learning foster students’ creativity are reflected in standards for school evaluation and teacher appraisal. Through the evaluation and assessment framework, actors are held accountable for progress against these objectives.

Figure 5.1. System evaluation

Tools for system evaluation

Administrative data about students, teachers and schools are held in central information systems

In most OECD countries, data such as student demographic information, attendance and performance, teacher data and school characteristics are held in a comprehensive data system, commonly referred to as an Education Management Information System (EMIS). Data are collected according to nationally and internationally standardised definitions, enabling data to be collected in an integrated manner, used across the national education system and reported internationally. An effective EMIS also allows users to analyse data and helps disseminate information about education inputs, processes and outcomes in a continuous and dynamic manner.

National and international assessments provide reliable data on learning outcomes

Over the past two decades, there has been a major expansion in the number of countries using standardised assessments. The vast majority of OECD countries (30), and an increasing number of non-member countries, have regular national assessments of student achievement (OECD, 2015[10]). This reflects the global trend towards greater demand for outcomes data to monitor government effectiveness, as well as a greater appreciation of the economic importance of all students mastering essential skills.

The primary purpose of a national assessment is to provide reliable data on student learning outcomes that are comparative across different groups of students and over time (OECD, 2013[1]). Assessments can also serve other purposes such as providing information to teachers, schools and students to enhance learning and supporting school accountability frameworks. Unlike national examinations, they do not have an impact on students’ progression through grades. When accompanied by background questionnaires, assessments provide insights into the factors influencing learning nationally and across specific groups. While the design of national assessments varies considerably across OECD countries, there is consensus that having regular, reliable national data on student learning is essential for both system accountability and improvement.

An increasing number of countries also participate in international assessments like the OECD’s Programme for International Student Assessment (PISA) and the IEA’s Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS). These assessments provide countries with periodic information to compare learning against international benchmarks as a complement to national data.

Evaluation and thematic reports provide information about the quality of teaching and learning processes

Qualitative information helps to contextualise data and provide insights into what is happening in a country’s classrooms and schools. For example, school evaluations can provide information about the quality of student-teacher interactions and about how a principal motivates and recognises staff. Effective evaluation systems use such findings to help understand national challenges – like differences in student outcomes across schools.

While policy evaluation is rarely systematic, a growing number of OECD countries are starting to evaluate their policies. Approaches include evaluation shortly after implementation and ex ante reviews of major policies to support future decision-making (OECD, 2018[11]). Countries are also making greater efforts to incorporate evidence into policy design, for example by commissioning randomised control trials to determine the likely impact of a policy intervention.

Effective evaluation systems require institutional capacity within and outside government

System evaluation requires resources and skills within ministries of education to develop, collect and manage reliable, quality datasets and to exploit education information for evaluation and policy-making purposes. Capacity outside or at arm’s length of education ministries is equally important, and many OECD countries have independent evaluation institutions that contribute to system evaluation. Such institutions might undertake external analysis of public data, or be commissioned by the government to produce annual reports on the education system and undertake policy evaluations or other studies. In order to ensure that such institutions have sufficient capacity, they may receive public funding but their statutes and appointment procedures ensure their independence and the integrity of their work.

System evaluation in Georgia

Georgia has successfully established several components that are integral to performing system evaluation (Table 5.1). For example, the Ministry of Education, Science, Culture and Sport (MoESCS) has developed a national vision for its education system and action plans to direct the implementation of its strategic goals. Several independent bodies collect valuable data and the Education Management Information System (EMIS) stores information related to students, teachers and schools. Nevertheless, many of these components and processes are not fully developed and, more importantly, not oriented towards system evaluation. As a result, research and analysis that could support system improvement is not performed systematically and policy is sometimes made without being strongly informed by evidence.

Table 5.1. System evaluation in Georgia

References for national vision and goals:

  • Unified Strategy for Education and Science 2017-21

  • Georgia 2020 socio-economic development strategy

| Tools | Body responsible | Outputs |
|---|---|---|
| Administrative data | Education Management Information System (EMIS) | Unpublished, ad-hoc reports from EMIS; annual statistical releases |
| National assessment | National Assessment and Examinations Centre (NAEC) | Mathematics (9th grade); Biology, Physics and Chemistry (9th grade); Georgian as a second language (7th grade) |
| International assessments | National Assessment and Examinations Centre (NAEC) | Progress in International Reading Literacy Study (PIRLS); Trends in International Mathematics and Science Study (TIMSS); Programme for International Student Assessment (PISA): mathematics, science and reading |
| School evaluations | National Centre for Education Quality Enhancement (NCEQE) | Under discussion |
| Policy evaluations | No established process | - |
| Reports and research | Department of Strategic Planning and International Relations; National Assessment and Examinations Centre (NAEC) | Monitoring Report of the Unified Strategy of Education and Science; ad-hoc internal reports on assessment and examination results |
Sources: MoES (2018[12]), Georgia Country Background Report, Ministry of Education and Science, Tbilisi; MoES (2017[13]), Unified Strategy for Education and Science for 2017-2021; Government of Georgia (n.d.[14]), Social-economic Development Strategy of Georgia - Georgia 2020, www.adb.org/sites/default/files/linked-documents/cps-geo-2014-2018-sd-01.pdf (accessed on 23 January 2019).

High-level documents express the national vision for education

Georgia’s Unified Strategy for Education and Science for 2017-21 provides a comprehensive vision for education at all levels. This document sets out specific goals that aim to expand access to quality education and ensure that students acquire the competences necessary to support Georgia’s sustainable development (MoESCS, 2017[13]) (see chapter 1). The strategy is aligned with the overarching government strategy “Georgia 2020” and seeks greater alignment with European Union policies and practices, such as the Bologna Process requirements for higher education (Government of Georgia, n.d.[14]; MoESCS, 2017[13]).

Few national stakeholders are aware of the national strategy

The Unified Strategy was developed by the MoESCS Department of Strategic Planning and International Relations with the support of other governmental bodies and donor agencies, including the European Union, the United Nations Children's Fund (UNICEF), the Millennium Challenge Corporation (MCC), the World Bank and the United States Agency for International Development (USAID). The document was created somewhat hurriedly in response to international pressure (MoESCS, 2017[15]), which contributes to stakeholders’ limited knowledge or ownership of its goals. The OECD review team met with school principals and teachers in Georgia and, when asked, few were aware of the strategy and its contents. This is problematic because without a single, shared vision, stakeholders cannot work together to achieve system-wide strategic goals.

New priorities and reforms have been introduced in parallel to the strategy

In 2018, the new Minister introduced priority actions for reforming all levels of Georgia’s education system between 2018 and 2023 (MoES, 2018[16]). One of the main goals of this “Vision of Reform” is to implement the “Model of a New School”, which, among other things, will introduce new management approaches, a unified electronic teaching platform and new teaching methods, and will increase teacher and school leader salaries. In addition, the plan states that the current examination system is to be assessed with the goal of implementing a new model in 2020.

Tools to collect and monitor information exist but are not systematically used

Georgia has a well-established information gathering system. However, hard-to-use monitoring and data tools limit the analysis that can be performed on the available information. Also problematic is that some important data collection instruments, such as the national assessment, might not remain in place in coming years due to a lack of funding, which would mean that even less data on education outcomes would be available.

An action plan and monitoring report accompany the Unified Strategy, but lack specific targets

The Unified Strategy is accompanied by a detailed action plan that specifies the activities, expected outcomes, responsible entities and estimated budget for each goal (MoESCS, 2017[15]). Progress against the action plan, and the Unified Strategy in general, is tracked through the Monitoring Report of the Unified Strategy of Education and Science. This report is produced by the Department of Strategic Planning and International Relations and is available online via the Ministry’s website.

While these tools complement the Unified Strategy, they do not establish specific indicators that can be used to measure progress towards system-level targets (OECD, 2013[1]). For example, one stated objective in the plan is to promote vocational education and increase its attractiveness, and an associated activity is to encourage the introduction of vocational education components in schools. However, no clarification is provided regarding how this would take place, nor does the action plan identify a target number of schools or students it aims to reach. Nevertheless, the Monitoring Report states that this goal was achieved (MoES, n.d.[17]).

EMIS collects and manages data, but stakeholders have difficulty analysing information

In 2012, MoESCS established EMIS as an entity at arm’s length from the Ministry. EMIS is responsible for collecting and storing information about Georgia’s education system, including data on students, the teaching staff and school infrastructure, as well as strengthening information and communications technology in Georgian schools (MoESCS, 2018[12]).

According to reports from national stakeholders, it is easy to input and update school and individual data in the EMIS online portal, E-School. As a result, virtually all stakeholders provide information directly to EMIS, which limits parallel data collections and creates greater data consistency. Nevertheless, stakeholders reported having trouble using the information collected by EMIS. For example, school principals might want to know the attendance rate at their schools over time, or disaggregated by boys and girls. While the EMIS database holds raw data about attendance and gender, the E-School portal does not provide users with tools to analyse the data and produce the desired results. At present, to arrive at such results, E-School users must manually download the raw data and then manipulate it in software such as Microsoft Excel, which requires technical capacity and time that few have, and the manual manipulation itself risks introducing errors.
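To illustrate the kind of built-in analysis that is currently missing, the sketch below performs the attendance disaggregation described above. It is a minimal sketch only: the file name and the student_id, gender, year, days_present and days_enrolled columns are illustrative assumptions, not the actual E-School export format.

```python
# A minimal sketch of the attendance analysis a principal currently has to
# perform by hand. File and column names are hypothetical assumptions,
# not the actual E-School export format.
import pandas as pd

raw = pd.read_csv("eschool_attendance_export.csv")  # hypothetical raw export

# Attendance rate per student, then averaged by year and gender.
raw["attendance_rate"] = raw["days_present"] / raw["days_enrolled"]
summary = (
    raw.groupby(["year", "gender"])["attendance_rate"]
       .mean()
       .unstack("gender")
)
print(summary.round(3))  # one row per year, one column per gender
```

Building this kind of query into the E-School interface itself would remove the need for manual downloads and reduce the risk of manipulation errors.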

Public access to EMIS data is limited

E-School allows account-holding users to access information, albeit in a non-analytical manner. Anyone without credentials to access E-School who wishes to view educational data, even anonymised and aggregated data, cannot do so via an online portal and must submit a formal written request to EMIS. The request is processed and the information issued up to ten days later (EMIS, n.d.[18]). The lack of automated and immediate data reporting creates additional work for EMIS staff and delays for users. In addition, there is no user-friendly interface specifying which information is available. This requires that information requestors communicate at length with EMIS to determine what information can be transferred and in what format before EMIS can even begin compiling the requested information.

Student data is not integrated across Georgia’s databases

Within EMIS, student data is identified using students’ government-issued identification numbers, which allows data to be aligned across all public institutions (e.g. health and labour) (MoESCS, 2018[12]). This structure is efficient, eliminates concerns about incorrectly identifying students and facilitates cross-institutional research.

Although student data are aligned via their identification numbers, MoESCS’s various databases are not integrated with each other, which makes merging data difficult. For instance, while student and teacher demographic data are housed in EMIS’s systems, their examinations data are located in NAEC’s systems and the two systems are not linked. This means that researchers cannot immediately access the data they need to answer important questions (e.g. student attendance vis-à-vis test scores). While the data can be requested and manually merged, these procedures represent an administrative burden and require significant technical capacity to be completed.
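As a sketch of the manual merging described above, the following assumes hypothetical extracts from the EMIS and NAEC systems keyed on the shared government identification number; the file and column names are illustrative only, not the systems' actual schemas.

```python
# Hypothetical one-off merge of EMIS demographic data with NAEC assessment
# results via the shared government identification number. File and column
# names are illustrative assumptions.
import pandas as pd

emis = pd.read_csv("emis_students.csv")     # e.g. gov_id, school, attendance_rate
naec = pd.read_csv("naec_grade9_math.csv")  # e.g. gov_id, math_score

merged = emis.merge(naec, on="gov_id", how="inner")

# Example question from the text: student attendance vis-a-vis test scores.
print(merged[["attendance_rate", "math_score"]].corr())
```

Linking the databases directly would make such questions answerable without ad-hoc requests and manual processing.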

Georgia conducts regular national assessments but does not have a long-term strategy

Most OECD education systems have regular sample or census-based assessments to collect information on learning outcomes across different groups of students, at different stages of education and over time. In Georgia, NAEC has undertaken, with funding from MCC, regular sample-based student assessments for maths and sciences in Grade 9, and a census-based assessment for Georgian as a second language in non-Georgian schools in Grade 7 (MoESCS, 2018[12]). These assessments have no impact on students’ progression and are used to monitor system-level performance.

The structure of these assessments only allows for data collection in selected grades and competencies, which gives a limited understanding of students’ skills and development. For instance, students are not assessed in their mother tongue, which is a fundamental skill, until Grade 12. Furthermore, while these assessments reflect performance at the national level, they do not produce reliable information at the regional or school levels that could help identify at-risk groups and other gaps in the system.

Georgia has recently begun to develop a long-term strategy for its national assessments. The review team was told that MoESCS is considering testing students in grades 4, 6 and 10. The assessments will serve diagnostic, formative purposes, but the exact subjects and other features of the assessments’ design have not been determined.

Georgia participates regularly in international assessments

Several countries participate in international assessments to compare their outcomes to other countries’ and use the results to inform policy-making. Georgia has participated in TIMSS (2007, 2011 and 2015) and PIRLS (2006, 2011 and 2016) as well as the OECD’s PISA (2009 and 2015). As part of this process, NAEC has developed extensive experience in administering international assessments (MoESCS, 2018[12]). The OECD review team was told that ad-hoc analysis of international assessment data is performed, primarily by NAEC, but that the results of this analysis are not systematically used in the policy-making process, nor are they made public.

Evaluation and thematic reports

There are no annual, analytical reports about the education system

Regular reporting is a common feature of most OECD education systems. By bringing together a wide range of information on student outcomes and demographics as well as contextual information on student participation, regular reports highlight the system’s main challenges and play a key role in system evaluation by disseminating information (OECD, 2013[1]). Despite having strong technical capacity in-house and across affiliated agencies, the MoESCS produces neither regular nor thematic analytical reports about the education system.

As mentioned previously, the Department of Strategic Planning and International Relations produces a Monitoring Report. While this document monitors progress towards strategic objectives, such as completion rates, it does not analytically evaluate the state and performance of the Georgian education system. In addition, the report is extremely long and difficult for readers to navigate.

External organisations have produced reports about Georgia’s education system

International and non-government organisations have undertaken valuable analysis of Georgia’s education system that has contributed to system evaluation. In 2014, for example, the World Bank published a sector-wide policy review (World Bank, 2014[19]) and Systems Approach for Better Education Results (SABER) Reports (World Bank, 2014[20]) have provided valuable recommendations on professional development and teacher policies. This work provided technical advice on how to support and build the capacity required at the Ministry to develop a Unified Strategy and accompanying Action Plan.

Evaluation institutions

Georgia does not have an agency that is responsible for research and evaluation across the entire education system. There are several bodies with research capacity, such as NAEC and EMIS, but these organisations do not have a remit to look at the system as a whole and instead analyse only their own data. Some bodies, such as the Department for Strategic Planning and International Relations, are mandated to guide the activities of the entire system, but they do not have research capacity. Without rigorous study of the system, strategic planning tends to be based on the priorities of individuals rather than on sound evidence.

Agencies such as NAEC and EMIS have the potential to make a significant contribution to system evaluation, but there are few mechanisms to ensure regular co-operation between them. Many of the agencies report directly to the Minister and have little horizontal communication and engagement with each other. This lack of coordination is problematic because it isolates the various agencies and compels them to focus on their own internal processes. For system evaluation to occur, these organisations need to be oriented towards the same important priorities, as identified by the Unified Strategy, so their programmes of work support each other and the national vision.

Policy issues

An important challenge to developing system evaluation in Georgia is the lack of recognition of the importance of conducting research into the system and using that research to inform policy. This leads to an unstable policy environment in which individuals’ concerns and beliefs can take priority over long-term, evidence-based reforms. This review recommends that a research and evaluation unit be established in MoESCS and that this unit be responsible for overseeing system evaluation activities. This unit would draw attention to the need to base decision-making on a careful review of the evidence and provide the capacity to help the government do so. A second priority for Georgia is to make the education data it collects easier to use for research and instructional purposes. Because data capacity in Georgia is relatively high, a small amount of extra investment in this area could produce notable results. Finally, Georgia needs to design a national assessment system that is aligned with key system goals. This assessment system must be stable and adequately resourced in order to monitor student learning over time. It will be important, however, that Georgia address key questions related to the assessment’s formative and accountability functions before introducing new measures.

Policy issue 5.1. Building a culture of research, evaluation and improvement of the education system

Reviews of education systems reveal common practices related to research and evaluation that contribute to successful system evaluation. These include:

  • analysing available information to produce a rich body of evidence about the system

  • establishing procedures that position evidence review at the centre of policy-making

  • evaluating policies to determine their effect and to inform future decision-making.

In Georgia, the foundations upon which a culture of research and evaluation can be developed are presently lacking. There is no unit responsible for guiding the national-level evaluation and research agenda and, as a result, limited analytical information is produced about the education system as a whole. Without consistent reporting about the system, policies are created without reviewing key evidence that could inform their development, and resources are spent in support of unsubstantiated initiatives.

Recommendation 5.1.1. Establish a formal research and evaluation unit

Many countries have a unit that is dedicated to guiding research and evaluation into the education system and using research results to inform planning. Some may conduct research themselves, while others coordinate the work of external researchers in accordance with government priorities. These units are also responsible for ensuring that the research that is produced is used in the policy-making process. Box 5.1 describes the composition and roles of two mature research and evaluation units from the United Kingdom and the Netherlands.

Box 5.1. Education research and evaluation units in the United Kingdom and the Netherlands

The Department for Education in the United Kingdom has created a strategy unit within the Department that works with senior leadership to shape the government’s overall education strategy. It has, among others, the following responsibilities:

  • Strategic projects – deliver high quality strategic policy projects.

  • Thought leadership – bring new, interesting and challenging thinking about education policy into the Department.

  • Supporting priorities – help develop and set priorities for the Department.

  • Social mobility – lead the Department’s work on social mobility, such as improving outcomes for disadvantaged pupils.

The Strategy Unit is led by a unit head and composed of project leads and analysts. Projects that are led by the Strategy Unit undergo an extensive research process that includes collection and analysis of quantitative and qualitative data. The outputs of the Strategy Unit are submitted to government leadership for their consideration in the policy-making process.

In the Netherlands, the Ministry of Education and the Netherlands Organisation for Scientific Research (NWO) have established the Netherlands Initiative for Education Research (NRO). This organisation does not conduct its own research, but is responsible for coordinating the research agenda of the Ministry by soliciting and reviewing external requests to perform research. A Steering Group that consists of representatives from education practice, education policy and the research community leads NRO.

Source: NWO (n.d.[21]), Netherlands Initiative for Education Research, www.nro.nl/en/ (accessed on 7 January 2019).

In Georgia, research and evaluation responsibilities are loosely divided between NAEC and EMIS. This configuration has limitations, as both bodies mostly work with their own data and neither is responsible for the evaluation of the entire system. Their position at arm’s length from the ministry also prevents them from being looked to for policy advice. Though their data can be linked via common data keys, research that draws upon both sources, in conjunction with others, is rarely conducted. Consequently, Georgian policy-makers do not have information that presents a general overview of the system and could inform decision-making. Creating a research and evaluation unit at the centre of MoESCS that is explicitly responsible for study of the entire system would greatly improve the country’s capacity to perform system evaluation.

Clearly define the role and position of the research and evaluation unit

MoESCS’s unit would be responsible for improving and centralising data access, evaluating the effectiveness of education policy, measuring progress towards strategic goals and promoting the use of evidence to inform policy-making and budgeting. The unit would not necessarily perform research and evaluation work itself, but could prioritise and commission it from other actors (see Box 5.1 for a description of how a similar unit in the Netherlands commissions research and evaluation work). Such an approach would strengthen demand for evidence and promote the development of education research capacity in general.

For the research unit to achieve its objectives, it must be prominently situated within MoESCS and not contained within a department. Therefore, the OECD review team recommends that the unit report directly to the Minister. This governance structure would provide the research unit with the mandate and recognition needed to guide different parts of MoESCS in a common direction. One of its first priorities might be to ensure that an analytical report about the system is published regularly and that the results are discussed in detail (see Recommendation 5.1.2).

The unit might initially be staffed by two to three individuals with experience in quantitative analysis, use of evidence in policy-making and delivery of policy. Funding to support the unit would have to originate from a dedicated budget line such that other items do not take financial priority over the unit. Given the importance of data to the work of the research and evaluation unit, Georgia should consider integrating EMIS into the unit. It is rare across OECD countries to have EMIS separate from the ministry. As the unit gains prominence and capacity, it would be well-positioned to lead EMIS, which would further centralise the importance of data in policy-making.

Develop a research agenda for the research and evaluation unit

Given the role of the research and evaluation unit, its work will need to be guided by a research agenda that explains what it wishes to do and why (World Bank, 2014[19]). This agenda should be formed based on the strategic issues defined by the Ministry and its major stakeholders. This process should be part of the strategic planning process – where strategic issues are identified based upon an analysis of the Ministry’s mission and values and the external environment (see Recommendation 5.1.3) (Bryson, 2018[22]).

While the research agenda should focus on issues related to established system goals, like equity (see Recommendation 5.1.3), it could also include items related to the feasibility of new policy proposals. For example, increasing teacher salaries is a priority of MoESCS. Several complex factors need to be taken into account when determining the feasibility and desirability of different options. These include:

  • projections on retirement

  • the long-term fiscal impact of more teachers moving up the salary scale

  • how the new salary structure can best incentivise improvements in practice and reinforce other initiatives (such as the New School Model)

  • the trade-offs between salary increases and other investments that could support system goals.

At present, it seems that decisions regarding teacher salary structures are being made without consulting adequate information (see chapter 3). Given the importance of this issue and the considerable financial resources involved, the research and evaluation unit should make reviewing this topic a key priority.

Recommendation 5.1.2. Encourage the dissemination and usage of research and evaluation activities

A core function of research and evaluation units in most OECD countries is the production of regular reports about the state of the system and periodic analytical reports about specific themes (OECD, 2013[1]). Such reports help hold the government accountable in addition to providing information about how policy and practice can be improved. Research and evaluation units can also be responsible for encouraging the use of these reports for policy-making purposes, as is the case with the Strategy Unit from the United Kingdom (see Box 5.1).

In Georgia, the Monitoring Report most closely approximates a report about the state of the system but, at 135 pages, it is difficult to read and interpret. The report is largely descriptive and does not study the system in depth to identify strengths and areas for improvement. The absence of regular, meaningful reporting is also one of the factors that has impeded the development of transparent, evidence-informed policy-making in Georgia. Policy-makers rarely meet to review important research findings and discuss policy solutions to the issues that the research identifies. There is also no clear expectation within government that such review takes place.

Annually release an analytical report about the education system

Most OECD countries regularly publish an analytical report on education, the content of which is guided by national priorities and goals (OECD, 2013[1]). An annual analytical report would contain information related to the key indicators of the national action plan. In addition, the analytical report would study the inputs, processes and outputs that are related to the indicators (OECD, 2013[1]).

In Georgia, the research and evaluation unit would be responsible for developing the annual report. Data and additional research capacity could be provided by EMIS (unless it is integrated within the unit) and NAEC, or requested externally. This report would differ significantly from the existing monitoring report in that it would be more analytical and delve “deeper” into key strategic issues. For example, where the monitoring report might simply state current levels of student enrolment, the annual report would connect this indicator to changes in population patterns and school resource allocation. The report might also discuss future policies or activities intended to address these challenges. These are common features of analytical reports in OECD countries (Box 5.2).

Box 5.2. Annual analytical reports on the education system in the Czech Republic and Portugal

In the Czech Republic, the Ministry of Education, Youth and Sports produces an annual report that evaluates the overall education system (the Status Report on the Development of the Education System in the Czech Republic). The report summarises the main organisational and legislative changes that occurred in the given year and presents statistical indicators describing the situation and development in pre-primary, basic, secondary and tertiary education. The report also contains information about educational staff in the system, the funding of schools and the labour market situation of school leavers. These data constitute a basis for the development of education policies. Furthermore, the report typically includes an area of specific focus (e.g. in 2007 and 2008, the implementation of the curricular reform). Individual regions within the Czech Republic also produce their own reports to assess progress towards long-term policy objectives.

In Portugal, the National Education Council publishes the annual State of Education report, which provides an analysis of key data on the education system. The first issue, the State of Education 2010 – School Paths, offered a detailed investigation of student pathways in the education system and the second issue, The State of Education 2011 – The Qualifications of the Portuguese Population, provided an in-depth examination of the current qualifications of the population. The report also offers policy advice on how to improve the quality of basic and secondary education and evaluates policy initiatives. In 2011, these covered school evaluation, the funding of public schools, education for children aged three years and under, the reorganisation of the school network and specific education programmes.

Sources: Santiago et al. (2012[23]), OECD Reviews of Evaluation and Assessment in Education: Czech Republic 2012, OECD Reviews of Evaluation and Assessment in Education, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264116788-en; Santiago et al. (2012[24]), OECD Reviews of Evaluation and Assessment in Education: Portugal 2012, OECD Reviews of Evaluation and Assessment in Education, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264117020-en.

Release ad-hoc reports about thematic issues

A majority of OECD countries produce ad-hoc reports on specific themes (OECD, 2013[1]). These might range from analyses of observed achievement gaps to evaluations of national initiatives to improve science and mathematics education. Ad-hoc reports are formulated based upon the national research agenda defined through strategic planning processes. Guided by the agenda, a national Ministry of Education, through its research bodies or external bodies, would conduct research into specific issues from the agenda (see Box 5.1). The resulting report would be used to shape future policy and be referenced by the annual analytical report from that year. An example of how an ad-hoc research report motivated educational change in Wales is described in Box 5.3.

Box 5.3. Research-driven change in Wales

In 2011, following the release of PISA 2009 results, the government of Wales embarked upon an ambitious education improvement programme to raise student-learning outcomes. As part of this process, the Welsh government worked with the OECD to study how schools in Wales can be improved. A report was published in 2014 that identified key challenges in the Welsh education system and recommended specific improvements (OECD, 2014[25]). The media highlighted the report’s findings, in particular the need for a stable, long-term vision (Jones, 2014[26]).

In 2017, the OECD studied changes in the Welsh education system. The OECD published a rapid policy assessment and found that the original 2014 report had become a centrepiece in policy conversations. Several of its recommendations had already been implemented by the Welsh government, including improving teacher professional development, facilitating peer learning between schools and updating the curriculum to support a modern vision of education (OECD, 2017[27]). The media also highlighted this report, noting that while progress had been made, further improvement was still needed (BBC, 2017[28]).

Sources: OECD (2017[27]), The Welsh Education Reform Journey, www.oecd.org/education/The-Welsh-Education-Reform-Journey-FINAL.pdf (accessed on 23 January 2019); OECD (2014[25]), Improving Schools in Wales: An OECD Perspective, www.oecd.org/education/Improving-schools-in-Wales.pdf (accessed on 23 January 2019); Jones (2014[26]), OECD: Welsh government lacks education 'long-term vision', BBC, www.bbc.com/news/uk-wales-26962501 (accessed on 23 January 2019); BBC (2017[28]), OECD report backs radical reform of Welsh curriculum, www.bbc.com/news/uk-wales-39105175 (accessed on 23 January 2019).

In Georgia, thematic reporting is not conducted systematically and, therefore, information about key educational issues is not produced regularly. When such analysis is done, as with the review of the teacher performance scheme, international partners usually undertake it, which potentially limits national ownership of the key findings. The OECD recommends that the proposed research and evaluation unit also be responsible for overseeing the development of ad-hoc reports about strategic issues that appear in the Unified Strategy and take on a more proactive role in commissioning studies by third parties (see Recommendation 5.1.3). Given the educational context in Georgia, examples of thematic reports that would be important to develop include the education of ethnic minorities and resource allocation to rural schools.

Establish regular meetings between policy-makers during which evidence is shared and discussed

Research on effective policy-making emphasises the importance of informing decisions with evidence and analysis (OECD, 2017[29]). Evidence-informed policy-making means that before policy and major legislation is introduced, available evidence is studied and possible policy options openly discussed (Sanderson, 2002[30]; Senge, 2014[31]). The European Commission also urges that robust evaluation methodology (e.g. randomised control trials) be used to study the effects of policies after they are implemented in order to identify which policies are most effective (European Commission, 2007[32]).

Presently, policy conversations that are centred on evidence occur infrequently in Georgia. Within the MoESCS, heads of departments indicated that there are no regular meetings between themselves and the Minister of Education. Similarly, meetings do not frequently occur between heads of departments, resulting in departments that tend to work independently rather than collaboratively.

Ensuring that departments and agencies work together, meet frequently and discuss strategic issues is crucial to embedding the use of evidence in policy-making (Bryson, 2018[33]). Georgia should stimulate this collaboration by organising frequent meetings, coinciding with the release of key research, with all heads of department as well as the Minister. During these meetings, the participants could discuss the findings of recent studies and collectively decide what actions to take in response. Box 5.4 discusses steps that countries can take to ensure that policy is better informed by evidence. In most OECD countries, legislative and parliamentary processes reinforce such practices by requiring, for example, that the government present “white papers” that explain the evidence underpinning major proposals or establish independent commissions to inform decisions on key reforms.

Box 5.4. Evidence-informed policy-making

The OECD and the European Commission’s Joint Research Centre studied which key capacities and institutional structures are necessary to facilitate evidence-informed policy-making. Some identified structures include:

  • a strategic, long-term approach to evidence-informed policy-making

  • clear assignment of responsibilities and mandates to apply evidence-informed policy-making

  • strong co-operation between researchers and policy-makers

  • structured dialogue between all stakeholders.

The study also identified interventions that might be effective in facilitating evidence-informed policy-making. These include:

  • prioritising better regulation, impact assessment, regulatory scrutiny and stakeholder engagement

  • facilitating access to evidence, through communication strategies and evidence repositories

  • fostering changes to decision-making structures by formalising and embedding use of evidence within existing processes (e.g. through evidence-on-demand services).

Several countries have developed innovative approaches to strengthening the use of evidence in policy-making.

  • Finland has created a “Developer Network” that holds regular meetings for stakeholders who are active in the knowledge-policy environment.

  • The United Kingdom, through the Alliance for Useful Evidence, has created courses for decision-makers who want to become more confident users of research.

  • New Zealand has created Chief Advisor roles in its national government, such as a Chief Science Advisor, to imbue the government with external capacity for evidence use.

Source: OECD and European Commission (2018[34]), Building Capacity for Evidence Informed Policy Making: Towards a Baseline Skill Set, www.oecd.org/gov/building-capacity-for-evidence-informed-policymaking.pdf (accessed on 7 February 2019).

Engage external entities to become research and evaluation partners

While the research and evaluation unit would be centrally responsible for guiding research and promoting its use, having a wide network of researchers is vital to producing extensive evidence and ensuring successful system evaluation. MoESCS already has research capacity in EMIS and NAEC, and the unit could help coordinate these organisations to focus on common strategic issues.

Further, an informal network of education researchers has already been established in Georgia. The OECD spoke to researchers from higher education institutions, local non-governmental organisations and international organisations. This is a positive development, as it suggests transparency of the system and accessibility of data. Nevertheless, this network could be more formally engaged, under the direction of the research and evaluation unit, to contribute to system evaluation. The unit might encourage researchers to submit proposals for government-funded research into topics of national interest and to contribute to the annual and ad-hoc reports it produces. This review identifies a number of areas for research, such as an evaluation of the consecutive initial teacher education programme.

There is also scope for the unit to encourage organisations with expertise to undertake more research into areas of mutual concern. For example, through the allocation of research grants the unit could encourage non-governmental organisations that serve ethnic minority populations to carry out research into the factors related to lower access rates and outcomes. The results of this research could then become focal points of policy discussions at MoESCS.

Consider developing an independent evaluation institute

This chapter has focused on building urgently needed research capacity and coordinating research efforts within the ministry in order to improve the use of evidence in policy-making. In the future, however, Georgia should consider creating an institute dedicated to research and evaluation, which is a common practice in many countries. In the United States, for example, the Institute of Education Sciences is well developed and is tasked with collecting statistics and carrying out rigorous education research and evaluation (US Department of Education, n.d.[35]).

Recommendation 5.1.3. Use system evaluation to enhance the value of system planning

The introduction of a Unified Strategy is a positive development as it represents a concerted effort to set a system-wide educational agenda. However, the Unified Strategy was developed quickly in response to European Union requirements, which resulted in the plan not being widely known and understood. To effectively embed the Unified Strategy and enhance the value of system planning in general, strategic planning processes need to be informed by evidence produced through system evaluation so that strategy documents and goals reflect the national context. If national priorities are more relevant to educators, then educators will be more aware of them and will be more likely to align their practices.

Identify the core strategic issues of the Georgian education system, in particular equity of outcomes

Strategic issues are the key challenges an organisation faces that inhibit it from achieving its mandate, mission and values. The identification of strategic issues, which results from an analysis of evidence, is an inherent part of strategic planning and the final set of issues typically appears in high-level strategic documents (Bryson, 2018[33]). Explicitly mentioning the issues in this manner clarifies to all stakeholders, especially researchers, what the government thinks are the main challenges and how it proposes to address them. Otherwise, the research and evaluation community does not know where it should focus its efforts, risking misalignment between what is studied and what matters most. Box 5.5 describes how two high-level strategy documents from very different contexts communicate their strategic issues.

Box 5.5. Communicating strategic issues through high-level strategy documents

The Strategic Plan (2017/18 – 2021/22) of the Namibian Ministry of Education, Arts and Culture dedicates a section to carefully explaining the evidence analysis that was performed to determine the country’s strategic issues. This section reveals that a situation analysis was undertaken to assess the environment prior to the development of the plan, during which data were examined and stakeholders were consulted. The resulting strategic issues, found in a separate section, are further summarised into twelve categories, which include developing a plan for infrastructure and improving data management.

At a local level, Garrett County Public Schools in the United States analysed evidence and identified strategic issues as part of a larger process to create a new strategic plan. A separate report was published that describes the procedures undertaken to determine the strategic issues, which include addressing disciplinary issues, large class sizes and student transportation. The document is organised around the strategic issues, each of which is explained, substantiated with data and associated with actions that will be taken to address the issue.

Sources: Ministry of Education (2017[36]), Strategic Plan (2017/18 - 2021/22), www.moe.gov.na/files/downloads/b7b_Ministry%20Strategic%20Plan%202017-2022.pdf (accessed on 8 January 2019); Baker (2018[37]), Preliminary Report: Identification of Strategic Issues, Garrett County Public Schools, www.garrettcountyschools.org/resources/public-information/pdf/Strategic-Issues-Follow-Up-Report-4.10.18.pdf (accessed on 8 January 2019).

In Georgia, the Unified Strategy is the highest-level strategic document that guides education activities. While it includes several core elements of a strategic plan, it is unclear from the document what the most pressing strategic issues are for the Georgian education system. There is a section called “Strategic Directions,” but the content is very general. For example, it explains that education is the cornerstone of sustainable development and that the Unified Strategy applies to all levels of education. The document then describes at length the goals and objectives of the system, but does not identify the issues that motivated the development of those goals and objectives. One strategic objective, for instance, is “Ensuring equal universal access to high quality education.” However, there is no description of what evidence was analysed to determine that access is not universal, nor according to which population dimensions and to what extent the disparities exist. Explicitly including such information would help in setting more specific, measurable priorities, and in identifying meaningful indicators and targets.

As Georgia prepares future high-level strategy documents, the OECD recommends that strategic issues be included in them. In consideration of the evidence that was analysed as part of this review, the review team also recommends that equity of outcomes be emphasised as a strategic issue in MoESCS’s future strategic planning. International and national data suggest that students from rural areas and linguistic minority groups perform less well compared to their peers and, worryingly, that these trends are worsening over time (see chapter 1). Undertaking this analysis of the strategic issues facing the system would be an important function of the research and evaluation unit (see Recommendation 5.1.1).

Set balanced goals according to the evidence-based needs of the system

After strategic issues have been identified, goals will need to be set to direct the system towards addressing these issues. These goals should be specific and measurable, so that they focus attention and enable accountability. They must also be balanced, considering both the outcomes a system wants to achieve and the internal processes and capacity throughout the system that are needed to achieve them (Kaplan and Norton, 1992[7]). For example, if a goal is to construct 500 new schools, thought must be given to the financial resources required to construct these schools and the capacity of the relevant organisations to build them in the time allotted. Securing the funding and developing capacity, therefore, must then also become goals. Balancing goals in this manner is important to ensure the feasibility of desired outcomes and to guide actions towards achieving them.

Goal setting in Georgia has tended to neglect these core elements of capacity and processes. Inadequate attention to the means of implementation has prevented good intentions from achieving their desired impact. In the Unified Strategy, one national goal is to use the national assessments to improve teaching and learning. However, only 10 000 GEL was allocated towards this goal in the Action Plan, even though MCC funding for the assessments themselves is phasing out (see Policy issue 5.3). This goal will need to be balanced by considering the financial resources that will be necessary to continue developing and administering the assessments. It will also have to consider the capacity that NAEC will require not only to continue administering the assessments, but also to study the results in order to improve student learning.

A popular tool for creating balanced goals is the Balanced Scorecard (Balanced Scorecard Institute, n.d.[38]), which has been used internationally to create goals in higher education and other educational institutions (Yüksel and Coşkun, 2013[39]; Beard, 2009[40]). The Balanced Scorecard is a framework that compels policy-makers to take into account different perspectives when defining goals, not just desired outcomes, but also internal processes and organisational capacity. Adopting this or a similar tool would help Georgia to develop a more holistic approach to strategic planning and allocate sufficient priority to the means, not just the ends, of reform.
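As a rough illustration of the idea, the sketch below structures one of the goals discussed above in a Balanced Scorecard-like form, pairing the desired outcome with the supporting process and capacity goals. The entries are illustrative placeholders, not MoESCS’s actual plans.

```python
# A Balanced Scorecard-style structure for one goal: the desired
# outcome is paired with the process and capacity goals needed to
# achieve it. All entries are illustrative.
scorecard = {
    "outcome": [
        "Use national assessments to improve teaching and learning",
    ],
    "process": [
        "Secure multi-year funding as MCC support phases out",
        "Plan annual assessment development and administration",
    ],
    "capacity": [
        "Staff NAEC to analyse results, not only administer tests",
    ],
}

for perspective, goals in scorecard.items():
    print(perspective.upper())
    for goal in goals:
        print("  -", goal)
```

The point of the structure is that a strategy document listing only the "outcome" entry would be incomplete; the other two perspectives make the means of implementation explicit.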

Policy issue 5.2. Making information about the education system more accessible and usable

Georgia’s information systems are modern, widely used and highly trusted. EMIS collects data from all schools throughout the country and NAEC stores assessment and examination data for students and teachers. Both organisations identify individuals using their government identification number, and simple demographic information is drawn directly from government sources instead of being re-entered.

Nevertheless, while education data are collected and managed effectively, accessing the information, particularly in an analytical manner, remains a challenge. User-friendly analytical tools have not been developed and individuals have neither the time nor the capacity to retrieve and analyse the data manually. As a result, educators and MoESCS officials do not systematically use data to help guide students’ education and inform strategic planning, which risks systemic needs going unnoticed and unaddressed.

To address these concerns, the OECD recommends that analytical functions be introduced into EMIS tools (i.e. into E-school) and that a digital monitoring system be developed. These are relatively low-cost actions that could produce a significant impact with respect to enhancing accountability and improving the quality of policy-making.

Recommendation 5.2.1. Introduce analytical and reporting functions for EMIS tools

Georgia’s EMIS is trusted as the central source of education data. However, while the system is equipped to store information, it lacks the functionality to analyse that information. To analyse data in EMIS, a principal, for example, would have to export the data from E-school as a dataset and then analyse it using external software. This process discourages school staff from using data to inform their instruction, prevents MoESCS staff from using evidence to inform their decision-making and makes it difficult for the public to hold the government accountable.

Create a feature for generating analytical reports

Reporting is an integral feature of an EMIS system and is how the system transforms from being a receptacle of data to a provider of information (Abdul-Hamid, 2014[41]). Through reporting, users are able to specify what information they are interested in (i.e. data points), how they want the information to be processed (e.g. dividing male enrolment by total enrolment to obtain the percentage of the school that is male) and how they want the information to be displayed (e.g. in a list or as part of a paragraph).

In Georgia, E-school is the EMIS portal through which MoESCS and school staff manage student and school data. Its analytical and reporting functions, however, are limited. For instance, principals cannot easily compute average grades by gender, nor can teachers quickly create a list of students who have been frequently absent.

Instead of expecting users to export the data and analyse it themselves, analysis of this nature can be facilitated through introducing reporting functionality into E-school. This would entail allowing users to create report templates by inserting empty fields onto blank pages and specifying what information should populate those fields. For example, a principal might want to know the attendance rate of students according to the students’ grade levels. To create this report, the principal would specify that they wish to create a two-column table in which the first column lists grade levels and the second column indicates the attendance rate of students from that grade. Advanced functionality would allow this data to be filtered by time period and generate graphical charts to depict the results. Every time this report is “run,” E-school would populate the defined objects with the most recent data (Abdul-Hamid, 2014[41]). Given that E-school was developed in-house, EMIS has the capacity to add this type of functionality and could begin by creating simple templates that would be applicable to all school situations (e.g. attendance and basic indicators according to gender and special education status).
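As a rough illustration of what such a report template amounts to computationally, the sketch below computes the attendance-rate-by-grade report described above from a hypothetical attendance extract. The column names (student_id, grade_level, present) are illustrative and do not reflect E-school’s actual schema.

```python
# Minimal sketch of the report described above: attendance rate by
# grade level, computed from a hypothetical attendance extract.
import pandas as pd

attendance = pd.DataFrame({
    "student_id":  [1, 1, 2, 2, 3, 3, 4, 4],
    "grade_level": [5, 5, 5, 5, 6, 6, 6, 6],
    "present":     [True, True, True, False, True, True, False, False],
})

def attendance_report(records: pd.DataFrame) -> pd.DataFrame:
    """Two-column report: grade level and attendance rate (% of
    recorded lessons marked present), as a principal might define it."""
    return (records.groupby("grade_level")["present"]
                   .mean()
                   .mul(100)
                   .round(1)
                   .rename("attendance_rate_%")
                   .reset_index())

print(attendance_report(attendance))
# Re-running the function against live data would "run" the report
# template with the most recent records, as described above.
```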

Build a web portal that allows public access to EMIS data

Real-time access to data through a public web portal (accessible by anyone, not just those with Ministry of Education credentials) is a common international method of extracting information from EMIS databases and presenting it in an accessible manner. At the most fundamental level, users can learn how many students attend a school and how they perform on a national assessment. More sophisticated systems, such as EdStats in the United States (Box 5.6), aid external research and analysis by facilitating comparisons across schools, aggregating data at different levels (e.g. regional or national) and providing a set of data visualisation tools (Abdul-Hamid, Mintz and Saraogi, 2017[42]).

Box 5.6. EdStats, a data access portal from the United States

In Florida, United States, the Education Information Portal (EdStats) provides access to data from public schools from kindergarten through grade 12, public colleges and universities, a state-wide vocational and training programme and career and adult education. Through an online interface, any individual can view data that are aggregated at school-, district- and state-levels. Comparisons can be made across different schools and districts.

EdStats is powerful in that it allows data to be organised not only by level of governance (e.g. state, district or school), but also by subject matter. This means that users who navigate EdStats can choose to view all data in a single domain and make further contextualised comparisons within that domain. This saves users from having to navigate through different schools or districts in order to find the same indicator for each one of those entities.

Along with providing access to data, EdStats provides simple tools for users to perform their own analysis. Users can format the data into tables that they define themselves (some standard tables are already provided). Custom reports that contain several tables can then be generated according to users’ specifications. EdStats also has a strong data visualisation component. Different types of graphs and charts can be created based on the data. District-level analysis can even be plotted as maps that display indicators according to the geographic location of the districts within the state.

Source: Florida Department of Education (n.d.[43]), FL Department of Education - Education Information Portal, https://edstats.fldoe.org/SASPortal/main.do (accessed on 12 July 2018).

In Georgia, families access EMIS through a web interface (E-Catalogue) to search for the number of enrolment spaces in each school in the country. The only information they can see, however, is the total number of places the school has, how many places remain and the language of instruction of the school. They cannot view information related to school quality.

MoESCS should create an online platform that allows public access to more EMIS data through a user-friendly graphical interface. All users of the platform would be able to browse national education data and select schools and municipalities for comparison based upon chosen criteria (for example, location or language of instruction). The platform should also contain features to create dynamically generated charts and figures and export data for further analysis. Parents and students could use the portal to make important decisions and help hold the system accountable. Researchers would be able to use this portal to study the education system and contribute to system evaluation efforts (Recommendation 5.1.2).
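A minimal sketch of the comparison feature such a portal could expose is shown below, assuming a hypothetical extract of school-level EMIS indicators. All school names, columns and figures are illustrative.

```python
# Minimal sketch of portal-style school comparison: filter schools by
# chosen criteria, then sort them for side-by-side viewing.
import pandas as pd

schools = pd.DataFrame({
    "school":          ["A", "B", "C", "D"],
    "region":          ["Tbilisi", "Tbilisi", "Kakheti", "Kakheti"],
    "language":        ["Georgian", "Azerbaijani", "Georgian", "Georgian"],
    "enrolment":       [820, 310, 150, 95],
    "assessment_mean": [540, 505, 498, 470],
})

def compare_schools(data: pd.DataFrame, **criteria) -> pd.DataFrame:
    """Return schools matching the chosen criteria, e.g. region or
    language of instruction, sorted for comparison."""
    selected = data
    for column, value in criteria.items():
        selected = selected[selected[column] == value]
    return selected.sort_values("assessment_mean", ascending=False)

print(compare_schools(schools, region="Kakheti", language="Georgian"))
```

In a real portal this logic would sit behind a graphical interface and a query API rather than being run directly by users.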

Recommendation 5.2.2. Create an easier-to-use monitoring system

System monitoring has an accountability function, which determines if goals are being reached, and a learning function, which determines if defined strategies and policies remain appropriate in the current environment. It is not a stand-alone process, but part of an ongoing cycle (Bryson, Berry and Yang, 2010[44]; George and Desmidt, 2014[45]). Without a means to monitor the system continuously, countries risk creating monitoring tools that contain an abundance of potentially out-of-date information that is not relevant for policy-making.

At present, MoESCS’s primary tool for monitoring the education system is the comprehensive Monitoring Report. In addition to being hard to interpret, a critical disadvantage of the report is that, as a static document, its information is only available when the document is released. Policy-makers are unable to acquire up-to-date information between publication dates. As a result, the OECD review team was told that Georgia’s monitoring report is rarely downloaded and is not regarded as an important resource in the policy-making process.

Complement the monitoring report with a digital performance dashboard

A performance dashboard is a visual representation of the progress of selected indicators. By being linked directly to a system’s databases, the dashboard will always display the most recent information to users without the need to wait for a report to be authored (Eckerson, 2011[46]). Box 5.7 describes some of the procedures and tools that the United States follows and uses to monitor its education system.

To make system monitoring easier to accomplish and more widely used, Georgia should develop a digital performance dashboard to accompany its static monitoring report. Georgia’s digital performance dashboard would be linked to MoESCS databases, like EMIS and NAEC, and databases from outside of MoESCS, such as labour statistics. The dashboard would visually represent the progress of user-selected indicators, such as participation and assessment outcomes, both on average across the country and disaggregated by population groups and regions. Moreover, the dashboard should be balanced and show indicators not only related to intended outcomes, but also the processes that need to be in place in order to support the outcomes (e.g. levels of funding).
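As a rough illustration of the data layer behind such a dashboard, the sketch below computes the latest value and period-on-period change of one indicator, disaggregated by region. The table, column names and figures are hypothetical, not drawn from MoESCS databases.

```python
# Minimal sketch of a dashboard tile's data layer: latest value and
# change versus the previous period for a user-selected indicator,
# per region.
import pandas as pd

indicator = pd.DataFrame({
    "month":  ["2019-01", "2019-01", "2019-02", "2019-02"],
    "region": ["Tbilisi", "Kakheti", "Tbilisi", "Kakheti"],
    "participation_%": [91.2, 84.5, 90.8, 82.9],
})

def dashboard_tile(data: pd.DataFrame, measure: str) -> pd.DataFrame:
    """The numbers a dashboard tile or chart would render: latest
    value and change versus the previous period, per region."""
    wide = data.pivot(index="month", columns="region", values=measure)
    latest, previous = wide.iloc[-1], wide.iloc[-2]
    return pd.DataFrame({"latest": latest, "change": latest - previous})

print(dashboard_tile(indicator, "participation_%"))
```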

Box 5.7. System monitoring in the United States

In 1993, the United States enacted the Government Performance and Results Act (GPRA), which required government agencies to adopt performance management with the aim of increasing trust in government (General Services Administration, n.d.[47]). In 2011, the act was updated (now the GPRA Modernization Act, though still commonly referred to as GPRA) to mandate that agencies produce their strategic plans in machine-readable formats to facilitate digital analysis, as well as identify core strategic issues.

To comply with GPRA, the Department of Education annually releases performance reports and performance plans for upcoming years (US Department of Education, 2018[48]). It has also created dynamic tools, linked to the most recent data, to help monitor educational performance. These tools include the College Scorecard (Department of Education, n.d.[49]), which provides information about university enrolment, programme offerings and fees, and the Nation’s Report Card (Department of Education, n.d.[50]), which shows assessment results across the country at national, state and district levels. The public can use these services at any time to receive instant information about the status of the education system.

Sources: General Services Administration (n.d.[47]), Performance.gov, www.performance.gov/ (accessed on 7 January 2019); US Department of Education (2018[48])Annual Plans and Reports, www2.ed.gov/about/reports/annual/index.html (accessed on 7 January 2019); Department of Education (n.d.[49]), College Scorecard, https://collegescorecard.ed.gov/ (accessed on 7 January 2019); Department of Education (n.d.[50]), NAEP Report Cards, www.nationsreportcard.gov/ (accessed on 7 January 2019).

Release the performance dashboard with a tutorial that shows how it should be used to monitor the performance of the system

While different performance dashboards can be created to fit the needs of different stakeholders (Eckerson, 2011[46]), in Georgia the primary users of the dashboard would initially be senior managers and policy-makers at the ministry, as they have the most critical need for immediate monitoring information. The dashboard should help these leaders easily assess to what extent a defined strategy is being implemented, whether capacity is being developed, whether the desired results are being produced and what changes might be needed.

As policy-makers are used to consuming monitoring information in the form of a large, static report, they will need guidance on how to use a performance dashboard. This can be accomplished by introducing the performance dashboard with a tutorial, created by the developers of the dashboard, that illustrates to policy-makers how they can interpret the information in the dashboard and use it to achieve their desired goals (Eckerson, 2009[51]). For example, the review team was told that improving student attendance in upper secondary education is a system priority. With a performance dashboard, attendance data would be available continuously, allowing policy-makers to react immediately to changes in the indicator. If an individual notices that attendance in the current month is declining compared to the previous month, they could follow up more closely with the relevant ERC (see Recommendation 5.2.1) to determine why this is occurring and whether it is occurring in some schools more than others. The policy-maker could then communicate the findings to those schools and work with them to develop suitable interventions.
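To make that example concrete, below is a minimal sketch of the kind of month-on-month check such a dashboard could automate, assuming a hypothetical table of school-level attendance rates. The school names, figures and alert threshold are illustrative.

```python
# Minimal sketch of the attendance-monitoring example above: flag
# schools whose attendance fell this month by more than a threshold.
import pandas as pd

attendance = pd.DataFrame({
    "school":     ["A", "B", "C"],
    "last_month": [92.0, 88.5, 90.1],
    "this_month": [91.6, 83.2, 90.4],
})

def flag_declines(data: pd.DataFrame, threshold: float = 2.0) -> pd.DataFrame:
    """Schools whose attendance dropped by more than `threshold`
    percentage points -- candidates for follow-up via the relevant ERC."""
    data = data.assign(change=data["this_month"] - data["last_month"])
    return data[data["change"] < -threshold]

print(flag_declines(attendance))  # school B would be flagged
```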

Policy issue 5.3. Developing and implementing a national assessment strategy that supports system goals

Research shows that having externally validated measures of student performance helps countries monitor performance and collect data to inform system-level policy (OECD, 2013[1]). Results from these assessments can also be used to communicate to students their levels of learning and act as a reference for teachers’ classroom marking.

In Georgia, there is no established system for monitoring student learning outcomes before Grade 12, and the instruments that do exist do not cover key outcomes (such as literacy) and are administered only on a sample basis. Existing standardised assessments cover sciences and mathematics in Grade 9, and Georgian as a second language in ethnic minority schools in Grade 7. Ad-hoc, fee-based assessments are also available, but none of these assessments is administered to all students. Importantly, MCC funding, which largely supports these assessments, is phasing out and there is no guarantee that this important work will continue. A recent proposal for a national assessment strategy suggests that diagnostic assessments be administered at the beginning of Grades 4, 6 and 10, but this strategy is not finalised.

Recommendation 5.3.1. Define a concept for the national assessments

Through balanced goal setting (see Recommendation 5.1.3), Georgia will need to plan for the resources and capacity that will be necessary to continue the administration of its national assessments. Given the importance of having consistent external measurement of student performance, Georgia should also take the opportunity to improve upon the assessments and determine how the assessments should be structured to best support national goals.

Establish a steering committee to determine the purpose of the assessments

Carefully defining the purpose of national assessments to reflect the country’s teaching and learning needs is critical in order to guide the subsequent design of the assessments (Gabrscek and Bethell, 1996[52]). Determining the purpose of the national assessments should be done by a steering committee comprising a diverse group of stakeholders representing different backgrounds and interests nationally (Greaney and Kellaghan, 2007[53]). The steering committee should also include technical expertise on the development and use of national assessments.

In Georgia, the steering committee will need to consider not only the goals of the education community, but also those of the political administration and reconcile these aspirations with what is practical in the country. At present, proposals with respect to the design of the future national assessment appear to be made very quickly without full consideration of how the assessment will support system goals or relate to other policies, such as school evaluation. The establishment of a steering committee can help to ensure decisions on the national assessment take a system-wide view. International experts can be enlisted to lend a global perspective to the steering committee’s deliberations. Specific decisions that need to be made by the steering committee are discussed further below.

Consider making formative feedback to educators a core function of the assessment

According to a recent proposal, Georgia intends to use the national assessments for diagnostic and formative purposes, in addition to using the results to help monitor the system. In other words, the assessments would provide data that can be used to improve student learning and school quality (OECD, 2013[1]). The OECD supports this approach and the review team recommends that Georgia’s national assessments be guided by these purposes.

Using the national assessments formatively would help to address key teaching and learning challenges in Georgia. For example, national outcomes vary across regions and sub-populations (see chapter 1). Classroom assessment practices are used more to categorise students than to help them learn (see chapter 2). At the same time, Georgia has also introduced a new curriculum that aims to shift teaching and learning towards competency development across different stages of education. International experience shows that teachers require significant guidance to assess students according to such a curriculum (chapter 2).

Against this backdrop, Georgia requires meaningful assessment results about student learning that can help teachers better determine where students are in their learning according to the curriculum, tailor teaching to students’ individual needs and improve their own classroom assessment practices. The national assessment can provide such results and, alongside other resources that this review recommends, help to build teachers’ assessment literacy and develop their understanding of national learning standards.

Recommendation 5.3.2. Determine the design features of the national assessments

Once Georgia has decided the assessments’ primary purpose(s), their design will need to be determined. Table 5.2 illustrates several design components of national assessments that will need to be agreed upon in the Georgian context. In general, it is recommended that Georgia make these decisions in ways that support the formative purposes of the assessments and reflect the specific monitoring needs of the country.

Table 5.2. Key decisions regarding national assessments

| Component | Options | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Subjects | Many | Broader coverage of skills assessed | More expensive to develop; not all students might be prepared to take all subjects |
| | Few | Cheaper to develop; subjects are generalisable to a larger student population | More limited coverage of skills assessed |
| Testing population | Sample | Cheaper and faster to implement | Results can only be produced at high, aggregate levels |
| | Census | Results can be produced for individual students and schools | More expensive and slower to implement |
| Grade level | Lower | Skills can be diagnosed and improved at an early stage of education | The length of the assessment and the types of questions that can be asked are limited |
| | Upper | More flexibility with respect to the length of the assessment and the types of questions that are asked | Skills cannot be evaluated until students are in later stages of education |
| Scoring type | Criterion-referenced | Results are comparable across different administrations | Results require expertise to scale and are difficult to interpret |
| | Norm-referenced | Results are easier to scale and interpret | Results are only comparable within one administration of the assessment |
| Item type | Closed-ended | Cheaper and faster to implement; items are more accurately marked | Can only measure a limited set of skills |
| | Open-ended | A broader set of skills can be measured | More expensive and slower to implement; marking is more subjective in nature |
| Testing mode | Paper | The processes are already in place and the country is familiar with them; requires no additional capital investment | Results are produced more slowly; seen as more old-fashioned |
| | Computer | Results are produced more quickly; more cost-effective in the long term; seen as more modern | New processes have to be developed and communicated; requires significant initial capital investment |
| Administration time | Beginning of the year | More aligned with a formative approach, as teachers can use the results for diagnostic purposes | Results cannot be used to evaluate improvements in student learning made that school year |
| | End of the year | Supports the use of results in accountability procedures, as results capture student performance after a year of schooling | Results cannot be used by teachers to help their students improve |

Sources: Adapted from Department for International Development (DFID) (2011[54]), National and international assessment of student achievement: a DFID practice paper, www.gov.uk/government/uploads/system/uploads/attachment_data/file/67619/nat-int-assess-stdnt-ach.pdf (accessed on 13 July 2018); OECD (2011[55]), Education at a glance, 2011: OECD indicators, OECD Publishing, Paris, https://doi.org/10.1787/eag-2011-en.

Consider administering school-level diagnostic assessments starting in grade 2 and national assessments in grades 6 and 9 (and possibly in grade 10 later)

The review team supports earlier assessment of students, as suggested by MoESCS’s recent proposal to test students in grades 4, 6 and 10. However, the OECD recommends that diagnostic assessment of students be performed at the school level at the beginning of the academic year, starting in grade 2, while the national assessments be administered in grades 6 and 9 (moving to grade 10 if compulsory education is extended, as discussed below).

Develop diagnostic assessment materials and encourage their use in schools starting in grade 2

A diagnostic assessment is a type of formative assessment that is administered at the beginning of a study unit to determine a student’s level and help develop a learning programme for the student (OECD, 2013[1]). These assessments would be especially important to use in Georgia, because data from international assessments reveal that gaps in learning between student populations have widened over time. Administering diagnostic assessments in grade 2 instead of grade 4 can help teachers identify learning needs and address gaps in achievement before they grow.

Conducting a national diagnostic assessment would produce standardised results about student learning. However, research shows that in high-stakes testing environments such as Georgia’s, externally marked assessments might be interpreted as having summative consequences, despite the government’s intent (Kitchen et al., 2017[56]). Since Georgia wishes to alleviate the testing burden on its student population, the review team does not recommend administering diagnostic assessments nationally. Taking such a test at the beginning of the academic year might make students and parents think that the results will affect a student’s standing in school, which would encourage the type of distorted educational practices that Georgia is trying to eliminate (see chapter 2).

To diagnose student learning without giving the impression of summative judgement, the OECD recommends that MoESCS, through the Teacher Professional Development Centre, develop diagnostic instruments that will help teachers identify their students’ levels of learning according to the national curriculum. Schools should be required to administer these assessments internally to students, towards the beginning of the school year, starting in grade 2. Using centrally developed resources is advantageous because the instruments have already been tested and approved. Over time, after teachers have become more familiar with diagnostic assessments, they should be encouraged to develop their own instruments (see chapter 2). Of critical importance to the formative value of the assessment will be the support provided to teachers on how to adapt their instruction to different student learning levels. Guidance on pedagogical responses should be introduced along with any new diagnostic tools.
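As a rough illustration, and only under the assumption that the diagnostic instruments report curriculum-referenced levels, the sketch below maps raw scores to learning levels. The cut-scores and level labels are hypothetical placeholders, not national standards.

```python
# Minimal sketch of how a centrally developed diagnostic instrument
# could map raw scores to curriculum-referenced learning levels.
# Cut-scores and level names are hypothetical placeholders.
CUT_SCORES = [            # (minimum score, level description)
    (80, "meets grade-level expectations"),
    (50, "approaching grade-level expectations"),
    (0,  "needs targeted support"),
]

def learning_level(score: int) -> str:
    """Return the first level whose minimum score the student reaches."""
    for minimum, level in CUT_SCORES:
        if score >= minimum:
            return level
    raise ValueError("score below all defined levels")

for s in (92, 63, 35):
    print(s, "->", learning_level(s))
```

The pedagogical guidance recommended above would then attach suggested instructional responses to each level, rather than leaving teachers with a label alone.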

Administer the national assessment in grades 6 and 9

While helpful for informing individual instruction, school-level assessments are unreliable for monitoring student learning nationally. Therefore, the OECD recommends that a national assessment be administered in addition to the aforementioned school-level diagnostic assessments. The results of these assessments can be used for system monitoring, conducting research and informing policy-making (see Policy issue 5.1).

The new curriculum defines the first two stages of learning as ending in grades 6 and 9. Therefore, the OECD suggests that MoESCS administer the national assessments in these grades to produce valuable information about student learning at key moments in students’ education. Administering the assessment in grade 4 is not recommended because grade 4 falls in the middle of a curricular stage, diagnostic information is already being collected starting in grade 2 and MoESCS wants to avoid over-testing students.

The OECD further suggests that the national assessments be administered at the beginning of the school year instead of the end. This would produce diagnostic information about student performance that can be used by teachers to guide their instruction during the same school year.

Change the grade 9 administration to grade 10 if compulsory education is extended

The OECD recommends that the national assessment in grade 9 be moved to grade 10 if compulsory education is expected to include grade 10 (as announced in recent reform plans). Administering a national assessment at the end of compulsory education would provide more reliable information about what all students know and can do upon finishing their required studies, as well as help students to make an informed decision about their next step in education or work. It would also enable the assessment to contribute to the certification requirement for successful completion of compulsory education. This would lend more value to the certification, encouraging students to apply themselves and enhancing the signalling function, especially for those who prefer to enter the labour market or seek professional training.

Assess mother tongue and mathematics

Focusing on a limited number of subjects would be consistent with the national aim of relieving testing pressure on students and schools (see chapter 2). Among OECD countries with national assessments at the primary level, roughly one-third assess only mathematics and literacy in the national language (OECD, 2015[10]). Georgia could likewise assess these two subjects in grade 6, which would collect information about students’ essential competencies without over-testing them. This is especially important for Georgian language, which is currently not externally assessed until grade 12 on the Secondary Graduation Examination (SGE).

In grade 9, additional domains, such as science or national history, may be added to the core subjects in order to increase subject matter coverage, as is done in several OECD countries (OECD, 2015[10]). For students whose first language is not Georgian, Georgian as a second language could be added, given the variance in student outcomes across ethnic groups (this might eliminate the need for the current grade 7 assessment in Georgian as a second language). Caution should be taken when adding subjects, as each additional subject adds to the cost of administering the national assessment and requires greater implementation capacity.

Implement census-based testing

Currently, all national testing before the SGE is sample-based, except the Georgian as a second language assessment in grade 7. While sample-based testing can provide national-level results, it does not provide individual-level results and, therefore, most students in Georgia do not receive an externally validated measure of their learning until around age 17. Furthermore, school-level results cannot be calculated, as most schools would not have enough sampled students to produce reliable data. This makes it difficult for principals and teachers to use national assessment data to improve their students’ learning, as they do not know if national-level results reflect their specific contexts.

Establishing census-based testing would give each student nationally comparable results. Students, their families and teachers could use those results to plan how to improve individual students’ learning. Census testing also allows for the creation of school-level and even regional-level results. This information could be used for school improvement purposes, within parameters that are designed to be formative (see Recommendation 5.3.3), and to aid research and evaluation efforts (see Recommendation 5.1.1).

Strongly consider computer-based testing instead of paper-based testing

In most OECD countries, the delivery of the national assessment is through a paper-and-pencil format. Nevertheless, this trend is changing and computer-based administration is becoming more common, particularly in countries that introduced a national assessment recently (OECD, 2013[1]). Administering assessments via the computer can save considerable costs as delivery and marking would be streamlined. It also improves accuracy by reducing the possibility of human error during these processes.

Georgia is well-positioned to adopt a digital strategy for administering its national assessments. Much standardised testing is already administered via computer, such as the SGE and the voluntary, ad-hoc tests, so little additional infrastructure would need to be built to accommodate digital testing. Additionally, previous experience in Georgia suggests that the credibility of the testing and marking process is a high priority. Computer-based testing’s capacity to return results quickly and consistently therefore makes it more attractive than paper-based testing and would help establish trust in the new national assessments. Quickly generated results also support the formative, diagnostic purposes of the assessments, which have been expressed as an objective.

Develop several item types to assess a broad range of student skills

In OECD countries, the most common types of items that appear on national assessments are multiple-choice responses and closed-format, short-answer questions (e.g. providing a numeric solution to a mathematics problem) (OECD, 2013[1]). These item types are easier and quicker to develop, and their marking is more reliable (Hamilton and Koretz, 2002[57]; Anderson and Morgan, 2008[58]). Less frequently used item types include open-ended writing, performing a task, oral questions and oral presentations. Such items are nevertheless increasingly used because they can assess a broader and more transversal set of skills than closed-ended items (Hamilton and Koretz, 2002[57]).

A consistent concern with the former SGE is that the questions only have one format and tend to encourage students to memorise a certain set of responses (Bakker, 2014[59]). Similarly, some higher education stakeholders told the review team that the UEE does not assess the skills most relevant to success at the tertiary level (see chapter 2). Therefore, a key consideration for the national assessments is to ensure that they assess critical elements of student learning.

While there are natural limitations to closed-format responses, these types of items, when developed well, do have the capacity to assess higher-order student learning outcomes (see chapter 2) (Anderson and Morgan, 2008[58]). For example, the majority of questions from both PISA and TIMSS are closed-format. Care will need to be taken to ensure that these items measure student learning rather than memorisation, and that proper item-writing conventions are followed, such as reviewing items for potential bias and varying the placement of distractor choices (Anderson and Morgan, 2008[58]). The grade 9 national assessment can begin to incorporate more open-format questions, as students at this age are more capable of responding at length.
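To illustrate one of the item-writing conventions mentioned above, the sketch below varies the placement of the correct answer and distractors across test forms so that positional patterns cannot be memorised. The item, options and seed are illustrative.

```python
# Minimal sketch of varying distractor placement across test forms:
# a fixed seed per form keeps each form reproducible while changing
# where the correct answer appears.
import random

def shuffled_item(stem: str, correct: str, distractors: list[str],
                  seed: int) -> tuple[str, list[str], int]:
    """Return the stem, the shuffled options and the index of the key."""
    options = [correct] + distractors
    random.Random(seed).shuffle(options)
    return stem, options, options.index(correct)

stem, options, key = shuffled_item(
    "2 + 2 = ?", correct="4", distractors=["3", "5", "22"], seed=7)
print(stem, options, "key index:", key)
```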

Recommendation 5.3.3. Develop a reporting scheme that serves formative purposes and avoids punitive consequences

With regular, census-based assessments being administered, MoESCS will need to consider carefully how to report the results to students, teachers, schools and the public. External assessments, even when they have no stakes attached to them, can result in distortive practices like teaching to the test, in which classroom instruction focuses disproportionately on assessed content or repeated assessment practice (OECD, 2013[1]). This risk is particularly pronounced in Georgia, where tests are considered judgemental rather than part of a formative, educational process.

Georgia should avoid any suspicion that the results would be used to punish school staff, as this occurred before and the reaction was negative (in 2012, roughly 200 principals were dismissed based upon their schools’ results on the SGE). Instead, and consistent with the aims of assessment in general (see chapter 2), results of the national assessment should be reported in a manner that informs instruction and guides decision-making.

Use assessment data to directly support struggling schools, not for high-stakes accountability

A single indicator, such as a school’s result on an assessment, is not an accurate measure of the effectiveness of a school or its teachers, as it does not consider factors outside of the school’s control (OECD, 2013[1]). Evaluating schools and teachers by assessment results alone would therefore mean that schools with the greatest concentration of students from advantaged backgrounds would continually be considered the most effective. Furthermore, attaching high-stakes accountability measures to assessment results can incentivise unethical behaviour from teachers and principals, such as helping students while they are testing or manipulating the pool of students who take the test (Nichols and Berliner, 2005[60]). Georgia is already trying to address a large private tutoring market that is largely driven by testing pressure. To avoid adding further pressure, it is strongly advised that teachers and principals not be held accountable on the basis of a single assessment result.

A fairer and more constructive approach is to use assessment results as part of the risk assessment framework that leads to more targeted provision of support (see chapter 4). The results should not have punitive consequences. Because of societal pressure to use test results as a ranking mechanism, there is a risk that schools themselves will interpret assessment results in this way, even if MoESCS does not. It is, therefore, important to consider how results are benchmarked and what sort of information is made available to schools. These issues are discussed next.

Identify different benchmarks against which schools can compare themselves

Census-based testing allows for the generation of school-level data and comparisons between individual schools. This level of direct comparison, however, is not always relevant, as student populations vary greatly across schools in Georgia. Therefore, instead of limiting the unit of analysis to individual schools, several different benchmarks should be identified against which schools can compare themselves (Kellaghan, Greaney and Murray, 2009[61]). For example, it might be more appropriate to compare a school’s results only to other schools that have the same language of instruction or are located in the same region. Aggregate averages of schools from these categories can be produced so individual schools can measure themselves against the performance of relevant groups of schools.

These types of comparisons can also help generate pressure to provide more support to groups of schools that appear to be systematically struggling. Conversely, schools that demonstrate significant improvement could be identified and their practices shared with similar schools, or networks of weaker and stronger schools could be created. In Australia, for example, the National Assessment Program – Literacy and Numeracy is administered annually to students in grades 3, 5, 7 and 9. Using the results from this assessment, schools that demonstrated substantial gains in student learning were identified and their principals were asked to share best practices from their schools (ACARA, n.d.[62]).
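The benchmarking described above is computationally simple. Below is a minimal sketch, assuming a hypothetical table of school-level results: each school is compared with the average of schools sharing its region and language of instruction. All names and figures are illustrative.

```python
# Minimal sketch of benchmarking against relevant comparison groups:
# the benchmark is the mean score of all schools in the same
# (region, language-of-instruction) group; "gap" is the distance
# of each school from its own benchmark.
import pandas as pd

results = pd.DataFrame({
    "school":   ["A", "B", "C", "D", "E"],
    "region":   ["Kakheti", "Kakheti", "Kakheti", "Tbilisi", "Tbilisi"],
    "language": ["Georgian", "Georgian", "Azerbaijani", "Georgian", "Georgian"],
    "score":    [500, 470, 455, 540, 520],
})

results["benchmark"] = (results.groupby(["region", "language"])["score"]
                               .transform("mean"))
results["gap"] = results["score"] - results["benchmark"]
print(results)
```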

Create different reports designed to leverage the formative value of the assessments

In addition to school-level reports, census-based testing could generate reports at several different levels of the education system (OECD, 2013[1]). Box 5.8 describes the different types of reports that are generated from a national assessment in the United States. In Georgia, what information is presented in national assessment reports, and how the reports are delivered, need to be decided in accordance with the overall purpose of improving student learning. Different types of reporting that might be considered include:

  • Reports for teachers should contain item-level analysis with information about how their students responded to each item and the competencies those items assessed (a minimal sketch of such an analysis follows this list). This information should be presented alongside contextualised comparison groups, such as gender, linguistic minorities and municipalities. To further support the assessment’s formative function, the reports might also analyse common errors that students made, with suggestions on how to improve teaching of that content.

  • School-level reports might present the performance of the individual school with benchmarks for comparisons. However, the information should not be released online in order to avoid the risk of the results being used for direct accountability.

  • The MoESCS would receive an aggregate report that summarises and analyses the results of the entire country. Results must be disaggregated by demographic characteristics, such as gender, language of the school, region, whether the school is in a rural or urban area, and student socio-economic status. Reporting according to these factors (among others) would represent the minimum level of analysis required to inform policy-making.

  • Analysis of individual questions, topics or skills would also be important for the Ministry to identify at a national level if students in Georgia tend to struggle more with certain competencies or in certain domains. This information would reveal the need to identify how teaching in certain parts of the curriculum can be improved.

  • Georgia can consider whether to provide student reports based upon the determined purposes of the assessments. Assessments (or assessments in certain grades) that are designed for diagnostic purposes would not need to produce reports for students, only for teachers and schools. On the other hand, if the assessments serve a summative function (such as certification from a cycle of education), then student-level reports would need to be issued.

    If student reports are to be issued, they should compare a student’s performance to national, municipal and other relevant benchmarks. Students and parents might be informed about individual student results as part of regular parent-teacher meetings. Teachers might be provided with national guidance on how to present the results, for example, by discussing how far the student has progressed in mastering core competencies.
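As referenced in the first bullet above, below is a minimal sketch of the item-level analysis a teacher report could contain, comparing one class’s percentage correct per item with a national benchmark. All responses and figures are hypothetical.

```python
# Minimal sketch of item-level analysis for a teacher report:
# percentage of correct responses per item for one class, compared
# with a national benchmark.
import pandas as pd

responses = pd.DataFrame({          # 1 = correct, 0 = incorrect
    "item":    ["Q1", "Q2", "Q3"] * 4,
    "correct": [1, 0, 1,  1, 0, 0,  1, 1, 1,  0, 0, 1],
})
national_pct_correct = pd.Series({"Q1": 80.0, "Q2": 55.0, "Q3": 70.0})

class_pct = responses.groupby("item")["correct"].mean().mul(100)
report = pd.DataFrame({
    "class_%":    class_pct,
    "national_%": national_pct_correct,
    "gap":        class_pct - national_pct_correct,
})
# Items well below the national benchmark point to competencies the
# class struggles with -- a starting point for adjusting instruction.
print(report.sort_values("gap"))
```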

Box 5.8. Student assessment reports for different stakeholders, the Measure of Academic Progress (MAP) in the United States

In the United States, the MAP assessments are a set of private, computer adaptive tests that are available in reading, mathematics and science for students in kindergarten through grade 12. Entire school districts have participated in testing, which provides the opportunity to produce district-, school-, class- and student-level reports. All reports are offered online.

  • District reports are intended for the superintendent and educational specialists working within the district office. They summarise the results of all students in the district, disaggregated by grade. Results are compared to regional and national benchmarks.

  • School reports are intended for principals and teachers. They show results from an individual school disaggregated by grade and by class.

  • Class reports are intended for teachers. They summarise the results of a class and show the results of individual students from the class. If students have taken the test more than once, trend data for those students are also shown. In addition to overall performance, teachers can also see how long students took to complete the test and how they are performing on specific sub-skills.

  • A student report is intended for students and parents. It shows in detail how a student performs in specific areas benchmarked against national percentiles.

Sources: NWEA (2019[63]), The MAP Suite, www.nwea.org/the-map-suite/ (accessed on 28 January 2019); Bergeron (n.d.[64]), MAP Reports and Resources for Teachers, NWEA, http://info.nwea.org/rs/nwea/images/Web-Based-MAP-Teacher-Reports-and-Resources.pdf (accessed on 28 January 2019).

Recommendations

| Policy issue | Recommendations | Actions |
| --- | --- | --- |
| 5.1. Building a culture of research, evaluation and improvement of the education system | 5.1.1. Establish a formal research and evaluation unit | Clearly define the role of the research and evaluation unit; develop a research agenda for the research and evaluation unit |
| | 5.1.2. Encourage the dissemination and usage of research and evaluation activities | Annually release an analytical report about the education system; release ad-hoc reports about thematic issues; establish regular meetings between policy-makers during which evidence is shared and discussed; engage external entities to become research and evaluation partners; consider in the future developing an independent evaluation institute |
| | 5.1.3. Use system evaluation to enhance the value of system planning | Identify the core strategic issues of the Georgian education system, in particular equity of outcomes; set balanced goals according to the evidence-based needs of the system |
| 5.2. Making information about the education system more accessible and usable | 5.2.1. Introduce analytical and reporting functions for EMIS tools | Create a feature for generating analytical reports; build a web portal that allows public access to EMIS data |
| | 5.2.2. Create an easier-to-use monitoring system | Complement the monitoring report with a digital performance dashboard; release the performance dashboard with a tutorial that shows how it should be used to monitor the performance of the system |
| 5.3. Developing and implementing a national assessment strategy that supports system goals | 5.3.1. Define a concept for the national assessments | Establish a steering committee to determine the purpose of the assessments; consider making formative feedback to educators a core function of the assessments |
| | 5.3.2. Determine the design features of the national assessments | Develop diagnostic assessment materials and encourage their use in schools starting in grade 2; assess mother tongue and mathematics; implement census-based testing; strongly consider computer-based testing instead of paper-based testing; develop several item types to assess a broad range of student skills |
| | 5.3.3. Develop a reporting scheme that serves formative purposes and avoids punitive consequences | Use assessment data to directly support struggling schools, not for high-stakes accountability; identify different benchmarks against which schools can compare themselves; create different reports designed to leverage the formative value of the assessments |

References

[41] Abdul-Hamid, H. (2014), What Matters Most for Education Management Information Systems, World Bank, https://openknowledge.worldbank.org/handle/10986/21586 (accessed on 16 July 2018).

[42] Abdul-Hamid, H., S. Mintz and N. Saraogi (2017), From Compliance to Learning: A System for Harnessing the Power of Data in the State of Maryland, The World Bank, https://doi.org/10.1596/978-1-4648-1058-9 (accessed on 8 December 2018).

[62] ACARA (n.d.), Case studies of high gain schools, https://www.acara.edu.au/reporting/my-school-website/case-studies-of-high-gain-schools (accessed on 29 January 2019).

[58] Anderson, P. and G. Morgan (2008), Developing Tests and Questionnaires for a National Assessment of Educational Achievement, The World Bank, https://elibrary.worldbank.org/doi/abs/10.1596/978-0-8213-7497-9 (accessed on 3 August 2018).

[37] Baker, B. (2018), Preliminary Report: Identification of Strategic Issues, Garrett County Public Schools, https://www.garrettcountyschools.org/resources/public-information/pdf/Strategic-Issues-Follow-Up-Report-4.10.18.pdf (accessed on 8 January 2019).

[59] Bakker, S. (2014), The Introduction of Large-scale Computer Adaptive Testing in Georgia Political context, capacity building, implementation, and lessons learned, http://siteresources.worldbank.org/INTREAD/Resources/Bakker_Introduction_to_CAT_Georgia_for_READ.pdf (accessed on 26 October 2018).

[38] Balanced Scorecard Institute (n.d.), Balanced Scorecard Institute, https://www.balancedscorecard.org/ (accessed on 8 January 2019).

[28] BBC (2017), OECD report backs radical reform of Welsh curriculum, https://www.bbc.com/news/uk-wales-39105175 (accessed on 23 January 2019).

[40] Beard, D. (2009), “Successful Applications of the Balanced Scorecard in Higher Education”, Journal of Education for Business, Vol. 84/5, pp. 275-282, https://doi.org/10.3200/JOEB.84.5.275-282.

[64] Bergeron, C. (n.d.), MAP Reports and Resources for Teachers, NWEA, http://info.nwea.org/rs/nwea/images/Web-Based-MAP-Teacher-Reports-and-Resources.pdf (accessed on 28 January 2019).

[22] Bryson, J. (2018), Strategic Planning for Public and Nonprofit Organizations: A Guide to Strengthening and Sustaining Organizational Achievement, John Wiley & Sons.

[33] Bryson, J. (2018), Strategic Planning for Public and Nonprofit Organizations: A Guide to Strengthening and Sustaining Organizational Achievement, John Wiley & Sons.

[44] Bryson, J., F. Berry and K. Yang (2010), “The State of Public Strategic Management Research: A Selective Literature Review and Set of Future Directions”, The American Review of Public Administration, Vol. 40/5, pp. 495-521, https://doi.org/10.1177/0275074010370361.

[4] Burns, T. and F. Köster (eds.) (2016), Governing Education in a Complex World, Educational Research and Innovation, OECD Publishing, Paris, https://doi.org/10.1787/9789264255364-en.

[57] Hamilton, L. and D. Koretz (2002), “Tests and their use in test-based accountability systems”, in Hamilton, L., B. Stecher and S. Klein (eds.), Making Sense of Test-Based Accountability in Education, RAND Corporation, https://www.jstor.org/stable/pdf/10.7249/mr1554edu.9.pdf (accessed on 3 August 2018).

[49] Department of Education (n.d.), College Scorecard, https://collegescorecard.ed.gov/ (accessed on 7 January 2019).

[50] Department of Education (n.d.), “NAEP Report Cards”, https://www.nationsreportcard.gov/ (accessed on 7 January 2019).

[54] DFID (2011), National and international assessment of student achievement: a DFID practice paper, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/67619/nat-int-assess-stdnt-ach.pdf (accessed on 13 July 2018).

[46] Eckerson, W. (2011), Performance Dashboards: Measuring, Monitoring, and Managing Your Business, Wiley, Hoboken.

[51] Eckerson, W. (2009), “Performance management strategies”, Business Intelligence Journal, https://oceanides.inf.ucv.cl/~infoteca/LibrosDigitales/Performance_management_strategies.PDF (accessed on 29 January 2019).

[18] EMIS (n.d.), EMIS (official website), https://www.emis.ge/ (accessed on 31 January 2019).

[32] European Commission (2007), Towards more knowledge-based policy and practice in education and training, http://ec.europa.eu/dgs/education_culture/publ/pdf/educ2010/sec1098_en.pdf (accessed on 10 December 2018).

[43] FL Department of Education (n.d.), FL Department of Education - Education Information Portal, https://edstats.fldoe.org/SASPortal/main.do (accessed on 12 July 2018).

[52] Gabrscek, S. and G. Bethell (1996), Matura Examinations in Slovenia: Case Study of the Introduction of an External Examinations System for Schools, National Examinations Centre, http://www.cpz-int.si/Assets/pdf/Matura.pdf (accessed on 7 January 2019).

[47] General Services Administration (n.d.), Performance.gov, https://www.performance.gov/ (accessed on 7 January 2019).

[45] George, B. and S. Desmidt (2014), “A state of research on strategic management in the public sector: An analysis of the empirical evidence”, in Joyce, P. and A. Drumaux (eds.), Strategic Management in Public Organizations : European Practices and Perspectives., Taylor and Francis.

[14] Government of Georgia (n.d.), Social-economic Development Strategy of Georgia - Georgia 2020, https://www.adb.org/sites/default/files/linked-documents/cps-geo-2014-2018-sd-01.pdf (accessed on 23 January 2019).

[53] Greaney, V. and T. Kellaghan (2007), National Assessments of Educational Achievement Volume 1, The World Bank, https://doi.org/10.1596/978-0-8213-7258-6.

[8] House of Commons (2011), Accountability for public money, Twenty-eighth Report of Session 2010-11, https://publications.parliament.uk/pa/cm201011/cmselect/cmpubacc/740/740.pdf (accessed on 15 December 2018).

[26] Jones, A. (2014), OECD: Welsh government lacks education ’long-term vision’, BBC, https://www.bbc.com/news/uk-wales-26962501 (accessed on 23 January 2019).

[7] Kaplan, R. and D. Norton (1992), “The Balanced Scorecard—Measures that Drive Performance”, Harvard Business Review, Vol. 70/1, pp. 1-9, https://hbr.org/1992/01/the-balanced-scorecard-measures-that-drive-performance-2 (accessed on 16 July 2019).

[61] Kellaghan, T., V. Greaney and S. Murray (2009), National Assessments of Educational Achievement Volume 5, The World Bank, https://doi.org/10.1596/978-0-8213-7929-5.

[56] Kitchen, H. et al. (2017), Romania 2017, OECD Reviews of Evaluation and Assessment in Education, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264274051-en.

[36] Ministry of Education, Arts and Culture (2017), Strategic Plan (2017/18 - 2021/22), http://www.moe.gov.na/files/downloads/b7b_Ministry%20Strategic%20Plan%202017-2022.pdf (accessed on 8 January 2019).

[16] MoES (2018), Reform of Education System.

[17] MoES (n.d.), Monitoring Report for Strategic Objectives and Action Plan Performance - United Strategy for Education and Science (2017-2021).

[12] MoESCS (2018), Georgia Country Background Report, Ministry of Education and Science, Tbilisi.

[15] MoESCS (2017), Action Plan - United Strategy for Education and Science (2017-2021).

[13] MoESCS (2017), Unified Strategy for Education and Science for 2017-2021.

[60] Nichols, S. and D. Berliner (2005), “The Inevitable Corruption of Indicators and Educators through High-Stakes Testing.”, Education Policy Research Unit, https://eric.ed.gov/?id=ED508483 (accessed on 29 January 2019).

[63] NWEA (2019), The MAP Suite, https://www.nwea.org/the-map-suite/ (accessed on 28 January 2019).

[21] NWO (n.d.), Netherlands Initiative for Education Research, https://www.nro.nl/en/ (accessed on 7 January 2019).

[11] OECD (2018), Education Policy Outlook 2018: Putting Student Learning at the Centre, OECD Publishing, Paris, https://doi.org/10.1787/9789264301528-en.

[9] OECD (2018), Open Government Data Report: Enhancing Policy Maturity for Sustainable Impact, OECD Digital Government Studies, OECD Publishing, Paris, https://doi.org/10.1787/9789264305847-en.

[29] OECD (2017), Policy Advisory Systems: Supporting Good Governance and Sound Public Decision Making, OECD Publishing, Paris, https://doi.org/10.1787/9789264283664-en.

[3] OECD (2017), Systems Approaches to Public Sector Challenges: Working with Change, OECD Publishing, Paris, https://doi.org/10.1787/9789264279865-en.

[27] OECD (2017), The Welsh Education Reform Journey, http://www.oecd.org/education/The-Welsh-Education-Reform-Journey-FINAL.pdf (accessed on 23 January 2019).

[10] OECD (2015), Education at a Glance 2015: OECD Indicators, OECD Publishing, Paris, https://dx.doi.org/10.1787/eag-2015-en.

[25] OECD (2014), Improving Schools in Wales: An OECD perspective, http://www.oecd.org/education/Improving-schools-in-Wales.pdf (accessed on 23 January 2019).

[1] OECD (2013), Synergies for Better Learning: An International Perspective on Evaluation and Assessment, OECD Reviews of Evaluation and Assessment in Education, OECD Publishing, Paris, https://doi.org/10.1787/9789264190658-en.

[55] OECD (2011), Education at a Glance 2011: OECD Indicators, OECD Publishing, Paris, https://doi.org/10.1787/eag-2011-en (accessed on 3 August 2018).

[6] OECD (2009), Measuring Government Activity, OECD Publishing, Paris, https://doi.org/10.1787/9789264060784-en.

[34] OECD and European Commission (2018), Building Capacity for Evidence Informed Policy Making: Towards a Baseline Skill Set, http://www.oecd.org/gov/building-capacity-for-evidence-informed-policymaking.pdf (accessed on 7 February 2019).

[30] Sanderson, I. (2002), “Evaluation, Policy Learning and Evidence-Based Policy Making”, Public Administration, Vol. 80/1, pp. 1-22, https://doi.org/10.1111/1467-9299.00292.

[24] Santiago, P. et al. (2012), OECD Reviews of Evaluation and Assessment in Education: Portugal 2012, OECD Reviews of Evaluation and Assessment in Education, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264117020-en.

[23] Santiago, P. et al. (2012), OECD Reviews of Evaluation and Assessment in Education: Czech Republic 2012, OECD Reviews of Evaluation and Assessment in Education, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264116788-en.

[2] Schick, A. (2003), “The Performing State: Reflection on an Idea Whose Time Has Come but Whose Implementation Has Not”, OECD Journal on Budgeting, Vol. 3/2, https://doi.org/10.1787/budget-v3-art10-en.

[31] Senge, P. (2014), The Fifth Discipline Fieldbook, Crown Publishing Group.

[48] US Department of Education (2018), Annual Plans and Reports, https://www2.ed.gov/about/reports/annual/index.html (accessed on 7 January 2019).

[35] US Department of Education (n.d.), Institute of Education Sciences (IES), https://ies.ed.gov/ (accessed on 13 November 2018).

[19] World Bank (2014), Georgia - Technical assistance to support preparation of education sector strategy: education sector policy review - strategic issues and reform agenda, World Bank, Washington D.C., http://documents.worldbank.org/curated/en/505151488895322292/Georgia-Technical-assistance-to-support-preparation-of-education-sector-strategy-education-sector-policy-review-strategic-issues-and-reform-agenda (accessed on 21 June 2018).

[20] World Bank (2014), SABER Country Report - Georgia, World Bank, Washington D.C., http://wbgfiles.worldbank.org/documents/hdn/ed/saber/supporting_doc/CountryReports/TCH/SABER_Teachers_Georgia_CR_Final_2014.pdf (accessed on 7 February 2019).

[5] World Bank (2004), Ten Steps to a Results-Based Monitoring and Evaluation System, World Bank, Washington D.C., https://www.oecd.org/dac/peer-reviews/World%20bank%202004%2010_Steps_to_a_Results_Based_ME_System.pdf.

[39] Yüksel, H. and A. Coşkun (2013), “Strategy Focused Schools: An Implementation of the Balanced Scorecard in Provision of Educational Services”, Procedia - Social and Behavioral Sciences, Vol. 106, pp. 2450-2459, https://doi.org/10.1016/J.SBSPRO.2013.12.282.


This document, as well as any data and map included herein, are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area. Extracts from publications may be subject to additional disclaimers, which are set out in the complete version of the publication, available at the link provided.

https://doi.org/10.1787/94dc370e-en

© OECD 2019

The use of this work, whether digital or print, is governed by the Terms and Conditions to be found at http://www.oecd.org/termsandconditions.