Chapter 8. Assessing performance in higher education

Previous chapters of this report analysed the inputs, activities and outcomes of higher education systems in OECD countries, with special attention to the four jurisdictions participating in the benchmarking exercise. This chapter builds on the previous analysis to examine the performance of the four participating jurisdictions and reflect more generally upon the benchmarking approach taken in this project.

    

The statistical data for Israel are supplied by and under the responsibility of the relevant Israeli authorities. The use of such data by the OECD is without prejudice to the status of the Golan Heights, East Jerusalem and Israeli settlements in the West Bank under the terms of international law.

8.1. Introduction

The benchmarking higher education systems performance exercise envisaged a comparative assessment of how well higher education systems are able to conduct research, educate students, and provide value to the broader economy and society through engagement activities. This chapter discusses challenges to the benchmarking of higher education performance that arose from gaps in evidence and data. It also outlines reflections and lessons learned from the project on measuring performance at the system level, and possible future directions for benchmarking activities.

8.2. Benchmarking process and results

8.2.1. Evidence gathered and used for the OECD system benchmarking project

The OECD benchmarking approach was designed to integrate quantitative and qualitative evidence and provide a system-level view of higher education performance that could inform deliberations on government strategy for higher education. Public sector performance measurement models, including a model developed by the OECD Public Management Programme (PUMA), now known as the OECD Public Governance Committee, informed the project. The ambition of the project was to measure the “full span” of performance against criteria of relevance, efficiency, effectiveness, economy, cost-effectiveness, utility and sustainability (OECD, 2017[1]).

The benchmarking exercise carried out a comprehensive assessment of indicators from international data sources potentially useful for assessing performance in higher education, taking into account statistical limitations and the various economic and social contexts in which higher education systems operate. Comparative data is presented throughout this report for all OECD countries, augmented with descriptions and comparisons of policies and practices (mainly for the four participating jurisdictions), with the aim of enhancing understanding of the links between policies, practices and indicator values.

Review and selection of benchmarking indicators from existing sources

The indicators used for the benchmarking exercise were selected through a multi-step process. First, existing higher education indicators and datasets from international data sources (Table 8.1) were gathered and mapped onto the project’s conceptual framework (OECD, 2017[1]). Over 800 different indicators aggregated at the national level and related to the context, organisation and resourcing of higher education, as well as its education, research and engagement functions, were reviewed in this way.

Table 8.1. International data sources for the benchmarking indicator mapping

| Actual sources (surveys, projects or databases) | Institutional source |
| --- | --- |
| ACA Institutional Survey | Academic Cooperation Association |
| European Labour Force Survey (and related ad-hoc modules), Community Innovation Survey, European Union Statistics on Income and Living Conditions (EU-SILC), Adult Education Survey, Personal well-being indicators | Eurostat |
| More2, E3M | European Commission and associated contractors |
| OECD Statistics database, Indicators of Education Systems (INES) ad-hoc surveys, OECD Survey of Adult Skills (PIAAC), OECD Programme for International Student Assessment (PISA), OECD Main Science and Technology Indicators, Career of Doctorate Holders (CDH) Survey | OECD |
| Science, Technology and Innovation Database | UNESCO-UIS |
| Global Competitiveness Index | World Economic Forum |
| Intellectual Property Statistics | World Intellectual Property Organization (WIPO) |

Note: International data sources from which no higher education indicators were drawn, or providing only indicators also available elsewhere, are not reported in this table.

Approximately 100 indicators were chosen to create a data infrastructure for the benchmarking project. Decisions on inclusion in the data infrastructure were based on criteria including:

  • Coverage and parsimony. The set of indicators were chosen to cover the full scope of inputs, activities, outputs and outcomes in the functions of education, research and engagement, while at the same time minimising duplication and overlap.

  • Relevance and comparability. The baseline indicators were chosen on the basis of their alignment to the concepts relevant to the assessment of higher education performance, and on the basis of consistent collection with a common and transparent methodology used across countries.

Development of new indicators

In addition to reviewing existing indicators, the project generated new higher education indicators by integrating data from disparate sources and using existing databases in new ways. For example, new indicators were developed from existing data sources such as:

  • institution-level financial and human resource data from the European Tertiary Education Register, which was used to compute additional indicators such as the ratio of non-academic to academic staff, and proportions of private third-party institutional funding

  • individual-level data from the Survey of Adult Skills, which was used to generate new indicators on graduate skills and labour market outcomes

  • individual-level data from the social media platform LinkedIn, which was used to produce indicators on graduate career paths.

Other indicators were calculated based on national data provided by the four participating jurisdictions. For example, the disaggregation of indicators by subsector (universities vs. professional HEIs) throughout the report is based on this national data collection.

This work of statistical synthesis and production was used to produce the quantitative information included in the report, covering figures, tables and boxes reporting statistics (Figure 8.1).

Figure 8.1. Summary of the statistical work involved in the benchmarking exercise

Note: These numbers refer to the statistical work involved in producing Chapters 1-7 of this report.

Policy and practice information for the participating jurisdictions

Qualitative information was collected from the four participating jurisdictions through a country background questionnaire that elicited a total of approximately 500 pages of narrative information with respect to 24 policy domains. These 24 domains were identified during the development of the conceptual framework for the benchmarking project and cover aspects of the structure, governance, resourcing and functions of higher education systems (Table 8.2).

Table 8.2. Policy domains covered by the benchmarking exercise

| System organisation, governance and resourcing | System functions (education, research and engagement) |
| --- | --- |
| System structure | Equity |
| Diversity of provision | Participation |
| Consultation processes | Digitalisation |
| Admission processes | Continuing education |
| Quality assurance | Lifelong learning |
| Qualifications | Internationalisation |
| Policy priorities | Labour market relevance |
| Funding mechanisms | Research and development |
| Student financial assistance | Technology transfer and innovation |
| Autonomy and accountability | Regional development |
| Governance mechanisms | Regional integration |
| Academic career | Social and civic engagement |

The information on policies provided by the four participating jurisdictions was supplemented by additional desk-based research, which primarily focused on the identification of international higher education policy initiatives and additional country practices. The totality of the qualitative information gathered formed the basis for the tables and boxes in the report containing comparative analysis and examples of specific policies and practices (Figure 8.2).

Figure 8.2. Summary of the policy and practice evidence in the benchmarking exercise

Note: These numbers refer to the policies and practices information included in Chapters 1-7 of this report.

8.2.2. Strengths, challenges and performance in the participating jurisdictions

The benchmarking exercise provided an opportunity to review the current state of higher education in OECD countries and identify some pressing performance issues facing higher education systems. However, reviewing combinations of indicators at the country level demonstrates the complexity of making summary judgements about the performance of higher education systems. Table 8.3 shows the position of Estonia, the Flemish Community, the Netherlands and Norway within the OECD distribution based on a scorecard of 45 indicators used in the benchmarking process, using quartiles (Box 8.1).

Box 8.1. Explanation of indicator scorecards

Indicator scorecards are used in this chapter and in the individual country reports to provide a synthetic view of the relative position of each of the four participating jurisdictions within the OECD distribution. In this chapter, a scorecard of 45 indicators covering each of the three functions of higher education is presented for the four participating jurisdictions (Table 8.3). All of the indicators contained in the scorecard correspond to charts and fuller discussion presented in previous chapters of this report.

Quartiles are used to compare each country with the full membership of OECD countries. Location in the bottom quartile means that a jurisdiction is among the one-quarter of OECD countries with the smallest values for that indicator, while location in the top quartile means that a jurisdiction is among the one-quarter of OECD countries with the highest values for that indicator. The coloured square for each indicator represents the position in the OECD distribution, from the bottom quartile (left square) to the top quartile (right square). The square is shaded in grey (instead of black) when data are available for less than half of the OECD countries (the minimum number of countries with available data is 14). No coloured square means that data are missing. In each case, the indicator is presented for the most recent year available.

For the portions of the scorecard related to resourcing higher education, positioning in the top or the bottom quartile in itself does not imply a high or low relative performance, as these indicators relate to the relative levels of inputs only. Instead, the scorecard indicators on resourcing should be considered in relation to the indicators in the education and research portions of the scorecard, where positioning in a higher quartile can be more easily interpreted to mean higher performance relative to other OECD countries, and vice-versa. For example, a country with many research and development related outputs or outcomes in the top quartiles of the OECD, but investment in research in the lower quartiles could be considered to have a relatively efficient system of higher education research.

The following important points should also be noted for Table 8.3:

  • For the indicator ‘socio-economic gap in HE access’, a position in the top quartile means that the gap in access between 18-24 year-olds with tertiary-educated parents and those with non-tertiary-educated parents is among the smallest in the OECD.

  • For Estonia, the entry rates to bachelor’s-level education include all entrants rather than first-time entrants, which creates a slight overestimate of the entry rate.

  • Due to a change in methodology in 2013 in Estonia, the data for “change in expenditure between 2008 and 2015” in the Resources section should also be interpreted with caution.

  • For the Flemish Community, indicators marked with an asterisk refer to Belgium rather than the Flemish Community.
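
The quartile positioning described in Box 8.1 can be illustrated with a short computational sketch. The snippet below is not the code used for the benchmarking exercise; the country values are invented placeholders, and the breakpoints and labels are simply one way of implementing the logic described in the box.

```python
# Illustrative sketch of the quartile positioning described in Box 8.1.
# The values below are invented placeholders, not real indicator data.
import numpy as np

values = {
    "Estonia": 54.0, "Netherlands": 61.0, "Norway": 66.0, "Country A": 48.0,
    "Country B": 58.0, "Country C": 71.0, "Country D": 44.0, "Country E": 63.0,
}

# Quartile breakpoints of the (placeholder) OECD distribution for one indicator.
q1, q2, q3 = np.percentile(list(values.values()), [25, 50, 75])

def quartile_position(v: float) -> str:
    """Return which quartile of the distribution a value falls into."""
    if v <= q1:
        return "bottom quartile"
    if v <= q2:
        return "second quartile"
    if v <= q3:
        return "third quartile"
    return "top quartile"

for country, value in values.items():
    print(f"{country}: {quartile_position(value)}")

# Counts of the kind reported in Table 8.4 then follow from tallying, for each
# jurisdiction, how many indicators fall in the top and bottom quartiles.
```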

Table 8.3. Indicator scorecard for the participating jurisdictions

Note: See Box 8.1.

 StatLink https://doi.org/10.1787/888933941880

It is also important to note that the scorecard shows relative position only; a position in the top quartile does not signify high performance in areas where performance is generally weak across the OECD. Many performance indicators signal that higher education systems in OECD countries have significant scope for improvement, regardless of their position within the OECD. For example, gaps in higher education access by socio-economic background continue to be substantial across countries, indicating considerable room for improvement in equity. In addition, completion rates in bachelor-level education show that one-third or more of entrants do not complete their studies in many OECD countries, indicating weaknesses with respect to both efficiency and equity (Chapter 5).

The scorecard suggests that each participating jurisdiction has a relatively well-functioning higher education system overall, when considering its position in the OECD distribution. Measured across the scorecard dimensions associated with performance in education, research and engagement, the four jurisdictions appear in the bottom quartile relatively infrequently and are more likely to appear in the top quartile. However, the frequency with which each of the four jurisdictions appears in the top or bottom quartiles differs (Table 8.4).

Table 8.4. Frequency of appearance of participating jurisdictions in the top and bottom quartiles of the benchmarking scorecard
Based on counts of the numbers of appearances in the top and bottom quartile

|  | Estonia – Bottom quartile | Estonia – Top quartile | The Flemish Community – Bottom quartile | The Flemish Community – Top quartile | The Netherlands – Bottom quartile | The Netherlands – Top quartile | Norway – Bottom quartile | Norway – Top quartile |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Education | 3 | 1 | 2 | 4 | 1 | 3 | 1 | 7 |
| Research/Engagement | 1 | 6 | 2 | 6 | 2 | 5 | 1 | 7 |

Importantly, the scorecard also shows that patterns of performance across different domains are unique to individual jurisdictions, limiting the utility of overall system performance judgements across countries. For example, Norway appears in the top quartile a total of 14 times across the 30 education, research and engagement indicators. Estonia appears in the top quartile of the research and engagement indicators almost as often as Norway, but it is much less likely to appear in the top quartile of indicators related to the education function (Table 8.4).

Within each of the four jurisdictions, there are also evident differences in inputs relative to other OECD countries across the suite of metrics. For example, the values for both the Netherlands and Norway tend to lie in the upper quartiles of OECD countries when considering the indicators of financial and human resources invested in the system. However, there is more variation in the positioning of the Netherlands across quartiles than Norway when considering the suite of indicators used to measure education and research performance. These variations further highlight the difficulty in developing overall judgements of higher education systems, as aggregation or simplification of the data can lead to unwarranted or inadequately justified performance assessments.

Through analysis of the scorecards for each benchmarking jurisdiction, important individual strengths and challenges relative to other OECD countries become evident, depending on which indicator and performance area is considered (Table 8.5).

Table 8.5. Examples of strengths and challenges in the participating jurisdictions relative to other OECD countries
Selected indicators where each jurisdiction lies in the bottom or top quartile of OECD countries in the education, research and engagement sections of the scorecard.

|  | Areas of challenge (jurisdiction is in the bottom quartile) | Areas of strength (jurisdiction is in the top quartile) |
| --- | --- | --- |
| Estonia | Completion rate of bachelor's students; open access of scientific documents | New entrants older than 25 in bachelor’s programmes; women researchers in higher education |
| The Flemish Community | Proportion of doctorate holders in the population; new entrants older than 25 in bachelor’s programmes | Entry rates into bachelor or equivalent education; graduates above proficiency level 3 |
| The Netherlands | New entrants older than 25 in bachelor’s programmes; patent applications from the higher education sector | Higher education graduates (age 15-29) employed or in education; publications among the 10% most cited |
| Norway | Relative earnings of bachelor’s graduates; share of higher education R&D funding on basic research | Open access of scientific documents; socio-economic gap in higher education access |

8.2.3. Combining indicator values to measure performance

Indicators used to describe the performance of higher education systems, such as those outlined in the scorecard in the previous section, focus on one aspect of the higher education system, separately measuring inputs, outputs or outcomes. However, assessing the performance of higher education systems against the criteria of efficiency or cost-effectiveness requires a more complex exercise, linking inputs to outputs or outcomes.

Efficiency is concerned with the question of how well inputs such as financial and human resources are converted into outputs such as graduates and research results, while cost-effectiveness measures how inputs are translated into outcomes, such as increased skills levels among graduates. The development of actionable measures of efficiency in the higher education sector is complicated by the multiplicity of inputs and outputs that cannot be directly mapped to each other, difficulties in measuring inputs themselves, ascertaining the level of control over the inputs, and attaching an importance weighting to the outputs (Johnes and Johnes, 2004[2]; Johnes, 2006[3]). Actionable measures of cost-effectiveness are even more difficult to achieve, as outcomes such as labour market success and skills acquisition depend on much more than the performance of the higher education system.

To test whether benchmarking indicators could be combined to generate simple and reliable measures of efficiency, five measures of educational and research efficiency (expenditure to produce a first-degree graduate, expenditure on non-completing students, expenditure to produce a skilled graduate, the number of publications per researcher and expenditure per publication) were calculated, and their results were considered in terms of comparability and validity.

Expenditure on completing and non-completing students

The core output of the higher education system is graduates, particularly graduates at the bachelor’s and master’s level, which make up the majority of degree outputs across the OECD. The level of expenditure by higher education institutions per first-degree graduate is a function of both the expenditure required to educate students at this level, and the duration of their study programmes. The mix of first-degree programmes can also vary across OECD countries; while some countries only offer first-degree programmes at the bachelor’s level, other systems also have longer programmes that award a master’s level (ISCED 7) qualification without first awarding a bachelor’s level qualification (Chapter 2).

Using 2015 data on annual expenditure per student and the typical duration of first-degree programmes in OECD countries at either the bachelor’s or master’s level, it is possible to produce some comparative estimates of the cumulative theoretical expenditure required to produce a first-time graduate (Figure 8.3). A number of limitations apply:

  • Data availability for this indicator is limited to the countries that reported the theoretical durations of their first-degree programmes and provided details of expenditure at the bachelor’s to doctoral level (ISCED 6-8) in the UNESCO, OECD and Eurostat (UOE) data collections.

  • Across OECD countries, it is generally not feasible for average expenditure per student to be disaggregated between bachelor’s, master’s and doctoral levels of education, as staff costs and other forms of expenditure are often shared between programmes spanning all three levels. Therefore, the average non-R&D expenditure per student at ISCED levels 6-8 is used in these calculations as the closest approximation of the annual expenditure required to educate a student in undergraduate programmes that award either a bachelor’s or master’s degree.

  • These estimates do not take into account the significant proportion of students who take longer than the typical duration to complete their studies, and therefore may require a higher level of expenditure.

At the same time, as expenditure amounts are expressed using purchasing power parities and take into account the specific duration of programmes within countries, the average cumulative theoretical expenditure is comparable across countries.
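
As a simple illustration of the calculation described above, the sketch below multiplies an annual non-R&D expenditure per student by the theoretical programme duration. The figures are invented placeholders, not the values underlying Figure 8.3.

```python
# Minimal sketch of the cumulative theoretical expenditure estimate:
# annual non-R&D expenditure per student (ISCED 6-8, USD PPP) multiplied by
# the theoretical duration of the first-degree programme.
# The figures below are illustrative placeholders only.
programmes = {
    # label: (annual expenditure per student in USD PPP, theoretical duration in years)
    "Country A (3-year bachelor's)": (12_000, 3),
    "Country B (4-year bachelor's)": (9_000, 4),
    "Country C (5-year first degree at master's level)": (10_000, 5),
}

for label, (annual_expenditure, duration) in programmes.items():
    cumulative = annual_expenditure * duration
    print(f"{label}: {cumulative:,} USD PPP per first-time graduate")
```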

The estimates indicate that there is substantial variation in how much higher education systems across the OECD spend to produce a first-time graduate at the bachelor’s and master’s level (Figure 8.3). As might be expected, cumulative spending is related to the duration of the programme, with longer-duration programmes generally costing more to produce a graduate.

Differences in expenditure across countries can also be large enough to create exceptions to this pattern. For example, in Australia, Sweden and the Flemish Community, the average estimated expenditure to produce a graduate from a three-year bachelor’s programme is similar to the expenditure to produce a graduate from a four-year bachelor’s programme in Korea and Slovenia. Similarly, at the master’s level, the cumulative expenditure to produce a graduate from a five-year programme is lower in Norway, Finland and France than for a four-year programme in the United Kingdom.

Figure 8.3. Estimated expenditure for first-degree graduates (2016)
Expenditure over the theoretical programme duration, in 2015 USD PPP

Note: *Participating in the Benchmarking Higher Education System Performance exercise 2017/2018.

Master’s level programmes in this calculation refer to first-degree programmes that award a master’s level qualification only, as opposed to postgraduate programmes.

Source: Adapted from OECD (2018[4]), Education at a Glance 2018: OECD Indicators, https://doi.org/10.1787/eag-2018-en.

 StatLink https://doi.org/10.1787/888933941899

High rates of programme non-completion also signal inefficiency in higher education systems, as investment by the government and private individuals does not create the expected output.1 The cost of non-completion in each jurisdiction depends on the proportion of students who do not complete, as well as the cost of educating students. Using the levels of expenditure per student in 2015 and applying country-level non-completion rates from the 2014 UOE data collection on student completion, a conservative estimate of the cumulative expenditure on non-completing students from one entry cohort of first-degree programmes can be obtained for each of the four participating jurisdictions (Table 8.6).

The estimate makes two simple assumptions:

  • All students who eventually do not complete leave their programmes during their first three years.

  • Expenditure per student is constant at 2015 levels over the duration of study of the non-completing students.

In reality, both participation and the costs of higher education are increasing over time across the OECD (see Chapter 3), and some students may leave programmes at a point beyond the first three years (and therefore incur higher expenditure). The figures in Table 8.6 are therefore likely to be conservative estimates of the true levels of expenditure on non-completing students.

Table 8.6. Estimated expenditure on non-completing first-degree students
Based on numbers of students in 2016 entry cohort and 2015 expenditure in USD PPP

|  | Annual expenditure per student, 2015, excluding R&D (USD PPP) | New entrants, 2016 (number) | No qualification three years after the end of theoretical duration and not in education (2014) | Estimated overall expenditure on non-completing students for 2016 entry cohort (USD millions PPP) | Estimated minimum proportion of 2015 annual expenditure (excluding R&D) of higher education institutions on non-completing students |
| --- | --- | --- | --- | --- | --- |
| The Flemish Community | 11 537 | 52 822 | 22% | 160.9 | 6.0% |
| Estonia | 8 404 | 9 168 | 43% | 39.8 | 9.1% |
| The Netherlands | 12 115 | 120 146 | 22% | 384.3 | 4.2% |
| Norway | 12 225 | 47 139 | 21% | 145.2 | 5.3% |

Note: This calculation assumes that 85% of non-completers leave during their first year, 10% in their second year and 5% in their third year, and that costs per student in each jurisdiction are constant at 2015 USD PPP levels. Increasing year-on-year costs per student, or a distribution of attrition skewed more towards later years, would further increase the estimated costs.

Source: Adapted from OECD (2018[4]), Education at a Glance 2018: OECD Indicators, https://doi.org/10.1787/eag-2018-en.
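
The estimates in Table 8.6 can be reproduced approximately with the sketch below. It combines the assumptions stated in the table note (85% of non-completers leave in the first year, 10% in the second and 5% in the third, at constant 2015 costs) with the further assumption that a student leaving in year n incurs n full years of expenditure. This is an illustration of the calculation logic, not the code used for the report.

```python
# Sketch of the Table 8.6 estimate, under the attrition assumptions stated in the
# table note and the assumption that leaving in year n incurs n full years of
# expenditure at constant 2015 cost levels.
data = {
    # jurisdiction: (annual expenditure per student excl. R&D in USD PPP,
    #                new entrants 2016, non-completion rate)
    "The Flemish Community": (11_537, 52_822, 0.22),
    "Estonia":               (8_404, 9_168, 0.43),
    "The Netherlands":       (12_115, 120_146, 0.22),
    "Norway":                (12_225, 47_139, 0.21),
}

# Expected years of study (and expenditure) per non-completing student.
avg_years = 0.85 * 1 + 0.10 * 2 + 0.05 * 3   # = 1.2

for name, (cost, entrants, non_completion) in data.items():
    non_completers = entrants * non_completion
    total_millions = non_completers * avg_years * cost / 1e6
    print(f"{name}: ~USD {total_millions:,.1f} million PPP on non-completing students")
```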

As can be seen in Table 8.6, even with conservative assumptions, the estimated annual expenditure on non-completion is substantial in each of the participating jurisdictions when considered in relation to the overall expenditure of higher education institutions (excluding R&D). Estonia has the highest rate of non-completion; despite its lower student numbers and costs, the estimated expenditure that does not result in graduate output is close to USD 40 million per entry cohort, a figure that represents about 9% of the 2015 expenditure on education in Estonia. In the Netherlands, with a higher cost structure and a much larger entry cohort, the amount reaches USD 384 million, but represents less than 5% of the total expenditure in 2015. Depending on how higher education is funded in national contexts, this expenditure is shared between governments and households.

Expenditure to produce a skilled graduate

The estimates presented in the previous section for expenditure on completing and non-completing students do not take into account any measure of the quality of the outputs. Figure 8.4 shows an association between GDP per capita and an estimate of the expenditure on higher education institutions per higher education graduate reaching at least literacy proficiency level 3 (according to the OECD Survey of Adult Skills). The expenditure of higher education institutions, as well as GDP per capita, is measured in USD using purchasing power parity data. Higher education expenditure in this case includes R&D expenditure, as graduates from all higher education programmes are considered in the calculation. The estimate of graduates reaching at least proficiency level 3 has been calculated for each jurisdiction as the product of the following two variables:

  • the total number of higher education graduates in 2015

  • the estimated share of higher education graduates reaching at least literacy proficiency level 3 among those who completed their studies in the ten years before being surveyed (the Survey of Adult Skills took place in 2012 or 2015, depending on the jurisdiction).

This measure provides a comparative estimate of the ratio between a fundamental input (financial resources) and output (graduates with level 3 literacy skill proficiency) in a particular year across higher education systems. Its main strength is the transparent calculation methodology, which makes it possible to compare values across countries. However, this measure of the input/output ratio has a number of limitations:

  • It does not take into account differences in the costs of education across different programmes, or costs spent to provide education to students who do not receive a degree (as outlined in the previous section).

  • It ignores the complex timing of the education process. The cost of the education of students who graduated in 2015 was incurred by the higher education system in the years preceding graduation, as well as the years in which the fixed costs to set up that programme and institution were sustained.

  • It does not take into consideration the contextual factors affecting the higher education process and the skills of graduates, in particular the skills students bring from secondary education (which, as observed at 15 years of age, vary significantly).

  • It relies on a very narrow definition of a “skilled graduate”, based on the achievement of moderate to advanced skills in one domain only.
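
Bearing these caveats in mind, the arithmetic of the ratio itself is straightforward. The sketch below uses invented placeholder figures to show how the Figure 8.4 measure is formed from total institutional expenditure, the number of graduates and the estimated share reaching literacy proficiency level 3.

```python
# Sketch of the Figure 8.4 ratio: total expenditure of higher education
# institutions (including R&D, USD PPP) divided by an estimate of graduates
# reaching at least literacy proficiency level 3. All numbers are invented
# placeholders for illustration only.
total_expenditure_usd_ppp = 4.0e9   # total institutional expenditure, 2015 (placeholder)
graduates_2015 = 60_000             # all higher education graduates, 2015 (placeholder)
share_level3_plus = 0.70            # share of recent graduates at literacy level >= 3 (placeholder)

skilled_graduates = graduates_2015 * share_level3_plus
expenditure_per_skilled_graduate = total_expenditure_usd_ppp / skilled_graduates
print(f"~USD {expenditure_per_skilled_graduate:,.0f} PPP per graduate at literacy level 3 or above")
```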

Figure 8.4. Expenditure per higher education graduate (with a level 3 or higher literacy skill proficiency) across OECD higher education systems (2015)
Expenditure per level 3 literacy proficient graduate, compared to GDP per capita

Note: *Participating in the Benchmarking Higher Education System Performance exercise 2017/2018.

The OECD marker refers to the OECD total (not average).

Source: Adapted from OECD (2018[5]), OECD Education Statistics, https://doi.org/10.1787/edu-data-en; OECD (2018[6]), OECD National Accounts Statistics, https://doi.org/10.1787/na-data-en; OECD (2018[7]), OECD Survey of Adult Skills, www.oecd.org/skills/piaac/data/.

 StatLink https://doi.org/10.1787/888933941918

As shown in Figure 8.4, jurisdictions with a similar economic context (proxied by their GDP per capita) tend to have similar amounts of expenditure per graduate reaching at least proficiency level 3. For example, in 2015 the Netherlands had a similar level of expenditure per graduate reaching at least proficiency level 3 as Austria, Germany and Sweden. When compared to the Netherlands, these were also the three countries with the closest level of GDP per capita. As another example, Spain, New Zealand and Korea had similar levels both of GDP per capita and of expenditure per graduate reaching at least proficiency level 3.

However, there are some exceptions to the general statistical pattern. For example, Estonia in 2015 had a substantially larger expenditure per graduate reaching at least proficiency level 3 than countries with a comparable level of GDP per capita. This could be partly explained by the increase in higher education expenditure, and the reduction in the number of students, in the years preceding 2015.

Measuring efficiency in research

Research efficiency can be measured by considering the levels of research outputs produced relative to research inputs. As seen in Chapter 6, the concentration of researchers in the population varies across OECD countries. As might be expected, this also has an impact on the proportional volume of research outputs. For example, according to 2016 data, there is a positive linear relationship (correlation coefficient = 0.82) between the number of researchers per 1 000 population and the number of research publications per 1 000 population, as recorded in the Scopus database of scientific publications (OECD, 2017[8]).
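
As an illustration of the kind of relationship described above, the following sketch computes a Pearson correlation between two series of researchers and publications per 1 000 population. The series are invented placeholders; the actual 2016 data behind the reported coefficient of 0.82 are not reproduced here.

```python
# Illustrative correlation check between researchers and publications per
# 1 000 population. The two series below are invented placeholders.
import numpy as np

researchers_per_1000 = np.array([3.1, 4.5, 5.2, 6.8, 7.4, 8.9])
publications_per_1000 = np.array([0.9, 1.4, 1.5, 2.1, 2.2, 2.9])

r = np.corrcoef(researchers_per_1000, publications_per_1000)[0, 1]
print(f"correlation coefficient = {r:.2f}")
```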

Publications per researcher

One possible measure of efficiency in research is to consider the average number of publications per researcher across systems, as an indicator of which systems are more productive. Figure 8.5 shows the estimated number of publications produced per researcher in 2015 across OECD countries. This estimate is subject to a number of limitations, including:

  • Publications in 2015 were considered due to data availability, but they are likely to reflect cumulative research performed over a number of years prior to 2015. In a context of increasing numbers of researchers in recent years, this may mean that these figures underestimate research efficiency.

  • The figure for 2015 publications includes publications for all research sectors in each country. While the majority of scientific publications have at least one academic author, the inability to disaggregate scientific publications by sector means that scientific publications that did not originate in the higher education sector may lead to an overestimate of research efficiency.

  • The Scopus database does not include all scientific production. For example, it excludes contributions to conferences and some types of books, as well as outputs of collaboration with the private or public sector for the application of knowledge.

  • The number of publications used to calculate this indicator includes publications authored by researchers working outside higher education (although the large majority of scientific publications come from the higher education sector (Johnson, Watkinson and Mabe, 2018[9])).

Figure 8.5. Estimated annual publications per researcher (2015)

Source: Adapted from OECD (2017[8]), OECD Science, Technology and Industry Scoreboard 2017: The digital transformation, https://doi.org/10.1787/9789264268821-en.

 StatLink https://doi.org/10.1787/888933941937

Figure 8.5 suggests that, on average across OECD countries, under the conditions of the measurement, around 0.4 annual publications are produced per researcher, implying that an average researcher may publish new knowledge roughly once every 2.5 years.
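
The implied publication interval quoted above is simple arithmetic, sketched below for transparency. The totals are invented placeholders; only the ratio logic is illustrated.

```python
# Back-of-the-envelope arithmetic behind the statement above.
publications_2015 = 20_000   # Scopus-indexed publications in one year (placeholder)
researchers = 50_000         # researchers, full-time equivalent (placeholder)

publications_per_researcher = publications_2015 / researchers   # 0.4
years_between_publications = 1 / publications_per_researcher    # 2.5
print(publications_per_researcher, years_between_publications)
```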

Expenditure per scientific publication

Figure 8.6 reports an estimate of the expenditure per scientific publication across OECD countries. This estimate is calculated for each jurisdiction as the ratio between the total amount spent by higher education institutions on R&D in 2015, in USD at purchasing power parity, and the total number of scientific publications in the Scopus database in 2015. The calculation methodology of this R&D input/output ratio exposes it to a number of limitations:

  • Distinguishing between R&D and other expenditure in higher education can be challenging, due to the close connection between research and education activities (Chapter 3). This reduces the precision of the measure of expenditure.

  • As in the previous indicator, the Scopus database does not have complete coverage and includes some publications from other R&D sectors. In addition, the long timelines involved in scientific production are not taken into account.

Higher education R&D expenditure per Scopus publication is therefore a simple ratio between research input and output indicators based on internationally agreed definitions and statistical procedures. Despite the outlined limitations, it has the important advantage of being comparable across countries.

Across OECD countries, one scientific publication was produced for every USD 120 000 of R&D expenditure by higher education institutions in 2015 (not including technical assistance and other expenditure).
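
A sketch of the corresponding ratio calculation is given below. The expenditure and publication counts are invented placeholders, chosen only so that the result is of the same order as the OECD-wide figure quoted above.

```python
# Sketch of the Figure 8.6 ratio: higher education R&D expenditure divided by
# the number of Scopus-indexed publications in the same year.
# Values are illustrative placeholders.
herd_usd_ppp = 1.2e9          # higher education R&D expenditure, 2015 (placeholder)
scopus_publications = 10_000  # publications in the Scopus database, 2015 (placeholder)

expenditure_per_publication = herd_usd_ppp / scopus_publications
print(f"~USD {expenditure_per_publication:,.0f} PPP per publication")
```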

In Figure 8.6 the input/output ratio is also plotted against the level of GDP per capita in 2015, to highlight the comparison between countries with a similar economic context. Figure 8.6 bears some resemblance to Figure 8.4, as countries with higher GDP per capita generally spend a higher amount per unit of output than less wealthy countries (even though the relationship between the input/output ratio and GDP per capita is less strong in Figure 8.6 than in Figure 8.4).2

Figure 8.6. Higher education R&D expenditure per scientific publication (2015)
Higher education institutions’ expenditure on R&D per publication in the Scopus database

Note: The OECD marker refers to the OECD total (not average).

Source: Adapted from OECD (2018[5]), OECD Education Statistics, https://doi.org/10.1787/edu-data-en; OECD (2017[8]), OECD Science, Technology and Industry Scoreboard 2017: The digital transformation, https://doi.org/10.1787/9789264268821-en.

 StatLink https://doi.org/10.1787/888933941956

All in all, Figure 8.4 and Figure 8.6 allow Estonia, the Flemish Community (or Belgium), the Netherlands and Norway to be compared with countries with a similar level of GDP per capita on two different indicators of the input/output ratio in higher education. Despite their limitations and different calculation methodology, these indicators suggest that the expenditure per unit of output in the participating jurisdictions for the most part tends to be similar to other countries at a similar level of economic development.

Discussion

The five indicators described in this section are presented as examples of simple measures of efficiency and cost-effectiveness that could be computed using existing data. The key benefit of these measures is their comparability across OECD countries, subject to the specified limitations. They give countries an indication of where they stand relative to other OECD countries on the financial and human resource costs associated with the key outputs of higher education systems, and they can provide a starting point for further investigation of the drivers of differences between countries (whether statistical or structural).

However, further improvements would be required to increase the validity and policy relevance of indicators on efficiency and cost-effectiveness of higher education before they could become actionable measures of higher education performance. For example, almost no account can be taken of the quality of the outputs, due to the lack of available data, which severely limits the scope and value of cost-effectiveness measures. The inability to disaggregate programme costs at different levels of higher education and distinguish between teaching and research costs also complicates the process of providing estimates that would be beneficial to policymakers. The following section outlines some of the identified data gaps in more detail.

8.3. Lessons learned from the benchmarking exercise

8.3.1. A number of benefits of the benchmarking exercise can be identified

There were a number of clear benefits to carrying out the benchmarking project, which can be summarised as follows:

  • The broad scope of the analysis allowed for a comprehensive updating of the OECD knowledge base on all aspects of higher education, and therefore this report offers the widest stocktaking of higher education systems in the OECD since the 2008 publication of Tertiary Education for the Knowledge Society (OECD, 2008[10]).

  • The data development exercise for the benchmarking project resulted in the creation of a benchmarking data infrastructure that can be automatically refreshed as new data becomes available. This data infrastructure has the potential to be used for online dissemination of data related to the benchmarking project.

  • New data sources were explored and some new indicators were developed, which can be improved and further integrated into future work. New types of reporting and analysis were also carried out for countries, such as the generation of performance scorecards and scenarios for the participating jurisdictions (see the accompanying country notes of the four jurisdictions).

  • Important gaps in data and evidence were identified, some of which may be filled in the future through the development of new OECD indicators in conjunction with the OECD Indicators of Education Systems (INES) project.

  • The project provided a forum for peer dialogue and policy learning during the regular meetings between the OECD Secretariat and the national co-ordinators from the participating jurisdictions.

8.3.2. Evidence gaps and difficulties in linking qualitative data to performance created limitations

Although there were a number of significant benefits among the project outcomes, difficulties arose which made applying the conceptual framework more challenging than anticipated.

Data gaps and poor data coverage

Despite the extensive data review exercise that was carried out for the benchmarking project (as described in Section 8.2.1), it was not possible to obtain coverage of all inputs, activities, outputs and outcomes of higher education. Given the limitations of the data, many of the performance criteria outlined in the conceptual framework (such as economy and effectiveness) proved impossible to measure, while others (such as efficiency) allowed only narrow experimental measures to be estimated.

Areas related to the resourcing of higher education and to each of its missions that lack data coverage have been explicitly indicated in the concluding sections of the previous chapters of this report. Some of the areas with little to no comparative data available relate to the core functions of higher education, resulting in gaps in knowledge that do not exist at other levels of education attracting similar levels of investment (i.e. primary and secondary education). For example:

  • Chapter 7 highlighted the increasing focus on the mission of higher education to provide broader societal benefits, along with some of the policies and practices that have emerged in higher education systems in recent years to extend the range of engagement activities. However, the information required to produce indicators of successful performance on engagement with the broader community is still sparse. While some data are available, they mainly relate to the collaboration of higher education with industry and do not adequately cover the full span of engagement activities in which higher education institutions are involved. For example, no comparative data are available on the social and regional engagement activities of higher education institutions or the impact of these activities.

  • Comparative data on learning outcomes of higher education students are not widely available, which severely restricts the possibilities for assessment of higher education programme quality outcomes. Standardised assessments of learning outcomes are in use in some national contexts and for some professions, and a number of experimental models have been developed through national or international initiatives that cover both domain-specific learning outcomes and more generic learning outcomes (Chapter 5). However, unlike at the primary or secondary levels of education, there are no widely adopted international assessments of higher education learning outcomes administered on either a representative or a census basis.

  • Instructional inputs and methods in higher education, especially human resources, are not well measured in international data collections (and, often, national data collection systems). For example, there is currently no standardised, recurrent collection of internationally comparable information on the distribution of staff across different staff categories, levels of seniority and contract type or the division of the workload of staff between teaching, research and engagement activities. This limits the insight available on teaching and learning conditions in the instructional environment, and forces reliance on poor proxies, such as student-to-staff ratio.

Qualitative information on policies and practices could not be easily linked to available indicators

The benchmarking project had the stated goal of linking data about policies and practices to outputs, making inferences about the impact of higher education policies and practices on system-level performance. However, developing these links was not possible in practice.

Pre-existing structured data on higher education policies and practices, as well as the comparative information on system organisation and features needed to support causal inferences, were not available. Qualitative evidence with respect to over twenty domains of national higher education policy was collected in open-ended narrative form from participating jurisdictions. This required extensive time and effort on the part of national authorities, and proved difficult to transform into standardised and comparable data. Moreover, comparable information was not available for the remaining OECD countries, meaning that information on policy and practice, even if transformed into standardised data, could not be used to explain variation in performance without wider country coverage (Section 8.4.2).

8.3.3. Global systems judgements are unlikely to be the most policy relevant performance measures

Higher education systems are more complex than lower levels of education in most OECD countries, due to the increased presence of market forces, greater levels of institutional autonomy and the broad range of missions and functions of higher education systems. Approaches to measuring performance need to reflect this complexity. Institution-level rankings based purely on a small set of indicators can fail to take into account the many ways in which higher education systems demonstrate good performance, and can also mask areas of lower performance that are not covered by the available data.

On the other hand, system-level analysis that aggregates results across higher education subsystems with sharply dissimilar missions, resourcing levels and student profiles produces average values that may have limited policy analytic use. Higher education “systems” are heterogeneous, often highly so. In Mexico, for example, there are thirteen legally recognised subsystems of higher education, while in the United States, with more than 3 000 higher education institutions, analyses of higher education performance typically proceed based on taxonomies consisting of many sectors. Diverse modes of provision of higher education exist within systems with differing levels of institutional differentiation, which adds to the challenge of evaluating the collective performance of institutions within a system in a consistent manner. While the national social, political and economic context provides a common background and links institutions together, their individual characteristics and missions differ greatly. For national authorities – whose legislation, regulation, and funding may operate at the subsystem level – characterisation of system-level performance across heterogeneous sectors of higher education systems may not be a helpful activity, since it aligns poorly with policy instruments and associations.

In contrast, comparisons at the subsystem level, such as how teaching colleges or applied science universities in one system compare to their counterparts across the world, may be much more useful for policy development or assessment. For this reason, the benchmarking exercise included a review of the performance of different subsectors in the three participating jurisdictions that have binary systems. As Table 8.7 shows, the professional HEIs in all three jurisdictions cater more heavily to non-traditional student groups, such as students over 30 and part-time students, and are less likely than universities to enrol international students and attract funding from non-government sources. At the same time, completion rates are higher in some cases in professional HEIs, and the available employment rates of graduates show that professional HEIs have results as favourable as universities. However, the extent to which these tendencies hold varies substantially between jurisdictions. It is clear that different strengths and weaknesses exist not only between subsectors in the national context, but also when comparing subsectors of the same type across jurisdictions (Table 8.7).

Table 8.7. Performance of professional HEIs relative to universities in the participating jurisdictions
2016 or most recent year available.

|  | Estonia – Professional HEIs | The Flemish Community – Professional HEIs | The Netherlands – Professional HEIs |
| --- | --- | --- | --- |
| Relative size of the sector (share of new entrants in the total for professional HEIs and universities, %) | 31 | 62 | 69 |
| Ratio of annual expenditure per student relative to the university sector (excluding R&D) | 0.70 | 1.12 | 1.08 |
| Ratio of the proportion of funding from non-government sources relative to the university sector | 0.25 | 0.02 | .. |
| Ratio of first-time graduates older than 30 relative to the university sector | 1.88 | 3.85 | 4.73 |
| Ratio of part-time students in bachelor’s programmes relative to the university sector | 1.28 | 1.33 | 7.55 |
| Ratio of international students in bachelor’s programmes relative to the university sector | 0.16 | 0.76 | 0.56 |
| Ratio of on-time completion relative to the university sector | M: 1.00; F: 1.54 | M: 0.86; F: 1.00 | M: 1.49; F: 1.30 |
| Ratio of non-completion relative to the university sector (not in education and not graduated three years after duration) | M: 1.75; F: 0.87 | M: 0.55; F: 0.79 | M: 1.03; F: 1.30 |
| Ratio of employment rates of 25-34 year-olds relative to the university sector | 1.04 | 1.27 | .. |

Note: For ratios, the university sector is equal to 1. ..: data not available.

Source: Adapted from information provided by the participating jurisdictions. See the reader's guide for further information.

8.4. Future directions

This section describes and motivates some key areas of policy focus to improve future capacity for measuring higher education performance.

8.4.1. Key comparative data gaps need to be filled

More and better data is needed on how much students are learning in higher education

There is an increasing focus on improving teaching quality in higher education. Many countries have strengthened higher education quality assurance processes to enhance institutional accountability for teaching and learning. However, unlike at other levels of education, there is currently no means of assessing the skills and competencies of higher education students or graduates in a comparable manner.

There is no broadly accepted definition of what educational quality should deliver or how quality should be measured. Initiatives such as the CALOHEE and AHELO projects have demonstrated that common assessment frameworks can be agreed and that valid measurements of learning outcomes across countries are possible. AHELO and other international higher education assessment initiatives also show that there are a number of practical difficulties in administering such tests across countries, in meeting the requirements for national samples to allow international comparisons, and in taking into account the diversity of contexts and defining learning outcomes for different subjects (OECD, 2013[11]).

New ways of measuring engagement activities are needed

In light of government and public expectations, the social impact of higher education is likely to become a more important part of the higher education performance landscape. While many higher education institutions have a strong commitment to community, regional, or even global engagement, there are no mechanisms in place to report and monitor these activities and assess their impact. This weakens incentives for institutions to broaden their engagement activities, as the absence of agreed measurement results in the neglect of this performance dimension in public funding, performance evaluation and quality assurance processes.

More work is needed to expand common international definitions for higher education activities

While higher education programmes can be mapped from national qualifications frameworks to international standards (through ISCED), there are very few other international definitions applicable to the sector. For example, there is no standard international classification for academic staff categories. Not only does this make comparison of systems difficult from a policy perspective, it may also inhibit mobility, as academic staff may not be able to easily distinguish the meaning and duties of job categories in different countries.

Similarly, higher education institutions cannot be classified in a meaningful way across jurisdictions according to missions and orientations. There are key national and regional data collection systems that function at an institutional level, such as the United States Integrated Post-Secondary Education Data System (IPEDS) and the European Tertiary Education Register (ETER). However, these databases do not yet have a data structure and definitions that permit them to be joined in support of analysis. This creates a limitation for students, academics and policymakers alike in understanding and comparing institutions and systems across jurisdictions, and represents a lost opportunity for policymakers to learn from other contexts. Developing common international classifications for higher education institutional data could therefore deliver substantial benefits to comparing system features and measuring performance.

Finally, international data collection systems such as the UNESCO, OECD and Eurostat (UOE) collection infrequently collect data about key dimensions of higher education – such as revenues, expenditures, staffing and graduation rates – at the subsystem level, as there are currently no common taxonomies that permit this.

There is a serious information gap on teaching staff in higher education

Staff costs represent the biggest financial outlay in higher education systems across the OECD. At the same time, there is almost no internationally comparable information available on the working conditions, experience, well-being, pedagogical knowledge, time use or teaching practices of teaching staff in higher education.

Instructional inputs and methods in higher education, especially human resources, are not well measured in international data collections (and, often, national data collection systems). Instructional practices in higher education are sometimes reported in student surveys, but these surveys are beset by serious methodological problems that call into question their validity and they lack cross-national comparability.

This situation is in sharp contrast to the richness of information available at other levels of education, for example through the OECD Teaching and Learning International Study (TALIS). The collection of internationally comparable self-reported instructional practices in higher education is possible, in principle, using a structured survey instrument based in a large-scale international assessment or survey. An extension of TALIS to the higher education sector, or a similar international study could allow experiences and practices of staff in different settings within the higher education sector to be evaluated, and provide the insight necessary for the improvement of teaching and learning in higher education.

8.4.2. Policy benchmarking could help to fill core gaps in knowledge

As well as improving the range of indicators available to assess higher education performance, the OECD member countries and key stakeholders could additionally benefit from having detailed and comparable information about the design of policies in their higher education systems, such as characteristics of institutional funding models, student loan systems, faculty career systems and retirement policies. Therefore, future benchmarking exercises could also focus on the collection of comparative policy information for a large number of OECD countries.

Data about policy design could permit policymakers and nongovernmental groups across the OECD to benchmark their policy choices to others, assess what is feasible, and foster deeper and more productive peer-learning discussions across OECD member countries. Fixed response policy benchmarking surveys, properly planned and coordinated, would minimise response burden on the part of governments, avoid duplication of effort and maximise comparability across systems. Surveys could be implemented in collaboration with other relevant international organisations, and with the OECD Indicators of Education Systems (INES) project and its networks, including the network on education system level information (NESLI), which has previously undertaken structured policy surveys relevant to higher education, including a survey on national criteria and admission systems for first-degree programmes.

For example, if policymakers were contemplating the redesign of a student grant system, they would have access to detailed information about these choices in other jurisdictions, such as criteria for student grant eligibility, methodologies for needs assessment and policies with respect to income verification. Policymakers could use this information in the design of their own policy proposals, to inform national policy debates, and to seek expert advice about policy design and implementation from systems with policy features they plan to adopt. Furthermore, the availability of structured policy data would allow for greater future possibilities for linking performance indicators and policy data to make stronger inferences about the relationship between policies and performance in higher education.

8.4.3. Concluding remarks

The benchmarking exercise has reviewed a wealth of quantitative data and qualitative information in order to assess the relative performance of higher education systems across OECD jurisdictions, particularly the four participating jurisdictions. The benchmarking project has provided a valuable opportunity to identify key evidence gaps that prohibit a deeper performance analysis. Future OECD work can build on the findings of this report and explore ways to expand the comparative evidence available to policymakers in higher education systems across the OECD.

References

[2] Johnes, G. and J. Johnes (2004), International Handbook on the Economics of Education, Edward Elgar Publishing, Cheltenham, http://www.elgaronline.com/view/184376119X.xml.

[3] Johnes, J. (2006), “Data envelopment analysis and its application to the measurement of efficiency in higher education”, Economics of Education Review, Vol. 25/3, pp. 273-288, https://doi.org/10.1016/j.econedurev.2005.02.005.

[9] Johnson, R., A. Watkinson and M. Mabe (2018), The STM Report: An overview of scientific and scholarly publishing, International Association of Scientific, Technical and Medical Publishers, The Hague, http://www.stm-assoc.org/2018_10_04_STM_Report_2018.pdf.

[4] OECD (2018), Education at a Glance 2018: OECD Indicators, OECD Publishing, Paris, https://doi.org/10.1787/eag-2018-en.

[5] OECD (2018), OECD Education Statistics, OECD Publishing, Paris, https://doi.org/10.1787/edu-data-en (accessed on 10 December 2018).

[6] OECD (2018), OECD National Accounts Statistics, OECD Publishing, Paris, https://doi.org/10.1787/na-data-en (accessed on 13 December 2018).

[7] OECD (2018), OECD Survey of Adult Skills, OECD Publishing, Paris, http://www.oecd.org/skills/piaac/data/ (accessed on 21 August 2018).

[1] OECD (2017), Benchmarking higher education system performance: Conceptual framework and data, OECD, Paris, http://www.oecd.org/education/skills-beyond-school/Benchmarking%20Report.pdf.

[8] OECD (2017), OECD Science, Technology and Industry Scoreboard 2017: The digital transformation, OECD Publishing, Paris, https://doi.org/10.1787/9789264268821-en.

[11] OECD (2013), Assessment of Higher Education Learning Outcomes: AHELO Feasibility Study Report Volume 3 Further Insights, OECD, Paris, http://www.oecd.org/education/skills-beyond-school/AHELOFSReportVolume3.pdf.

[10] OECD (2008), Tertiary Education for the Knowledge Society: Volume 1 and Volume 2, OECD Reviews of Tertiary Education, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264046535-en.

Notes

← 1. Although, as noted in Chapter 5, there may possibly be some benefit to even partial completion of higher education in some OECD countries, overall, the returns are much lower than for those completing higher education.

← 2. When excluding four outliers (Chile, Greece, Ireland and Turkey), the correlation between the two series in Figure 8.4 is 0.87. By comparison, excluding any four countries does not result in a correlation higher than 0.58 in Figure 8.6.
