6. Assuring the quality of postgraduate education

This chapter focuses on the mechanisms used for the external quality assurance of academic postgraduate education in Brazil. Brazilian postgraduate education comprises “stricto sensu” programmes, with a strong academic and scientific focus, and vocationally oriented “lato sensu” programmes, such as the Master of Business Administration (MBA). “Stricto sensu” programmes are subject to a specific system of quality evaluation and regulation, implemented by the Foundation for the Coordination of Improvement of Higher Education Personnel (CAPES). The chapter analyses these processes, examining the systems in place to evaluate new courses, to allow them to enter the National Postgraduate System, as well as the periodic programme reviews that are undertaken every four years. Based on the strengths and weaknesses identified, the chapter provides recommendations for fine-tuning the system and planning for the future.

    

6.1. Focus of this chapter

A focus on academic postgraduate provision

This chapter focuses on the mechanisms in place at national level in Brazil to assure the quality of postgraduate education in the country. As noted in Section 3.4, Brazilian postgraduate education falls into two distinct categories. Courses with a strong academic and scientific focus, which include Master’s degrees (mestrado acadêmico), Professional Master’s degrees (mestrado profissional) and doctoral education (doutorado), are classified as “stricto sensu” postgraduate provision and form part of the National System of Postgraduate Education (SNPG). They are subject to a specific system of quality evaluation and regulation, implemented by the Foundation for the Coordination of Improvement of Higher Education Personnel (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior, CAPES), a public foundation under the responsibility of MEC. In parallel, many higher education institutions offer professionally oriented, postgraduate “specialisation” programmes, including Master’s in Business Administration (MBA), which are classified as “lato sensu” provision and are not subject to external programme-level quality assurance as part of SINAES or organised by CAPES1. The focus in this chapter is on the quality assurance processes for stricto sensu postgraduate provision.

Academic Master’s degrees are still viewed as research degrees

The existence of a separate, long-established and highly developed system of external quality assurance for postgraduate education programmes is a distinctive feature of Brazilian higher education, reflecting the historical development of the science base in the country. In many OECD countries, external quality assurance of Master’s courses is undertaken by the agencies responsible for supervision of undergraduate education, often with the implicit understanding, notably in Europe, that Master’s degrees are a vehicle to deepen knowledge and skills gained at undergraduate level and a requirement for a wide variety of jobs in the economy. In Brazil, in contrast, stricto sensu Master’s courses, including so-called “Professional Master’s” degrees, are widely understood as the first stage in an academic or research career, a situation that largely reflects the relatively recent expansion of doctoral education in the country.

A system with strong external regulation of doctoral programmes

Responsibility for the quality of doctoral training in many higher education systems internationally has been left to individual universities, with limited intervention from public authorities (European University Association, 2018[1])2. In such cases, incentives and signals relating to the way programmes are organised are (increasingly) provided in an indirect way by public research funding agencies, through the criteria used to award doctoral training or research grants. The more direct approach adopted by public authorities in Brazil reflects a long-standing concern to expand the population of highly qualified researchers in the country, as a means to boost domestic scientific and innovation capacity. In practice, as discussed below, aspects of the evaluation system implemented by CAPES to ensure the quality of postgraduate training share characteristics with assessments undertaken in other higher education systems to monitor the research performance of higher education institutions.

6.2. Strengths and weaknesses of the current system

CAPES evaluates and regulates market entry of new courses and oversees the periodic evaluation of established programmes

The system of external quality assurance for academic postgraduate education in Brazil began in its current form in 1998. It comprises two distinct processes:

  1. Evaluation of proposals for new courses, as a basis for regulating the entry of new postgraduate training to the system; and

  2. Periodic evaluations of established postgraduate programmes, currently undertaken on a four-year cycle, allowing their continued operation (permanência) or, in case of poor performance, leading to their closure.

CAPES is responsible for coordinating the evaluation process. The evaluation of courses and programmes is undertaken by selected academic peers from the same scientific field, working in field committees (Comissões de Área). The scores attributed by the field committees to proposals and existing programmes are subsequently approved (or adjusted) by the Technical and Scientific Council for Higher Education (Conselho Técnico e Científico da Educação Superior, CTC-ES), which is composed of academics from all knowledge areas.

A distinction between programme (programa) and course (curso)

CAPES uses the Portuguese terms programa (programme) and curso (academic course or programme) in a specific way. A programa comprises the staff, infrastructure and activities associated with the provision of postgraduate education in a specific field, whether at Master’s level, doctoral level or both. It is the principal unit of analysis for the periodic evaluations of postgraduate provision. Of the 3 472 stricto sensu academic postgraduate programas evaluated in the most recent CAPES four-year review in 2017, just over 60% combined both Master’s and doctoral provision, 37% involved only Master’s provision and 2% involved only doctoral provision (CAPES, 2017[2]). The term curso is used to refer to a single course of study at a particular level: a Master’s, Professional Master’s or doctorate. A programa may thus contain two cursos (a Master’s and a doctorate) or effectively be synonymous with a curso, when only one type of curso is provided. For the sake of clarity, this chapter uses the English terms programme and course, to allow a distinction to be made where necessary.

The sections that follow review, in turn, the strengths and weaknesses of the CAPES evaluation processes for approving new courses (Avaliação de Propostas de Cursos Novos, APCN) and periodic review of established programmes (Avaliação Quadrienal).

CAPES: approval of new postgraduate courses

Relevance: rationale and objectives of the current system

In Brazil, academic postgraduate education in all types of higher education institution3 is collectively considered as part of a National System of Postgraduate Education (SNPG), which – as a system – has the aim of training highly qualified teaching and academic staff; training (highly) qualified staff for non-academic sectors and strengthening the country’s scientific, technological and innovation capacity more generally. CAPES states that the evaluation system it operates is designed to certify the quality of postgraduate education in the country, as a basis for allocation of publicly funded scholarships and research funding, and to identify regional disparities and gaps in strategic knowledge areas, as a basis for strategic actions to address them (CAPES, 2018[3]). As such, the CAPES evaluation system as a whole serves at least three purposes:

  1. It is a mechanism for ensuring the quality of postgraduate training (and thus, in theory, the quality of the human resources trained) as a form of guarantee for students and their future employers;

  2. The results of the evaluation process (in particular the periodic reviews) provide objective criteria for the allocation of public funding for researcher training (notably grants to Master’s and doctoral students) and research projects (with the implicit expectation that the high-quality programmes identified will make good use of public resources);

  3. The results of the evaluation also identify how well the country is developing research capacity in different scientific fields and across the territory of the Union, allowing corrective policy measures to be developed as necessary.

The specific approval process for new courses (APCN) is designed to ensure only academic teams with demonstrated expertise, a proven track record of quality research and adequate facilities are authorised to provide academic postgraduate education. Course proposals are assessed by a field committee composed of academic peers from the field in which the course seeks to operate. Following a standard assessment and validation process, new courses are formally approved if they score at least three on a nominal scale of one to five4, taking into account a range of variables discussed below. Only once approved can courses recruit students, obtain national recognition for their diplomas and obtain funding from CAPES for student scholarships and institutional capacity building5.

The APCN process consciously sets a comparatively high bar for entry into the system of academic postgraduate training and for the creation of doctoral training provision in programmes that already operate at Master’s level. In so doing, it seeks to maintain high minimum standards for postgraduate education, protect students against poor quality provision and ensure efficient targeting of public funding. During the review visits, the OECD team noted a high degree of support for the principle of maintaining a high threshold for entry into the academic postgraduate education system.

Effectiveness: quality indicators used or generated

CAPES evaluations of postgraduate courses and programmes rely to a large extent on qualitative assessments undertaken through peer review. These qualitative assessments, which may take into account quantitative data, are ultimately translated into a single score (conceito) attributed to courses and programmes. Standing evaluation committees composed of Brazilian academics in specific disciplines are established for 49 scientific fields. The work of each committee is coordinated by a “field coordinator”, elected by the academic programmes in the field in question for a renewable term of four years. The field coordinator and the members of the field committee undertake their work in the CAPES evaluation processes on a voluntary basis, alongside their main academic jobs.

The field committees are responsible for assessing new courses and undertaking the periodic reviews (discussed below). Following initial eligibility checks by CAPES staff, the relevant field committee assesses proposals for new courses using a standard set of criteria for the field. The topics assessed follow a standard model developed by CAPES, but with specific assessment and judgement criteria, and the weighting of individual variables, tailored to each field by the field committee in question. The field committee collectively assesses proposals in relation to the assessment topics and assessment criteria to provide ratings for each topic on a five-point scale from “very good” to “deficient”. The ratings for the different topics are combined to generate an overall assessment score for the programme. Programmes that are finally approved are attributed an initial default CAPES rating (Conceito CAPES) of three out of five (three being the minimum quality threshold required). Programmes are only attributed higher scores following a periodic review.
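To make the aggregation step more concrete, the short sketch below illustrates one way in which topic ratings on the five-point verbal scale could be converted to points and combined with field-specific weights into an overall score. The topic names, weights and point values are hypothetical illustrations only and do not reproduce the actual rules used by any field committee.

```python
# Illustrative sketch only: hypothetical topics, weights and point conversion,
# not the actual values used by any CAPES field committee.

RATING_POINTS = {"very good": 5, "good": 4, "fair": 3, "weak": 2, "deficient": 1}

# Hypothetical field-specific weights for the assessment topics (sum to 1.0).
WEIGHTS = {"proposal": 0.3, "staff": 0.4, "infrastructure": 0.3}


def overall_score(ratings, weights=WEIGHTS):
    """Combine per-topic verbal ratings into a weighted overall score on the 1-5 scale."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[topic] * RATING_POINTS[rating] for topic, rating in ratings.items())


# Hypothetical ratings attributed by a committee to one proposal.
example = {"proposal": "good", "staff": "very good", "infrastructure": "fair"}
print(round(overall_score(example), 2))  # 4.1 on the 1-5 scale
```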

As illustrated in Table 6.1, the evaluation criteria for academic courses (academic Master’s and doctorates) for new course proposals include the relevance of the new course to national and institutional development; the design and proposed scale of the course; the qualifications and scientific output of the staff involved and their planned involvement in the course; and available infrastructure. The same broad topics are also used to evaluate proposals for new Professional Master’s courses, although the specific evaluation criteria are modified to take account of the more practical orientation of these courses and their closer links to the world of work. CAPES regulations (CAPES, 2017, art. 6[4]) specifically highlight that professional programmes are likely to require a different staff profile. They recommend giving weight to the professional experience of teaching staff, even if they do not hold a PhD, and reducing the emphasis on staff being full-time, given that many will teach alongside other professional roles.

Table 6.1. Criteria for evaluation of new postgraduate course proposals

Fit with institutional development plan

  • Consistency of proposal to the Institutional Development Plan (PDI) of the proposer and commitment of the institution's leaders to the initiative

Consistency of proposal and programme design

  • Staff qualifications
  • Relevance of ongoing research activities by the team proposing the programme
  • Appropriateness of curriculum structure to subject of programme
  • Clarity of proposal in relation to students and planned graduate profile
  • Selection criteria for students
  • Number of study places (vagas)
  • Fit of graduate profile with national priorities and needs

Academic capacity

  • Evidence that the team proposing the programme has academic, didactic, technical and/or scientific competence and qualifications linked to the objective of the proposal

Permanent staff

  • Demonstration that an adequate number of permanent staff with exclusive dedication are allocated to the programme and will be able to deliver the type and volume of training proposed

Scientific output of staff

  • Indication of a maximum of five research outputs for each permanent staff member for the last five years

Infrastructure

  • Adequacy of the educational and research infrastructure: physical facilities, laboratories, experimental facilities and library
  • Adequacy of computer equipment, network access and multimedia information sources for teachers and students
  • Adequacy of secretarial infrastructure and administrative support

Source: Adapted from Portaria CAPES nº 161/2017, article 4 (CAPES, 2017[4])

The field committee assesses these dimensions based on the electronic application from the proposing institution (submitted through the CAPES Sucupira platform), with the option to request a site visit if considered necessary. The OECD team understands that site visits as part of initial course approval, although formally provided for, are comparatively rare in practice6.

The criteria examined in the process for approval of new courses cover a wide range of the variables that might reasonably be expected in an ex-ante assessment of a proposed postgraduate programme. The criteria focus on most key factors that might be expected to contribute to the quality of the future training provided. Nevertheless, there is scope to review, and certainly to substantiate better, the prominence and weight attributed in the evaluation template to the different factors considered.

The current evaluation system attaches considerable weight to the status and intellectual outputs of the staff who will be involved in the proposed course. This is entirely consistent with the objective of checking that adequate conditions are in place to allow students to have access to knowledgeable teachers and mentors and receive their training in an environment where high quality research is undertaken and valued. This reflects a model of academic postgraduate education that views research culture and peer effects among individuals involved in research as key contributors to the learning and scientific development of students. Placing such emphasis on these factors, however, risks attributing too little attention to other variables affecting the quality of the training offered, particularly at Master’s level and in professionally oriented programmes.

The key concerns of the OECD team relate to the comparatively limited attention attributed to, first, the relevance of new courses to national or regional needs and developing knowledge areas and, second, the design of the training programme, and support and personal development opportunities offered to students.

Under the section dealing with students, the existing standard evaluation template for new courses includes an assessment of the relevance of the “graduate profile” the proposed course is intended to generate – in other words, the types of knowledge and skills graduates are expected to possess (this is also a consideration in the evaluation instruments used in SINAES (INEP, 2017[5])). However, this fundamental issue is considered under the same broad heading as practical issues like student selection and the number of study places. Although the coherence of the proposed course with the Institutional Development Plan (PDI) of the host institution is assessed, there is no explicit assessment of the relevance of the course to the needs of Brazil, in terms of knowledge development and highly qualified human resources. There is no obvious place in the current framework where the contribution of new courses to new or emerging fields of knowledge is assessed.

These problems are compounded by the fact that assessments are carried out exclusively by academic staff from a specific discipline, using largely traditional measures of academic performance. While academics in a given field may be expected to have a good understanding of the developments in that field in an international context, particularly in less applied areas, they may, understandably, have less understanding of how knowledge and skills in the field can contribute to national development goals or respond to societal challenges. There is scope to include more perspectives from non-academic bodies in this aspect of the assessment.

Similarly, although five of the 49 academic fields are nominally classified as inter-disciplinary (biotechnology, environmental science, education, material science and “inter-disciplinary”), the strong focus on traditional disciplines and scientific output in these disciplines may create barriers to new courses in innovative fields of study that may ultimately be important for the future of Brazil’s postgraduate training system. The risks of working in disciplinary silos are by no means unique to Brazil, but do warrant further attention in the way CAPES evaluations are structured and organised.

The second key issue that deserves greater attention in the assessment of new programmes is the organisation of training and support for students. At present, the CAPES evaluation template includes an assessment of the “appropriateness of curriculum structure to subject of programme”, but little obvious room to assess how the training programme will help to develop students’ knowledge and skills and monitor their progress. Across the OECD, higher education institutions and research funding bodies have increasingly focused on developing postgraduate training with a greater explicit focus on helping students to acquire research skills and transversal competences (in collaborative working, communication, project management, entrepreneurship, for example) that they can exploit subsequently in a wide range of settings. Evaluation systems in other higher education systems do place more emphasis on these issues7.

In the discussion of quality indicators, it is important to acknowledge that the current CAPES assessment system has developed distinct criteria to be applied in the evaluation of Professional Master’s courses. In particular, the criteria for this type of course take account of the different staff profile required to successfully implement more applied forms of training. Between 2010 and 2017, the number of Professional Master’s courses in Brazil increased from 247 to 703, suggesting that the authorisation system is functioning for this type of provision. However, developing appropriate quality criteria for applied research and postgraduate programmes has proved challenging in all OECD higher education systems and there is certainly scope for ongoing mutual learning. Within Brazil, it is important to monitor the implementation of existing Professional Master’s programmes, to discuss strengths and challenges with programmes, students and industry and public sector partners and to ensure lessons learnt feed back into the evaluation indicators used.

A final consideration about the indicators used in assessment of proposals for new courses is the absence of an explicit requirement for a course or programme development plan with measurable, time-bound targets. Requiring programmes to develop such a plan and establish clear targets would create a useful reference for subsequent periodic reviews.

Effectiveness: division of responsibilities

A defining characteristic of the CAPES evaluation system is the strong role of academic peers in both defining the evaluation criteria and undertaking programme evaluations. Field coordinators and committees have leeway to adapt commonly agreed evaluation templates to the requirements of their specific fields, by defining field-specific assessment criteria and adapting weightings between broad evaluation criteria. In practice, field committees stick very closely to the standard evaluation template, but adjust specific evaluation criteria for individual topics. The strong involvement of the academic community in both policy-setting and implementation, as well as the flexibility afforded to field committees in the process, have contributed to the widespread acceptance of the CAPES evaluation system and a shared sense of “ownership”. This contrasts with the evaluation processes for undergraduate programmes implemented as part of SINAES, which are widely perceived as top-down.

Despite the strengths of the current division of responsibilities within the CAPES evaluation system, the operation of an evaluation system that relies heavily on the voluntary contribution of academic staff organised in discipline-specific field committees is not without problems.

First, there is the practical issue of the availability and commitment of academic peers. Although academics involved in the CAPES evaluation process consulted by the OECD review team felt the time and effort required of them for the current system for approval of new programmes remained reasonable, they highlighted that the CAPES system as a whole is becoming unmanageable for field committees, as the number of postgraduate programmes increases. We return to this issue in the discussion of the four-year reviews below.

Another potential risk with the current system is that authorisation to start a new academic postgraduate programme depends to a large extent on the opinion of academics who work in “rival” postgraduate programmes in the same field and who may have an interest in restricting expansion of provision to limit competition for students and research funds. In practice, the OECD review team found no evidence that this potential conflict of interests has led to any undue restrictions on the creation of new programmes. The number of postgraduate programmes has increased considerably over the last decade. Moreover, evaluation criteria for new proposals are clear, field committees need to justify their evaluation scores in detail, the final evaluation score is validated by CAPES’ inter-disciplinary Technical and Scientific Council, and transparent procedures exist for proposing institutions to appeal against decisions.

More seriously, as highlighted above, the reliance on disciplinary committees composed exclusively of Brazilian academics risks creating an excessively narrow academic focus in evaluations. While scientific excellence and traditional measures of academic output remain the basis for postgraduate education, it is important to complement assessment of this basis with perspectives from outside academia, to ensure the development of postgraduate education responds to broader national and regional needs. Equally, as highlighted above, it is crucial that there is room for innovation in the definition of study fields and the way programmes are implemented.

On a practical level, the current process for the evaluation of new courses involves limited or no direct interaction between those proposing the new courses and those evaluating the proposals. This may be justified by the limited availability of time and resources and a desire to ensure the evaluation is independent and transparent. Nevertheless, the CAPES evaluation system is notable for being largely “paper-based”. Other quality assurance systems tend to employ site visits, or at least, as in the Programa Nacional de Posgrados de Calidad in Mexico, an interview with the course coordinator as part of the initial authorisation or accreditation process (CONACyT, 2015[6]).

Effectiveness: use and effects

As noted above, a successful CAPES evaluation is a pre-requisite for all new academic postgraduate courses to begin operation. On passing the initial evaluation process, new courses are attributed a provisional evaluation rating of three out of five. On this basis, they have access to CAPES funding for capacity building and student scholarships. Funding for grants is allocated by CAPES to the programme, which is then responsible for awarding scholarships to students.

The results of the entry evaluation for new courses are made public on the CAPES website and are used by courses in their marketing and student recruitment processes. For understandable reasons, the approval of new courses, which are evaluated in varying numbers every year, does not attract as much public attention as the results of the four-year periodic evaluations discussed below and which cover the entire stock of postgraduate programmes.

Efficiency and cost-effectiveness

Academics involved in the CAPES field committees consulted by the OECD review team tended to indicate that the time and financial resources invested in the evaluation of new courses were proportionate to the goals of the system and remained manageable in light of the average number of new proposals received annually. Although the costs associated with initial approval of postgraduate courses have not been made available to the OECD at the time of writing, the absence of systematic review visits and the use of academic field committees who work on a voluntary basis clearly limit costs for the Brazilian State.

The OECD review team understands that no assessment is currently available of the value of the time dedicated by academic staff in the field committees to the evaluation of new courses, and thus of the cost to their home institutions. Given the comparatively rapid expansion of postgraduate provision in Brazil in recent years and the related increase in the number of proposals for new courses, it will be important to develop a better understanding of the number of person-hours used in the evaluation process and the associated costs.

Given the concern in Brazil to maintain a high quality-threshold for entry of new courses to the academic postgraduate system in the country, the existing system of systematic peer review for all new programmes appears to be appropriate in the current Brazilian context. In the longer term, as the scale of the postgraduate system continues to evolve, it may be desirable (or necessary) to move away from programme-level initial accreditation to allow institutions that meet specific conditions and have adequate institutional quality assurance processes to launch academic postgraduate programmes under their own authority. Such models of institutional self-accreditation exist in many mature higher education systems, although the OECD team recognise that such an approach may not yet be appropriate for an expanding system such as that in Brazil.

CAPES: four-year programme reviews

Relevance: rationale and objectives of the current system

Every four years8, CAPES implements a comprehensive evaluation of all academic postgraduate programmes that have already been accredited and been in operation sufficiently long for students to have produced academic results. Although the specific objectives of this process are not formulated very explicitly in the relevant secondary legislation, the four-year reviews appear to fulfil a double role:

  • They provide a means to ensure postgraduate programmes (continue to) meet at least minimum defined quality standards, as programmes scoring less than three out of five lose CAPES funding and the national validity of their diplomas; and

  • The reviews provide an incentive for programmes to strive for improvement – as measured by the defined criteria – as programmes can obtain a higher score (than that awarded in the initial approval process or in the previous round of periodic reviews) and thus greater prestige and, potentially, greater funding.

From a quality assurance perspective, the reviews are in practice very much focused on external assessment and ensuring accountability, with limited or no focus on supporting programmes to improve (quality enhancement).

Effectiveness: quality indicators used and generated

As for the evaluation of proposals for new courses, the four-year reviews are coordinated by CAPES, but undertaken by the 49 field committees under the leadership of their field coordinator. The field committees draw on information on staff, students, graduates and details of scientific outputs reported by each postgraduate programme through the online Sucupira platform as a basis for their assessment of each programme. As in the case of the assessment of new courses, field committees use a standard evaluation grid which they adapt to the specificities of their field, in particular through formulating specific evaluation criteria for each topic and adjusting the weights between topics.

As seen in Table 6.2, the assessment includes a review of the programme proposal and its relevance, although this is not attributed any points in the final score. The criteria relating to staff are similar to those used in the evaluation of new courses, but verified using data from Sucupira. Similarly, most of the criteria relating to students are based on quantitative data reported by the programmes.

The quality of student publications (including published dissertations and theses) and the quality of the academic output of staff in academic journals are assessed using a standard classification of publication “vehicles” (from international peer-reviewed journals to university online publications), recorded in an online database called Qualis. Part of the work of each field committee each year is to review an established classification of publication vehicles relevant for their field and attribute a “quality rating” on a seven-point scale (A1, A2, B1, B2, B3, B4, B5), where A1 typically includes the most prestigious international journals in the field with high impact factors. Citation impact, assessed through mechanisms such as the Scopus database and citation scores such as the h-index9, is a significant criterion in the rating of journals in Qualis in many CAPES fields. However, while the use of impact factors is well-established, but not uncontested, in the hard sciences, there is an ongoing debate in Brazil, as in other countries, about the extent to which such measures capture the impact and relevance of work in the social sciences, humanities and arts.

The development of the Qualis classification database means that publications produced by each programme (and reported in Sucupira) are automatically attributed a quality rating on the basis of the assigned rating of the publication vehicle used. As the Qualis classification is undertaken for each field, the same journal may have a different rating in different fields. Individual fields have also developed Qualis-like rating systems for artistic and technical outputs, although these systems are less well established and more complex to implement.
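As a simple illustration of this field-specific lookup logic, the sketch below shows how reported publication vehicles might be mapped to ratings held in a per-field classification table. The journal names and ratings are invented for the example; the real classification is maintained by each field committee in Qualis.

```python
# Illustrative sketch: per-field ratings of publication vehicles, Qualis-style.
# Journal names and ratings are invented for the example.
QUALIS = {
    "field_x": {"Journal Alpha": "A1", "Journal Beta": "B2"},
    "field_y": {"Journal Alpha": "A2", "Journal Beta": "B1"},  # same journal, different field rating
}


def rate_outputs(field, reported_vehicles):
    """Attribute a rating to each reported vehicle from the field's classification table."""
    table = QUALIS.get(field, {})
    return {vehicle: table.get(vehicle, "unclassified") for vehicle in reported_vehicles}


print(rate_outputs("field_x", ["Journal Alpha", "Journal Gamma"]))
# {'Journal Alpha': 'A1', 'Journal Gamma': 'unclassified'}
```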

Table 6.2. Periodic programme evaluation: criteria for academic programmes

Programme proposal (weighting: 0%)

  1.1. Coherence, consistency, comprehensiveness and currency (atualização) of the priority research fields, lines of research, projects in progress and curricular proposal.
  1.2. Future planning for the programme taking into account challenges for the knowledge field in terms of knowledge production, training, social engagement and the destinations of graduates.
  1.3. Infrastructure for teaching, research and outreach / engagement.

Academic staff (weighting: 10-20%)

  2.1. Profile of the academic staff, considering levels of qualification, diversification in the origin of training, ongoing training and experience, and the compatibility of these with the programme proposal.
  2.2. Adequacy and time commitment of permanent teachers to research activities and the training programme.
  2.3. Distribution of research and training activities among the staff involved in the programme.
  2.4. Contribution of programme staff to undergraduate teaching and/or research activities, paying attention to the repercussion that this item may have on the training of future participants in the postgraduate programme (only when there is a direct link with an undergraduate programme).

Students, theses and dissertations (weighting: 30-35%)

  3.1. Number of theses and dissertations defended in the evaluation period, in relation to the number of permanent teaching staff and the size of the student body.
  3.2. Distribution of the focus of theses and dissertations defended in relation to the profile of teaching staff.
  3.3. Quality of theses and dissertations and contribution of the academic output of undergraduate (if the HEI has undergraduate courses in the area) and postgraduate students to the overall output of the programme, as measured by publications and other indicators relevant to the field.
  3.4. Efficiency of the programme in training students: time taken for graduation of Master’s students and doctoral candidates.

Scientific outputs (weighting: 35-40%)

  4.1. Quality-rated publications per permanent staff member.
  4.2. Distribution of quality-rated publications in relation to the permanent teaching staff of the programme.
  4.3. Technical output, patents and other outputs considered relevant.
  4.4. Artistic outputs, in areas where such output is relevant.

Social engagement and impact (weighting: 10-20%)

  5.1. Insertion and regional and/or national impact of the programme.
  5.2. Integration and cooperation with other research and development programmes and professional development related to the area of knowledge of the programme, with a view to the development of research and postgraduate studies.
  5.3. Visibility or transparency given by the programme to its performance.

Source: CAPES (2017) Regulamento para a Avaliação Quadrienal 2017 (2013-2016) Programas acadêmicos e profissionais. (CAPES, 2017[7])

In contrast, no standardised classification system exists for books or book chapters published by academic staff (or students) in programmes. This means that, for fields where books are a major vehicle for intellectual output, field committees have to assess books and book chapters individually. Typically, programmes are invited to identify books and book chapters that they believe meet particular quality criteria established by the field committee, and all of these are then screened rapidly. Books and book chapters identified as having particular merit are then read in full by members of the field committee. The OECD team understands that the assessment of books and book chapters represents one of the largest calls on the time of members of some field committees (notably in the humanities, social sciences and some of the hard sciences).

The social engagement and impact (inserção social) of programmes is reviewed in a qualitative fashion on the basis of documentary evidence submitted by programmes. Some of the field committees examine the destination of graduates under the topic of ‘Insertion and regional and (or) national impact of the programme’. However, it is not clear how this assessment is made and whether it is based on systematic surveys of graduate destinations.

Considered in the round, the set of indicators used in the CAPES four-year evaluations covers many of the key variables that would widely be assumed to contribute to high quality postgraduate provision. It is positive that the evaluation grid, under different headings, takes into account factors such as staff-to-student ratios, time to graduation and cooperation networks with external research and non-academic organisations, for example.

However, the most striking feature of the four-year reviews is the strong focus on the scientific output of the academic staff involved in the programmes being evaluated. As noted earlier in this chapter, the presence of competent researchers is crucial to the capacity of programmes to share subject knowledge and research expertise with students and to create an environment conducive to students undertaking their own high-quality research. Yet the CAPES evaluation is, nominally at least, an evaluation of postgraduate training programmes, not a research performance evaluation like the Research Excellence Framework (REF) used in the United Kingdom. As such, it is reasonable to ask why the system does not allocate less weight and fewer resources to assessing the performance of staff, and more to assessing the performance of students and the outcomes of graduates.

The current system does attempt to measure the quality of dissertations and theses and other papers and outputs produced by students. However, this assessment is in most cases based on proxies provided by a notional quality rating attached to the “vehicles” in which the dissertations or theses (or derivatives thereof) are published. It is questionable whether it is reasonable to expect Master’s students, or even doctoral candidates, to publish outputs in journals at a similar level to established academic staff. While postgraduate students do publish in high-quality academic publications, those who do so are typically a minority in most established higher education systems. Other forms of publication, such as non-peer-reviewed online journals, do not necessarily provide a reliable guarantee of quality.

There is no easy solution to these problems in a system like the current CAPES four-year reviews. Quality assurance systems which rely on on-site review visits (principally at the Master’s level), do sometimes involve a qualitative review of a sample of student dissertations. Other systems rely on other mechanisms to ensure the quality of postgraduate student outputs – essentially placing trust in standard processes. Several English-speaking countries rely heavily on external marking (by academic peers) of papers and dissertations at Master’s level, as a means to assure quality across the system. Many systems – including Brazil – insist doctoral theses are peer reviewed and finally approved by defence panels composed of leading academics in the field.

As noted, there is also some attempt in the current CAPES system to assess the destinations of graduates from programmes. However, on the basis of available evidence, this aspect of performance is not currently addressed adequately. The ability of graduates from programmes to find relevant employment in, or outside, the academic sector and draw on their skills must be assumed – in part at least – to reflect the quality of the training they have received. It would be desirable to further develop systems in Brazil to allow graduates to be tracked and to include graduate outcomes more prominently in the postgraduate training evaluation system.

A final issue that deserves attention in this discussion of indicators is the way in which field committees identify and assess programmes deemed to be of international quality or excellence, with strong internationalisation and international engagement (inserção internacional). These are attributed CAPES scores of 6 or 7 and subsequently have access to additional resources. Each field committee is responsible for establishing transparent criteria for allocating these top scores. In all cases, programmes are initially scored on a scale of one to five, and programmes with doctoral provision that score five (and score ‘good’ or ‘very good’ on all other criteria) are then assessed against additional criteria understood to indicate international excellence. Common criteria include the amount of external research funding attracted by the programme, the number and intensity of international cooperation activities and the proportion of outputs published in international journals.

Given the assumed link between internationalisation and academic excellence, the principle of making achievement of the highest scores for academic postgraduate programmes dependent on objective measures of international activity appears sound. Although the rigour and appropriateness of the indicators used to measure internationalisation may vary between fields, the types of measures used appear generally to be appropriate, objectively measurable and comparable to indicators of internationalisation used in other higher education systems. In the 2017 four-year CAPES evaluation, 184 programmes (5.3% of academic programmes) achieved a score of seven and 298 (8.6%) programmes achieved a score of six. It is credible that a higher education system of the size and maturity of Brazil’s would have such numbers of programmes that could be considered of high quality in an international context. As discussed below, there is scope to bring more international perspectives into the assessment of quality and the determination of whether programmes genuinely deliver international standards of excellence.

Effectiveness: division of responsibilities

As discussed earlier in this chapter, the reliance of CAPES on peer review is both a strength, for the acceptance and credibility of the system in the academic community, and a potential risk factor, as the scale of the postgraduate training system in Brazil expands and increases the burden of undertaking peer evaluations. CAPES has hitherto been successful in attracting and obtaining the commitment of well-regarded Brazilian academics to work as part of its field committees, including for the four-year reviews. However, some of the academics involved in CAPES evaluations interviewed by the OECD review team expressed concern that it was becoming harder to engage academics and that, from a purely logistical perspective, the system of peer review as currently configured was no longer sustainable.

A second key issue in the current staffing of CAPES evaluation processes is the risk of endogamy (inbreeding). Even in a country the size of Brazil, and particularly given the relatively small size of its postgraduate training system, the number of established academics in a given field of study is limited. The number working in very high-quality departments and programmes at an international level is even smaller. As such, there is a risk that the people making judgements on whether or not a given programme is of international standard have close connections with the programmes they are judging. It is likely that their appreciation of the relative merits or deficiencies of the programme is conditioned by the tight-knit academic community of which they are a part. Moreover, the comparatively small pool of evaluators and their background may lead the evaluation process to reward programmes that reproduce existing models of education, rather than innovate.

It would be beneficial to bring the perspective of international peers into the CAPES evaluation process, particularly for assessment of those programmes judged to be of international standing. A large-scale involvement of international peers in assessment would almost certainly be impractical because of the costs involved, the difficulty of securing participation and language issues. Nevertheless, some targeted involvement of academic peers, including through electronic communication, from outside the country may be feasible.

Effectiveness: use and effects

The results of the four-year CAPES evaluations have far-reaching effects. For programmes that fail to meet the minimum quality standard of three out of five, the evaluation essentially leads to the closure of the programme. Programmes failing to achieve a score of three lose their right to CAPES funding and see the national validity of their diplomas withdrawn. It is understood that students in programmes that achieve scores below three typically have to transfer to other programmes to complete their studies.

The national research funding council, the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), also takes into account the association of researchers with particular evaluated programmes in the assessment of applications for individual research grants, including the “Research Productivity Grants” (Bolsas de Produtividade em Pesquisa).

Efficiency and cost-effectiveness

The costs of the current CAPES evaluation system for CAPES itself are comparatively modest owing to the reliance on the voluntary participation of academics in the field committees. However, as the system evolves, it will inevitably have to find ways to maintain the value of peer involvement in the evaluation, while reducing the sheer volume of work required to evaluate each programme.

The use of Qualis appears to be an efficient and relatively effective way of providing information on the broad quality of a proportion of the scientific output of programmes. In contrast, the time and effort dedicated to the review of books and book chapters by some of the field committees seems disproportionate to the information about the quality of programmes that is obtained from the exercise.

In parallel, however, the CAPES evaluation system is notable for the absence of site visits to programmes or interviews with programme coordinators and for its almost exclusive reliance on domestic peers for its review work. These features of the system keep costs down. Any future decision to increase the use of visits and interviews, or the involvement of international peers, is likely to add to the costs of the system.

6.3. Key recommendations concerning CAPES evaluations

1. Adjust the weighting of evaluation criteria in assessment of new courses to focus more on relevance, training and continuous improvement

The approval of new postgraduate courses through the systems of peer review currently in place creates an effective mechanism for assuring the quality of new academic postgraduate education in Brazil. Nevertheless, the OECD review team considers the current evaluation process for new courses could be improved by adopting the following modifications:

  • Revise the structure of the evaluation fiche for new courses to create a more transparent structure that follows the intervention logic for postgraduate training programmes, moving from inputs (including institutional context, supervisory staff and facilities) to processes (programme structure, approaches to incorporating practical experience, methods for supervision, mentoring and assessment) and expected outputs (graduation times and rates, graduate profiles), with a clearly formulated and valid rationale for each indicator used.

  • Include a separate section in the evaluation fiche on the relevance of the programme to national development needs, taking into consideration the development of new scientific areas and the knowledge and skills required for the further development of the country, including in natural sciences, social sciences and the arts.

  • Increase the weight attached in the evaluation of new courses to the training dimension of programmes and support provided to students, with an assessment of the likely capacity of the programme to equip students with relevant research and transversal skills (such as collaborative working, communication, project management or entrepreneurship).

  • Include a more explicit requirement for a programme development plan for all new programmes approved, setting out specific and measurable goals over time. This would act as a reference for subsequent periodic reviews and introduce a clearer focus on continuous improvement. The approach used by CONACyT in Mexico for assessment of programmes for the Programa Nacional de Posgrados de Calidad (PNPC) might provide some inspiration in this regard (CONACyT, 2015[6]).

2. Bring additional perspectives into the evaluation of new programmes

As argued in the preceding analysis, the current field committees undertaking the assessment of new programme proposals are composed exclusively of academic peers from the field in question. To bring a broader range of perspectives to the process and potentially promote innovation and inter-disciplinary cooperation, CAPES should involve one or more academics from other academic fields in the field committees undertaking the assessment of new courses.

In addition, to bring in expertise and perspectives from outside the academic community, CAPES should consider appointing specialists in economic development and the evolution of skills and knowledge requirements, as well as representatives of the private economy and the wider public sector to the Scientific and Technical Council (CTC-ES). If implemented effectively, this could ensure that final decisions on programme approval take into account broader national needs and developments.

3. Maintain programme-level accreditation in the medium-term, but consider the long-term desirability of transitioning to institutional self-accreditation for established institutions and programmes

Brazil’s postgraduate education system has grown rapidly in recent years and might still be considered to be in a phase of consolidation, when compared to postgraduate education systems in many other OECD and partner countries. In the medium term, it therefore makes sense to maintain course-level accreditation, to maintain oversight of the continued development of the system and ensure the promotion of quality. In the longer term, it could be possible to move to a system of institutional self-accreditation linked to a strengthened model of institutional accreditation (see chapter 7). This would allow universities to start academic postgraduate programmes if they met certain criteria in terms of staff and profile and had been judged to have strong institutional quality systems in an institutional quality review. The provision of publicly funded scholarships and additional programme funding should certainly remain dependent on positive external evaluation of the programme, in line with practice in many OECD systems.

4. Clarify the objectives of periodic evaluations and rebalance the focus of evaluation criteria to include greater focus on student outputs and outcomes

The periodic (four-year) evaluations of postgraduate programmes currently devote disproportionate attention and resources to assessing the outputs of academic staff. Although the quality of staff is an important factor in the quality of postgraduate programmes, the CAPES evaluations should focus on assessing the conditions for, and performance of, postgraduate training, not the research output of academic departments. The OECD review team therefore recommends:

  • Rebalancing the weighting in the evaluation criteria for four-year assessments, by increasing the weight attributed to educational processes, student outputs and employment outcomes, and reducing the weight attributed to staff outputs.

  • Reducing the time and resources allocated to assessment of staff output and assessing only a limited sample of research output. The Qualis system for journal rankings could be maintained, but should also be reviewed, to introduce more uniformity in the classification of journals between knowledge fields. Less time should be devoted to assessment of individual outputs (particularly books and book chapters). This would contribute to reducing the workload for field committees and making the entire peer review system more manageable in the medium term.

  • Systematically including interviews with course and programme coordinators as part of the periodic assessment of courses and programmes, to gain additional insights into the operation and performance of the programme and answer questions arising from documentary evidence.

If a more detailed research assessment exercise is considered necessary to promote quality in the research function of higher education in Brazil, the relevant authorities should establish this as a separate, but related exercise, with clear and distinct objectives. All activities undertaken as part of the CAPES evaluation processes should focus on ensuring quality and promoting quality enhancement in the postgraduate training system.

5. Ensure those judging whether programmes are of international standing really have an international perspective

Given Brazil’s aspiration to develop a world-class postgraduate training system, it would be valuable to gain an international perspective on the programmes judged nationally to be among the best in the country. The OECD review team therefore recommends that CAPES systematically involve non-Brazilian academics in the assessment of programmes pre-selected by field committees as candidates for being programmes of international quality or excellence. In light of the number of programmes involved, it is likely to be most feasible to concentrate this international involvement on programmes proposed for the top score of seven. It may be possible to organise international peer review committees who are able to review synthesised information about the programmes under review in English or Spanish and potentially conduct group interviews remotely or in person with programme coordinators.

6. Undertake evaluations of specific components of the CAPES system and aspects of academic postgraduate provision as inputs to future policy

The OECD review team identified two specific issues where further information and analysis appears to be required in order to plan future policy for academic postgraduate education in Brazil and its external quality assurance:

  • First, the full costs associated with the current system of external peer review are a “black box”. Peer review is inherently time-consuming and therefore expensive. The time academic staff spend involved in peer review is time they are not dedicating to their core activities of teaching, research and engagement with society. In order to help plan the future development of the system of peer review, CAPES should undertake an assessment of the cost of the time used by members of the field committees in the evaluation process, including the unit cost per programme evaluation.

  • Second, there is a wider question relating to the future of academic (stricto sensu) Master’s programmes. As noted, Master’s programmes in most OECD countries are now viewed as either purely professional qualifications (as in the United States), or an extension and deepening of undergraduate studies, which prepares students for work in knowledge-intensive sectors (as in most of Europe). A doctorate is regarded as a pre-requisite for an academic or research career in most of the world, including, increasingly, in Brazil. This leaves the question as to what academic Master’s programmes are for. Is the intention that Master’s graduates should go on to undertake a PhD and work in academia, or should they be prepared for work in the wider economy? If the latter is the case, it is questionable whether Master’s programmes should continue to be part of the highly academic and research-focused CAPES evaluation processes (notwithstanding the recommendations about rebalancing above).

    It would be valuable to undertake a systematic evaluation of the role of Master’s education in Brazil, including a specific focus on the profile and effectiveness of the Professional Master’s programmes created in recent years. This evaluation should consider, in particular, the destinations of previous graduates from these programmes and the views of the academic community and private and public sector employers on the relevance and future role for Master’s level education in Brazil.

References

[3] CAPES (2018), Sobre a Avaliação - Objetivos da Avaliação, CAPES website, http://www.capes.gov.br/avaliacao/sobre-a-avaliacao (accessed on 17 November 2018).

[2] CAPES (2017), CAPES divulga resultado final da Avaliação Quadrienal 2017, CAPES Website, http://www.capes.gov.br/sala-de-imprensa/noticias/8691-capes-divulga-resultado-final-da-avaliacao-quadrienal-2017 (accessed on 17 November 2018).

[4] CAPES (2017), Portaria Nº 161, de 22 de agosto de 2017 - Avaliação de Propostas de Cursos Novos, APCN, de pós-graduação stricto sensu. (Ordinance 161 of 22 August 2017 - Evaluation of Proposals for New Programmes (APCN), postgraduate stricto sensu), https://capes.gov.br/images/stories/download/legislacao/30082017-Portaria-N-161-de-22-de-agosto-de-2017.pdf (accessed on 17 November 2018).

[7] CAPES (2017), Regulamento para a Avaliação Quadrienal 2017 (2013-2016) Programas acadêmicos e profissionais, Coordenação de Aperfeiçoamento de Pessoal de Nível Superior, Brasília, http://capes.gov.br/avaliacao/sobre-a-avaliacao/legislacao-especifica (accessed on 17 November 2018).

[6] CONACyT (2015), Programa Nacional de Posgrados de Calidad (PNPC) Consejo Nacional de Ciencia y Tecnología - Marco De Referencia, Consejo Nacional de Ciencia y Tecnología, Mexico City.

[1] European University Association (2018), University Autonomy in Europe, https://www.university-autonomy.eu/ (accessed on 15 October 2018).

[5] INEP (2017), Instrumento de Avaliação de cursos de graduação Presencial e a distância - Reconhecimento e Renovação de Reconhecimento (Evaluation instrument for undergraduate programmes - classroom-based and distance - recognition and renewal of recognition), Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira, Brasília, http://www.publicacoes.inep.gov.br (accessed on 11 November 2018).

Notes

← 1. The capacity of higher education institutions to provide lato sensu postgraduate programmes is verified through the institutional accreditation and re-accreditation procedures implemented by INEP as part of SINAES. The operation of lato sensu “specialisation” programmes more broadly is governed by a 2007 Resolution of the National Education Council (CNE, 2007[94]).

← 2. Of 32 European higher education systems examined by the European University Association’s ‘Autonomy Scorecard’, only ten require universities to seek prior accreditation to start a doctoral programme.

← 3. Including Federal, State and municipal public institutions and private institutions

← 4. The OECD understands that the maximum score initially attributed to a new course is three. Programmes are ultimately rated on a scale of 1-7, where scores 6 and 7 are reserved for programmes with doctoral provision that are assessed to be operating at an internationally comparable level of excellence. Scores from 4 to 7 can only be attributed following a full periodic review, once the programme is well established.

← 5. Several funding programmes are run by CAPES for postgraduate programmes with a CAPES evaluation score of at least three. For public institutions, the Programa de Demanda Social (DS) provides funding for student grants and the Programa de Apoio à Pós-Graduação (PROAP) provides funding for the programme itself (facilities, project etc.). For private institutions, the Programa de Suporte à Pós-Graduação de Instituições de Ensino Particulares (PROSUP) provides funding for student grants. Programmes in the public and private sectors that achieve a score of 6-7 (which necessarily have a doctoral programme) can obtain further grant funding from the Programa de Excelência Acadêmica (PROEX).

← 6. The relevant guidelines for approval of new programmes for different areas always suggest that a site visit may be conducted, but not that visits are systematically part of initial course approval (CAPES, 2018[98]).

← 7. The Mexican Programa Nacional de Posgrados de Calidad (PNPC), for example, includes a specific criterion on ‘follow-up and academic development of students’ (CONACyT, 2015[99])

← 8. Until 2013, the periodic evaluations were conducted every three years.

← 9. The h-index is an author-level metric that attempts to measure both the productivity and citation impact of the publications of a scientist or scholar. The index is based on the set of the scientist's most cited papers and the number of citations that they have received in other publications. The index can also be applied to the productivity and impact of a scholarly journal as well as a group of scientists, such as a department or university or country.
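As a concrete illustration of how the metric works, the short sketch below computes an h-index from a list of citation counts; the citation counts are invented for the example.

```python
def h_index(citations):
    """Largest h such that at least h publications each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h


# Invented example: six papers with these citation counts yield an h-index of 3.
print(h_index([10, 8, 5, 3, 2, 0]))  # 3
```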
