Chapter 5. Auditing for more effective government-wide monitoring and evaluation

Supreme Audit Institutions play a unique role in supporting monitoring and evaluation across government, both as third-party evaluators of government institutions, processes, policies and programmes, and as evaluators of the evaluators within the Executive branch. Brazil’s supreme audit institution, the Tribunal de Contas da União (TCU), has played both roles. This chapter explores current approaches to evaluation in Brazil’s federal administration vis-à-vis international principles and practices at three levels, looking at (i) the system of monitoring and evaluation across government, (ii) monitoring and evaluation at the entity or programme level, and (iii) communication and co-ordination in improving the culture of monitoring and evaluation. Recommendations are then made to TCU on how it can reorient external control approaches towards supporting more consistent and reliable generation of evidence on policies and programmes in Brazil’s federal administration.

  

5.1. Introduction

Monitoring and evaluation (M&E) involves the systematic collection of evidence on the outcomes of policies and programmes in order to judge their relevance, performance and potential alternatives.1 It is fundamentally about creating and using evidence to inform decisions and better policy making. M&E is not an end in itself, but a management tool for making informed choices about current and future policies and programmes. The information produced through M&E should feed back into the policy cycle, making it possible to determine whether the implementation mechanisms of public policies are working effectively, efficiently and economically.

A recent and promising effort towards more coherent and comparable government-wide M&E stemmed from the 2016 establishment of the Committee on Monitoring and Evaluation of Federal Public Policies (Comitê de Monitoramento e Avaliação de Políticas Públicas Federais, or CMAP) (MP, 2016a; 2016b; 2016c). Given its recent inception, the impact of this Committee on improved M&E is yet to be seen. The effectiveness and objectivity of the Committee could support the restoration of citizens’ trust in the face of increased demands for greater accountability and transparency in recent years. It can also serve to address the various references by domestic and international actors2 to the need for higher quality information and indicators on government performance in Brazil. In OECD member countries, monitoring of the government programme is a top responsibility of the Centre of Government (CoG). The fulfilment of this role, whether through CMAP or otherwise, will be key to the production and quality of comparable results across the public administration.

Brazil’s supreme audit institution, the Tribunal de Contas da União (TCU), is both an evaluator itself of policies and programmes as well as an “evaluator of the evaluators” in government. That is, TCU audits and evaluates the effectiveness of the M&E system and those responsible for it. TCU has historically played a central M&E role from a broader governance perspective in Brazil. Specifically, TCU has conducted audits, produced indices and developed guidance to provide greater accountability for government expenditures and budgetary goals, as well as aid auditors and executive branch managers alike in conducting better M&E. This chapter highlights some of TCU’s work related to the M&E governance function in Brazil, and explores opportunities to further enhance its contributions to a more effective M&E system both at the whole-of-government and entity level. The opportunities are grounded in an analysis of some of the key challenges facing the Brazilian government with regards to M&E, bearing in mind international standards and good practices.

5.2. Overview: monitoring and evaluation at the federal level

M&E serves different purposes, including informing budget decision making, providing information on the actual or likely performance of government programmes, and supporting government planning, such as the development of national plans. In addition, M&E can improve governance by guiding managers’ decision making based on evidence of programmes’ effectiveness and efficiency (World Bank, 2006). Definitions of M&E vary. Monitoring can be seen as the systematic, ongoing collection of data on specified indicators that provides management, and other stakeholders, with an indication of progress towards objectives. Similarly, performance monitoring is a continuous process of collecting and analysing data to compare how well a project, programme or policy is being implemented against expected results (OECD, 2002). Effective monitoring allows governments to change course when problems arise, instead of waiting until the end of a programme, when public resources have been fully expended.

An evaluation is a systematic and objective assessment of an on-going or completed project, programme or policy, including its design, implementation and results. Evaluations help managers and other stakeholders to determine the relevance and fulfilment of objectives, as well as the efficiency, effectiveness, impact and sustainability of a particular initiative. As discussed in greater detail below, an evaluation should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision–making processes (OECD, 2002).

The underpinning concepts of M&E in Brazil’s public administration have evolved from bureaucratic (1980s), to managerial (1990s), to strategic (2000s), as shown in Table 5.1 below. This evolution coincided with developments in the types of evaluation and tools used, as well as with the implementation of various initiatives to strengthen M&E in government. For instance, until the 1980s, M&E within the Brazilian government was primarily limited to the audit function, including financial and compliance audits, until TCU introduced operational (i.e. performance) audits in 1982 (Vaitsman et al., 2013). Such audits can be seen as a form of evaluation, as they go beyond compliance with laws and regulations and offer insight on the effectiveness, efficiency and economy of a particular policy or programme.

Table 5.1. Implementation of the evaluation function in Brazil

| Predominant concept of public administration | Type of evaluation | Important events for M&E |
|---|---|---|
| 1980s: Bureaucratic | Operational audits | 1970: Evaluation of Graduate Programmes; accounting audits. 1980: Yellow Book translated and operational audits begin. 1988: Federal Constitution. 1991-95: First PPA |
| 1990s: Managerial | Managerial monitoring | 1995-99: PPA ‘Brazil in Action’. 1995: State Reform Plan. 1998: Decree 2829 of 28 Oct 1998 (links programmes and PPA). 2001: SIGPLAN created |
| 2000s: Strategic | Strategic evaluation | 2000-03: PPA ‘Go Brazil’. 2004-07: PPA ‘Brazil for All’. 2004: SAGI created by MDS. 2005: GESPÚBLICA. 2008-11: PPA ‘Development with Social Inclusion’. 2009: ‘Citizens’ Charter’. 2012: Access to Information Act. 2012-15: PPA ‘Brazil without Extreme Poverty’ |

Source: Adapted from Vaitsman et al. (2013), https://doi.org/10.1080/13876988.2015.1110962

Much of the driving force behind Brazil’s M&E initiatives is linked to the PPA and efforts to strengthen performance-based budgeting. From the late 1980s, Brazil organised its budget process and activities around multi-year plans (Plano Plurianual, or PPAs), the Budgetary Guidelines Law (Lei de Diretrizes Orçamentárias, or LDO) and the Annual Budget Law (Lei Orçamentária Anual, or LOA). These mechanisms and the budgetary process helped to institutionalise the evaluation function in Brazil’s public administration. For instance, the LDO refers to M&E, noting that executive branch entities should evaluate compliance with quarterly fiscal targets. The current PPA (2016-2019), in particular, intends to translate the government agenda into practical steps for implementation, including responsible actors and indicators for evaluations.

Responsibility for Brazil’s evaluation function is shared between various entities that are both internal and external to the policy or programme being evaluated. The M&E system is meant to unfold on three levels: the first evaluation is undertaken by programme management, the second by the sectoral ministry in verifying the entire programme, and the third by the Ministry of Planning, Development and Management (Ministério do Planejamento, Desenvolvimento e Gestão, MP) in evaluating the plan as a whole (World Bank, 2006). The MP is designated in Decree No 6,601 (2008) as the central body responsible for co-ordinating the M&E processes related to the PPA, as well as for providing methodological guidance and technical support to line ministries and their M&E units (Government of Brazil, 2008a). In addition, the Institute for Applied Economic Research (IPEA), a public foundation affiliated with the MP, provides technical and institutional support to the Brazilian government, including a publication on planning and evaluating public policies in Brazil. This publication represents an excellent resource for TCU to bolster its audits with research, logic models and other tools to analyse the effectiveness of M&E of government programmes (IPEA, 2015).

Trends towards a results-based M&E system and performance-based budgeting have been coupled with increasing expectations on central entities or the CoG to understand what is happening across government. One of the top four responsibilities of CoG institutions in OECD member countries is in monitoring the implementation of the government programme – with cross-cutting programmes on the rise (OECD, 2014a). As part of their responsibility in communicating results and providing accountability, good practice in the CoG includes ensuring that cross-governmental standards exist for reporting and explaining information about policies and results (IADB, 2014).

In Brazil, the MP provides some level of centralisation, but self-evaluation remains a core principle of M&E in the Brazilian government. Units within line ministries have the responsibility for designing and conducting evaluations themselves (Government of Brazil, 2001). These efforts are often linked to specific programmes or initiatives. For instance, the Ministry of Social Development and Fight against Hunger (Ministério do Desenvolvimento Social e Combate à Fome) evaluates social policies and programmes. The Secretariat of Evaluation and Information Management (Secretaria de Avaliação e Gestão da Informação, SAGI) also conducts M&E activities related to key social development policies and programmes. Brazil conducts evaluations on a predominantly annual basis, but sectoral or multi-annual goals of the PPA may be evaluated over a longer time horizon, such as those specifying M&E in sectoral plans like the National Education Plan (PNE).

In addition, in April 2016 the federal government established the CMAP to “improve public policies, programmes and actions, as well as the application of resources and quality of public spending” (Article 1) (Brazil, 2016). This inter-ministerial initiative groups together key CoG institutions: the Ministry of Planning, Development and Management (Ministério do Planejamento, Desenvolvimento e Gestão, MP), the Ministry of Finance (Ministério da Fazenda, MF), the Ministry of Transparency (Ministério da Transparência, Fiscalização e Controladoria-Geral da União, CGU) and the Office of the President of Brazil (Casa Civil) (MP, 2016a; 2016b; 2016c). In looking at cross-cutting policies and programmes, the first initiative of CMAP was to evaluate social programmes – for instance, examining the presence of fraud in Bolsa Família (Veja, 2016).

For the achievement of its objectives, the CMAP can (MP, 2016a; 2016b; 2016c):

  • select policies, programmes or initiatives for evaluation and propose guidelines for their M&E (Article 3);

  • establish thematic commissions for M&E (Article 3);

  • request information and the opening of databases used for M&E, save for instances of financial secrecy (Article 5); and

  • propose guidelines to entities responsible for M&E to facilitate its work (Article 6).

Article 7 stipulates that the activities of the CMAP do not replace the M&E activities developed at the entity and programme level. Clarifying the interaction between the central and entity levels will be important, given that Brazil’s M&E system has traditionally been based on entities’ self-evaluation (World Bank, 2006). The CMAP initiative is the latest step in the evolution of Brazil’s M&E system, and appears to be a departure from the historical tie of M&E to the budget.

TCU is a key M&E actor in Brazil through the fulfilment of its mandate to examine government accounts and to perform supervision and audit of an accounting, financial, budgetary, operational and patrimonial nature (Constitution, Article 71). Its accountability function is part of a wider network of other constitutionally appointed bodies, such as the Office of the Comptroller General (Controladoria-Geral da União, or CGU), and operates in conjunction with constitutional mechanisms that allow for citizen engagement and feedback on programmes and services. Civil society and co-ordination mechanisms also play a part in strengthening M&E in Brazil. Brazil’s M&E Network (Rede Brasileira de Monitoramento e Avaliação), a chapter of the broader Latin American and Caribbean (LAC) Monitoring and Evaluation Network established by the World Bank and the Inter-American Development Bank (IDB), promotes dialogue between various M&E actors, including the Parliament and civil society organisations. The Network functions as a community of practice for M&E practitioners and knowledge sharing, particularly through the use of a collaborative online platform (Rede Brasileira, 2016). Like audits, which can have a coercive effect on government entities, such efforts can help to promote coherence and effective M&E through the institutionalisation of standards and good practices. Moreover, ombudsmen provide an important avenue for civil society to provide feedback and submit complaints related to government programmes and services.

Through their audit and advisory work, supreme audit institutions (SAIs) like TCU are helping executive branch entities to improve the design and implementation of M&E policies and practices at both the whole-of-government level and the entity level. Their audits, reviews and advisory work can provide government entities and policy makers with evidence about which policies and programmes are working, and why. For instance, in an OECD survey of 10 SAIs, 7 had assessed the existence of a reasoned evaluation programme in each ministry, including: the mechanisms for ensuring reliable, quality, auditable financial and non-financial performance information; mechanisms for integrating performance information in objectives; and coherence between objectives, outcomes and the government vision (OECD, 2016a).

In addition, SAIs conduct evaluations themselves, and their work can contribute to solutions for the different challenges that governments face in evaluating results. The OECD’s 2016 report, Supreme Audit Institutions and Good Governance: Oversight, Insight and Foresight, highlights good practices and government trends in M&E, and provides examples of the various ways in which SAIs are helping to improve this function within government (OECD, 2016a). This work underpins the recommendations below. TCU’s current strategic plan includes Objective 8: to encourage the monitoring and evaluation of performance by the public administration. The strategic plan highlights the need for TCU to contribute to the capacity of entities, and to encourage the production and dissemination of objectives, targets and performance indicators that support decision-making processes (TCU, 2015a).

The following recommendations provide nuanced ways in which TCU can achieve its own strategic objectives and can support the federal government in reaching international good governance principles in M&E, such as (OECD, 2016a):

  • Evidence-based decision-making through improved M&E systems at the whole-of-government level;

  • Results-based management through reasoned evaluation at the programme and entity level;

  • Greater information sharing for transparency.

OECD’s recommendations, and expected outcomes, are provided in Table 5.2.

Table 5.2. Recommendations: Auditing for more effective government-wide monitoring and evaluation (M&E)

TCU could strengthen the effectiveness of M&E across government by further emphasising the role of the Centre of Government (CoG) and the need to further standardise results-based management.

  • 1.A. TCU could conduct periodic evaluations of government-wide M&E policies and mechanisms, putting greater emphasis on the CoG’s ability to foster more consistent, results-based management and M&E of cross-cutting initiatives.

  • 1.B. TCU could strengthen efforts to assess the extent and effectiveness of entities in using M&E results for decision making.

TCU could help entities to improve institutional and programme evaluations by auditing their readiness, capacity and indicators to manage for results.

  • 2.A. TCU could periodically conduct readiness assessments of selected entities to evaluate the effectiveness and maturity of their M&E and their culture of results-based management.

  • 2.B. TCU could strengthen reviews of entity and programme results, particularly indicators, for assessing their value for decision making and achieving medium to long-term goals.

TCU could assess the communication and coordination mechanisms, including interoperability of information and data systems, to improve M&E of cross-cutting government policies and programmes.

5.3. Emphasising the role of the Centre of Government in strengthening M&E systems from a government-wide perspective

TCU could strengthen the effectiveness of M&E across government by further emphasising the role of the Centre of Government and the need to further standardise results-based management.

Brazil’s evaluation function is shared between various entities that are both internal and external to the policy or programme being evaluated. The current M&E system in Brazil is decentralised. The MP provides some level of co-ordination and centralisation, yet units within line ministries retain the responsibility for designing and conducting evaluations (Government of Brazil, 2001). Moreover, government-wide M&E policies are evolving, but efforts may be ad hoc or vary by sector as well as across states (Rosenstein, 2015). The 2016 establishment of CMAP, involving central government entities (MP, MF, CGU and Casa Civil), may help to systematise M&E across government.

The variance in approaches in M&E has implications for standardisation and institutionalisation of M&E practices, as well as comparability of results. On the one hand, the flexibility afforded to units and line ministries is important to ensure that results generated from M&E are tailored enough to speak to the successes and failures of particular initiatives. On the other hand, standardisation and institutionalisation of M&E across government has a myriad of benefits, including comparability of government performance and safeguarding initiatives against undue influence. Indeed, political sensitivities and special interests can undermine robust M&E systems, since underperforming or failed initiatives can tarnish reputations of politicians or high-level officials. In such circumstances, incentives for deemphasising potential problems or minimising publicity of failures could result in indicators that are unreliable, as well as inaccurate assessments that convey overly positive results.

As mentioned above, the evolution of M&E in Brazil has largely been linked to the PPA and efforts to strengthen performance-based budgeting. In 2010, the Brazilian government published a decree (N° 7.133) that established guidelines, general criteria and procedures for reviewing individual and institutional performance (Chamber of Deputies, 2010). The decree notes that institutional performance evaluations aim to assess organisational goals and targets in line with the PPA, the LDO and the LOA, and refers to both “global” and “intermediate” targets (Chamber of Deputies, 2010). The decree provides some general guidelines for conducting evaluations and using their results, such as the need to use objective criteria and publish results, yet key M&E practices and concepts are not readily apparent or explicit as a framework. Moreover, much of the decree’s discussion of M&E concerns individual (as opposed to institutional) performance, in the context of performance-based remuneration of government employees.

Brazil’s central institutions have made some efforts to streamline results-based M&E standards that are not directly linked to the PPA, with the MP providing guidance for executive branch managers to assist in M&E. For instance, in 2013, the MP recognised that evaluating performance and results was largely done upstream in the policy formulation process. In response, the Secretariat for Public Management (Secretaria de Gestão Pública, or SEGEP) issued a performance evaluation manual to promote the use of performance management as a tool for driving continuous improvement in results. The manual aims to guide public sector managers, public servants and employees of entities in the Federal Public Administration, among others, on how to operationalise performance evaluation (MP, 2013).3

Brazil’s CoG entities could continue to promote the institutionalisation and standardisation of M&E as a concept that is broader than performance-based budgeting. The CMAP initiative intends to serve this purpose. To further promote broader applications of M&E, guidance issued by CMAP (in accordance with Article 6) or through other initiatives could better communicate the importance of conducting and using M&E to (i) better support government planning, (ii) help the ongoing management of government programmes and activities and (iii) underpin accountability (World Bank, 2006). Amongst OECD member countries, monitoring progress of reforms is one of the top four priorities of the CoG. This role means developing new mechanisms that emphasise outcomes rather than just tracking expenditures. Another key role of the CoG is strategic planning and steering of the government programme in areas that extend beyond the budgeting process. Indeed, international good governance principles promote national steering and planning that is evidence-based, integrating findings from M&E processes, foresight activities and stakeholder consultation (OECD, 2016a; 2014a).

The decentralised M&E system in Brazil also has implications for the evaluation of cross-cutting programmes that serve national goals. CoGs require reliable evidence on performance across government that allows for some degree of comparability of results from manager- and entity-level evaluations. For cross-cutting initiatives in which multiple ministries or actors are involved, a central entity or lead ministry needs the capacity to co-ordinate and evaluate. In Brazil, lead or sectoral ministries may be responsible for oversight of particular cross-cutting programmes, as is done in other OECD member countries. However, until the establishment of CMAP, no single entity had sole responsibility for monitoring or evaluating cross-cutting goals. The entities that comprise the CMAP – MP, MF, CGU and Casa Civil – are now mandated with the evaluation and assessment of cross-cutting policies, namely in the social area. If CMAP is effective in carrying out its mandate, it can provide a horizontal view of the performance of select government-wide initiatives that was previously lacking. Such information, if reliable and of high quality, could be used to inform decision making at the CoG.

In view of cross-cutting initiatives and interrelated policy goals, such as those included in the Sustainable Development Goals (SDGs), Brazil could continue to develop its national, government-wide view of M&E, considering opportunities to develop guidance that goes beyond the PPA and builds on existing guidelines for institutional performance evaluation. A national view would bear in mind the coherence between different approaches to M&E, the fluidity of information flows, and the utility of open data and information for policy makers, particularly in relation to cross-cutting initiatives. This government-wide view is especially relevant in the coming years, as the government improves and creates national policies and programmes to achieve the SDGs, particularly SDG 17 on data gathering and monitoring.

What can TCU do to enhance its contributions to an improved M&E system at the government-wide level? SAIs like TCU are assessing the effectiveness and efficiency of government-wide evaluation systems. TCU can use its existing audits and frameworks, and orient future work, to support a more systematic, results-based M&E system across government. More specifically, TCU could support M&E across government by taking the following actions:

  • TCU could conduct periodic evaluations of government-wide M&E mechanisms and policies, putting emphasis on the CoG’s ability to foster more consistent, results-based management and M&E of cross-cutting initiatives; and

  • TCU could assess the use of entity and programme results for decision-making across government.

TCU could conduct periodic evaluations of government-wide M&E policies and mechanisms, putting greater emphasis on the CoG’s ability to foster more consistent, results-based management and M&E of cross-cutting initiatives.

In an OECD survey of ten SAIs, seven had assessed the following elements of the entire system: the mechanisms for ensuring reliable, quality, auditable financial and non-financial performance information; mechanisms for integrating government-wide monitoring and evaluation with strategic planning; alignment with international good practices; and alignment with key national indicators (OECD, 2016a). Given the historical linkages between the PPA and M&E in Brazil, much of TCU’s work touching on M&E has necessarily done so in the context of evaluations and audits of budgetary processes and related budgetary policies and guidelines. For instance, the “Survey of monitoring and evaluation systems in the direct administration” aimed to characterise the extent to which M&E practices are institutionalised. TCU analysed in detail the M&E of the Multi-Year Plan (the PPA) and the evaluation systems identified in sectoral bodies (TCU, 2011).

TCU has also conducted audits related to the policy formulation stage and to the MP as the central co-ordinator of M&E. In one example, TCU’s “Audit of government evaluation maturity indexes” assessed the maturity index used for the evaluation of government programmes in the direct administration of the Federal Executive (TCU, 2014b). TCU also contributes to more effective M&E in government at the entity level by “evaluating the evaluators.” For instance, TCU undertook a regulatory governance review of infrastructure regulatory agencies, assessing the maturity of decision-making processes in the energy, communication and transport regulatory agencies. This review included verification of the independence of directors, the management of conflicts of interest and transparency in decision making (TCU, 2015b). Another example of TCU evaluating the evaluators is its review of the supervision policies of the Central Bank, identifying priorities in a range of areas, from governance to internal controls (TCU, 2015c).

These examples demonstrate that TCU has taken a systemic view of the functioning of M&E that goes beyond assessment of M&E in the budget context and at the entity or programme level. This is consistent with TCU’s explicit recognition of M&E as a key function of policy making, demonstrated in TCU’s Framework to Assess Public Policies (TCU, 2014a) and its Framework on the Centre of Government (TCU, 2016a). The latter focuses on the role of central institutions in key functions, and provides key questions on the role of the CoG in M&E for audit teams to consider. For instance, it asks how the role of monitoring policy implementation and evaluating government performance is exercised, taking into account the coherence of government actions, prompting auditors to verify (i) whether there is a central government organ responsible for monitoring the policy priorities of the government, (ii) whether monitoring is done in a way that ensures consistency between government actions, and (iii) whether monitoring efforts prioritise the commitments in the government’s plan (TCU, 2016a).

The principles reflected in TCU’s existing frameworks provide beneficial guidance for auditors, but TCU could place greater emphasis on applying them consistently and systematically in its audits as criteria. Doing so will help to ensure that M&E is considered not only at the entity level, but also at the whole-of-government level. In addition, TCU could conduct periodic audits of cross-cutting M&E policies, focusing on the CMAP as its activities evolve, to strengthen consistent government-wide M&E. Specifically, TCU could complement its existing frameworks with additional criteria for directly assessing the responsibilities and capacity of the CoG in co-ordinating and overseeing the rollout of M&E across government, as well as in improving its own evaluation of cross-cutting programmes. The example questions below are inspired by the key themes that the UK’s National Audit Office uncovered in its audits of the CoG. Such questions could help TCU to focus its audits on the CoG’s role, capacity and the effectiveness of its policies and mechanisms (NAO, 2014):

  • Does the centre have a clear vision for how government should operate and is it demonstrating the leadership required to achieve that?

  • What are the roles and responsibilities of central entities in the M&E system in law, and how are they being fulfilled in practice?

  • How well does the centre incentivise departments to act in ways that promote overall government effectiveness?

  • How well does the centre incentivise departments to conduct rigorous and objective M&E?

  • What constraints do central institutions have in providing effective co-ordination, guidance and oversight in M&E?

  • Does a policy basis exist to incentivise coordination and communication across entities with regards to the results of M&E?

  • To what extent, if at all, are M&E policies fragmented, overlapping or duplicated? To what degree, if at all, do they cause confusion for entities and programme managers about their responsibilities in M&E?

The questions above can inform research objectives and criteria to aid TCU in strengthening M&E across government by promoting consistency of M&E activities. Systematic application of such criteria can lead to more consistent results on performance across government. In addition, through a greater emphasis on government-wide M&E, TCU could help the CMAP to identify ways to foster more comparable results from entities that extend beyond performance-based budgeting, considering other applications of M&E. TCU could do this by honing new audits towards the role of the CoG in formulating M&E policies, and synthesising key findings from existing work to inform the CoG’s understanding about areas for improvement to standardise and institutionalise M&E. TCU could also assess the CoG for its role in monitoring progress with reforms and ensuring departmental work plans reflect government-wide strategic priorities.

TCU could strengthen efforts to assess the extent and effectiveness of entities in using M&E results for decision making.

Effective M&E helps to ensure that decisions, trade-offs, policies and programmes at the central, entity and programme levels are balanced and evidence-based (OECD, 2016a). It enables all actors to understand where gains can be made in efficiency, effectiveness and economy. This is particularly important in times of fiscal consolidation. Moreover, central government entities, like those that make up CMAP, require reliable, objective and comparable information in order to monitor the implementation of government priorities. The emphasis that the CoG itself, or the lead ministry of a cross-cutting initiative, places on gathering quality results will shape the M&E behaviour of entities and programme managers.

TCU’s approach to assessing M&E as a key function of government has included analysis of the use of results. TCU’s Framework to Assess Public Policies (TCU, 2014a) proposes that audit teams assess the extent to which “the monitoring and evaluation system of the public policy is properly structured to produce information that provides feedback to decision-making processes, in order to foster learning and the improvement of actions to achieve results.” Supporting questions include the following:

  • Is there a clear definition of the communication flows of evaluative information to promote timely feedback within the public policy cycle?

  • Are sufficient, reliable and relevant data available to support performance reporting on the policy?

  • Does the policy’s M&E system have the processes, procedures and resources (e.g. financial, people and structures) needed to ensure that M&E activities provide the reliable and timely information necessary for decision making?

Further, in 2013, TCU conducted an audit of 27 federal agencies to assess the maturity of M&E systems across government. The maturity index (iSA-Gov) quantified the level of institutionalisation of systems, looking at four elements: (1) demand for evaluation, (2) production of ‘evaluative knowledge’ (results), (3) organisational learning capacity and (4) use of ‘evaluative knowledge’ (results). Managers who responded to the survey considered themselves to have a high capacity to use results (TCU, 2013). While the iSA-Gov tool found that the use of evaluative knowledge was, on average, “present, sufficient and satisfactory to meet the needs of the actors,” other TCU audits suggest room for improvement in this area – particularly in the use of performance information for budgeting, which is covered in more detail in Chapter 3.

TCU could place greater emphasis on this aspect of M&E – the policies and mechanisms for using M&E results in decision making. One way TCU could do this is to build on its iSA-Gov tool to focus specifically on the use of performance or evaluative information. GAO’s body of work on “Managing for Results” and its “use of performance information index” could aid TCU in this effort. GAO developed this index to communicate the use of performance information by a random sample of mid-level and upper-level managers and supervisors at 24 federal agencies, comparing the results of survey waves in 2007 and 2013 (GAO, 2014). The set of survey questions that GAO used to develop the index is shown in Figure 5.1.

Figure 5.1. Questions from GAO’s 2013 Managers Survey Used to Develop the ‘Use of Performance Information Index’
picture

Source: GAO (2014), http://www.gao.gov/special.pubs/gao-13-519sp/index.htm

To better understand which factors influenced how an agency scored in the index, the 2013 survey questions were mapped against leading practices that can enhance or facilitate the use of performance information for management decision making. These leading practices were developed by GAO through prior work, including:

  • Demonstrating management commitment;

  • Aligning agency-wide goals, objectives and measures;

  • Improving the usefulness of performance information;

  • Developing the capacity to use performance information; and

  • Communicating performance information frequently and efficiently.

Using its index, GAO was able to comment on increases and reductions in the use of performance information by entity and government-wide, offering a comparative perspective. TCU could replicate such an average with a new, more in-depth index that could be compared and reported on in conjunction with iSA-Gov findings. By generating an average for the whole of government, TCU can elevate these findings to top decision makers or responsible bodies – for instance, the MP in Brazil, or the Congress, which is meant to hold the Executive to account. TCU could take such an index a step further and generate an average of CoG entities’ use of results, as shown in Figure 5.2 below.
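To make the mechanics of such an index concrete, the aggregation could be sketched as follows. This is an illustrative sketch only: the entity names, survey responses and 1-5 agreement scale are hypothetical, not GAO’s or TCU’s actual data or methodology.

```python
# Illustrative sketch: aggregate hypothetical manager survey responses into
# per-entity index scores and a government-wide average. Each manager answers
# several questions on a 1-5 agreement scale; an entity's score is the mean
# across its managers, and the government-wide figure is the mean of entities.
from statistics import mean

# Hypothetical data: entity -> list of per-manager question scores (1-5 scale)
responses = {
    "Entity A": [[4, 5, 3], [4, 4, 4]],
    "Entity B": [[2, 3, 4], [3, 2, 4]],
}

def entity_index(manager_scores):
    """Average each manager's answers, then average across managers."""
    return mean(mean(answers) for answers in manager_scores)

entity_scores = {name: entity_index(scores) for name, scores in responses.items()}
government_wide = mean(entity_scores.values())

for name, score in entity_scores.items():
    print(f"{name}: {score:.2f}")
print(f"Government-wide average: {government_wide:.2f}")
```

Reported alongside iSA-Gov findings, such per-entity and whole-of-government averages would support the comparative perspective described above.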

Figure 5.2. Federal Agencies’ Average Scores on Use of Performance Information Index – 2007 and 2013
picture

Source: GAO (2014), http://www.gao.gov/special.pubs/gao-13-519sp/index.htm

Furthermore, to strengthen the focus of such an index on the CoG, TCU could consider integrating questions directed at the CoG’s own awareness and use of information. TCU may draw further inspiration from the questions below from the UK’s NAO (NAO, 2014):

  • Does the centre have a comprehensive view of the cross-government picture, supported by reliable management information, to inform decision-making?

  • Does the centre have adequate information to monitor the implementation of government priorities?

5.4. Auditing readiness, capacity and indicators for more effective M&E and results-based management

TCU could induce improvements in institutional and programme evaluations by auditing entities’ readiness, capacity and indicators to manage for results.

International indicators suggest the Brazilian government has consistently demonstrated an ability to learn from its experiences through M&E (Bertelsmann Stiftung, 2016a). In particular, the Bertelsmann Stiftung’s Transformation Index (BTI) gave the Brazilian government a score of at least 7 out of 10 in each of its past 6 assessments of “Steering Capability” (Bertelsmann Stiftung, 2016a). This element of the BTI includes an indicator for “Policy Learning,” meaning the ability of government to learn and innovate through effective M&E, observation and knowledge exchange, as well as consultancy by experts and practitioners (Bertelsmann Stiftung, 2016b). The BTI reflects more than M&E, as it focuses on broader learning aspects of government. Nonetheless, the indicator can help to broadly illustrate the Brazilian government’s recognition of and commitment to developing more effective M&E in the last decade.

While the BTI offers a generally positive view, TCU’s aforementioned “iSA-Gov” maturity index sheds light on the complexity and divergence of M&E across Brazil’s federal administration. The index quantified the level of institutionalisation of systems for monitoring and evaluating the performance and results of policies and programmes. The maturity level expressed by iSA-Gov reflects public managers’ perception of the adequacy of the mechanisms and instruments employed to demand, produce and use the available evaluative knowledge. On the one hand, managers in the 27 ministries surveyed perceived themselves as having a high organisational learning capacity – higher than in the other elements of the study. On the other hand, managers identified themselves as less developed in the production of evaluative knowledge, or results (TCU, 2014b). The report conveyed various findings about the maturity of M&E in selected Brazilian institutions, such as that the maturity of M&E systems increased with budget size and that high turnover could affect the development of internal evaluation capacity.

The decentralised nature of M&E in Brazil is driven by self-evaluation, allowing individual entities to focus on their own priorities when conducting evaluations. In addition, ministries can create their own data systems and indicators. Yet these activities require skills, time and resources, which vary across entities and therefore affect the M&E system as a whole. A lack of capacity can lead to a number of issues. For instance, as discussed later in this chapter, indicators are often not aligned with policy or programme objectives, making it difficult for internal or external actors, including TCU, to assess performance. In addition, some government entities commission external parties to conduct evaluations. While this practice may lead to more robust and reliable evaluations, it raises questions about the capacity within public entities to carry out evaluations themselves.

Evidence shows that managers and entities struggle with certain stages of M&E more than others. TCU could aid executive branch entities in strengthening M&E by providing more evidence on the current “condition” of M&E across government and the capacity needs of those required to conduct M&E. Doing so will help to ensure that M&E is not seen as a compliance exercise. Indeed, if evaluation happens as a compliance-based exercise, or is geared towards broadening the programme budget in future years, it may not lead to meaningful improvements. In supporting entities to improve institutional and programme evaluations, TCU could consider the following recommendations:

  • TCU could periodically conduct readiness assessments of selected entities to evaluate their effectiveness and maturity of M&E and their culture of results-based management; and

  • TCU could strengthen reviews of entity and programme results, particularly indicators, for assessing their value for decision making and achieving medium to long-term goals.

TCU could periodically conduct readiness assessments of selected entities to evaluate their effectiveness and maturity of M&E and their culture of results-based management.

TCU’s own findings highlight the need for increased understanding of what works in M&E in Brazilian government entities. Periodic reviews of the “readiness” of government entities to conduct M&E could help audited entities to establish a baseline for improving M&E activities. Assessing the readiness of an entity with regards to M&E is not just about evaluating policies, processes, procedures and structures, but it also involves understanding behaviours, incentives and the political economy within an entity. The following questions, some of which are elaborated on in subsequent sections, could provide a framework for conducting such assessments (adapted from Kusek et al., 2004):

  • What are the pressures and incentives that are encouraging, or discouraging, M&E in the entity, and why?

  • Who is the M&E advocate or champion within the entity, and what is his or her motivation for supporting it?

  • Who are the owners of M&E within the entity and what are their incentives, or disincentives, for conducting robust M&E?

  • How much information or data is enough for making informed decisions, without collecting too much?

  • What is the anticipated reaction to negative information or findings from the M&E efforts?

  • Where and what kind of capacity is there to support M&E in the entity?

  • How will the M&E and its results link to broader goals and objectives, such as national goals?

TCU could also look to the GAO’s “use of performance information index”, discussed above, which delineates the findings by level of management group. TCU could apply this approach to the assessment of entity or managers’ readiness for M&E, in order to pinpoint where needs are highest and thus where central agencies responsible for providing guidance should focus their attention. Canada’s Office of the Auditor General (OAG) may offer additional insights for TCU in conducting such assessments. Specifically, the OAG conducted a series of audits that looked at whether government entities’ evaluation units identified and responded to the various needs for effectiveness evaluations, and whether they had built the required capacity to respond to those needs. By including the Treasury Board Secretariat as a CoG entity in the scope of the audit, this evaluation also works to promote a culture of results-based management across government. The box below provides the criteria for analysis and the key findings of the audit (OAG, 2013 in OECD, 2016a).

Box 5.1. The Auditor General of Canada

From 2004 to 2009, with a follow-up in 2013, the Office of the Auditor General of Canada (OAG) assessed the evaluation units of six government departments. Additionally, the audit assessed the oversight and support role of the Treasury Board of Canada Secretariat in monitoring and improving the evaluation function in government, particularly in regard to effectiveness evaluations (OAG, 2009). OAG used ten guiding expectations drawn from various sources, such as laws and policies, as the criteria for its audit. The following audit criteria could guide TCU’s own audits of the effectiveness of evaluation units:

  • ‘We expected that departments could demonstrate that programme evaluation plans take appropriate account of needs for effectiveness evaluation.’

  • ‘We expected that departments could demonstrate that they have acted on programme evaluation plans to meet key needs.’

  • ‘We expected that departments could demonstrate that their effectiveness evaluations appropriately meet identified needs.’

  • ‘We expected that departments could demonstrate that they regularly identify and act on required improvements in meeting needs for effectiveness evaluation.’

  • ‘We expected that departments could demonstrate reasonable efforts to ensure sufficient qualified evaluation staff to meet key needs for effectiveness evaluation.’

  • ‘We expected departments could demonstrate that the amount and the time frame of funding for effectiveness evaluation meet key needs.’

  • ‘We expected that departments could demonstrate that evaluators have sufficient independence from programme managers and that their objectivity is not hindered.’

  • ‘We expected that departments could demonstrate that they regularly identify and act on required improvements to capacity to meet needs for effectiveness evaluation.’

  • ‘We expected that the Treasury Board of Canada Secretariat has the resources required for government-wide oversight of the programme evaluation function.’

  • ‘We expected that the Treasury Board of Canada Secretariat could support the practice of government-wide evaluation by identifying needed improvements and determining and carrying out actions required of the Secretariat to help ensure that departments and agencies have the tools they need to achieve the desired results.’ (OAG, 2009).

The audit found that while the six departments examined followed systematic processes to plan their effectiveness evaluations and completed most of the evaluations they had planned, their evaluations covered a low proportion of total programme expenses. Additionally, the audit found that in many cases the departments had not gathered the performance information necessary to adequately assess programme effectiveness (OECD, 2016a).

By including the Treasury Board Secretariat in the scope of the audit, this evaluation also works to promote a culture of results-based management across government. One of the listed objectives of the audit was to ‘Determine whether the Treasury Board of Canada Secretariat’s government-wide oversight of the programme evaluation function has regularly identified and addressed areas for improvement that ensure that departments have the capacity to meet needs for effectiveness evaluation’ (OAG, 2009). By ensuring the adequacy of this oversight function for effectiveness evaluation, the audit has a far greater reach than simply assessing the six departments. Enhancing the oversight function to provide sustained support to government entities will promote a culture of results-based management more systematically than auditing and making recommendations to entities individually. In this case, the Treasury Board did introduce initiatives to address the need for improvements in evaluation across government; however, it did not provide sustained support for effectiveness evaluation. In particular, there was not enough progress on developing tools to assist departments with the long-standing problem of insufficient data for evaluating programme effectiveness.

Source: OAG (Office of the Auditor General of Canada) (2013), Spring 2013 Report, Status Report on Evaluating the Effectiveness of Programs, accessed February 2017, www.oag-bvg.gc.ca/internet/English/parl_oag_201304_01_e_38186.html; OAG (2009), Fall 2009 Report, Chapter 1, Evaluating the Effectiveness of Programs, accessed February 2017, www.oag-bvg.gc.ca/internet/English/parl_oag_200911_01_e_33202.html; OECD (2016a), Supreme Audit Institution and Good Governance: Oversight, Insight and Foresight, OECD Publishers, Paris, http://www.oecd.org/gov/ethics/supreme-audit-institutions-and-good-governance.htm.

TCU could strengthen reviews of entity and programme results, particularly indicators, for assessing their value for decision making and achieving medium to long-term goals.

A key element of evaluating for improved performance in government is that evaluation enables reliable, measured performance-related information to be fed into decision-making processes (OECD, 2016a). High-quality, high-value performance information is a precursor to managing for results. TCU’s work has embodied this principle and has sought to promote improved quality of performance information. Further, international principles require that “an effective system of oversight provides an assurance on the reliability and quality of disclosed information” (OECD, 2016a). As a key oversight body, TCU is responsible for providing this assurance. It is also reliant, to a degree, on the quality of auditees’ performance information, as this affects TCU’s own ability to evaluate performance in executing its mandate.

In playing the role of evaluator itself, or auditor, as well as evaluator of evaluators, TCU has sought to promote higher quality performance information and data. It should continue to do so, given that its work has shown that public managers struggle most with actually producing evaluative knowledge, let alone high-quality knowledge that is valuable for decision-making. As discussed, this depends largely on the capacity and skills to measure impact and outcomes and not just inputs or outputs. It also requires an awareness of which tools are suitable to the purpose.

It can be useful, at the outset, to have a general framework with the core elements that constitute the foundation of M&E, whether applied to performance budgeting or programme evaluation. One such general framework, illustrated in Figure 5.3, depicts impacts and outcomes (both intermediate and final) as the result of activities that convert inputs into outputs (Zewo, 2011). The definitions of each of these elements can vary. For example, the Institute for Applied Economic Research presents a similar logic model, including an intermediate outcome, but the principles are the same. In addition, the importance of contextual factors, such as culture and institutional arrangements, is often stressed in theory and practice related to results-based M&E.
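The results chain described above can be made concrete with a minimal sketch. The vaccination programme and every entry below are invented examples for illustration, not drawn from the frameworks cited.

```python
# Illustrative sketch of a results chain (inputs -> activities -> outputs ->
# outcomes -> impact), using a hypothetical vaccination programme.
results_chain = {
    "inputs": ["budget", "health workers", "vaccine doses"],
    "activities": ["run vaccination clinics", "public information campaign"],
    "outputs": ["children vaccinated", "households reached by campaign"],
    "intermediate_outcomes": ["higher immunisation coverage"],
    "final_outcomes": ["lower incidence of disease"],
    "impact": ["improved child health"],
}

# M&E reads the chain left to right: indicators attached to each stage let
# managers check whether inputs are actually converting into results.
for stage, examples in results_chain.items():
    print(f"{stage}: {', '.join(examples)}")
```

In practice, contextual factors (culture, institutional arrangements) sit outside this chain but condition every link in it.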

Selection of M&E tools

Creating a robust M&E function at a ministry or programme level requires a number of methodological decisions, including determining the type of evaluation to conduct and when. The choice of assessment types depends on what is being measured. Table 5.3 provides examples of ex ante and ex post evaluations that entities can employ depending on the purpose of the assessment.

Table 5.3. Tools for ex ante and ex post evaluation

EX ANTE

  • Regulatory impact assessment: Assesses the impacts of future laws and regulations against a range of intended effects (economic effects, costs of regulation; impacts on inclusive-growth-related objectives are unevenly integrated).

  • Budget impacts: Typical assessment of the cost of future programmes or policy measures. Often focuses on the strict public finance implications. This can include ex ante audit/authorisation of programmes.

  • Modelling/scenarios/gap analysis: A range of analytical techniques used in academia, think tanks or the public sector to assess policy impacts.

  • Advisory committee: In-depth, consultative reports from authoritative bodies, such as Inquiry Commissions (Sweden) or the Productivity Commission (Australia).

EX POST

  • Performance (or VFM) audit: Targeted review, typically carried out by an internal or external audit institution, to assess whether objectives are being achieved, and/or with what levels of efficiency and effectiveness.

  • Programme evaluation: In-depth evaluation of a specific programme, by reference to its original rationale, effectiveness in achieving objectives, cost of delivery, alternative modalities and saving/efficiency options.

  • Focused policy assessment: Short, sharp evaluation focused on one or more of the criteria used in programme evaluation. Often part of a spending review.

  • Spending review: Large-scale re-assessment of the disposition of resources within a sector of public expenditure by reference to new priorities and effectiveness in meeting objectives. Used to identify ‘fiscal space.’

Source: OECD (2015), Session Notes, OECD’s Public Governance Ministerial Meeting, 28 October 2015, http://www.oecd.org/governance/ministerial/session-notes-helsinki.pdf

Figure 5.3. Logic Model - Building blocks of results-based M&E
picture

Source: Zewo Foundation (2011), Outcome and Impact Assessment in International Development: Short presentation of the Zewo guidelines for projects and programmes, http://impact.zewo.ch/english/docs/Zewo_Wirkungsmessung_E_web.pdf

In Brazil, executive branch entities are required to provide the MP with bimonthly and quarterly reports as part of the annual accounts process. These reports are primarily focused on assessments of compliance with fiscal rules, as well as deviations from planned expenditures. As such, they are not intended for discerning the impacts of individual programmes and policies throughout a given year.4 The importance of accounts reporting for the budgetary process is discussed further in Chapter 3. In the context of programme evaluation, TCU plays a critical role in strengthening M&E in Brazil by incentivising the use of a wider variety of evaluation tools for analysis. In other words, TCU can use its audit recommendations to encourage more robust results-based management and evaluation in entities that would otherwise be tempted to focus on compliance assessments.

Selection of indicators

Once the methodological framework and tool or type of evaluation is selected, a variety of approaches exist for developing and selecting indicators, based on available data and existing information. In general, indicators should act as feedback mechanisms for managers, connecting activities to objectives, strategic goals and ultimately the entity’s mission (Zewo, 2011). Indicators can vary in their policy application. For instance, the purpose of some indicators can be to monitor progress or goals (e.g. performance indicators), while others may serve to raise awareness, but are not useful for monitoring, such as certain composite indicators. This chapter focuses largely on indicators for measuring performance as part of results-based M&E.

Performance indicators are quantitative or qualitative variables that offer a reliable way to measure progress towards, or achievement of, an outcome (Kusek, 2004). When developing performance indicators, managers should focus on the usefulness of the indicators for achieving outcomes. Indicators should be a simple and reliable means of measuring achievement, reflecting the changes connected to a policy or programme, or helping to assess their performance. They specify what is to be measured along a scale, determining how expected results are measured and what data are collected. Using the same indicators over a period of time also provides consistent measurement (Sterck and Scheers, 2006). The table below provides examples of indicators based on the results-based M&E logic model.

Table 5.4. Types of performance measures

Performance measures

  • Input measures: What goes into the system? Which resources are used?

  • Output measures: What products and services are delivered? What is the quality of these products and services?

  • Outcome measures: Intermediate – What are the direct consequences of the output? Final – What are the outcomes achieved that are significantly attributable to the output?

  • Contextual measures: What are the contextual factors that influence the output (e.g. processes, antecedents and external developments)?

Ratio indicators

  • Efficiency: Cost / Output

  • Productivity: Output / Input

  • Effectiveness: Output / Outcome (intermediate or final)

  • Cost-effectiveness: Input / Outcome (intermediate or final)

Source: Adapted from Sterck, M. and B. Scheers (2006), “Trends in Performance Budgeting in Seven OECD Countries”, Public Performance and Management Review, 30(1), September, pp. 47-72, https://www.jstor.org/stable/pdf/20447616.pdf.
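As an illustration, the ratio indicators in Table 5.4 could be computed as follows for a hypothetical management-training programme. All figures below are invented for the example, and the ratios follow the directions given in the table (e.g. efficiency as cost per output).

```python
# Illustrative sketch: ratio indicators from Table 5.4 applied to a
# hypothetical training programme; all figures are invented.
cost = 200_000.0          # input: programme budget (hypothetical currency units)
staff_days = 400          # input: staff time invested
managers_trained = 500    # output: managers completing M&E training
managers_applying = 350   # outcome: trainees later applying M&E skills

efficiency = cost / managers_trained               # cost per unit of output
productivity = managers_trained / staff_days       # output per unit of input
effectiveness = managers_trained / managers_applying   # output per outcome
cost_effectiveness = cost / managers_applying      # input (cost) per outcome

print(f"Efficiency: {efficiency:.2f} per manager trained")
print(f"Productivity: {productivity:.2f} managers trained per staff day")
print(f"Cost-effectiveness: {cost_effectiveness:.2f} per manager applying skills")
```

Tracking these ratios with the same definitions year on year is what makes the measurement consistent over time, as noted above.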

In the absence of indicators, it is impossible for governments to know not only what needs to be improved, but also what is working. However, too many indicators can lead to data saturation. Avoiding information overload is critical for the effectiveness of M&E and the use of indicators. Too much performance-related information can strain resources and create challenges in distinguishing the initiatives that work from those that do not, ultimately resulting in evaluation processes that are ineffective or costly. Moreover, excessive evaluation processes or indicators can lead to evaluations that managers see as an administrative burden. In turn, this can undermine a results-oriented culture conducive to producing reliable, timely and accessible evidence.

As discussed, indicators in Brazil are closely tied to the budgeting process and to evaluations linked to the PPA. A 2015 audit by TCU noted that indicators used for M&E of PPA objectives are ineffective, and that some objectives lack indicators and annual targets, among other issues (TCU, 2016b). Such challenges are not unique to Brazil. Many governments face challenges in developing performance information of sufficient quality, robustness and reliability to serve as a sound basis for informing resource-allocation decisions (OECD, 2016a). One challenge is that measuring change, whether linked to the budgetary process or to broader M&E of programme performance, involves decisions not only about what to measure, but also about how to measure. The identification and selection of indicators is a key component of the latter. It is critical for managers at this level to understand what constitutes the sound indicators that underpin quality performance information.

As TCU’s own work has noted, managers often struggle to produce quality performance information, since indicators and targets may be lacking altogether. Further, TCU’s 2010 Performance Audit Manual elaborates on the challenges managers may face in creating performance information, including inadequate or unreliable information systems and the difficulty of linking outcomes to specific policies or actions (TCU, 2010a). The manual offers general considerations for assessing the quality of indicators. Box 5.2 provides additional considerations and good practices, which TCU can focus on when assessing such elements in its audits (OECD, 2008).

Box 5.2. Good Practices for Developing Indicators

In Brazil, the PPA provides high-level guidance on indicators. In addition, the MP has developed guidance to assist managers in thinking about developing indicators for measuring performance. Managers can consider the following concepts and practices for developing indicators.

  • Invest time in the process of choosing indicators and targets. Reflect on all the options available to measure each result and refine targets and indicator sets over time as the programme, the understanding of partners, and the availability of information change.

  • Identify appropriate indicators at outcome level. Ensure that the programme does not only monitor outputs and that there is sufficient emphasis on changes at outcome level.

  • Minimise perverse incentives. Remember that “what gets measured gets done”. Choosing to measure one indicator may mean that the programme de-prioritises other important actions and results. Routine measurement of certain indicators can have perverse results. For example, measuring the time taken to process applications for grants can make government employees process the applications faster, but at the cost of due diligence processes.

  • Use multiple indicators or “baskets” of indicators to measure results at higher-level outcome and impact levels. A balanced set of indicators that measure different aspects and that may combine quantitative and qualitative measures is more likely to cancel out biases.

  • Use a mix of quantitative and qualitative methods to measure indicators. Quantitative indicators are often easier to collect and measure. However, quantitative indicators often do not give the full picture, and not every change that is important can easily be expressed in numerical format. Qualitative indicators could be more appropriate in certain circumstances.

  • Ensure that indicators and targets reflect the needs and participation of various groups. For instance, consider how to measure changes that are relevant to the poor and the vulnerable, especially by disaggregating data and checking for measurement biases for/against certain groups.

  • Avoid conflating indicators with targets. Targets are the change(s) that the programme wishes to achieve. Indicators are pieces of information that are used to measure change and performance, and can thus indicate whether a target has been reached. For example:

    Indicator and target measured together: Number of government managers trained in M&E increases by 500 in 2018.

    Indicator and target separated:

    Indicator: Number of managers trained

    Baseline: 1,000 trained in 2017

    Target: 1,500 trained in 2018

  • Make indicators gender-sensitive. Measure whether men and women are equally participating in the programme activities, and insist on sex- and age-disaggregated data whenever feasible.

  • Promote partnership, inclusion and ownership in setting and using indicators and targets. Wherever possible, indicators and targets should be agreed jointly between the partner government and the international supporting organisations, and ideally with the participation of other local stakeholders and beneficiaries (this may include organisations that represent specific communities, such as women’s organisations, religious leaders, disability rights groups, etc).

  • Choose indicators that can be measured. When identifying indicators, consider whether this information is already available, and if not, how easy it will be to collect it given the context and the resources that are available.

  • Test indicators. Test indicators to make sure they are valid and appropriate measures of the result you want to achieve.

  • Keep it simple. Try to measure what is most important and do it as simply and cheaply as possible. Wherever possible, use information that is already available and that is routinely collected. Build on existing information systems, particularly those of national institutions.

Source: OECD (2008), The OECD DAC Handbook on Security System Reform: Supporting Security and Justice, OECD Publishing, Paris. https://doi.org/10.1787/9789264027862-en
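The separation of indicator, baseline and target advocated in Box 5.2 can be sketched as a simple data structure. The class, field names and figures below are illustrative only, mirroring the box’s training example rather than any official specification.

```python
# Illustrative sketch: keeping indicator, baseline and target separate,
# mirroring the Box 5.2 example; names and figures are hypothetical.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str       # what is measured
    baseline: int   # starting value at the reference date
    target: int     # desired value at the target date

    def progress(self, observed: int) -> float:
        """Share of the baseline-to-target gap achieved so far."""
        return (observed - self.baseline) / (self.target - self.baseline)

trained = Indicator("Number of managers trained", baseline=1000, target=1500)
print(f"{trained.progress(1250):.0%} of the target gap achieved")
```

Separating the three elements in this way lets the same indicator be reused against new baselines and targets in later planning cycles.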

TCU’s work can reinforce and build on the guidance provided by the MP by focusing on key practices for ensuring that M&E results in high-quality, reliable performance information. Moreover, the MP’s guidance and related laws do not require entities to consider policy effects in the medium or long term.5 TCU’s audits can aim to improve the quality of performance information, particularly indicators, so that the information produced during M&E is reliable and useful for assessing intermediate and final outcomes and impacts. Looking ahead, the SDGs can provide TCU with indicators against which it can assess government results where indicators are currently lacking. It is important that TCU take care not to fill the role that government must play in establishing indicators and M&E. However, TCU could assess the reliability of the indicators and data that the executive branch produces, and, as noted, the formulation of policies and processes aimed at facilitating the realisation of national goals and the SDGs. Doing so would align with INTOSAI’s 2017-2022 strategic plan, which sets priorities for SAIs to monitor and evaluate the achievement of the SDGs under the 2030 Agenda.

5.5. Strengthening cross-cutting government policies and programmes through improved data sharing, communication and co-ordination

TCU could assess communication and co-ordination mechanisms, including the interoperability of information and data systems, to improve M&E of cross-cutting government policies and programmes.

To aid in M&E, the Brazilian government has developed numerous national information and data management systems. For instance, in 2001, Brazil launched the Management and Planning Information System (Sistema de Informações Gerenciais e de Planejamento, SIGPLAN) to organise information and data for the government’s annual evaluation report. Other systems currently in place focus on specific issues and sectors and are maintained by individual ministries. They include the Integrated Planning and Budget System (Sistema Integrado de Planejamento e Orçamento, SIOP), which replaced SIGPLAN, and the Ministry of Education’s Integrated Monitoring, Execution and Control System (Sistema Integrado de Monitoramento, Execução e Controle, SIMEC). Other ministries, such as the Ministry of Health and the Ministry of Social and Agrarian Development, have their own indicators and mechanisms for monitoring the performance of activities under their purview and within their relevant sectors.

In Brazil, M&E policies and mechanisms vary not only by sector, but also across states at the subnational level. As discussed, the government may also contract out evaluation efforts to academics and M&E experts. Decentralised M&E allows individual approaches, indicators and other performance information to be tailored to specific sectors and issues, but such variation can also pose challenges from a broader governance perspective. For example, decentralised performance information can be difficult to synthesise as inputs to government-wide achievement of outcomes. This can reflect a capacity or expertise gap, but it can also be a function of numerous systems designed for different purposes, with different indicators and of varying quality. For instance, the Ministry of Health has a Department of Monitoring and Evaluation that established its own key indicators to monitor programmes and measure the quality of Brazil’s health systems. The Ministry of Education, as noted, has a different framework with its own indicators. Moreover, the variety of data management systems within the Brazilian government has historically created challenges related to interoperability. For example, some systems for sectoral information and data (e.g. those of the Ministry of Health and the Ministry of Education) have mechanisms that are more sophisticated, detailed and accurate than others, which can exacerbate interoperability challenges (Vaitsman, 2013).

Mechanisms for communication and co-ordination are essential in a decentralised M&E system. As discussed, there is a policy element that drives co-ordination and alignment between M&E initiatives. In addition, a key issue for facilitating communication and information sharing is the set of systems individual entities create to manage results and data for assessing achievement towards goals. These systems vary in the way they manage data; as a result, evaluation is not homogeneous across government, making the evaluation of cross-cutting initiatives and programmes difficult. It is therefore important to improve communication, compatibility and interoperability between these systems. Although there are a number of definitions of interoperability, this report understands it as the ability of ICT systems to communicate, interpret and interchange data in a meaningful way, through processes in which independent or heterogeneous information systems, or their components, managed by different jurisdictions or administrations work together on predefined and agreed terms and specifications. Interoperability enhances effectiveness, efficiency and responsiveness in government and is thus an important component of decentralised M&E systems.
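The idea of systems working together "in predefined and agreed terms and specifications" can be illustrated with a minimal sketch. The field names and rules below are hypothetical and are not drawn from ePing or any actual Brazilian specification; the point is simply that two independent systems can only interchange records meaningfully if both validate them against the same agreed specification.

```python
# Illustrative sketch only: a hypothetical shared data specification
# that two independent systems might agree on before exchanging
# M&E records. Field names and types are invented for illustration.

AGREED_SPEC = {
    "programme_id": str,   # unique identifier for the programme
    "indicator": str,      # name of the performance indicator
    "value": float,        # measured value for the period
    "period": str,         # reporting period, e.g. "2016-Q3"
}

def conforms(record: dict) -> bool:
    """Check that a record matches the agreed specification exactly:
    same fields, each with the agreed type."""
    if set(record) != set(AGREED_SPEC):
        return False
    return all(isinstance(record[k], t) for k, t in AGREED_SPEC.items())

# The sending system validates before export; the receiving system
# validates again on import, so both sides honour the same terms.
record = {"programme_id": "PR-001", "indicator": "coverage",
          "value": 0.87, "period": "2016-Q3"}
print(conforms(record))                   # True
print(conforms({"programme_id": "PR-001"}))  # False: missing fields
```

In a real interoperability framework the agreed terms would of course be far richer (shared vocabularies, formats, security and network standards), but the validate-on-both-sides pattern is the same.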

In Brazil, an initiative to enhance interoperability in government already exists. ePing is a basic framework of interoperability standards for electronic government. It is overseen by the Logistics and Information Technology Secretariat (SLTI) of the Ministry of Planning, Development and Management (Ministério do Planejamento, Desenvolvimento e Gestão, MP). The framework was originally intended for the Executive branch of the federal government, but it does not restrict voluntary participation by other branches and levels of government. For federal agencies in the Brazilian Executive, however, adoption of the standards and policies contained in ePing is mandatory (SLTI/MP nº 92) (Government of Brazil, 2014).

ePing’s architecture covers the exchange of information between federal Executive branch agencies and their interactions with citizens, other levels of government and other branches of government. The interoperability framework includes architectural standards and guidelines to help integrate processes, applications, data, security and networks. Implementation of ePing is gradual and decentralised. All of the federal Executive branch’s purchases and hiring related to the development of electronic government must be consistent with ePing’s specifications and policies. Further, ePing focuses only on specifications relevant to ensuring the interconnectivity of systems, the integration of data, access to electronic government services and content management.

TCU has previously evaluated the Electronic Government Programme, which developed ePing and which began in 2000 in order to increase the supply and improve the quality of public services and information provided through electronic means (TCU, 2006). In 2006, TCU evaluated the programme with an audit scope defined as: “In what way have the Programme’s actions contributed toward the provision of electronic public services directly to citizens?” Since 2006, however, there has been no further evaluation of either the programme or ePing specifically.

TCU has assessed digital governance from different angles. For example, at the beginning of the 2015-2017 External Control Plan (TCU, 2015d), TCU launched an audit to “evaluate the use of digital technologies in the provision of public services”, better known as the “digital government audit” (TC 010.638/2016-4). TCU has also audited the performance of the Logistics and Information Technology Secretariat (SLTI) of the Ministry of Planning, Development and Management as the central overseer of IT. Its performance has been followed since 2007, when the first IT Governance Survey of federal agencies was launched. The findings of each survey are presented in reports on the IT governance panorama at the federal level, along with accompanying determinations and recommendations to the SLTI (TCU, 2014d; 2014c; 2012a; 2010b; 2008). In 2017, the SLTI will present an individual year-end report, leaving potential for it to become part of the annual accounts process and thus of TCU’s year-end audit of the Consolidated Government Accounts.

Good governance principles emphasise the importance of mechanisms for transparency and openness in the flow of performance information, as well as the existence of fair and transparent systems for legislative and judicial review of the functioning of public administration (OECD, 2016a). Moreover, adopting a government-wide strategy for public sector data based on these principles, among others, should be a priority to strengthen co-ordination, exploit synergies and create a shared view of open data within and across levels of government (OECD, 2015a). A recent OECD survey identified a number of SAIs with activities to oversee mechanisms for transparency and the flow of performance information. For instance, of the 10 peer SAIs surveyed:

  • 9 of 10 had looked at the existence of clear lines of reporting on outputs and performance outcomes from entities to authorities and to users/stakeholders (including citizens).

  • 6 of 10 had looked at the accessibility and reliability of data systems for collecting, storing and using performance information, accessible for various levels of government.

  • 8 of 10 had looked at mechanisms for effective information sharing and transparency: between levels of government; within entities; and across entities.

In alignment with peer SAIs and building on past efforts, TCU could further assess communication and co-ordination mechanisms with government-wide implications in order to strengthen M&E, the design of policies and the execution of programmes. One way TCU could do this is to assess the interoperability of data-sharing systems. For instance, TCU could assess the progress and effectiveness of ePing across government to improve cross-cutting policy evaluation. Given that the interoperability of government systems has not been assessed since 2006 and that the progress of ePing’s implementation has never been evaluated, TCU could begin an audit of this government programme. Alternatively, TCU could audit the agency in charge of co-ordinating e-government and ePing, the Logistics and Information Technology Secretariat (SLTI) of the Ministry of Planning, Development and Management (TCU, 2006). This agency is responsible for consolidating e-government standards, implementing the national e-government plan and disseminating e-government actions. Auditing this agency specifically would also make it possible to evaluate the progress of ePing.

Finally, because ePing is implemented in a decentralised fashion, TCU could assess compliance with and implementation of ePing on an agency-by-agency basis. In addition, to advance the goal of improved co-ordination and sharing of data, TCU could select specific programmes designed with this goal in mind in order to provide targeted recommendations that affect multiple entities, sectors or levels of government (see Box 5.3 below for an example of such a project).

Box 5.3. Transversal project on managing geospatial data at the federal, state, district and municipal levels

The Ministry of Planning has co-ordinated, since its inception in 2008, a transversal project on the National Spatial Data Infrastructure (INDE). INDE was established by Decree No. 6,666 of 27 November 2008 as the integrated set of technologies, policies, mechanisms and procedures for co-ordination and monitoring, and the standards and agreements necessary to facilitate and order the generation, storage, access, sharing, dissemination and use of geospatial data of federal, state, district and municipal origin.

INDE has been constituted as a management tool capable of supporting the monitoring and evaluation of public policies, above all by making it possible to capture their impact on the territory. It allows the processing of multisectoral information of different natures, enabling the extraction of more complete and accurate information, more quickly.

The platform aims to gather the geospatial data produced by government agencies in a single internet portal, allowing the rational use of geographic information and disseminating a culture of visualising public policies in the territory. In this sense, INDE was created with the purpose of becoming the reference source of public and open geospatial data, capable of democratising access to information and facilitating access by citizens, society and the various public sector bodies.

Goals

  1. Promote adequate planning in the generation, storage, access, sharing, dissemination and use of geospatial data;

  2. Promote the use of the standards and norms approved by the National Commission of Cartography (CONCAR) in the production of geospatial data by public bodies at the federal, state, district and municipal levels; and

  3. Avoid duplication of actions and waste of resources in obtaining geospatial data, by disseminating the documentation (metadata) of the data available in the entities and public agencies at the federal, state, district and municipal levels.

Source: MP (Ministry of Planning, Development and Management) (2016d), Infraestrutura Nacional de Dados Espaciais (INDE) [National Spatial Data Infrastructure], http://www.planejamento.gov.br/assuntos/planejamento-e-investimentos/inde.

References

Bertelsmann Stiftung (2016a), Brazil: country report, Transformation Index 2016, https://www.bti-project.org/fileadmin/files/BTI/Downloads/Reports/2016/pdf/BTI_2016_Brazil.pdf

Bertelsmann Stiftung (2016b), Transformation Index of the Bertelsmann Stiftung 2016: Codebook for Country Assessments, https://www.bti-project.org/fileadmin/files/BTI/Downloads/Zusaetzliche_Downloads/Codebuch_BTI_2016.pdf

Chamber of Deputies of Brazil (2010), Decree number 7,133 of 19 March, 2010 (DECRETO Nº 7.133, DE 19 DE MARÇO DE 2010), http://www2.camara.leg.br/legin/fed/decret/2010/decreto-7133-19-marco-2010-604126-normaatualizada-pe.pdf

Government Accountability Office (GAO) (2014), Managing for Results: 2013 Federal Managers Survey on Organizational Performance and Management Issues (GAO-13-519SP, June 2013), an E-supplement to GAO-13-518, accessed February 2017, http://www.gao.gov/special.pubs/gao-13-519sp/index.htm

Government of Brazil (2014), SLTI/MP nº 92, http://www.lex.com.br/legis_26329840_PORTARIA_N_92_DE_24_DE_DEZEMBRO_DE_2014.aspx

Government of Brazil (2008a), Decree number 6.601, DECRETO Nº 6.601, DE 10 DE OUTUBRO DE 2008, accessed February 2017, http://www.planalto.gov.br/ccivil_03/_Ato2007-2010/2008/Decreto/D6601.htm

Government of Brazil (2008b), LEI Nº 11.784, DE 22 DE SETEMBRO DE 2008 (Law number 11.784 of 22 September 2008), http://www.planalto.gov.br/ccivil_03/_ato2007-2010/2008/lei/l11784.htm

Government of Brazil (2001), Law No 10.180, LEI No 10.180, DE 6 DE FEVEREIRO DE 2001, accessed February 2017, http://www.planalto.gov.br/ccivil_03/leis/LEIS_2001/L10180.htm.

IADB (Inter-American Development Bank) (2014), Governing to Deliver, Accessed September 2016, https://publications.iadb.org/bitstream/handle/11319/6674/Governing-to-Deliver-Reinventing-the-Center-of-Government-in-Latin-America-and-the-Caribbean.pdf?sequence=1

Institute for Applied Economic Research (IPEA) (2015), Planning and Evaluation of Public Policies, http://www.ipea.gov.br/agencia/images/stories/PDFs/livros/livros/livro_ppa_vol_1_web.pdf

Kusek, J. and R. Rist (2004), Ten Steps to a Results-based Monitoring and Evaluation System, World Bank, Washington, D.C., accessed February 2017, https://openknowledge.worldbank.org/bitstream/handle/10986/14926/296720PAPER0100steps.pdf

MP (Ministry of Planning, Development and Management) (2016a), “Government institutes Committee to monitor and evaluate public policies”, Ministry of Planning, Development and Management, accessed February 2017, http://www.planejamento.gov.br/noticias/governo-institui-comite-para-monitorar-e-avaliar-politicas-publicas

MP (2016b), “Interministerial Committee discusses public policies and evaluates effectiveness of actions”, Ministry of Planning, Development and Management, accessed February 2017, http://www.planejamento.gov.br/noticias/comite-interministerial-discute-politicas-publicas-e-avalia-efetividade-de-acoes.

MP (2016c), National Press (Imprensa Nacional), Ministry of Planning, Development and Management, http://pesquisa.in.gov.br/imprensa/jsp/visualiza/index.jsp?data=08/04/2016&jornal=1&pagina=79&totalArquivos=204

MP (2016d), Infraestrutura Nacional de Dados Espaciais (INDE), [National Spatial Data Infrastructure] http://www.planejamento.gov.br/assuntos/planejamento-e-investimentos/inde

MP (2013), Manual de Orientação para a Gestão do Desempenho [Guidance Manual for Performance Management], Secretaria de Gestão Pública [Secretary for Public Management] http://www.planejamento.gov.br/assuntos/gestao-publica/arquivos-e-publicacoes/manual_orientacao_para_gestao_desempenho.pdf/@@download/file/Manual_Orientacao_para_Gestao_Desempenho.pdf

NAO (National Audit Office of the United Kingdom) (2014), ‘The centre of government’ HC 171, Session 2014-15, 19 June 2014, https://www.nao.org.uk/wp-content/uploads/2014/06/The-centre-of-government.pdf

OAG (Office of the Auditor General of Canada) (2013), Spring 2013 Report, Status Report on Evaluating the Effectiveness of Programs, accessed February 2017, www.oag-bvg.gc.ca/internet/English/parl_oag_201304_01_e_38186.html.

OAG (2009), Fall 2009 Report, Chapter 1, Evaluating the Effectiveness of Programs, accessed February 2017, www.oag-bvg.gc.ca/internet/English/parl_oag_200911_01_e_33202.html

OECD (2016a), Supreme Audit Institution and Good Governance: Oversight, Insight and Foresight, OECD Publishers, Paris, http://www.oecd.org/gov/ethics/supreme-audit-institutions-and-good-governance.htm

OECD (2015), Session Notes, OECD’s Public Governance Ministerial Meeting, 28 October 2015, http://www.oecd.org/governance/ministerial/session-notes-helsinki.pdf

OECD (2015a), “Governments leading by example with public sector data”, in Data-Driven Innovation: Big Data for Growth and Well-Being, OECD Publishing, Paris, https://doi.org/10.1787/9789264229358-14-en

OECD (2014a), Centre Stage: Driving Better Policies from the Centre of Government, OECD Publishing, Paris, https://www.oecd.org/gov/Centre-Stage-Report.pdf

OECD (2008), The OECD DAC Handbook on Security System Reform: Supporting Security and Justice, OECD Publishing, Paris. https://doi.org/10.1787/9789264027862-en

OECD (2002), Glossary of Key Terms in Evaluation and Results Based Management, OECD, Paris, accessed February 2017, https://www.oecd.org/dac/evaluation/2754804.pdf

Rede Brasileira (2016), accessed September 2016, http://redebrasileirademea.ning.com.

Rosenstein, B. (2015), Status of National Evaluation Policies. Global Mapping Report. 2nd Edition, Implemented by Parliamentarians Forum on Development Evaluation in South Asia jointly with EvalPartners, http://www.iape.org.il/Upload/Members_Uploads/17_385652.pdf.

Sterck, M. and B. Scheers (2006), “Trends in Performance Budgeting in Seven OECD Countries”, Public Performance and Management Review, 30(1), September, pp. 47-72, https://www.jstor.org/stable/pdf/20447616.pdf.

TCU (2016a), Framework for Evaluation of the Centre of Government, TCU Publishing, Brasilia, http://portal.tcu.gov.br/lumis/portal/file/fileDownload.jsp?fileId=8A8182A25454C5A801545DC1433145ED

TCU (2016b), TCU Judgement 033.142/2015-7 Survey Report, http://bibspi.planejamento.gov.br/bitstream/handle/iditem/700/Acordao_TCU_948_2016.pdf?sequence=1

TCU (2015a), Plano Estratégico do Tribunal de Contas da União para o período 2015-2021, [TCU Strategic Plan, 2015-2021], PORTARIA-TCU Nº 141, DE 1º DE ABRIL DE 2015, Brasilia, http://portal.tcu.gov.br/tcu/paginas/planejamento/2021/index.html.

TCU (2015b), Agências reguladoras de infraestrutura: avaliação da governança da regulação, [operational audit. infrastructure regulatory agencies. evaluation of regulatory governance], TCU Judgement 240/2015 – Plenary, Performance Audit, TC 031.996/2013-2, http://www.tcu.gov.br/Consultas/Juris/Docs/judoc/Acord/20150304/AC_0240_05_15_P.doc

TCU (2015c), TCU Judgement 0548/2015, Acordo TC-020.137/2014-1 – Plenary, Performance Audit http://www.tcu.gov.br/Consultas/Juris/Docs/judoc/Acord/20150320/AC_0548_09_15_P.doc

TCU (2015d), External Control Plan of Brazil’s Federal Court of Accounts (Plano de Contrôle Externo do Tribunal de Contas da União) April, 2015, Brasilia, http://portal.tcu.gov.br/lumis/portal/file/fileDownload.jsp?fileId=8A8182A153234E0A01535D1006D60567.

TCU (2014a), Framework to Assess Governance in Public Policies, TCU Publishing, Brasilia, http://portal2.tcu.gov.br/portal/pls/portal/docs/2686056.PDF

TCU (2014b), Evaluation of the Maturity Index used for evaluation of government’s programs in the direct administration of the Federal Executive, TCU Judgement 1209/2014 – Plenary – Survey report http://www.tcu.gov.br/Consultas/Juris/Docs/judoc/Acord/20140516/AC_1209_16_14_P.doc

TCU (2014c), Relatório de levantamento. avaliação da governança de tecnologia da informação na administração pública federal [Evaluation of information technology governance in the federal public administration], Acordao 3117/2014, Processo TC 003.732/2014-2, http://portal.tcu.gov.br/lumis/portal/file/fileDownload.jsp?fileId=8A8182A14D78C1F1014D794C57073235

TCU (2014d) Diversos órgãos e entidades da Administração Pública Federal com vistas a avaliar a implementação dos controles de TI informados em resposta ao levantamento do perfil de governança de TI de 2012 [consolidation of audits carried out in various organs and entities of the Federal Public Administration with a view to evaluating the implementation of the IT controls informed in response to the survey of the 2012 IT governance profile] ACÓRDÃO Nº 3051/2014, Processo nº TC 023.050/2013-6, http://www.tcu.gov.br/Consultas/Juris/Docs/judoc/Acord/20141107/AC_3051_44_14_P.doc

TCU (2013), Avaliação do perfil e do índice de maturidade dos sistemas de avaliação de programas governamentais dos órgãos da administração direta do poder executivo federal [Evaluation of profile and the maturity index of the evaluation of governmental programs of the organs direct administration of the federal executive power], Judgement TC 007.590/2013-0, Accessed February 2017, http://portal.tcu.gov.br/lumis/portal/file/fileDownload.jsp?inline=1&fileId=8A8182A14D92792C014D92800B323307

TCU (2012a), Avaliação da governança de tecnologia da informação na administração pública federal: oportunidades de melhoria [Evaluation of governance of IT in the Federal Public Administration: opportunities for improvement], Acórdão Nº 2585/2012, Processo nº TC 007.887/2012-4, https://www.capes.gov.br/images/stories/download/editais/resultados/11102016_TCU_RelatorioFinal_2012.pdf

TCU (2012b), Avaliar se a gestão e o uso da tecnologia da informação estão de acordo com a legislação e aderentes às boas práticas de governança de TI [Evaluation of whether the management and use of information technology is in accordance with legislation and adhering to good IT governance practices], Acórdão Nº 1233/2012, Processo nº TC 011.772/2010-7, http://www.ifam.edu.br/portal/images/file/0000029368-Acord+%C3%BAo%201233_2012_TCU-Plenario.pdf

TCU (2011), Identified Evaluation Systems in Sectoral Bodies, TCU Judgment 2781/2011– Plenary – Survey Report, TC-032.287/2010-0, http://www.tcu.gov.br/Consultas/Juris/Docs/judoc/Acord/20111031/AC_2781_43_11_P.doc

TCU (2010a), Performance Audit Manual, TCU: Brazilian Court of Audit, Brasilia, http://psc-intosai.org/data/files/21/60/D0/A0/81B07510C0EA0E65CA5818A8/tcu_performance_audit_manual.pdf

TCU (2010b), levantamento destinado a avaliar governança de tecnologia da informação no âmbito da administração pública federal [Evaluation of information technology in the Federal Government], Acórdão Nº 2308/2010, Processo TC 000.390/2010-0, http://www.ticontrole.gov.br/lumis/portal/file/fileDownload.jsp?fileId=8A8182A24D7BC0B4014D8C4BF55573F5

TCU (2008), Situation of Information Technology in Government, AcórdãoAC-1603-32/08-P, Processo 008.380/2007-1, https://contas.tcu.gov.br/juris/SvlHighLight?key=41434f5244414f2d434f4d504c45544f2d3430323639&sort=RELEVANCIA&ordem=DESC&bases=ACORDAO-COMPLETO;&highlight=&posicaoDocumento=0&numDocumento=1&totalDocumentos=1

TCU (2006), Evaluation of the Government’s Electronic Programme [Acórdão N.° 1386/2006 – Plenário], accessed February 2017, http://portal.tcu.gov.br/lumis/portal/file/fileDownload.jsp?inline=1&fileId=8A8182A14D92792C014D928268E24B27

Vaitsman, J. et al. (2013), Policy Analysis in Brazil, Policy Press at the University of Bristol, Bristol, https://doi.org/10.1080/13876988.2015.1110962.

Veja (2016), “Government scans social area to prevent fraud” (Governo faz varredura na área social para evitar fraudes), Veja, accessed online, January 2017, http://veja.abril.com.br/economia/governo-faz-varredura-na-area-social-para-evitar-fraudes/

World Bank (2006), Towards the Institutionalisation of Monitoring and Evaluation in Latin American and the Caribbean: Proceedings of a World Bank, Inter-American Development Bank Conference, Editors May E., et al., http://documents.worldbank.org/curated/en/524591468225577264/pdf/362230ENGLISH010monitoring01PUBLIC1.pdf

Zewo Foundation (2011), Outcome and Impact Assessment in International Development: Short presentation of the Zewo guidelines for projects and programmes, Stiftung Zewo, http://impact.zewo.ch/english/docs/Zewo_Wirkungsmessung_E_web.pdf

Notes

← 1. Noting Law 11,784, this report refers only to the performance evaluation of the entities of public administration, not to the evaluation of public servants.

← 2. Brazil’s external auditor and Supreme Audit Institution (SAI), the Tribunal de Contas da União (TCU), and other international bodies including the OECD have promoted improved capacity for generating and using high quality results and performance information.

← 3. With reference to Law 11,784 of September 22, 2008 (Government of Brazil, 2008b), and Decree 7,133 of 19 March 2010 (Chamber of Deputies, 2010).

← 4. The website of the MP lists the bimonthly reports and reports issued every quarter, for each year since 2004. They are available here: http://www.planejamento.gov.br/assuntos/orcamento/informacoes-orcamentarias/rel-de-avaliacao-fiscal-e-cumprimento-de-meta/relatorios-de-avaliacao-fiscal-e-cumprimento-de

← 5. Relevant to Law 11,784, Chapter II (capítulo II), on evaluation of performance, http://www.planalto.gov.br/ccivil_03/_Ato2007-2010/2008/Lei/L11784.htm; and to Decreto nº 7.133, de 19 de março de 2010, which outlines the general criteria and procedures to be followed for individual and institutional performance reviews and subsequent payment of bonuses, http://www.planalto.gov.br/ccivil_03/_Ato2007-2010/2010/Decreto/D7133.htm.