3. Developing a greater culture of evaluation of international instruments

The evaluation of international instruments can provide valuable information about their implementation and impacts. Ultimately, evaluation improves the international rulemaking process at several levels. It enhances the quality of international instruments and is a key tool to ensure that international instruments remain fit-for-purpose – that is, that they continue to meet their objectives and address the needs of constituencies. Evaluation can also help to promote the wider adoption of international instruments and to build trust in IOs and their practices.

“Evaluation” here does not refer to assessing the quality of the provisions of international instruments themselves, i.e. whether they set out clear and comprehensive rules. Rather, it refers to evaluating the effectiveness, use, implementation, or impacts of these instruments. Such an evaluation often involves the collection and analysis of data related to the policies that the instruments address; who is using them, why and how; the costs and benefits of using the instruments (intended or unintended); and the extent to which they achieve their objectives in practice. IOs collect a range of data in relation to their instruments, and the information generated reflects the nature of the instrument(s) concerned (see Chapter 1).

There is a broadening commitment amongst IOs to developing a greater culture of evaluation of international instruments, even though they generally find evaluation to be a challenging and resource-intensive activity. For example, while IOs may have the technical expertise and resources to conduct evaluations of their instruments, domestic constituents generally possess the detailed information regarding their implementation and impacts, as well as knowledge of their coherence with national regulatory frameworks (OECD, 2016[1]). IOs may also face methodological challenges, such as difficulties associated with measuring and isolating impacts.

Against this backdrop, this section of the IO Compendium aims to inform the evaluation practices of IOs by setting out the variety of available approaches, as well as their associated benefits and challenges. The evaluation practices considered in this section are carried out by IOs themselves (i.e. evaluations by other organisations of international instruments are not considered).1 The discussion is grounded in the existing practices of IOs, collected through the framework of the IO Partnership.

IOs conduct evaluations of their instruments for a variety of purposes. These include encouraging the implementation and use of their instruments, supporting advocacy initiatives, gauging the assistance needs of members, assessing levels of compliance, and feeding evaluation results into monitoring procedures (see Chapter 2).

Evaluation can also contribute to improvements in the design of international instruments by highlighting areas for updating. Overall, evaluation can help international rule-makers to take stock of the costs and benefits associated with their instruments (Parker and Kirkpatrick, 2012[2]). This can facilitate consideration of how these costs and benefits are distributed, whether there are differential impacts across members, and whether the benefits outweigh the costs.

The benefits of evaluation can be magnified when it is applied to the whole stock or a sub-set of instruments, and co-operation with other IOs active in relevant fields can enhance the outcomes where instruments complement one another (see Chapter 5).

Ex ante impact assessment can serve to clarify the objectives and purpose of international instruments before the rulemaking process commences, supporting efficiency as well as effectiveness. It also encourages rule-makers to examine the variety of potential pathways for action – including the possibility of inaction – in advance of the adoption of instruments (OECD, 2020[3]). In addition, ex ante impact assessment facilitates the systematic consideration of potential negative effects and costs in advance of adoption, which can support their mitigation (OECD, 2020[3]).

Figure 3.1 and Table 3.1 enumerate the existing mechanisms to evaluate international instruments, building on the typology outlined in the Brochure (OECD, 2019[4]) and the responses to the 2018 Survey. The typology was developed to represent the evaluation practices undertaken by the IOs that are part of the IO Partnership. It is therefore illustrative only and does not necessarily provide a complete picture of all possible kinds of evaluation practices that IOs could use.

This section contains key principles that may support IOs in enhancing their evaluation practices. These principles are derived from the experiences of a wide range of IOs. They also build on the OECD Best Practice Principles for Regulatory Impact Assessment (OECD, 2020[3]) and Reviewing the Stock of Regulation (OECD, 2020[5]), which synthesise domestic experiences in the evaluation of laws and regulations. Given the diversity of IOs and the kinds of normative instruments they develop, not all of the principles below will be relevant or practical for every IO. Nevertheless, they can provide useful guidance and inspiration for IOs wishing to develop a greater culture of evaluation.

Institutionalising the systematic evaluation of normative instruments developed by IOs is an important step towards ensuring their continued relevance. The level of formality of such institutionalisation can vary – for example, ‘institutionalising’ an evaluation commitment could mean including evaluation practices in IOs’ rules of procedure, or creating a dedicated unit tasked with carrying out evaluations. Either way, it demonstrates the commitment of IOs to the continual improvement of their instruments and to ensuring that they remain fit-for-purpose.

When evaluation processes are clearly prioritised, defined and accessible (including who has the responsibility for overseeing and carrying out these processes), this can help embed evaluation into everyday organisational culture and practice.

IOs find it challenging to evaluate their instruments for a number of different reasons (see Section 5). Indeed, it may not even be possible to effectively evaluate every kind of IO instrument. For a type of instrument that has not been evaluated before and for which no evaluation best practice can easily be identified, it makes sense to first assess the “evaluability” of the instrument – looking at its objectives and considering how it is implemented and by whom, and whether evaluation will be possible and/or useful. In most cases, the answer will be “yes”, but the scope and breadth of evaluation may differ.

The first level of evaluation for international instruments is to evaluate their use or implementation – who is implementing the instrument, why, where, when and how (see Chapter 2). This is particularly relevant for non-binding instruments.

Because IOs often do not have oversight of the implementation of their instruments, especially for non-binding instruments, evaluation of use is not always straightforward and it can be hard to collect complete data. Nevertheless, even incomplete data can provide extremely useful information that can lead to international instruments being revised – or withdrawn – and help IOs better target and design the support they provide to encourage implementation of their instruments (see Chapter 2). Evaluation of use is also possible for international instruments that were not developed with clearly measurable objectives.

The next level of evaluation for international instruments is to evaluate their impacts. This is a much more complex undertaking than evaluation of use: IOs face major challenges related to the availability of data; the difficulty of establishing causality (e.g. how much of the observed results can be attributed to the IO instrument in question versus other factors, such as the enabling environment and complementary actions by other actors); and the fact that normative work can take a long time to have an impact.2 Because of these and other challenges, IOs generally have less experience conducting impact evaluation. Nonetheless, many IOs are conscious of this fact and are actively looking at how they can successfully move from evaluating use to also evaluating impact. A wide range of effective practices and methodologies for evaluating impact exists, and some IOs have prepared guidance documents to help others perform effective impact evaluation.3 IOs could also explore collaboration with other stakeholders, such as academia or NGOs, if the required technical expertise for impact evaluation is not available in-house (see Chapter 4).

A culture of evaluation cannot be created from scratch overnight. Defining the scale and objectives of evaluations less ambitiously at the outset can allow IOs to use intermediate outcomes to build confidence in evaluation processes, eventually leading to a greater willingness to go further. Even small amounts of data and limited results from smaller-scale evaluations can demonstrate valuable impact and be important in influencing more actors to implement international instruments (see Chapter 2).

Developing guidance documents aimed at those responsible for planning or undertaking the evaluation will help to harmonise practices and set expectations for the IO and its stakeholders. A common approach is especially important if evaluations are carried out in a decentralised manner (for example, not led by the IO secretariat, but conducted by members or by external consultants).

Guidance could address elements such as:

  • Objective setting: how to set objectives that are practical and viable and establish clear and measurable evaluation criteria.

  • Selection of people to undertake the evaluation: outline the criteria/qualifications needed for undertaking the evaluation in question.

  • Evaluation costs: ensure the costs involved in the process of evaluation are proportionate to the expected impacts of the international instrument.

  • Benchmarking: when possible, consider benchmarking comparisons across jurisdictions.

  • Stakeholder engagement: ensure inclusive and effective consultation with relevant stakeholders (those affected or likely to be affected).

  • Use of technology: consider how digital technologies can be used to increase the efficiency of evaluation processes and to collect or analyse data.

  • Use of data: make use of all available sources of information, and consider including less traditional ones such as open-source data, satellite data, mobile phone data and social media.

  • Confidentiality, impartiality and independence: think about how to reflect these qualities at each stage of the evaluation process.

When international instruments have clearly measurable objectives, these serve as helpful criteria for the evaluation. However, when measuring against such objectives is not feasible, or would lead to an incomplete understanding of the instrument, it becomes important to provide qualitative descriptions of those impacts that are difficult or impossible to quantify, such as equity or fairness. Depending on the nature of the instrument and the level of evaluation foreseen, objectives might be specific to one instrument, or could apply to a set of instruments or a type/class of instruments. Alternatively, the objectives for the instrument may be set or modified by the State or organisation implementing it, according to local circumstances.

Using and documenting a rigorous process to establish objectives for international instruments – involving, for example, data collection, research and consultation with stakeholders likely to be affected by the implementation of the instrument – can help ensure objectives are coherent across different instruments of the same IO. It can also contribute to making the objective-setting process more transparent, potentially increasing the acceptance of and confidence in both the instruments themselves and the evaluation practices that later rely on these objectives.

The establishment of objectives needs to be part of the development process of IO normative instruments (see Chapter 1). Where possible, the process of objective-setting should be embedded within the larger practice of ex ante objective setting and impact assessment (OECD, 2020[3]).4

Before developing an instrument, IOs can typically consider alternative options for addressing the objectives that have been established, including the effects of inaction. They should collect the available evidence and solicit scientific expertise and stakeholder input in order to assess all potential costs and benefits (both direct and indirect) of implementing the proposed instrument. The results of this assessment can help to improve the design of the proposed instrument; communicating them openly (where possible) can increase trust and stakeholder buy-in, both in the international instruments themselves and in the IO’s evaluation culture more broadly.5

IOs should consider evaluating their instruments on more than an individual basis. Evaluating a sub-set of instruments or the whole stock of international instruments can introduce greater strategic direction into the practices of IOs by providing a detailed overview of the range of instruments applied and lessons on which instruments work better than others (OECD, 2020[5]).

IOs can begin by analysing sets of instruments within a given sector, policy area, or initiative, and gradually expand to wider ranges of instruments. This will allow them to identify gaps in their portfolios where new international instruments may be needed, and to address overlaps or duplication between existing instruments.

The open availability of information about evaluation processes and transparent dissemination of evaluation results are important to build trust and demonstrate that a given IO has a sound culture of evaluation and of accountability for its instruments.

Consultation of key stakeholders at each stage of the evaluation process greatly contributes to transparency and can also increase the credibility of evaluation results (see Chapter 4). Sharing the draft conclusions of evaluation exercises for comment may help the evaluating body to strengthen its evidence base.

Evaluation reports should be made available as broadly as possible, including within the IO, to IO members and possibly even the broader public (unless, for example, there are issues related to the protection or confidentiality of stakeholders). Establishing a repository of past evaluation results (for example on the IO website) offers a means to achieve this. Providing copies of evaluation reports directly to stakeholders who contributed to the evaluation process is also good practice (UNEG, 2014[6]).

Not only should the results of evaluations be used; the IO should also be able to show how they have been used by the organisation, its governing bodies, its members or other stakeholders to:

  • Improve international instruments and/or their implementation, including closing regulatory gaps in the stock of instruments (see Chapter 1).

  • Identify follow-up actions related to other IO practices and items that need to be fed into the next cycle of IO decision-making.

  • Identify lessons learned that can improve the evaluation process itself (e.g. improving guidance documents, objective-setting processes or communication of results).

  • Advocate the value of international instruments (see Chapter 2).

In comparison with the other practices described in this Compendium, evaluation is not as frequently used by IOs. Nevertheless, more and more IOs are taking up evaluation practices. In the 2018 Survey of IOs conducted by the OECD, the great majority of IOs (28 out of 36) reported having adopted some form of evaluation mechanism. Of these 28, 14 IOs reported having a systematic requirement to conduct evaluation. Only 8 IOs reported having no evaluation practices at all (Figure 3.2).

Looking at the different categories of IOs,6 it becomes clear that evaluation is most frequently conducted by IGOs with smaller, closed memberships or by secretariats of conventions. This is likely to be a function of the formality of the instruments used (secretariats of conventions) and the practicality of conducting evaluations with smaller memberships (“closed” IGOs).

When engaging in evaluation, IOs most frequently focus on evaluating the use or implementation of international instruments, as opposed to their impacts (OECD, 2019[4]; OECD, 2016[1]). For example, a number of IOs, including OIML and ISO, report some form of periodic review of the use or implementation of their instruments to decide whether these should be confirmed, revised, or withdrawn. One example of an IO that does evaluate impacts is the Secretariat of the Convention on Biological Diversity (CBD), which reported conducting mandatory reviews of the effectiveness of its instruments (the protocols). Despite the relatively low uptake of the practice, IOs and their constituencies nevertheless acknowledge the need to review the impacts of instruments in order to assess their continued relevance and/or the need for their revision. This was a clear takeaway from the second meeting of international organisations7 and is reflected in the results of the 2018 Survey.

While all categories of instruments are evaluated, the results of the 2018 Survey show that instruments qualified as “standards” by the IOs are the type of instrument most frequently reviewed. International technical standards more specifically (e.g. ASTM International, IEC, ISO) undergo regular evaluation with the aim of ensuring quality, market relevance and reflection of the current state of the art. Often, evaluations of international technical standards take place on a systematic basis and with a set frequency (e.g. at least every five years). Although less frequent, there are various examples of other types of instruments being evaluated, including conventions (e.g. the evaluation of all six Culture Conventions by UNESCO) and even voluntary instruments (e.g. MOUs).

According to the 2018 Survey, half of those IOs that reported having adopted some form of evaluation mechanism have made evaluation a systematic requirement for their instruments. The other half have no such general requirement, i.e. only a subset of their instruments is subject to evaluation, or evaluations are carried out only on an ad hoc basis. There are different ways to embed evaluation requirements, including as clauses of specific instruments themselves, or in broader rules of procedure, guidelines, or terms of reference (see Chapter 1).

Whether or not there is an obligation to take action in response to the evaluation of international instruments often depends on the outcome of the evaluation itself. For example, if an instrument is ‘confirmed’, no action may be required, whereas if the evaluation results in a proposal to ‘revise’ or ‘withdraw’ the instrument, further action will be necessary. In some cases, IOs may only recommend action rather than impose it on their members.

The governance of evaluation processes is generally a shared responsibility between the IO Secretariat and members. Survey responses indicate that this shared responsibility is systematic for some IOs, while for others it is decided for each instrument on an ad hoc basis.

As for the entity in charge of the evaluation, in some IOs the technical committees responsible for the development of the instrument are also in charge of the evaluation. This is mainly the case for standardisation organisations. Other IOs have a permanent standing body or unit dedicated to the evaluation of instruments, such as a governance or global policy unit, or the department that developed the instrument. Less frequent forms of evaluation governance include ad hoc working groups, and governing boards or presidential councils that assume evaluation responsibilities. Only in very few cases is an external body contracted to conduct the evaluation.

There is a notable tendency for IOs to evaluate their instruments ex post rather than ex ante (Figure 3.2). In the 2018 Survey, only 12 IOs indicated that they conduct ex ante impact assessment and, of these, most do not do so on a regular basis (only three IOs reported that they always perform an ex ante assessment, and two reported that they do so frequently). When ex post evaluation is carried out, it is generally for a single instrument rather than for the overall stock (OECD, 2019[4]). Many ex post review processes of IOs are time-bound, with provisions often mandating a review process five years after implementation.

Among those international organisations which routinely conduct evaluation, the analysis of individual instruments represents the most common type. According to the 2018 Survey results, two in three IOs evaluate the use, implementation or impact of single instruments. Within this group, international private standard-setting organisations stand out for the consistency with which evaluation is applied, its scope and format, and its embeddedness within the development of instruments (Box 3.4, see Chapter 1). The uniformity of this practice illustrates a mutual learning process across IOs, and opens new spaces for co-ordination (see Chapter 5). Beyond the OECD, these forms of evaluation are not replicated by IGOs. However, there is no necessary institutional reason for this, and IGOs could consider integrating similar minimum review requirements into their instruments.

The evaluation of individual instruments also occasionally extends to the core and/or founding instruments of an international organisation. The rationale for undertaking such an evaluation is clear: as these instruments frame, establish the basis for, and encompass an expansive range of the international rulemaking activities undertaken by the IO in question, knowledge that they are fit-for-purpose is vital to lending credibility and legitimacy to the organisation, demonstrating the effectiveness of its efforts, and making the case for the wider adoption of its instruments. In addition to the practices included in Box 3.5, this form of evaluation would equally apply to the ILO SRM (Box 3.1), the OECD SSR (Box 3.6), IUCN’s review of Resolutions, UNESCO’s evaluation of its Culture Conventions (Box 3.2), and the standards review procedures outlined in Box 3.3.

Some IOs have demonstrated an organisation-wide ambition to assess the effects and relevance of their instruments and embark on broader evaluation efforts. The 2015 and 2018 IO Surveys showed that stock reviews were not as frequent as ex post evaluations of individual instruments. Nevertheless, in recent years, some initiatives have emerged, with several organisations recently conducting broad reviews of their sets of instruments (Box 3.6).

There is no common or frequently used methodology employed by IOs to conduct evaluations. Methodologies to evaluate impacts remain nascent and specific to each IO, and often vary according to the different kinds of instruments within the IO (Box 3.7) (OECD, 2016[1]).

Of the IOs responding to the 2018 Survey, 13 reported having written guidance on evaluation. Some IOs have developed their own internal guidelines or evaluation policy, while others reported using the Handbook of the United Nations Evaluation Group (UNEG) – an inter-agency group that brings together the evaluation units of the UN system (UNEG, 2014[6]). Other sources of written guidance on evaluation include the UNEG document library (which now includes guidelines for evaluation under COVID-19) and the Inspection and Evaluation Manual of the United Nations Office of Internal Oversight Services Inspection and Evaluation Division (OIOS-IED) (UNEG, 2021[29]; UNEG, 2020[30]; OIOS-IED, 2014[31]).

In the exceptional cases in which IOs conduct an ex ante assessment before developing an international instrument, this may take the form, for example, of a list of questions and factors systematically put to the rulemaking body. The results can be submitted for members’ approval prior to embarking on the rulemaking process.

The evaluation of international instruments is still far from systematic across IOs, largely because they face a number of common challenges. For one thing, the subject matter – normative activity itself – is recognised as extremely difficult to evaluate. Moreover, there are challenges related to resources, co-operation with constituents, organisational culture and the ability to use the results of evaluation, as well as specific challenges regarding the evaluation of impact and the type and age of the instrument in question.

One of the most prevalent challenges is the resource intensiveness of evaluations. This applies across IOs, but can be particularly acute for IOs with smaller secretariats and limited resources. Resource intensiveness is a challenge in terms of both quantitative resources (time and money) and qualitative resources (expertise). Expertise can be very expensive to obtain if it is not available in-house, and a cost-benefit analysis may be needed to determine whether the evaluation is justified (OECD, 2016[1]).

Another set of difficulties in evaluating instruments stems from the respective roles of IOs and their constituencies. IOs typically need a mandate from their members to engage in evaluation activity, but obtaining one is not necessarily straightforward. This arises from both the dynamics between governing bodies and members and the heterogeneity of members’ interests. Firstly, the relevant governing bodies of the instrument need to have a key role in the evaluation, particularly to ensure follow-up to the recommendations. Nevertheless, it can be difficult to achieve consensus among governing bodies and IO members on precisely what should be evaluated, the depth of the evaluation, and the development of specific recommendations. Secondly, the heterogeneity of IO members can mean that they have very different needs and capabilities and, consequently, very different objectives. For example, the engagement of different Member States and Associates in the CIPM Mutual Recognition Arrangement (Box 3.5) varies significantly due to the different needs of their economies and highly divergent scientific and technical capabilities. Countries with advanced metrology systems prefer to focus on higher-level capabilities, while economies with emerging metrology systems focus on the provision of more basic services. These challenges can sometimes be overcome by conducting broad-based consultations and using an iterative process, leading to a consensus built around broad, common objectives.

The increased evaluation of international instruments also faces challenges of organisational culture. IOs may be reluctant to ‘lift the lid’ on instruments for fear that the results of an evaluation will not be positive and that negative findings will then need to be published. This is a sensitive issue, but it can be turned into a positive message on the value of evaluation if there is better awareness of the benefits that evaluation can bring to both IOs and their members, and particularly if IOs demonstrate improvements in the quality and effectiveness of their instruments following an initially disappointing or negative assessment. To help minimise these challenges, IOs have much to gain from getting the messaging around evaluation right. This involves promoting evaluation as an assessment of the effectiveness of instruments, as opposed to a review and comparison of their members’ performance in relation to those instruments.

IOs can also face challenges when it comes to using the results of evaluations. Even if the evaluation itself can be conducted, this does not necessarily mean its benefits will be fully realised. For example, if new technologies are needed in order to implement recommendations following an evaluation, the appropriate infrastructure or necessary resources may not be available.

Regarding the evaluation of impact, there are significant methodological difficulties associated with measuring and assessing the effects of international normative activity, given the potentially diffuse scope of application and the problem of establishing causality (attributing specific effects to international instruments). International instruments often lack assessment measures that capture both quantitative and qualitative data, limiting understanding of the full breadth and complexity of their achievements (or the reasons for their absence). Gathering the data required to evaluate impact can be particularly difficult for IO Secretariats because this information is mostly held by their members. Even where there is willingness to share this information, there may be practical impediments.

Considering the different types of instruments, evaluation can be more challenging for voluntary instruments than for those which are mandatory (see Chapter 1). Voluntary instruments tend to be more flexible and there may be little homogeneity in terms of how they are implemented.

Depending on the age of the instrument, it can also be challenging for an evaluation to account for both implementation and impact. This is particularly relevant for recent instruments, which may still be working towards adequate levels of ratification. To address this, evaluations need to consider the maturity of the instrument(s) in question and set realistic objectives from the outset.

Although the evaluation of international instruments remains relatively scarce, the increase in evaluation efforts is progressively improving the knowledge base and understanding of the implementation and impacts of international instruments. With this growing experience and the emergence of new information technology tools, IOs can unlock new opportunities to gather and process large quantities of data and information about international instruments and share it more fluidly among interested parties – whether between countries and IO secretariats or among IOs themselves – to leverage common information sources (see Chapter 5).

National regulators gather important information via domestic ex ante and ex post evaluation, which can fill the information gaps faced by IO secretariats on the impacts of their instruments. A 2017 survey of the OECD Regulatory Policy Committee (RPC) indicates that a third of member countries review the implementation of international instruments to which they adhere. Of those, six share the results of these evaluations with the relevant IOs (OECD, 2018[36]). In the 2018 IO Survey, 12 IOs indicated that they occasionally take into consideration the results of national evaluation of international instruments transposed into domestic legislation.

References

[13] ASTM International (2020), ASTM Regulations Governing Technical Committees (Green Book), https://www.astm.org/Regulations.html#s10.

[18] BIPM (2016), Recommendations from the Working Group on the Implementation and Operation of the CIPM MRA, https://www.bipm.org/utils/common/documents/CIPM-MRA-review/Recommendations-from-the-WG.pdf.

[24] CITES (2011), Resolution Conf. 14.8 - Periodic Review of Species included in Appendices I and II, https://cites.org/sites/default/files/document/E-Res-14-08-R17_0.pdf.

[10] European Commission (2016), Inter-Institutional Agreement on Better Law-Making in the European Union, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016Q0512%2801%29.

[34] FAO (2020), FAOLEX Database, http://www.fao.org/faolex/en/.

[9] ILAC (2019), Structure of the ILAC Mutual Recognition Arrangement and Procedure for Expansion of the Scope of the ILAC Arrangement (ILAC - R6:05/2019), https://ilac.org/?ddownload=122555.

[19] ILAC (2015), ILAC Mutual Recognition Agreement (ILAC MRA), https://ilac.org/ilac-mra-and-signatories/.

[8] ILO (2011), Standards Review Mechanism (SRM), https://www.ilo.org/global/about-the-ilo/how-the-ilo-works/departments-and-offices/jur/legal-instruments/WCMS_712639/lang--en/index.htm.

[15] ISO (2019), Guidance on the Systematic Review Process, https://www.iso.org/files/live/sites/isoorg/files/store/en/PUB100413.pdf.

[26] ISO (2018), ISO 19011:2018 Guidelines for Auditing Management Systems, https://www.iso.org/standard/70017.html.

[27] ISO (1986), ISO Technical Management Board (ISO/TMB), https://www.iso.org/committee/4882545.html/.

[14] ISO/IEC (2018), ISO/IEC Directives Part I + IECD Supplement, https://www.iec.ch/members_experts/refdocs/iec/isoiecdir1-consolidatedIECsup%7Bed14.0%7Den.pdf.

[25] ISO/IEC (2017), ISO/IEC 17011:2017 Conformity Assessment — Requirements for Accreditation Bodies Accrediting Conformity Assessment Bodies, https://www.iso.org/standard/67198.html.

[21] IUCN (2018), Impact of IUCN Resolutions on International Conservation Efforts, https://portals.iucn.org/library/node/47226.

[17] OECD (2021), International Regulatory Cooperation and International Organisations: The Case of ASTM International, OECD Publishing.

[5] OECD (2020), OECD Best Practice Principles for Reviewing the Stock of Regulation, OECD Publishing, Paris, https://www.oecd.org/gov/regulatory-policy/reviewing-the-stock-of-regulation-1a8f33bc-en.htm (accessed on 25 March 2021).

[35] OECD (2020), OECD Study on the World Organisation for Animal Health (OIE) Observatory: Strengthening the Implementation of International Standards, OECD Publishing, Paris, https://dx.doi.org/10.1787/c88edbcd-en.

[3] OECD (2020), Regulatory Impact Assessment, OECD Best Practice Principles for Regulatory Policy, OECD Publishing, Paris, https://dx.doi.org/10.1787/7a9638cb-en.

[12] OECD (2019), Better Regulation Practices across the European Union, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264311732-en.

[22] OECD (2019), OECD Standard-Setting Review (SSR), OECD Publishing, Paris, https://one.oecd.org/document/C/MIN(2019)13/en.

[4] OECD (2019), The Contribution of International Organisations to a Rule-Based International System: Key Results from the Partnership of International Organisations for Effective Rulemaking, https://www.oecd.org/gov/regulatory-policy/IO-Rule-Based%20System.pdf.

[36] OECD (2018), OECD Regulatory Policy Outlook 2018, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264303072-en.

[1] OECD (2016), International Regulatory Co-operation: The Role of International Organisations in Fostering Better Rules of Globalisation, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264244047-en.

[7] OECD (2014), International Regulatory Co-operation and International Organisations: the Cases of the OECD and the IMO, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264225756-en.

[16] OECD/OIML (2016), International Regulatory Co-operation and International Organisations: The Case of the International Organization of Legal Metrology (OIML).

[11] OIML (1955), Convention Establishing an International Organisation in Legal Metrology, https://www.oiml.org/en/files/pdf_b/b001-e68.pdf.

[31] OIOS-IED (2014), Inspection and Evaluation Manual, https://oios.un.org/sites/oios.un.org/files/images/oios-ied_manual.pdf (accessed on 26 March 2021).

[28] OTIF (2019), Explanatory Note on the Draft Decision on the Monitoring and Assessment of Legal Instruments, http://otif.org/fileadmin/new/2-Activities/2G-WGLE/2Ga_WorkingDocWGLE/2020/LAW-19054-GTEJ2-fde-Explanatory-Notes-on-the-Draft-Decision-text-as-endorsed.pdf.

[2] Parker, D. and C. Kirkpatrick (2012), The Economic Impact of Regulatory Policy: A Literature Review of Quantitative Evidence, OECD Publishing, Paris, http://www.oecd.org/regreform/measuringperformance (accessed on 21 October 2020).

[29] UNEG (2021), UNEG Document Library, http://uneval.org/document/library (accessed on 26 March 2021).

[30] UNEG (2020), Detail of Synthesis of Guidelines for UN Evaluation under COVID-19, http://uneval.org/document/detail/2863 (accessed on 26 March 2021).

[6] UNEG (2014), UNEG Handbook for Conducting Evaluations of Normative Work in the UN System, http://www.uneval.org/document/detail/1484 (accessed on 4 December 2020).

[33] UNEG (2013), UNEG Handbook for Conducting Evaluations of Normative Work in the UN System, http://www.uneval.org/document/detail/1484.

[20] UNESCO (2019), Evaluation of UNESCO’s Standard-Setting Work in the Culture Sector, https://unesdoc.unesco.org/ark:/48223/pf0000223095.

[32] UNESCO (2013), Evaluation of UNESCO’s Standard-setting Work of the Culture Sector, Part I: 2003 Convention for the Safeguarding of the Intangible Cultural Heritage, https://unesdoc.unesco.org/ark:/48223/pf0000223095.

[23] WCO (2020), WCO Performance Measurement Mechanism (PMM), http://www.wcoomd.org/zh-cn/wco-working-bodies/capacity_building/working-group-on-performance-measurement.aspx.

Notes

← 1. Some IOs may distinguish between ‘internal evaluations’ (where the evaluation is carried out by a dedicated, independent evaluation unit that is part of the IO) and ‘self-evaluations’ (where the evaluation is carried out directly by the unit responsible for the instrument for their own purposes). In both cases, external consultants/specialists may be contracted to assist, but the evaluation is still driven internally by the IO. This chapter does not make this distinction between ‘internal’ and ‘self-evaluation’ – both are relevant here.

← 2. UNEG Handbook for conducting evaluations of normative work in the UN system (2014) www.uneval.org/document/detail/1484.

← 3. For example, the OECD DAC Better Criteria for Better Evaluation (2019) www.oecd.org/dac/evaluation/revised-evaluation-criteria-dec-2019.pdf, the UNEG Handbook for conducting evaluations of normative work in the UN system (2014) and the European Commission’s Guidelines on evaluation, https://ec.europa.eu/info/sites/info/files/better-regulation-guidelines-evaluation-fitness-checks.pdf.

← 4. According to the OECD Regulatory Policy Outlook (2018), ex ante regulatory impact assessment refers to the “systematic process of identification and quantification of the benefits and costs likely to flow from regulatory and non-regulatory options for a policy under consideration”.

← 5. For more information on best practice in impact assessment see: the OECD’s publication on Regulatory Impact Assessment (2020) www.oecd.org/gov/regulatory-policy/regulatory-impact-assessment-7a9638cb-en.htm and the European Commission’s Guidelines on Impact Assessment https://ec.europa.eu/info/sites/info/files/better-regulation-guidelines-impact-assessment.pdf.

← 6. See Glossary section.

← 7. http://www.oecd.org/regreform/regulatory-policy/IO-Meeting-Agenda-17-april-2015.pdf.
