Annex A. Explanation of the template for the 50 evaluation profiles

This Annex sets out the rationale underpinning the selection of factors considered important in reaching a balanced conclusion on the effectiveness of an SME and entrepreneurship policy or programme. These factors are captured in the rows of each of the 50 evaluations in Annex B and described below. Where a factor uses a “scoring system”, this is also set out. Finally, where the information used was not available in the published documents and had to be obtained from those conducting the evaluation, this is also noted.

  • DATES: This specifies the years in which the programme operated. Some programmes operate for many years, whereas others have only a short life. It might be expected that longer-lived programmes will be both more likely to be evaluated and more likely to be found successful¹, but this has yet to be clearly demonstrated.

    The specific years in which a programme operated may also influence outcomes. For example, new firms started in a recession show poorer performance than those beginning in prosperous times, with this underperformance persisting for up to a decade². Public programmes to promote start-ups might therefore be expected to have different impacts under different macro-economic conditions.

  • OBJECTIVES: The specification of objectives prior to the start of the programme is a key recommendation from Part I, which emphasised that Objectives and Targets should be set out in a format that enables them to be evaluated. Only then can a reliable judgement be reached on whether the policy was successful. These Objectives should be specified when the policy is formally announced.

    When ranking objective-setting, we used a scale from 1 to 3. We assigned a score of 1 when the programme had only general objectives, 2 when the programme had selected indicators close to its objectives, and 3 when the programme also had specific milestones and target values. Since this information was infrequently documented in the published evaluation reports, it had to be obtained from those who had conducted the evaluation.

  • TOPIC: A key choice facing policymakers is between different forms of intervention. They have to decide the policy funding priorities and the appropriate policies to deliver those priorities. In theory, if SME and entrepreneurship policy were delivered efficiently, the marginal impact – say, in terms of cost per job created – of each policy instrument would be identical. So, for example, loan guarantee programmes and business advice programmes would be equally effective per unit of expenditure.

    Eight SME and entrepreneurship policy groupings are used within this Framework. A key role for evaluation is to offer insights into the relative cost-effectiveness of both the policy groupings and the individual policies within them. Where this impact varies widely, there is a case for transferring funding from the less cost-effective to the more cost-effective policies, as the illustrative sketch below shows.
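
    The comparison can be made concrete with a minimal sketch in Python. The programme names, expenditure figures and job counts below are invented for illustration and are not drawn from the profiles in Annex B.

        # Hypothetical figures (in a common currency unit) for three
        # stylised programmes; not data from Annex B.
        programmes = {
            "loan guarantees": {"expenditure": 40_000_000, "jobs_created": 2_000},
            "business advice": {"expenditure": 15_000_000, "jobs_created": 1_200},
            "start-up grants": {"expenditure": 25_000_000, "jobs_created": 800},
        }

        # Rank programmes by cost per job created, the marginal-impact
        # comparison described above.
        by_cost = sorted(programmes.items(),
                         key=lambda kv: kv[1]["expenditure"] / kv[1]["jobs_created"])
        for name, p in by_cost:
            cost_per_job = p["expenditure"] / p["jobs_created"]
            print(f"{name}: {cost_per_job:,.0f} per job created")

    Under the theoretical efficiency condition described above, the three figures would be equal; where they diverge, the case is for shifting funding towards the instrument that creates jobs most cheaply.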

  • TARGET GROUPS: Most policies focus either on specific groups of individuals – such as the unemployed or the disadvantaged – or on specific types of firms, such as new enterprises or those seeking to export. It is therefore important to determine the relative effectiveness of people-based, compared with firm-based, programmes, as well as of policies selecting certain types of enterprises such as new start-ups, innovative SMEs or scale-up firms. For this reason, where target groups are specified, this is noted.

  • SOURCE OF EVIDENCE: This shows the source document from which information on each of the 50 evaluations was derived. These are primarily government reports or articles in academic journals.

  • REGIONAL/LOCAL FOCUS: Access to programmes frequently varies by location. While some programmes are delivered nationally, others have a restricted regional or even local focus. This distinction, as shown in Part I, is important since the comparative effectiveness of national, regional and local delivery mechanisms can vary. Evaluation can thus provide insights that help policymakers choose how best to deliver policy, depending on the focus of the programme.

  • IMPACT VARIABLES: This specifies the business performance variables that the programme is expected to enhance. Most frequently these include sales and employment, but they differ depending on the focus of the programme. As emphasised in Part I, these impact variables should be specified in advance of the operation of the programme.

  • SURVIVAL: The high rate of closure of new firms in particular³, but also of smaller SMEs, means that a failure to take full account of firm exits biases evaluation findings in favour of survivors, which, by definition, are more successful than those that have exited; the stylised sketch below illustrates how large this bias can be. This emphasises the importance of tracking panels of both recipients and “controls” over time, so as to identify the survivors and non-survivors in both groups. This is a vitally important element of a successful evaluation and a key element of our overall summary Evaluation Quality Score (EQS), discussed below.
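
    As a stylised illustration of this survivorship bias, the Python sketch below uses invented employment-growth figures, with each exit recorded as a 100% employment loss. Restricting the calculation to survivors turns a strongly negative average into a positive one.

        # Invented employment growth (%) for a panel of supported firms;
        # None marks firms that exited before the follow-up survey.
        treated_growth = [12.0, 8.5, None, 20.0, None, 5.0, -3.0, None]

        survivors = [g for g in treated_growth if g is not None]
        # Count each exit as a 100% employment loss for that firm.
        full_panel = [g if g is not None else -100.0 for g in treated_growth]

        print(f"Survivors only: {sum(survivors) / len(survivors):+.1f}%")   # +8.5%
        print(f"Full panel:     {sum(full_panel) / len(full_panel):+.1f}%") # -32.2%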

  • DATA SOURCES: This sets out the original sources of data used to conduct the evaluation. As emphasised in Part I, the data should be representative of participants and of a control group of otherwise similar non-participants.

  • STEP LEVEL AND EVALUATION QUALITY SCORE (EQS): The current review selects only those evaluations using advanced analytical methods. For each, it provides a Six Steps classification, as described in Part I. Almost 90% of the included evaluations achieved the highest possible classification of Step VI. In contrast, only 6 of the 41 evaluations reported in OECD (2008) reached Step VI.

    To reflect this improvement in evaluation reliability since 2008, a new and considerably more challenging measure has been developed: the Evaluation Quality Score (EQS). This is our own five-point classification, ranging from 1 (lowest) to 5 (best).

    Rank 1 applies when the evaluation was based only on a limited sample, evaluation methods were very basic and/or not implemented properly, impact variables did not match programme objectives and survival analysis was missing.

    Rank 2 applies when the evaluation was based only on a limited sample, evaluation methods, although basic, were appropriately implemented, but impact variables did not match programme objectives and survival analysis was missing.

    Rank 3 applies when the evaluation was based on an adequate and representative sample and evaluation methods were appropriately implemented, but impact variables did not match programme objectives and survival analysis was missing.

    Rank 4 applies when the evaluation was based on an adequate and representative sample, evaluation methods were appropriately implemented and impact variables matched programme objectives, but survival analysis was missing.

    Rank 5 applies when the evaluation was based on an adequate and representative sample, evaluation methods were appropriately implemented, impact variables matched programme objectives, and survival analysis was included. The ranks are thus cumulative, each adding one criterion to the rank below (see the sketch after these descriptions). A glossary of evaluation methods is provided in Annex D.
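
    A minimal Python sketch of that cumulative reading follows. The function name and boolean criteria are our own shorthand, not part of the Framework, and “methods implemented properly” stands in for the distinction between Rank 1 and Rank 2.

        def eqs_rank(methods_ok: bool, sample_ok: bool,
                     impact_match: bool, survival_included: bool) -> int:
            """Return the 1-5 EQS rank implied by the cumulative criteria:
            properly implemented methods, an adequate and representative
            sample, impact variables matching objectives, and survival
            analysis."""
            rank = 1
            for criterion_met in (methods_ok, sample_ok,
                                  impact_match, survival_included):
                if not criterion_met:
                    break
                rank += 1
            return rank

        # Basic but properly implemented methods on a limited sample -> Rank 2
        print(eqs_rank(True, False, False, False))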

  • RELIABILITY COMMENTS: In some cases we have reservations over specific aspects of the evaluations, for example where control groups are used but may not have been ideally selected; the control comprises those that have not received the public support (Khandker, Koolwal and Samad, 2010[1]). A valid control group should consist of a comparable group of firms/individuals with “otherwise similar” characteristics and status to the treated group; the sketch below illustrates one simple way of selecting such a group. Some studies use rejected applicants for a programme as the control but, if those making the accept/reject decision are able to forecast success, the rejected group cannot be considered “otherwise similar” to those accepted.
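
    One simple way of approximating “otherwise similar” is nearest-neighbour matching on observable characteristics. The Python sketch below is our own illustration with invented firm records; real evaluations match on far richer data (sector, region, pre-support performance) or use propensity scores.

        # Invented records: (firm_id, employees, firm_age_years).
        treated = [("T1", 12, 3), ("T2", 45, 8)]
        candidates = [("C1", 10, 2), ("C2", 50, 9), ("C3", 200, 25), ("C4", 15, 4)]

        def distance(a, b):
            # Euclidean distance on size and age only, for illustration.
            return ((a[1] - b[1]) ** 2 + (a[2] - b[2]) ** 2) ** 0.5

        # Pair each treated firm with its most similar non-supported firm.
        for t in treated:
            match = min(candidates, key=lambda c: distance(t, c))
            print(f"treated {t[0]} matched with control {match[0]}")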

  • KEY FINDINGS: This provides a brief synthesis of the findings of the evaluation. It distinguishes between evaluations pointing to a statistically significant positive effect on a specified metric, those finding no statistically significant effect, and those finding a statistically significant negative effect – the reverse of what was intended. In many cases programmes are evaluated on several metrics, and so it is important to distinguish the metrics where the findings are positive from those where the impact is either zero or negative.

  • PROGRAMME EXPENDITURE: The inclusion of expenditure potentially enables a comparison to be made between the impact of large and small programmes. Reflecting the discussion of evaluation topics above, this would ideally enable cost per job created to be compared across programmes, as sketched under TOPIC above, facilitating a policy discussion on funding priorities.

  • MACRO IMPACT: In addition to those benefitting directly from a programme, there are frequently other groups who either benefit or lose out⁴. Some recognition of the external effects of a policy is desirable, but these groups can be difficult to identify. We therefore limit our analysis in this area to making reference to any evidence of external impact – either positive or negative.

  • POLICY IMPACT OF THE EVALUATION: Most importantly, the final column of the evaluation profiles in Annex B records the extent to which the evaluation authors reported that policymakers were, at a minimum, aware of the results of the evaluation or, ideally, had taken it into account in policy decisions. This information was not provided in any of the published sources. For this reason, all evaluation authors were contacted and asked about the policy impact of their evaluation; 40 replied to this request⁵. It should be recognised that these are self-reported data, with their well-known limitations, but the importance of the issue justifies the approach.

Notes

1. On the grounds that clearly unsuccessful policies are likely to be aborted quickly. However, there are other reasons, possibly unrelated to effectiveness, why programmes are short-lived – most notably changes of government.

2. Sedláček and Sterk (2017[88]) find that 90% of the variation in employment in cohorts of new firms is driven by the economic conditions in the year of firm birth.

3. Two-thirds of new firms have closed within five years (Anyadike-Danes and Hart, 2018[87]).

4. For example, a programme to encourage the unemployed to begin a business may lead to the closure or reduced profitability of other similar businesses in the surrounding locality. In contrast, programmes to promote innovation are argued to generate positive “spillovers” to others in the locality.

5. We also took this opportunity to confirm or modify the information we had derived from their published sources.
