5. Synthesis and implications

Part II of this Framework has provided a review of SME and entrepreneurship policy evaluations from two sources. The first is a “review of the reviews” and the second is our own review, which is limited to those evaluations of the highest technical quality.

Unsurprisingly, there were several commonalities in their findings and messages. The most frequent was the use of the word “mixed” to describe impact, both for many individual policies and programmes and across the range of different policy measures applied. Some programmes clearly “worked” and, equally clearly, a smaller number did not. However, the most frequent assessment was that the effectiveness of a programme varied according to the metric on which it was judged. For example, a programme might be effective for larger but not for smaller SMEs; or it might be effective in enhancing the profitability of an SME but have no effect on employment. The choice and subsequent specification of the metric(s) used to judge a policy is therefore a key issue.

A second commonality is that it is now clear that, for most countries and most policy areas, there are no longer technical or data-based reasons either for not conducting evaluations or for conducting sub-optimal ones. The reviews reported in Part II identify high-quality evaluations conducted in recent years across all the key areas of SME and entrepreneurship policy and across high-, medium- and low-income countries. Any reluctance to undertake reliable evaluations cannot therefore be explained on grounds of imperfect data or lack of access to expertise.

We now turn to key lessons that emerged primarily from the review of the selected 50 evaluations, all of which were of high technical quality and hence reliability. We begin with the lessons for policymakers and then turn to those relevant for evaluators.

For policymakers, the key consideration is the specification of objectives. The review makes it clear that most policies seem to have a diverse range of objectives which, in some but not all cases, are explicitly stated. It shows that policies frequently succeed on some objectives but not on others, so generating the “mixed” picture. There is therefore merit in tightly specifying a smaller number of objectives that are “common” across all policy areas. This will facilitate comparisons of the cost-effectiveness of different interventions and provide the case for shifting budgets to the most cost-effective policies. So, for example, job creation in areas of disadvantage could be enhanced by policies improving access to finance, by the provision of free business advice, and/or by programmes to enhance enterprise culture. Specifying a single or small number of objectives and then focusing evaluations on them would provide valuable insights into the policy area best able to deliver the objectives.

This Framework suggests adopting three “common” metrics – Sales, Employment and Survival – to be used in all evaluations of SME and entrepreneurship policies. These could then be supplemented by others appropriate within a specific policy area – such as Patents for Innovation-focused programme evaluations or Wages for Enterprise Culture and Skills or Areas of Disadvantage programmes – but these should be very few in number.

Making a judgement on cost-effectiveness also requires data on programme expenditure, yet in 10 out of the 50 reviewed evaluations, this information was unavailable. It is to be hoped that expenditure data is available to policymakers, even if external evaluators were unable to obtain it.

A further important finding for policymakers is that, based on evaluations using good quality methods and data, there appear to be no major policy areas where programmes are consistently ineffective. However, in line with OECD (2007[1]), doubts remain over both “Soft” Business Advice/Coaching/Mentoring/Counselling programmes and programmes grouped as Enterprise Culture and Skills. This concern is based partly on the findings of the 50 evaluations examined and partly on the review of the meta-evaluations. Furthermore, it was not possible to find any evaluations of Cluster policy that satisfied the technical requirements of this Framework.

So, although there is, as yet, no clear case for abandoning policies in these areas, we believe that a precondition for introducing any new initiative in them should be that a reliable evaluation is undertaken and its results published.

We now turn to the conduct of evaluations. This is relevant both to evaluators and to policymakers concerned with commissioning evaluations. It is appropriate to begin by emphasising the huge improvement in the availability of reliable evaluations since the OECD (2007[1]) review. Using the Six Steps measure, only 6 of the 41 evaluations reported in 2007 (15%) were ranked at Step VI for reliability. In contrast, 43 of the 50 evaluations included here (86%) were ranked at Step VI.

Despite this progress, there remain clear areas for improvement. First, despite governments undertaking the high-quality and potentially influential evaluations reported in Chapter 4, only about one-third of the evaluations clearly led to policy change. Even where an evaluation did not lead to an identifiable change in policy, it can still be considered to have value if policymakers were aware of its findings; this was reported in approximately 75% of cases, but the aim has to be 100%. We see this as an important but comparatively easy area to address.

More problematic is that, despite the marked improvement in the technical quality of evaluations in recent years, several important technical issues are not adequately addressed. First, it is concerning to find that survival was explicitly addressed in only one-third of studies. No study of new/small firm performance can be considered wholly reliable unless it addresses survival/non-survival.
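To illustrate why – using notation of our own, rather than that of any reviewed study – suppose an evaluation compares an outcome Y only among firms that survive (S = 1). The estimated effect of support (D) then confounds the programme’s effect on performance with its effect on survival:

\[
E[Y \mid D=1,\, S=1] \;-\; E[Y \mid D=0,\, S=1] \;\neq\; E[Y(1)] - E[Y(0)]
\]

whenever D influences the probability of survival. A programme that keeps marginal firms alive may thus appear to depress average sales or employment among survivors even when its true average effect is positive.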

Second, it has been noted above that the “mixed” picture that emerges from both this and earlier reviews in part reflects the multiple metrics chosen to evaluate programmes. An issue for evaluators is to investigate whether some metrics are consistently more likely than others to show positive impact, negative impact, or no impact. For example, it may be that programme X is classified as “mixed” because it included a metric that has been shown to be unresponsive to policy in several other evaluations. In short, evaluators should investigate whether there is a case for more fine-grained evaluations that can show the effects of policy on different metrics.
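One way this might be operationalised – offered here only as an illustrative sketch, not a method prescribed by this Framework – is to pool the effect estimates from evaluation i on metric m, using inverse-variance weights in the manner of a fixed-effect meta-analysis:

\[
\hat{\theta}_m = \frac{\sum_i w_{im}\,\hat{\beta}_{im}}{\sum_i w_{im}}, \qquad w_{im} = \frac{1}{\operatorname{se}(\hat{\beta}_{im})^{2}}
\]

A metric whose pooled estimate is close to zero across many programmes may simply be unresponsive to policy, which is a different conclusion from the programmes themselves having failed.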

Third, it has already been stated that programme expenditure information is often missing from evaluation reports. In addition, even where data on programme funding has been collected, it has often proved difficult to use it to estimate and compare the cost-effectiveness of programmes. In part this reflects currency differences and the very different durations of programmes. In principle it is relatively easy to set out cost-effectiveness in ways that permit comparison across countries and time periods, but this is very rarely done. It therefore remains an important but, as yet, unresolved challenge for evaluators to make better use of programme-cost data to compare the cost-effectiveness of programmes.
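A minimal sketch of how such comparability might be achieved – assuming, and these assumptions are ours, that an evaluation reports total programme expenditure, programme duration and an estimate of the additional jobs attributable to the programme – is a cost per job-year measure:

\[
\text{cost per job-year} = \frac{C_{\mathrm{PPP}}}{J \times T}
\]

where C_PPP is total expenditure converted to a common constant-price currency at purchasing power parities, J is the number of additional jobs attributable to the programme, and T is the average number of years over which those jobs are sustained. Normalising by both currency and duration in this way would allow, say, a finance programme in one country to be set against an advice programme in another.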

Finally, a challenge for future evaluations is to combine assessment of the microeconomic and macroeconomic impacts of policies and programmes. It has been noted that SME and entrepreneurship policies and programmes can have both positive and negative effects beyond the recipient firms. For example, technical progress has been shown to generate positive local externalities. On the other hand, the creation of a new firm frequently also leads to the exit of others. Neither effect is adequately captured in the type of micro studies reviewed here. This means policymakers are unable to reliably judge the full impact of programmes.
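One common accounting convention, set out here purely as an illustration rather than as a finding of the reviewed studies, expresses the gap between the two levels of analysis as:

\[
\Delta_{\text{net}} = \Delta_{\text{gross}} \times (1-d) \times (1-x) \times m
\]

where the gross term is the measured effect on assisted firms, d is deadweight (outcomes that would have occurred without the programme), x is displacement (activity lost at non-assisted firms) and m is a local multiplier capturing positive externalities. A well-designed micro study with a control group captures d, but x and m are usually unobserved in the type of studies reviewed here.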

Reference

[1] OECD (2007), OECD Framework for the Evaluation of SME and Entrepreneurship Policies and Programmes, OECD Publishing, Paris, https://doi.org/10.1787/9789264040090-en.
