2. The state of play in SME and entrepreneurship policy evaluation

We begin by reviewing the current “state of play” of our knowledge of the impact of public policy on SMEs and entrepreneurship. We conclude that, despite the scale of SME and entrepreneurship policy expenditure and long-established calls for careful evaluations of its impact, not only are SME and entrepreneurship evaluations undertaken less frequently than in other areas of public policy, but many of those undertaken are of lower quality and hence less reliable. In addition, we show that where high-quality SME and entrepreneurship evaluations have been undertaken, their findings are “mixed”: some find an overwhelmingly positive impact across relevant outcomes, while others find little or no impact.

We then ask why there is limited reliable evaluation evidence to draw on, and whether the reluctance to evaluate can be explained by the technical and political complexities of the issues addressed. We identify a series of broad challenges for embedding reliable evaluation of SME and entrepreneurship policies, alongside the technical issues of establishing control groups, which are discussed elsewhere in this Framework. We conclude that, although the complexities are real, they are far from insurmountable. Moreover, the benefits to SMEs and taxpayers of placing evaluation at the heart of the policy-making process are considerable, and they are now more easily attained because of recent data improvements and the availability of exemplar cases.

The United States Government Accountability Office (GAO) report for 2012 (U.S. Government Accountability Office, 2012[1]) reviewed 53 SME and entrepreneurship programmes in four different agencies with an aggregate budget of USD 2.6 billion. It found:

“for 39 of the 53 programs, the four agencies have either never conducted a performance evaluation or have conducted only one in the past decade. For example, while SBA (Small Business Administration) has conducted recent periodic reviews of 3 of its 10 programs that provide technical assistance, the agency has not reviewed its other 9 financial assistance and government contracting programs on any regular basis” (ibid., p. 56).

Infrequent evaluation of SME and entrepreneurship policy is a common problem internationally.

Furthermore, SME and entrepreneurship evaluations are often of poor quality and hence lack reliability. For example, a report for the UK National Audit Office (Gibbons, McNally and Overman, 2013[2]) identified 35 UK Government evaluations spanning the policy areas of labour market activation, business support, education and spatial policy. The authors conclude:

“none of the business support evaluations provided convincing evidence1 of policy impact. In contrast, 6 out of 9 education reports and 6 (arguably 7) of the 10 labour market reports were of a sufficient standard to have some confidence in the impacts attributed to policy”2.

At the international level, the What Works Centre for Local Economic Growth at the London School of Economics reviewed 690 evaluations of (small) business support programmes across OECD countries (What Works Centre for Local Economic Growth, 2016[3]). It found that only 23 of these evaluations, or 3.3%, met the Centre's minimum standards of reliability.3

This Framework therefore recommends that:

Recommendation: Every three years, all major SME and entrepreneurship programmes should be the subject of a reliable evaluation, defined as a minimum of Step V, with only the very short-lived programmes (“short-lifers”) excluded.

The recommended periodicity is based on evidence that the impacts of business support on SMEs and entrepreneurship tend to occur within three years of the intervention (Drews and Hart, 2015[4]). This evaluation frequency should also be consistent with reasonable proportionality, i.e. with evaluation costs kept to a modest share of programme costs. (OECD, 2007[5]) recommended an evaluation budget of 1% of programme costs, and this remains the target of this Framework. It can support good evaluation outcomes, particularly now that evaluation costs have fallen as data availability has improved, for example through more accessible official government data.

In addition to the limited number of reliable evaluations, those that are available internationally provide unclear evidence on the impact of SME and entrepreneurship policies. Whereas some programme evaluations show effectiveness in meeting objectives, others indicate no programme impact on core policy objectives. This is shown, for example, by the variety of impact findings from the 50 evaluation cases featured in this Framework. Similarly, the extensive international meta-review by the What Works Centre referred to above (What Works Centre for Local Economic Growth, 2016[3]) found that:

“Business support and advice had a positive impact on at least one business outcome in 14 out of 23 evaluations. Five evaluations found that business advice didn’t work in any outcome evaluated, and one study found negative effects against the stated objective, although other positive effects were also recorded.

Business advice programmes show largely mixed results across the board. The nine evaluations looking at productivity show consistently mixed results, with one third of studies finding positive results, just over one third of studies finding no impacts, and just under one third of studies finding mixed results. Of the 17 studies that look at employment outcomes, only six report positive programme effects, whilst eight evaluations report zero effects. For the two studies that look at employment duration or small business survival, results are substantially worse, with no positive findings4. Results for sales and turnover outcomes are somewhat better than for employment and productivity, with eight of 16 studies reporting positive results.”

In part because the evaluation findings are so mixed, there is a powerful body of policy and academic opinion that now questions whether public policy is currently delivering both impact on SMEs and entrepreneurship and value for money for the taxpayer.

More widespread reliable evaluation evidence is needed to respond to these concerns and to identify what works and what does not. Here we consider four reasons why the current evaluation evidence base is not fit for the purpose of steering policy towards the most impactful measures. We set aside the question of the need for more reliable control-group-based methods, which is dealt with elsewhere.

The impact of a programme needs to be established with reference to its Objectives. This cannot be achieved where the objectives are not specified. Where Objectives are set, they are often too vague or too distant from the intervention to serve as yardsticks for impact assessment, for example “improve entrepreneurial culture” or “increase participation of the population in entrepreneurship”. Clear, quantifiable objectives are needed that are tied back to the outcomes of the programme.

The problem is clearly demonstrated by (Sara, 2016[6]) in their review of start-up support for young people in the European Union. They identified 66 publicly funded start-up support measures, 34 of which specifically targeted young people, with 21 being fully documented.

They say:

“Most of the start-up support measures reviewed do not have specific and measurable objectives or targets that could be used to guide the evaluative research. Such targets are often inserted afterwards by the researchers as part of the evaluation exercise. The majority of measures specify higher-level aims, focused, for example, on enterprise development and increasing employability.” (p. 43)

Most surprisingly, this absence of Objectives and Targets is found even for nine of the ten measures that were evaluated using a counterfactual design – Steps IV to VI of our Six Steps to Heaven framework. The single exception was the German Start-Up Coaching programme.5

This absence of identifiable Objectives and Targets for SME and entrepreneurship policies is noted elsewhere. For example, in their evaluation of Almi business advice programmes in Sweden, (Widerstedt and Månsson, 2015[7]) say:

“The small scale intervention was particularly difficult to evaluate. The objective of the intervention was unclear, both from a policy standpoint and from the perspective of the firms. The fuzzy intervention logic, unspecified target group and unknown intervention objectives creates an expectation of very small impacts on growth.”

Our review in Part II points to this being a widespread characteristic of SME and entrepreneurship policies in many countries.

The Objectives and Targets for a programme cannot simply be “anything you happen to hit”6. A policy intervention should be introduced to address a specific problem, and that problem has to be specified in order to justify the use of taxpayers’ money on the intervention. The evaluation should then establish the intervention’s impact with respect to the problem identified. If the programme does not “solve” the specified problem but addresses – presumably by chance, since that was not the intention – a different “problem”, that would of course be helpful, but the original problem remains. More likely, however, a programme that does not solve the originally specified problem either solves no problem at all or solves an unimportant one (otherwise it would have been specified when the programme was introduced). The key point is that only a good-quality evaluation can pick up these links, and this is only possible when Objectives are clearly specified up-front, when the programme is initiated.

The specification of Objectives (e.g. stimulate business start-ups by young people) and Targets (e.g. create 1000 new youth-run businesses) for SME and entrepreneurship policy is particularly important given that the impacts of policy may vary strongly according to the types of entrepreneurs or SMEs it aims to support (innovation-oriented start-ups, existing SMEs, micro enterprises, entrepreneurs from disadvantaged populations etc.).

One of the most telling recent criticisms of entrepreneurship policies, as they currently stand, is that they are not sufficiently targeted at the obstacles that hinder impactful start-ups. For example, (Acs et al., 2016[8]) say:

“We find that most Western world policies do not greatly reduce or solve any market failures but instead waste taxpayers’ money, encourage those already intent on becoming entrepreneurs, and mostly generate one-employee businesses with low-growth intentions and a lack of interest in innovating.”

Their view is that public policy interventions are only justified by the presence of market failures and that these occur most clearly when there is a divergence between public and private gains. Policies to promote innovation and growth-oriented start-ups are justified on the grounds of public benefits such as job and income creation. In contrast, policies to stimulate new firm formation in general are less clearly justified by market failures.

This reinforces the point about needing to be clear about policy Objectives and Targets. For (Acs et al., 2016[8]), policy-makers have to be clear that public funds should be directed towards innovative enterprises with the skills and motivation to grow and so generate public benefits. Equally explicit is that public support should not be available for “one-employee businesses with low-growth intentions”.

This approach would, of course, exclude the vast bulk of SMEs and entrepreneurs in all countries from public support. It would also put low priority on social benefits that may be achieved without enterprise innovation and growth, for example through business creation and operation by individuals who are unemployed or disadvantaged in the labour market.

This serves to reinforce the importance of an open discussion about which groups of SMEs or entrepreneurs, if any, should receive public support and for what reasons. Once that discussion is over, the purpose of the policy has to be made clear and captured in the specification of its Objectives and Targets.

This Framework therefore emphasises that:

Recommendation: Governments should specify in advance the Objectives and Targets for each policy and programme introduced. This should include the specific groups of entrepreneurs or SMEs to be supported and a clear justification for the policy intervention in terms of the problem it aims to solve.

With respect to entrepreneurship policies, (Acs et al., 2016[8]) say:

“A central-payer health care would remove healthcare-related distortions affecting employment choices; greater STEM education would produce more engineers of which some start valuable new firms; and labor market reform to encourage hiring immigrants in jobs they have been educated for would reduce inefficient allocation of talent to entrepreneurship”

This poses the question of whether SME and entrepreneurship policy objectives can be achieved more cost-effectively by ‘Macro’ policy approaches than by dedicated SME and entrepreneurship programmes offering finance, advice and other support directly to these firms and entrepreneurs. Addressing this critique requires evaluations that can provide a valid comparison of cost-effectiveness across a range of policy areas, including ‘Macro’ interventions.

As an example, it might be argued that existing SMEs would benefit more from public funds being used to improve policing and security than from being provided with business advice7. However, the scale of the police budget is unlikely to be influenced by the interests of SMEs8. Similar issues arise with decisions on the provision of the high-speed digital communications infrastructure needed by SMEs and entrepreneurs, or the extent to which the education provided in schools and colleges promotes skills for entrepreneurship. In contrast, there is much more likely to be a mechanism by which SMEs influence the scale and nature of business advice.

The challenge, then, is to develop and use comparative evaluation evidence across different types of SME and entrepreneurship programmes, including ‘Macro’ interventions, and to use this to shift resources to the most effective policy interventions. Such evaluations rarely happen, in part because of the boundaries between different ministries.

Despite the clear advantages for policy makers, we are unaware of any country having in place a comprehensive evaluation system to compare the impacts of dedicated SME and entrepreneurship policy actions with alternative ‘Macro’ approaches that could have equal or greater impacts and potentially greater cost-effectiveness.

Nevertheless, the development of an evaluation culture, reflected in more policy assessments being undertaken across all the domains of policy intervention affecting SME and entrepreneurship activity, increases the likelihood that valid comparative assessments will become available.

This Framework therefore proposes that as a minimum:

Recommendation: The introduction of new policy interventions should be based on evaluation evidence benchmarking expected cost-effectiveness against existing policies.

Time has four clear consequences for evaluation. First, in many countries, both policy objectives and the means of delivering policy have changed considerably over time, reflecting changing political priorities9. Many programmes therefore have only a very short life, making evaluation problematic.

Second, some policy initiatives are expected to have an effect within months – such as providing assistance to SMEs to attend a trade fair – whereas the effect of others may take a generation or more to appear – such as enterprise education programmes in schools. This implies that evaluation approaches will need to differ for policies with short-, medium- or long-run effects. This, in turn, makes it more difficult to compare the cost-effectiveness of all programmes.

A third consequence, reflecting these frequent policy changes, is that the SMEs or individuals (entrepreneurs or potential entrepreneurs) who are the intended focus of policy find both the switching and the diversity of forms of support confusing and respond by “opting out” of the public support network altogether (Bennett and Robson, 2004[9]). This also creates evaluation issues because of problems in deriving samples of “control” firms.10

Finally, where evaluations have taken place and have taken the time dimension into account, policy impact clearly varies over time. In a rare example of tracking the impact of the same programme over a number of years using the same reliable methodology, (Drews and Hart, 2015[4]) conclude:

“For survival, impact of assistance is found to be immediate, but limited. Concerning growth, significant impact centres on a two to three year period post intervention for the linear selection and quantile regression models – positive for employment and turnover, negative for productivity. Attribution of impact may present a problem for subsequent periods. The results clearly support the argument for the use of longitudinal data and analysis, and a greater appreciation by evaluators of the time factor.”
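The case for longitudinal analysis can be illustrated with a simple sketch. The code below is purely illustrative and uses synthetic data and hypothetical variable names; it is not the model used by Drews and Hart (2015). It estimates the difference in growth between supported and comparison firms separately for each year after an intervention, showing how an impact that emerges only two to three years after support would be missed by an evaluation conducted too early.

```python
# Illustrative sketch only: synthetic data, hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical panel: 200 firms observed for 5 years; half receive support in year 1.
firms = pd.DataFrame({"firm": range(200), "treated": [1] * 100 + [0] * 100})
panel = firms.merge(pd.DataFrame({"year": range(5)}), how="cross")

# Synthetic outcome (log employment) with a small effect emerging 2-3 years after support.
effect = {0: 0.0, 1: 0.0, 2: 0.05, 3: 0.08, 4: 0.08}
panel["log_emp"] = rng.normal(3.0, 0.5, len(panel)) + panel["treated"] * panel["year"].map(effect)

# For each year after support, estimate the treated-vs-comparison difference in growth
# since the baseline year (a simple difference-in-differences at each horizon).
base = panel.loc[panel["year"] == 0, ["firm", "log_emp"]].rename(columns={"log_emp": "log_emp_base"})
for horizon in range(1, 5):
    cross_section = panel[panel["year"] == horizon].merge(base, on="firm")
    cross_section["growth"] = cross_section["log_emp"] - cross_section["log_emp_base"]
    fit = smf.ols("growth ~ treated", data=cross_section).fit()
    print(f"Year {horizon} after support: estimated impact = {fit.params['treated']:.3f}")
```

Run on such a panel, the estimated impact is close to zero in the first year and only becomes visible at later horizons, which is the “time factor” that single-snapshot evaluations risk missing.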

A further criticism is that SME and entrepreneurship policy development, and particularly its evaluation, has to date not taken full account of the political context in which the policy is delivered. This is particularly relevant when seeking to draw lessons from policy evaluations that have taken place in other countries, or in the same country but at earlier time periods.

Two cases illustrate the point. The first is the UK experience of “Think Small First”, which was intended to ensure that regulatory reform would give full consideration to SMEs at the early policy development stage. (Kitching, 2019[10]) reviews the impact of this policy by examining the Small Companies (Micro-Entities’ Accounts) Regulations 2013, which were intended to reduce the accounting requirements for small firms. Kitching documents that the legislation evolved in response to concerns among large enterprises that SMEs would receive a cost advantage, a concern reflected in the “watered-down” form of the final legislation. This illustrates that the political context influences the nature of legislation. It is also likely to influence whether evaluations are undertaken and, if so, their scale and nature.

A second example is taken from New Zealand, where policy has evolved over time, moving away from isolated or ad hoc programmes and towards a greater emphasis on their interdependencies and inter-connectedness.

(Jurado and Battisti, 2019[11]) document the powerful political dimension of these changes between 1978 and 2008 and link policy shifts to a small number of individuals whom they call “institutional entrepreneurs”. These include policy advisers and senior officials within government, but also individuals from the business sector, international organisations such as the OECD, and academia.

They say:

“New Zealand became a signatory to the OECD Bologna Charter in 2000, which laid out the key issues affecting SMEs. Its involvement further exposed policy makers to the value that SMEs could generate.... and influenced how policy was developed”

“Our results depict a policy process where like-minded actors made up of key individuals and groups of stakeholders within the SME policy subsystem, held strong views about the direction of SME policy in order to enable economic growth. In the case of SME policy development, this moment of change occurred when key individuals promoted a particular aspect of SME policy, and the prevailing political discourse became more interested in developing the entrepreneurial qualities of individuals with the ultimate aim of developing successful SMEs.”

Policy evaluation was a powerful, positive tool, providing the evidence base for institutional entrepreneurs in New Zealand to recommend, and then implement, policy change. By 2010, SME policy evaluations of Step VI quality, such as (Morris and Stevens, 2010[12]), were being used to bring about policy improvements in New Zealand.

Here the lesson is that, although programme impact is strongly influenced by what may appear from the outside to be little more than administrative decisions, frequently made by unelected public officials, those officials can draw upon evidence from high-quality evaluations. The ability to generalise about outcomes from seemingly similar programmes enables these “administrative” decisions to be taken on the basis of reliable evidence.

This Framework therefore proposes that, in undertaking evaluations, a thorough and sensitive understanding is required of how programmes have been administered and delivered, often over long periods of time.

This section summarises the lessons that can be learned from the above assessment of the current state of SME and entrepreneurship policy evaluation. These lessons are drawn upon in both the discussion of policy evaluations described in Part II and then in providing policy insights in Part III.

  • Every three years, all major SME and entrepreneurship programmes should be the subject of a reliable evaluation, defined as a minimum of Step V, with only the very short-lived programmes (“short-lifers”) excluded.

  • The Objectives and Targets of the programmes should be specified, but open to modification in the light of changed circumstances and experience.

  • The Objectives and Targets should be specified in a format that enables them to be evaluated and a judgement reached on whether the policy was successful.

  • The Objectives and Targets should be specified when the policy is formally announced.

  • The impact of dedicated SME and entrepreneurship policies should be benchmarked against each other and against ‘Macro’ policies such as regulatory reform, infrastructure improvements and the tax regime.

  • Evaluations should be used to frame future policy changes.

  • Evaluation findings are clearly sensitive to the methods used, to “administrative” decisions and to economic context.

  • This makes the case for more evaluations, but only for those above the minimum quality threshold.

  • With this evidence, policy makers will be able to take better account of “administrative decisions” on how policy is delivered and on the role played by different macro-economic contexts.

References

[8] Acs, Z. et al. (2016), “Public policy to promote entrepreneurship: a call to arms”, Small Business Economics, Vol. 47/1, https://doi.org/10.1007/s11187-016-9712-2.

[9] Bennett, R. and P. Robson (2004), “The role of trust and contract in the supply of business advice”, Cambridge Journal of Economics, Vol. 28/4, https://doi.org/10.1093/cje/28.4.471.

[4] Drews, C. and M. Hart (2015), “Feasibility Study – Exploring the Long-Term Impact of Business Support Services”.

[13] Drinkwater, S., J. Lashley and C. Robinson (2018), “Barriers to enterprise development in the Caribbean”, Entrepreneurship and Regional Development, Vol. 30/9-10, https://doi.org/10.1080/08985626.2018.1515821.

[2] Gibbons, S., S. McNally and H. Overman (2013), Review of Government Evaluations: A Report for the National Audit Office.

[14] Greene, F., K. Mole and D. Storey (2007), Three decades of enterprise culture?: Entrepreneurship, economic regeneration and public policy, https://doi.org/10.1057/9780230288010.

[15] Harrison, R. and C. Leitch (1996), “Whatever you hit call the target: an alternative approach to small business policy”, in Small firm formation and regional economic development.

[11] Jurado, T. and M. Battisti (2019), “The evolution of SME policy: the case of New Zealand”, Regional Studies, Regional Science.

[10] Kitching, J. (2019), “Regulatory reform as risk management: Why governments redesign micro company legal obligations”, International Small Business Journal: Researching Entrepreneurship, Vol. 37/4, https://doi.org/10.1177/0266242618823409.

[12] Morris, M. and P. Stevens (2010), “Evaluation of a New Zealand business support programme using firm performance micro-data”, Small Enterprise Research, Vol. 17/1, https://doi.org/10.5172/ser.17.1.30.

[5] OECD (2007), OECD Framework for the Evaluation of SME and Entrepreneurship Policies and Programmes, OECD Publishing, Paris, https://doi.org/10.1787/9789264040090-en.

[16] Ramlogan, R. and J. Rigby (2012), “The Impact and Effectiveness of Entrepreneurship Policy”, NESTA Compendium of Evidence on Innovation Policy Intervention, August.

[6] Sara, R. (2016), Start-Up Support for Young People in the EU: From Implementation to Evaluation, Eurofound.

[1] U.S. Government Accountability Office (2012), Annual report: Opportunities to reduce duplication, overlap and fragmentation, achieve savings, and enhance revenue.

[3] What Works Centre for Local Economic Growth (2016), Evidence Review 2: Business Advice.

[7] Widerstedt, B. and J. Månsson (2015), “Can business counselling help SMEs grow? Evidence from the Swedish business development grant programme”, Journal of Small Business and Enterprise Development, Vol. 22/4, https://doi.org/10.1108/JSBED-06-2012-0073.

Notes

← 1. Our emphasis.

← 2. Very similar comments were made by (Ramlogan and Rigby, 2012[16]) who find that evaluations that reported on additionality/net effects or that use methods of causal inference to determine the impact and effectiveness of policy .... tended to be found in the academic literature rather than amongst those reports of government schemes that are publicly available.

← 3. The LSE report uses the Maryland Scientific Methods Scale. Its “minimum” standard is broadly equivalent to Step V.

← 4. We highlight this sentence on the grounds that in Part II we make the case that a failure to take account of survival when the policy target is new and small firms lowers markedly the reliability of any findings.

← 5. But not the German start-up subsidy programme.

← 6. See (Harrison and Leitch, 1996[15]).

← 7. For example (Drinkwater, Lashley and Robinson, 2018[13]) found that crime was the most important single obstacle facing the owners of micro-establishments in both Jamaica and Guyana. Across the wider Caribbean it was in third place.

← 8. Aspects of SME and entrepreneurship policy are often cross-departmental, and a problem arises when SME and entrepreneurship activity can be supported by departments that are not primarily responsible for SMEs and entrepreneurship. Mechanisms are needed to ensure that these other departments do take SME and entrepreneurship interests into account.

← 9. One example is New Zealand (Jurado and Battisti, 2019[11]), which we discuss shortly. Another is the UK, where policy changed from a focus on new firm creation in the 1980s, to growth firms in the 1990s, to a wider social focus in the 2000s (Greene, Mole and Storey, 2007[14]).

← 10. By this we mean that disillusioned non-participants in public programmes may have very different characteristics from others in the control group – and be much closer to those in the treatment group.
