4. The use of evidence in strategic decision-making in the LSGUs in Poland

The degree to which LSGUs use evidence in making decisions and publicly share the evidence underpinning decisions – both before the decision is made and once it is being implemented – speaks to their capacity to remain accountable to citizens, foster trust in public institutions and ensure public interventions are sound and forward-looking. This chapter discusses how Polish LSGUs use evidence for designing, implementing and reviewing their policies and regulations. In doing so, it focuses primarily on municipalities’ monitoring and evaluation (M&E) practices, which are key tenets of evidence-based decision-making.

The chapter first provides an overview of the monitoring mechanisms, processes and tools used by LSGUs (especially municipalities) for monitoring their public interventions, examines them critically and identifies a number of areas for improvement. It then presents the key features of their evaluation systems and practices, with special attention to factors determining quality such as stakeholder engagement and availability of data and resources (including in-house capacity). Lastly, the chapter presents an assessment of the extent to which evidence is effectively used by LSGUs in decision-making, and identifies areas for further promoting the use of evidence in that context.

The use of evidence in policy-making derives mostly from the existence of an M&E system, with relevant indicators and analysis. A robust monitoring system first and foremost implies the presence of an institutional framework for monitoring that provides: i) the legal basis to undertake monitoring; ii) clearly mandated institutional actors with allocated resources to oversee or carry out monitoring; and iii) macro-level guidance on when and how to carry out monitoring (OECD, 2019[1]).

Monitoring the performance of policy priorities is a key tool that has been used by OECD governments at all levels in order to improve strategic and operational decision-making and increase accountability in the use of public funds. In Scotland, for instance, the National Performance Framework sets “national outcomes” that reflect the values and aspirations of the people of Scotland and which are monitored via a publicly available website (Scottish Government, 2020[2]). In Mexico, the Federal Planning Law of 5 January 1983 requires that every state develop a State Development Plan, in line with the sexennial National Development Plan, and monitors its implementation (Mexican Government, 2013[3]). Thus, the institutionalisation of performance monitoring in legal and policy frameworks underlines the importance that governments attach to this practice and contributes to clarifying the role and mandates of the actors in the system.

In Polish LSGUs, government-wide policy priorities are, for the most part, formalised in local development strategies (LDS). For this reason, the following section mainly focuses on the mechanisms in place in LSGUs to monitor LDS (see Chapter 3 for more information on LDS). Nevertheless, as discussed in Chapter 3, LSGUs also adopt other sectoral or thematic planning instruments that require monitoring (e.g. spatial planning, property management, environmental protection, nature conservation and water management). As pointed out in Chapter 2, greater co-ordination between these planning instruments and their monitoring systems would allow for better strategic decision-making in LSGUs.

In Poland, at the local level, municipalities may prepare LDS. As explained in Chapter 3, when LSGUs prepare their LDS, they must take into account the orientations set out in the “Strategy for Responsible Development for the period up to 2020 with a perspective up to 2030” and adapt them to their local needs (Polish Government, 2017[4]). The November 2020 amendment to the Act on Principles of Implementation of Development Policy1 also seeks to encourage municipalities to develop LDS, in an effort to make these planning instruments mandatory eventually. Today, 95.5% of municipalities (OECD questionnaire) report that they already have an LDS. As described in Chapter 3, this high percentage can probably be explained by the fact that, in order to apply for European Union (EU) funds, LSGUs must show a close link between the activities for which they apply and the LDS.

At the regional level, LSGUs also develop regional development strategies (RDS), which must be aligned to the national development objectives (Polish Government, 2017[4]). Marshal offices also set up regional operational programmes (ROPs), which are co-financed from European Structural and Investment Funds (ESIF). Indeed, development policies in Poland (whether at the national or local level) are largely supported by EU funds (OECD, 2009[5]) and, as a result, are subject to European regulations. Thus, ROPs are an important RDS implementation tool, albeit not the only one.

Recipients of EU funds are mandated to monitor and evaluate the use and impact of these funds. For instance, the Common Provisions Regulation (CPR) for 2014-20 requires that member states set up monitoring committees for each operational programme (Article 49) (European Parliament and Council, 2013[6]). In Poland, regional monitoring committees are in charge of monitoring the attainment of objectives set out in the ROP, as well as the allocation of EU funds assigned within this document.

The regional managing authority conducts “[an annual] review [of the] implementation of the regional operational programme and progress made towards achieving its objectives” and prepares an annual implementation report (AIR) for its ROP, which is then approved by the regional monitoring committee (Articles 49 and 50 of the CPR) (European Parliament and Council, 2013[6]). The regional AIRs from all 16 voivodeships are then consolidated at the national level with AIRs from sectoral national operational programmes (NOPs) in order to prepare the AIR at the level of the Partnership Agreement (PA). This is done by the national co-ordinating authority located in the Ministry of Development Funds and Regional Policy, which is in charge of co-ordinating all 16 ROPs and several sectoral NOPs (see Figure 4.1 for a schematic view of the interaction between the regional monitoring committees and the national co-ordinating authority).

As a result, those LSGUs and regions that receive EU funds have clear frameworks for monitoring the use of these funds and, by extension, the implementation of their development strategies. Outside this framework, however, monitoring is far from universal: data from the OECD questionnaire shows that just over 70% of municipalities and 80% of counties monitor the implementation of their LDS. Furthermore, as the following section explores, what matters is not only to monitor LDS/RDS but to do so in a way that is of high quality and serves decision-making. For this, guidelines and capacities are crucial: beyond the creation of a legal framework and clearly assigned mandates, a sound monitoring system requires macro-level guidance on when and how to conduct monitoring and a clear understanding of what monitoring is.

Monitoring differs from evaluation in substantive ways. It is driven by routines and ongoing processes of data collection, and thus requires dedicated resources and integration into an organisational infrastructure (OECD, 2019[1]).

Monitoring evidence can be used to pursue three main objectives (OECD, 2019[1]):

  • It contributes to operational decision-making, by providing evidence to measure performance and raising specific questions in order to identify implementation delays or bottlenecks.

  • It can also strengthen accountability related to the use of resources, the efficiency of internal management processes or the outputs of a given policy initiative.

  • It contributes to transparency, providing citizens and stakeholders with information on whether the efforts carried out by the government are producing the expected results.

Each of these objectives requires a different monitoring setup.

In Poland, the monitoring requirements imposed on subnational governments related to the use of EU structural funds mostly promote accountability in the use of structural funds (OECD, 2009[5]). Recipients of the funds must prepare an annual report on the use and impact of the funds (Figure 4.2 offers an example of how this AIR is prepared for the ROP). This is underlined by the 2014-20 guidance document on M&E for structural funds (EC, 2015[7]), which states that the annual report is “one of the key elements of the monitoring of an operational programme.” While this format is very useful for stakeholders to control and approve the implementation of programmes and strategies, it is not necessarily suited to operational and timely decision-making.

As a result, subnational governments that report on EU funds may perceive monitoring as more of a control tool than one that is truly useful for improving the implementation of their policies and decision-making. Evidence (Meyer, Stockmann and Taube, 2020[8]) suggests that this bias towards control may also exist at the national level in regard to monitoring of national policy priorities.

First, for performance monitoring to serve as an effective management tool for operational decision-making, it must be embedded in a performance dialogue carried out regularly and frequently enough to allow decision-makers and public officials to adapt their actions and/or resources in the face of implementation difficulties (Vági and Rimkute, 2018[9]). Data from the OECD questionnaire shows that only 17 out of 36 municipalities prepare regular reports to monitor their policy priorities and, of these 17, only a few do so more than once a year. Subnational governments may therefore wish to consider setting up internal performance dialogues between the highest decision-making level (the mayor or the marshal office, for instance) and the heads of departments/services, in order to review key objectives and indicators on the implementation of their development strategy. Chapter 7 includes a broader discussion of human resources issues in subnational governments in Poland.

Setting up a performance dialogue would create an incentive for municipal and/or regional departments to resolve implementation issues at the technical level through a gradual escalation process: an issue that remains unresolved after two quarters could be referred, twice a year, to the city council, mayor’s office or marshal office for a decision. Figure 4.2 demonstrates how this performance dialogue could be articulated between the lower levels of decision-making and the centre of the subnational government over one year. To set up such a dialogue, the subnational government would have to clarify the respective responsibilities of each actor throughout the monitoring value chain: co-ordination and promotion of monitoring, data collection, data analysis, reporting and use of data. The November 2020 amendment to the Act on Principles of Implementation of Development Policy (Polish Government, 2017[4]) stipulates that municipalities would benefit from setting up their own monitoring systems but does not detail how this could be done.

LSGUs could additionally consider selecting a limited number of priority objectives and indicators from their LDS, in order to focus monitoring resources on policies with high visibility and significant economic and social impact. A short list of high-visibility indicators would also make it easier to communicate with citizens, either through a website regularly updated with monitoring information on the regional or local development strategy or through a dedicated document reporting on the progress made on the strategy.
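To illustrate, a performance dialogue built around a shortlist of priority indicators could be supported by a very simple traffic-light calculation, comparing each indicator's progress towards its target with the share of the strategy period already elapsed. The sketch below is purely illustrative: the indicator names, baselines, targets and values are hypothetical, not drawn from any actual Polish LDS.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    baseline: float
    target: float
    latest: float  # most recent observed value

def progress(ind: Indicator) -> float:
    """Share of the baseline-to-target distance covered so far (0 = baseline, 1 = target)."""
    span = ind.target - ind.baseline
    if span == 0:
        return 1.0
    return (ind.latest - ind.baseline) / span

def status(ind: Indicator, period_elapsed: float) -> str:
    """Traffic-light status: compare progress with the share of the
    strategy period already elapsed (e.g. 0.5 at mid-term)."""
    p = progress(ind)
    if p >= period_elapsed:
        return "on track"
    if p >= 0.5 * period_elapsed:
        return "at risk"
    return "off track"

# Hypothetical LDS indicators, reviewed halfway through the strategy period.
indicators = [
    Indicator("Households connected to sewerage (%)", 60.0, 90.0, 78.0),
    Indicator("Annual public transport trips per capita", 40.0, 60.0, 44.0),
]
for ind in indicators:
    print(f"{ind.name}: {status(ind, period_elapsed=0.5)}")
```

An "at risk" or "off track" status would flag the indicator for discussion at the next dialogue meeting, and for escalation if it persists over two quarters.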

Generally speaking, these efforts could be supported by the Ministry of Development Funds and Regional Policy, which could consider drafting specific guidelines on LDS monitoring, covering the different goals pursued by monitoring and their corresponding methodologies. This is, for example, what is done in Colombia, where the National Planning Department provides guidelines for subnational governments (mostly departments) to monitor their strategic plans (see Box 4.1).

First, developing clear performance indicators, with baselines and targets, is important for monitoring policy priorities such as those set out in LDS or RDS. Thirty-four municipalities declare that they defined performance indicators for their development strategy ex ante, meaning in conjunction with the strategy itself. This could mean, however, that they declared the names of the indicators without including a quantifiable target.

Nevertheless, one-third (14) of surveyed municipalities report that they did not define their indicators ex ante. More importantly, where indicators do exist, there is often no systematic link between each objective in the LSGU strategy and the indicators chosen to monitor it, making it hard for stakeholders to assess progress towards these objectives. In fact, indicators are often not included in the LDS document itself. Municipalities could take the opportunity of the next round of LDS to clearly define the indicators with which they will measure the implementation of their strategies.
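The link between objectives and indicators can be made auditable with even a minimal tabulation: list each strategy objective alongside its indicators and flag those that have none. The sketch below uses an entirely hypothetical set of LDS objectives; none of the names refer to an actual strategy.

```python
# Hypothetical mapping of LDS objectives to monitoring indicators;
# an empty list flags an objective whose progress cannot be tracked.
objectives = {
    "O1. Improve access to public transport": ["trips per capita", "stops within 500 m (%)"],
    "O2. Reduce air pollution": ["PM10 annual mean"],
    "O3. Strengthen local entrepreneurship": [],
}

def unmonitored(objs: dict[str, list[str]]) -> list[str]:
    """Return the objectives that have no indicator assigned to them."""
    return [name for name, inds in objs.items() if not inds]

print(unmonitored(objectives))  # → ['O3. Strengthen local entrepreneurship']
```

Running such a check when the LDS is drafted, rather than after adoption, would ensure every objective enters the strategy with at least one measurable indicator.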

Finally, LSGUs use a wide variety of indicators to monitor their policy priorities. A number of databases are available for subnational governments to access data useful for building and updating indicators. The regional territorial observatories, for example, are tasked with collecting, disseminating and exchanging data on the development of a given region and on public interventions carried out at the regional level (for more on the observatories, see Box 4.2).

Additionally, the STRATEG web platform (https://strateg.stat.gov.pl/) is a valuable tool for monitoring development policies at the national and voivodeship levels. One of its main purposes is to gather data on statistical indicators relevant to the implementation of cohesion policy. From mid-2021, another Statistics Poland web portal (www.smup.gov.pl) will be launched to monitor public services, offering data on the quality, quantity, accessibility and cost-effectiveness of over 80 public services.

Today, 34 out of 37 surveyed municipalities report that the availability of data is a challenge in conducting and promoting M&E. In fact, several municipalities report that regional and national statistics are not relevant for monitoring their development strategies. Indeed, this type of data is best suited to long-term impact and context indicators, for which the underlying changes are less certain than for outcome indicators (DG NEAR, 2016[14]). Administrative data and data collected as part of an intervention’s implementation are usually better suited to process, output and intermediate outcome indicators, such as those found in LDS. All categories of LSGUs could therefore consider using their own administrative data to monitor their strategies. This may be especially useful in the context of a performance dialogue, which requires highly responsive data. Some such data can be found on the web portal bdl.stat.gov.pl/BDL/start.
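As a hypothetical illustration of how administrative data can feed a responsive indicator, the sketch below rolls individual records (here, imaginary building permits issued by a municipal office) up into a monthly output indicator, with a timeliness that annual national statistics typically cannot match. All records are invented for the example.

```python
from collections import defaultdict
from datetime import date

# Hypothetical administrative records held by a municipal office,
# available immediately rather than with the lag of national statistics.
permits = [
    {"issued": date(2021, 1, 12), "type": "residential"},
    {"issued": date(2021, 1, 28), "type": "commercial"},
    {"issued": date(2021, 2, 3),  "type": "residential"},
]

def monthly_counts(records):
    """Roll raw records up into a monthly output indicator."""
    counts = defaultdict(int)
    for r in records:
        counts[(r["issued"].year, r["issued"].month)] += 1
    return dict(counts)

print(monthly_counts(permits))  # → {(2021, 1): 2, (2021, 2): 1}
```

The same pattern applies to any record-level dataset a municipality already holds (service requests, enrolments, waste collection), making it a low-cost input to a quarterly performance dialogue.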

In order to set up a monitoring system capable of producing credible and relevant data, as well as analyse this data, governments require capacities. The concept of capacity can be defined as “the set of forces and resources available within the machinery of government. [It] refers to organisational, structural and technical systems, as well as to the individual skills that create and implement policies that meet the needs of the public, in accordance with political orientation” (OECD, 2008[15]).

First, two types of skills are crucial:

  • analytical skills, to review and analyse data and draw conclusions

  • communication skills, to structure the monitoring dashboard and progress reports in a way that has impact and targets the end user.

Above all, monitoring performance requires sufficient resources to collect data on a regular basis, calculate indicators, analyse the data, etc., which calls for a critical mass of trained agents and managers. In Polish municipalities, these resources appear to be limited. While larger municipalities have audit or monitoring units in charge of these tasks (27 municipalities, of which 15 are in functional urban areas [FUAs]), in small ones monitoring activities are mostly performed by the mayor’s office (see Figure 4.4).

Moreover, there is a marked difference between the resources available at the regional level and at the municipality/county level, insofar as voivodeships are the managing authorities of structural funds and therefore have clearly allocated resources for reporting on the use of these funds. Initiatives such as the regional territorial observatories, or the workshops and seminars conducted by the Ministry of Development Funds and Regional Policy, have also tended to benefit the regional level more than LSGUs. This report suggests transferring more funds from the national/regional level to LSGUs for the purpose of monitoring.

Policy evaluation can be defined as a structured and objective assessment of a planned, ongoing or completed policy, programme, regulation or reform initiative, including its design, implementation and results. Its aim is to determine the relevance and fulfilment of objectives, efficiency, effectiveness, impact and sustainability as well as the worth or significance of public intervention. The term “evaluation” covers a range of practices, which can be embedded into various policy-planning and policy-making processes. For instance, many OECD countries use spending reviews. The area of regulatory policy is also one where the use of evaluation is well developed, with requirements for ex ante regulatory impact assessments (RIAs) and ex post reviews/evaluations. In the present section, “policy evaluation” will refer to the evaluation of public policies including strategies, programmes and (in some cases) laws and regulations.

Evaluation practices have developed significantly in Poland since the early 2000s. Similarly to monitoring practices, such development has largely taken place in the context of EU cohesion policy, of which Poland is by far the largest beneficiary among EU member states (EC, 2020[16]). Indeed, according to government officials, more than 2 000 studies assessing different levels of cohesion policy implementation have been produced to date. Already in a 2009 case study, the European Commission (EC) highlighted the evaluation system in Poland as an interesting example of a “systemic development that is cumulative in nature and goes beyond one programming cycle” (EC, 2009[17]). The National Evaluation Unit is in charge of steering and co-ordinating evaluation activities relating to cohesion policy interventions as well as state budget programmes (more limited coverage). Overall, Poland’s evaluation system of cohesion policy interventions is decentralised (as is often the case for programme-level evaluations), with evaluation units at the level of NOPs and ROPs. It receives support from an inter-ministerial team also involving Polish Evaluation Association representatives.

Poland has also strengthened its regulatory management practices at the national level, including ex ante RIA, stakeholder engagement and, to a lesser extent, ex post evaluation. As shown in Figure 4.5 below, despite observed improvements, the country is not yet among the best-performing countries in this area. In the same vein, interviews with government officials suggest that there is significant room for improvement in this area both at the national and subnational level, particularly when it comes to effectively implementing existing requirements (as shown by low scores in the “systematic adoption” indicator).

A common framework, the Public Service Monitoring System, is under development to guide the M&E of public services delivered by municipalities and counties. It will start operating in mid-2021 at https://smup.gov.pl and will provide, for each of the more than 80 public services covered, indicators on quality, quantity, accessibility and cost-effectiveness. In addition, according to national-level authorities, guidelines exist through which the national government seeks to promote the development of policy evaluation at the subnational level; they are coupled with financial support as well as capacity-building activities. Awareness and effective use of existing resources are, however, not widespread and appear to be particularly low in smaller LSGUs.

Evaluation-oriented efforts seem to be geared primarily to programmes and projects, especially those benefitting from EU cohesion policy funding. National government officials acknowledged that assessing local authorities’ ability to implement relevant guidelines is not straightforward as it depends heavily on contextual factors and that there is currently limited information at the subnational level in this respect. Some of the OECD questionnaire responses suggest, however, that the analysis of regulatory impacts is not systematic. According to one LSGU representative, for example, there is neither a formal procedure in place nor specific guidance for assessing the impact of regulations – which tends to occur on a rather informal and ad hoc basis. It must however be noted that, regarding regulatory decisions, all subnational governments have only limited latitude. Their regulatory powers are, in principle, restricted to situations in which legislative competency is expressly provided in national legislation (Kulesza and Sześciło, 2012[19]). Indeed, as shown in Figure 4.6, 33 out of 36 responding municipalities indicated that their regulatory decisions are mostly based on legislation or regulations provided by the national government; 21 of them said that they based their regulatory decisions on legislation or regulations provided by the regional self-government. These results are consistent with the feedback provided by national government officials during interviews.

A majority of regional self-governments (6 out of the 11 that responded to the OECD questionnaire) declared supporting the development of policy evaluation activities at the LSGU level. This support is provided through a variety of means including guidelines, financial assistance and capacity-building activities. In certain cases, LSGU-level authorities are members of the steering groups for the evaluation of ROPs (established at the voivodeship level). In addition, half of the responding marshal offices indicated that, when developing policies at the regional level, they consider their potential impact on the functioning of LSGU authorities, e.g. resources needed for implementation, existing administrative or enforcement capacity, etc. Examples provided by respondents of instances where such considerations take or have taken place predominantly refer to programmatic and planning documents in the context of RDS, both on an ex ante and an ex post basis.

Despite Poland’s progress with regard to evaluation over the past 15 to 20 years, a number of challenges remain in promoting M&E policies at the subnational level. These notably have to do with limited awareness and understanding of the benefits of evaluation (which in turn undermines ownership and active involvement), limited resources and capacity, insufficient use of evaluation results in policy-making as well as the quality of evaluations themselves. However, a number of large municipalities offer promising examples in this regard.

According to the OECD questionnaire results, municipalities use policy evaluation to assess a number of aspects related to their functioning and effectiveness. As shown in Figure 4.7 below, 29 out of the 39 responding municipalities declared that they use evaluation to assess the delivery of public services. Eleven out of the 16 municipalities indicating that they use evaluation to assess progress towards sectoral targets (environmental, safety, well-being) were within FUAs. Similarly, the use of evaluation for allocating and calculating national subsidies is primarily made by larger municipalities within FUAs and hardly ever by those outside FUAs with low accessibility. In addition, representatives from relatively larger, urban municipalities indicated that evaluation serves to assess progress against the objectives set forth in their development strategies. Twenty-six municipalities said that evaluation is linked to EU-funded interventions, which once again illustrates the role of monitoring, reporting and evaluation requirements associated with EU funding as drivers of evaluation activities. Citizens’ satisfaction with municipalities’ activities is another recurrent area of focus for evaluation.

The size and resource endowments of municipalities appear to play an important role in their evaluation-related practices. Large municipalities dedicate more time and resources to that end, with some of them having dedicated units specialising in assessing the quality of policies (e.g. Płock). In contrast, one of the respondents stressed that the evaluation of policies and programmes was not comprehensive in its coverage and was often not performed.

Twenty-seven out of 36 responding municipalities indicated that the regulatory decisions they make are mostly based on evidence and analysis of potential impacts, including social and economic trends and estimations. OECD questionnaire responses suggest that a number of municipalities pay significant attention to the feasibility of implementation and, when applicable, the enforceability of policy and regulatory decisions – including those that stem from or are underpinned by national legislation. The vast majority of responding entities declared that, when developing regulations at the local level, they consider their potential local impact, such as the resources needed for implementation and existing administrative or enforcement capacity. The focus and scope of these considerations seem to vary significantly across LSGUs and policy areas: according to questionnaire responses, they include impacts on relations with neighbouring municipalities (e.g. in the context of mobility or counteracting social exclusion), employment, the environment and quality of life, and municipal finances. Some of the examples provided by respondents refer to public expenditure in addition to regulatory interventions. A number of responses underscored the need to consider implementation capacity carefully, particularly for municipal zoning plans, as decisions at that stage affect the allocation of resources and responsibilities.

Evaluation activities carried out at the municipality level in Poland vary greatly in terms of both depth and scope (with very few municipalities applying evaluation systematically), and they are seldom comprehensive. The focus of ex post assessments is often on implementation progress rather than outcomes or longer-term impacts, which limits the use of evaluation results further down the line.

The vast majority of responding municipalities (29 out of 36) declared conducting some sort of ex ante assessment of the potential impacts (including costs and benefits) of their regulatory and policy decisions. Of these, 19 did so for some regulations only (and 3 for all regulations). Local zoning plans were recurrently identified as undergoing ex ante assessment. Additional examples include financial analysis (including simulations and forecasts) pertaining to waste collection and waste management fees, changes in real estate taxes, low-carbon economy plans and proposed regulatory amendments in areas such as education. Certain municipalities declared going beyond financial analysis, resorting in addition to risk analysis, economic analysis, assessment of social impacts and environmental impact assessment (no further details were provided).

The above-mentioned responses ought to be interpreted with caution, as in-depth interviews conducted with LSGU representatives have shown that the actual content and level of ambition of ex ante assessments conducted at the local level vary greatly. For example, one of the most comprehensive approaches reported (in the context of strategic planning) involves the use of foresight methodologies as well as extensive consultations with citizens, including publication of their results and recourse to expert panels. At the other end of the spectrum, certain LSGUs acknowledged that impacts are seldom assessed on a systematic basis.

As with ex ante assessments, 24 out of 33 responding municipalities claimed to conduct ex post assessments of the impacts of their policies and regulations, although the vast majority of them (18) do so for some regulations only and none conducts ex post assessments of all regulations.2 Here too, there seems to be a broad variety of situations as far as the depth and scope of these assessments are concerned. In some municipalities, ex post assessment of development strategies is systematic (sometimes on a mandatory basis) and involves evaluating their effectiveness and relevance in light of changes that have occurred since the moment of adoption. In other cases, however, ex post analysis is narrower in scope and concerns primarily budget execution or implementation progress rather than outcomes or longer-term impacts, thus neglecting an essential dimension. Responses from municipalities contrast with those from counties, a majority of which indicated that they do not carry out ex post assessments.

A lack of applicable procedures or formal frameworks for co-operation, resource constraints and insufficient interest in promoting evidence-based decision-making are key shortcomings of evaluation systems in LSGUs. Limited access to data, as well as limited analytical skills in the administration, also seems to be holding back the development of evaluation practices at the self-government level.

LSGUs consulted for the present project displayed mixed perceptions of their evaluation systems and their outputs. They often pointed out the need for clearer frameworks and guidance. In addition, responses illustrate the need for institutional co-ordination (encompassing national, regional and local authorities) for the meaningful design and implementation of evaluation – not least because delivering a number of public services involves, to some extent, different levels of government.

Only about one-quarter of responding municipalities (a majority of them within FUAs) declared being satisfied with the current system for evaluating the performance of public services, whereas about 20% were not and nearly half declared their level of satisfaction to be “neutral”. Moreover, a number of respondents indicated that no such system had been set up in their municipality.

When asked to provide additional details, municipality representatives mentioned some promising examples, e.g. systematic qualitative and quantitative evaluation of local development strategies, with a focus on quality of life, or the setting up of a system for evaluating public services at the strategic and operating levels, coupled with attempts at benchmarking through co-operation with other LSGUs (e.g. the Kraków Functional Area).

However, a majority of answers exposed the perceived shortcomings in evaluation systems, which are also related to the challenges in developing and implementing evaluation practices. In a number of cases, evaluations seem to rely almost exclusively on satisfaction surveys administered to residents. These are valuable in themselves but usually need to be complemented by other sources of evidence in order to yield robust analytical results. Some respondents reported the absence of applicable procedures or formal frameworks for the evaluation of public services, as well as resource constraints and difficulties accessing relevant data. “Lack of interest” in engaging in decision-making processes as well as lack of willingness to promote evidence-based decision-making, either on the side of stakeholders or at the political level, were also identified as important shortcomings. In addition, some respondents indicated that evaluation efforts were primarily geared towards fulfilling specific criteria related to EU funding, thus systematically neglecting other potentially important aspects. Two key determinants of the quality of evaluations and evaluation systems are discussed in more detail next: stakeholder engagement and availability of data and resources.

Stakeholder engagement3 activities relating to evaluation and regulation at the municipality level seem to be relatively widespread. However, they are often conducted informally and with limited transparency (particularly in smaller municipalities). Moreover, stakeholder inputs are not systematically used for decision-making purposes, a finding echoed in Chapter 8.

The lack of formal procedures for stakeholder engagement may not afford all stakeholders equal opportunities to voice their needs and concerns. Resources and capacity are crucial for meaningful stakeholder engagement, especially where technical expertise is required. A large majority of responding municipalities (28 out of 31) declared that they engage stakeholders in the context of either ex ante or ex post assessment of their regulatory and policy decisions.4 Of these, 22 declared to do so for some regulations only, which is roughly consistent with responses about the uptake of ex ante and ex post assessment. Meetings, conferences and in-person discussions are by far the most frequently used consultation methods (sometimes combined with municipal consultation platforms), although online consultation venues are also explored to a considerable extent. Discussions during in-depth interviews suggest that interactions with stakeholders often take place informally, with no systematic written records, particularly in smaller municipalities – some of which reported channelling consultation-related messages through prominent local figures such as priests and other well-known personalities, in order to reach certain population groups including the elderly.

There seems to be room for more systematic and extensive follow-up on the feedback provided by stakeholders, which is further evidenced in Chapter 8. A majority of those municipalities carrying out stakeholder engagement activities said they provide some sort of feedback to consulted stakeholders on their contribution, although some of the stakeholders interviewed indicated that this feedback, when it exists, is often limited. The extent to which the input provided by consulted stakeholders is effectively used to inform decision-making notably depends on the nature of the regulation at hand, with 17 and 8 municipalities declaring that this is the case for some regulations and major regulations respectively. A similar response distribution applies to the publication of input received from stakeholders.

Data availability and resource constraints, including analytical capacity, are major obstacles to the development of evaluation practices at the LSGU level. Access to relevant data in a timely fashion as well as constraints in terms of available resources and analytical capacity within LSGUs appear to be critical factors holding back the uptake of evaluation practices as well as the systematic use of evaluation results for decision-making purposes. While efforts to address these challenges have been deployed by the national government (e.g. development of a framework of indicators for public interventions, training actions, guidance, etc.), further work is still required – particularly in light of LSGUs’ heterogeneity including in terms of interest in and understanding of the potential benefits of evaluation.

According to OECD questionnaire results, key challenges for local self-governments in promoting and conducting policy evaluations5 include: financial resources to conduct evaluation; availability of data; time lags to obtain and measure results from long-term policies; and human capital constraints (capacities and capabilities) among LSGU staff to manage and conduct evaluations. Resource constraints, both financial and in terms of capacity, also come up very prominently in OECD questionnaire results, as do the use of policy evaluation results in policy-making and the quality of evaluation results (see Figure 4.9). Indeed, resource endowments are a key determinant of the scope and extent of evaluation activities at the LSGU level since evaluations are primarily financed by municipality funds. All 33 municipalities that responded to this part of the OECD questionnaire declared that evaluations are financed by municipal funds, compared to only 7 for EU funds and 5 and 3 for regional self-government and national government funds respectively (several response options were allowed).

Responses from municipalities regarding constraints to the development of evaluation practices are consistent with those from regional self-government authorities. Three-quarters of the responding marshal offices viewed financial constraints on local self-governments as a major challenge to the promotion of policy evaluation activities at the local level. Insufficient quality of policy evaluation results was in turn a major challenge in this respect according to two-thirds of the responding marshal offices. Other challenges include limited interest and “demand”, as well as shortcomings in the framework for policy evaluation at the subnational level. In addition, some respondents pointed to limited analytical capacity, including the ability to draw conclusions based on data and indicators, as well as to the need for better-quality and regularly available data at a more disaggregated level.

Access to data and information by municipalities seems to be limited at present, especially data collected by the national government as part of its statutory work. The project to improve the monitoring of public services that is currently underway (Public Service Monitoring System, SMUP) is expected to contribute to developing evaluation (both ex ante and ex post) by making some of the relevant data available – as a very large component of municipalities’ actions relates to public service delivery.

It would be advisable to assess whether additional resources could be made available to LSGUs in order to facilitate the development of evaluation practices. For example, the 2007-13 Human Development Capital Operational Programme financed projects under which employees of municipalities received training in M&E. In the same vein, avenues should be explored to improve communication and institutional co-operation across levels of government so as to ensure appropriate knowledge and understanding by LSGUs (particularly smaller ones and those with limited capacity) of resources already available, e.g. guidelines, data sources, training opportunities, sources of expertise, etc. Institutional co-operation among LSGUs, possibly with regional level authorities in a co-ordinating or facilitating role, could also be pursued with a view to optimising the use of available resources. This may notably involve the networking of knowledge and expertise; certain smaller LSGUs may not need or be able to afford all the necessary knowledge and capacity on an ongoing basis, but they may benefit from access to a network of evaluation experts. Special attention should be paid to LSGUs outside FUAs and with low accessibility, as they may encounter obstacles to partaking in such networks and resources.

Overall, there seem to be no clear patterns regarding the use by subnational governments of evaluation results in their decision-making processes (with the exception of managing authorities for EU cohesion policy funding, whose evaluation units are responsible for carrying out the evaluation and monitoring of the implementation of related recommendations). The use of evaluation results appears to depend on available capacity as well as the internal organisation and personal involvement of leading political figures or high-ranking officials. Certain larger municipalities in FUAs appear to be quite advanced in that respect, with evaluations performed on a systematic basis, clearly embedded in the policy cycle and coupled with corrective or improvement measures as appropriate. This situation contrasts with that in municipalities where evaluation is either not carried out at all or only tenuously linked to decision-making.

When asked how policy evaluation findings are used to improve services and policies, 24 out of 37 responding municipalities indicated that these findings were discussed in the municipal council or its committees, and 12 declared that they make them public for stakeholder engagement purposes. Crucially, very few reported a link between evaluation findings and resource allocation (nine) or between evaluation findings and staff incentives and remuneration (three). Similarly, only seven declared to have structured or institutionalised mechanisms (e.g. a management response mechanism) in place to follow up on evaluation findings, and four reported no direct influence of evaluation results on policy-making (whereas seven declared that they do not evaluate policies and programmes). These results are consistent with feedback received during interviews indicating that there is no systematic use of evaluation results by decision-makers at the political level. When asked about challenges regarding the elaboration of LDS or plans, more than half of the responding counties pointed to the lack of evaluation of the impact of programmes that are part of the local development strategy as a challenge.6

In general terms, there is a need for clear frameworks and processes to enable and promote evidence-based policies and regulations, and to ensure systematic linkages throughout the policy cycle (including budgeting) as well as adherence to good practices over the long term. Steps that may be taken to that end include establishing proportionate requirements for transparent and evidence-based decision-making (e.g. to provide a public and substantiated explanation in cases where evaluation findings are not taken into account) as well as establishing clear and explicit linkages between available evidence and resourcing. As advocated by the OECD in its best practice principles for regulatory impact assessment (RIA) (OECD, 2020[20]), among others, administrations can also promote the use of evaluation results in decision-making by including such use among the criteria considered in civil servants’ performance assessments.

Clear institutional frameworks could also help address existing co-operation shortcomings. When asked about major challenges regarding co-ordination for policy design and implementation across levels of public administration in Poland, respondents from the national government highlighted the lack of awareness of the different types of co-ordination arrangements as well as excessively rigid legal forms of co-operation. Other obstacles mentioned include a lack of skilled personnel to participate in co-ordination arrangements and changing regulatory frameworks. Based on evidence from other parts of the OECD questionnaire as well as from in-depth interviews, insufficient co-ordination and awareness also appear to hinder the development of evaluation practices and culture at the municipal level. Dissemination of good practices and exchange of experiences between relatively better equipped and more experienced municipalities and their less advanced counterparts in this respect could also prove beneficial.

Policy-makers and stakeholders cannot use evidence or the results of M&E if they do not know about them (Haynes et al., 2018[21]). The first step to promote the use of evidence is therefore that it be publicised and communicated to its intended users (OECD, 2020[22]).

First, publicity of evidence, such as evaluation results, ensures greater impact and thus increases the likelihood that it is used. OECD data (2020[22]) shows that a majority of surveyed countries make evaluation findings and recommendations available to the general public by default, for example by publishing the reports on the commissioning institutions’ websites. The drive for publicity has also been in part led by EU integration, insofar as publicity of evaluation results is a requirement of EU regulations. Indeed, as indicated in Article 54.4 of the Common Provisions Regulation (CPR) for 2014-20, all evaluations shall be made available to the public. With reference to this article, the European Cohesion and Regional Development Fund’s guidelines on monitoring and evaluation recommend that evaluations be published online, with abstracts written in English to facilitate the exchange of results across countries. Countries are also invited to go beyond those legal requirements by using more technologically advanced means of publishing, such as interactive electronic maps. In Poland, all evaluations of cohesion and structural fund programmes are made available on www.ewaluacja.gov.pl. The website also includes evaluations of regional programmes and methodological tools for evaluators. For instance, it provides research methodology advice, a glossary and evaluation guidelines addressed to employees of any public administration.

However, data shows that only a third of Polish municipalities surveyed make their evaluation results public to engage stakeholders with the implications of evaluation results (see Figure 4.10). In the case of small municipalities, this may partially be explained by a lack of the necessary tools to do so – such as a searchable online database.

Nevertheless, publicity of evaluation results is crucial to promoting their use and, in turn, ensuring greater follow-up of recommendations, while providing accountability to citizens on the use of public funds (OECD, 2020[22]). A dedicated page could be created at the national level on www.ewaluacja.gov.pl for subnational governments to upload the results of their evaluations. This page could also include the evaluations carried out by the managing authorities at the regional level, which are already included in this platform.

More importantly, OECD data shows that the uptake of evidence by policy- and decision-makers may be more likely when information is easily accessible to them. Indeed, research shows that in isolation, publicity of evidence alone does not always significantly improve the use of evaluation in policy-making. Rather, evidence needs to be translated into understandable language and respond to users’ knowledge needs (OECD, forthcoming[23]). In Poland, for instance, research shows that despite the existence of an online platform to publicise evaluation results, actual uptake remains mainly limited to employees of the commissioning departments (Stockmann, Meyer and Taube, n.d.[24]). This is why tailored communication and dissemination strategies that increase access to clearly presented research findings are very important for evidence use (OECD, 2020[22]). The EU Cohesion Fund guidelines for M&E for 2014-20 acknowledge the importance of user-oriented, organised and tailored communication of evaluation processes and reports.

A simple and cost-effective way to make results more user-friendly is to include executive summaries of a defined length in evaluation reports. For instance, the United Kingdom (UK) What Works Centres, including the Education Endowment Foundation, the Early Intervention Foundation and the What Works Centre for Local Economic Growth, produce a range of policy briefs to disseminate key messages to their target audiences. Therefore, when subnational governments commission evaluations, they could request that these systematically include executive summaries with recommendations in order to increase their uptake.

Moreover, M&E evidence could be more strategically communicated. While the 16 Regional Territorial Observatories (RTOs) already aim to disseminate evidence on regional development in order to increase the impact of M&E of territorial policies, research has found that the way data is shared across these observatories’ websites is heterogeneous (Maśloch and Piotrowska, 2017[13]). While the STRATEG online platform (https://strateg.stat.gov.pl/) gathers statistical data related to the implementation of cohesion policy at the national and regional levels in a systematic and visually impactful way, it does not explicitly share evaluation reports.

The Polish national government could consider transforming the STRATEG platform into a one-stop-shop for evidence on LDS, including evidence from the LSGU level and evaluations. This could be an opportunity to make use of evidence synthesis tools, such as those used in the UK What Works Centres (see the next section on evidence synthesis for more information on these tools).

Additionally, social media offer a good alternative channel for sharing evidence more effectively, in particular when it comes to monitoring results. Subnational governments may wish to share “information nuggets” on social media, as a cost-efficient way to share monitoring data on their development strategies. This is done at the national level, for example through the Twitter handle “news.eval”. In Canada, for instance, ministerial departments have started diffusing evaluation findings beyond departmental websites via platforms such as Twitter and LinkedIn.

After Poland acceded to the EU, numerous steps were taken at the national level to promote evaluation practices and, crucially, to build a systematic evaluation culture and system for cohesion policy interventions. These include training (also co-funded by the EU in many cases), meetings with regional administration officials to present and discuss evaluation research findings, a yearly evaluation conference and a postgraduate study programme (Academy of Evaluation). These actions have helped to improve the analytical capacities within regional administrations as well as to convey a strong message relating to the importance of evaluation for meeting public policy goals (Wojtowicz and Kupiec, n.d.[25]). Other noteworthy initiatives in support of evaluation in this context are the creation of a national evaluation database for the evaluation of cohesion policy (see Box 4.3 below) and a Recommendation Management System to monitor the implementation of recommendations contained in evaluations.

While it is undeniable that the government’s response to requirements in the context of cohesion policy has acted as a major driver for evaluation development over time, there seems to be significant scope for broadening and enhancing the awareness and knowledge of the benefits that can be derived from performance measuring and evaluation more generally. In terms of actors, this applies in particular to municipalities, whose involvement in capacity building and awareness-raising activities does not seem to be widespread. In terms of areas of application, this chiefly includes policy interventions other than those supported by cohesion policy funding.

One important misunderstanding that needs to be addressed in this context has to do with the way in which evaluations tend to be perceived. It is crucial for LSGU officials to understand that, far from being a mere audit or control mechanism, evaluation can empower them in their daily work by helping increase performance and accountability. Evidence-based decision-making in the context of regulatory activity is of crucial importance, as laws and regulations affect all areas of business and life and determine our safety and lifestyle, the ease of doing business and the achievement of societal and environmental goals. Ex ante RIA helps make better decisions by providing objective information about the likely benefits and costs of particular regulatory approaches, as well as critically assessing alternative options. Ex post evaluation is in turn a crucial tool to ensure that regulations remain fit for purpose, that businesses are not unnecessarily burdened and that citizens’ lives are protected (OECD, 2018[26]).

With the abovementioned benefits in mind, it would be valuable for the government to engage in further awareness-raising activities aimed at LSGUs, possibly in co-operation with regional authorities. These efforts may be combined with capacity building covering analytical skills and methods, with a view to strengthening overall institutional capacity for evidence-informed policy-making. This requires making public institutions more innovative, more effective and better prepared to deal with global and local challenges, by identifying the skills and competencies that they need to develop, the institutional procedures that need to be in place and the incentives to be provided (OECD, n.d.[27]).

Strategic rethinking may also be warranted with regard to the analytical focus of evaluation activities as well as the allocation of evaluation-related resources. A 2017 study examined the recommendations included in evaluation research conducted within the Polish National Cohesion Strategy between 2011 and 2015 (Sobiech and Kasianiuk, 2017[28]). Its findings suggest that the extensive knowledge produced by evaluation research is not fully capitalised upon for policy-making and reveal “a significant discrepancy between the evaluation purposes (accountability and knowledge production) and the dominant recommendations focused on program implementation”. It also refers to “evaluators’ inability to provide a rationale for the proposed modifications”. As many as 20 out of 33 responding municipalities reported that evaluations are carried out by external consultants. While expertise from external evaluators is beneficial, investing in the development of sufficient in-house capacity is required to use evaluation findings in a meaningful fashion and to help address the abovementioned shortcomings. Officials with sufficient capacity may, for example, be able to synthesise and convey available evidence, thus acting as “brokers” vis-à-vis decision-makers. Guidance and/or assistance to that end may be provided by existing structures, such as the Centre for Evaluation and Analysis of Public Policies, which already performs the role of knowledge broker in Poland (OECD, 2020[22]).

In a similar vein, it is important to ensure that LSGU staff are able to critically assess evaluation findings and prevent decisions from being made on the basis of insufficiently robust evidence – a concern that the Polish government has taken a number of steps to address (see Box 4.5). Developing such knowledge and skills among other categories of actors beyond government may, together with stakeholder engagement, further contribute to creating demand for evidence-informed decision-making. In the same vein, the involvement of academics in evaluations (as part of their scientific research) was also reported by a number of municipalities in their OECD questionnaire responses. Understanding the precise modalities of such involvement, as well as its contribution to better evaluation and its replication potential, would be valuable going forward.

Several methods exist for reviewing and assessing the evidence base. Evidence synthesis methodologies seek not only to aggregate evaluation findings and review them in a more or less systematic manner, but also to assess and rate the strength of the evidence. Evidence syntheses provide a useful dissemination tool since they allow decision-makers to access large bodies of evidence, as well as rapidly assess the extent to which they can trust it (OECD, 2020[22]). Indeed, they serve to aggregate, standardise, review and rate the evidence base of interventions (see Box 4.6). More often than not, however, evidence synthesis is conducted by institutions at arm’s length from government, such as clearinghouses or knowledge brokers, much like the UK What Works Centres or the Campbell Collaboration (Neuhoff et al., 2015[29]; Results for America, 2017[30]).

In that light, the RTOs could consider adopting the role of knowledge brokers at the local level. Knowledge brokers are institutions that usually play a key role in strengthening the relationship and collaboration between evidence producers and policy-makers by helping policy-makers access evidence and navigate research material that may be unfamiliar through the use of evidence synthesis methodologies. RTOs could thus use and share evidence synthesis tools, such as evidence rating systems, in order to facilitate use by local decision-makers.

The November 2020 amendment to the Act on Principles of Implementation of Development Policy stipulates that municipalities should benefit from setting up their own monitoring system but does not detail how this could be done. OECD data shows that only 17 out of 36 surveyed municipalities prepare frequent reports to monitor their policy priorities. Of these 17, only a few do so more than once a year.

  • The Ministry of Development Funds and Regional Policy could therefore encourage municipalities to conduct monitoring as an effective management tool by providing them with tailored guidance and methodologies for monitoring LDS.

Notwithstanding Poland’s efforts to support the development of evaluation among public authorities, there is scope for ensuring more systematic involvement and greater ownership at the LSGU level in this area.

  • National and regional authorities should thus capitalise upon the evaluation ecosystem that has developed around EU cohesion policy interventions to convince LSGUs that performance measurement and evaluation can help them to serve citizens better.

Despite Poland’s progress with regard to evaluation over the past 15 to 20 years, a number of challenges remain in promoting M&E policies at the subnational level, notably regarding limited resources and capacity.

  • To address these shortcomings, it would be valuable for national and regional authorities to assess whether additional resources can be made available to LSGUs to facilitate the development of M&E practices, including through enhanced analytical capacity and the ability to capitalise on tools for advanced data analysis.

Policy-makers and stakeholders cannot use evidence or the results of M&E if they do not know about them. The first step to promote the use of evidence is therefore that it be publicised and communicated to its intended users. National and regional authorities have a crucial role to play in fostering the use of evidence, including evaluation results, by improving knowledge management and making relevant information readily available.

  • It would be useful to capitalise on existing platforms and institutions to that end. For example, the Thematic Groups for Exchanging Experience in the Association of Polish Cities may be a suitable venue. In addition, the Ministry of Development Funds and Regional Policy may consider setting up a dedicated page on www.ewaluacja.gov.pl for LSGUs to share evaluation results. In the same vein, the knowledge and resources of both RTOs and Voivodeship Centres of Regional Surveys (Wojewódzkie Ośrodki Badań Regionalnych, WOBR) may be leveraged to improve knowledge management and access to relevant information and analysis.

In the same vein, tailored communication and dissemination strategies that increase access to clearly presented research findings are very important for evidence use.

  • The new platform for Monitoring Local Development could be used as a one-stop-shop for evidence on LDS, including evidence from the LSGU level and evaluations, with improved communication and dissemination practices.

Evidence syntheses provide a useful dissemination tool since they allow decision-makers to access large bodies of evidence and assess the extent to which they can trust it. Indeed, they serve to aggregate, standardise, review and rate the evidence base of interventions.

    • Regional level authorities should encourage RTOs to use and share evidence synthesis tools, such as evidence rating systems, in order to facilitate use by local decision-makers.

  • Regional authorities should also favour the dissemination of good practices and exchange of experiences relating to the use of evidence for decision-making between relatively better equipped and more experienced LSGUs (typically larger ones in FUAs) and their less advanced counterparts.

Publicising evaluation results is crucial to promoting their use and, in turn, ensuring greater follow-up of recommendations, while providing accountability to citizens on the use of public funds. The following could be envisaged:

  • Improving the use of evidence through better knowledge management, for example creating a dedicated page at the national level on www.ewaluacja.gov.pl for subnational governments to upload the results of their evaluations. This page could also include evaluations carried out by managing authorities at the regional level.

However, OECD data shows that the uptake of evidence by policy- and decision-makers may be more likely when information is easily accessible to them. Indeed, research shows that publicity of evidence alone does not always significantly improve the use of evaluation in policy-making. Rather, evidence needs to be translated into understandable language and respond to users’ knowledge needs. Thus, M&E evidence could be more strategically communicated. In particular, social media offer a good alternative channel for sharing evidence more effectively. Subnational governments may wish to share “information nuggets” on social media, as a cost-efficient way to share monitoring data on their development strategies. Thus, Poland could consider:

  • Transforming the STRATEG platform into a one-stop-shop for evidence on LDS, including evidence from the municipal level and evaluations, with improved communication and dissemination practices.

  • Taking advantage of social media to share “information nuggets” on LDS.

Monitoring differs from evaluation in substantive ways. It can be used to pursue three main objectives: improving operational decision-making, strengthening accountability and promoting transparency and communication with citizens. Each of these objectives requires a different monitoring setup. In Poland, the LSGUs that do monitor their policy priorities do not necessarily clearly differentiate between these objectives, potentially weakening the impact of monitoring. Thus, LSGUs could:

  • Use the national guidelines to clearly differentiate the monitoring setup for each of these objectives: operational decision-making, accountability and communication with citizens.

In particular, LSGUs could consider setting up internal performance dialogues between the highest decision-making level (i.e. the marshal office) and the heads of departments/services in order to improve operational decision-making. Setting up a performance dialogue would create an incentive for municipal departments to resolve implementation issues at the technical level through a gradual escalation process. If a problem is still unresolved after two quarters, it could be referred to the city council/mayor’s office for a decision, i.e. up to twice a year. Figure 4.2 in the section above shows how this performance dialogue could be articulated between the lower levels of decision-making and the centre of the subnational government over a one-year period. To set up such a performance dialogue, LSGUs should:

  • Clarify the respective responsibilities of each actor throughout the monitoring value chain (co-ordination and promotion of monitoring, data collection, data analysis, reporting, use of data).

  • Clarify the tools used for the performance dialogue, such as a dashboard.

  • Select a limited number of priority objectives and indicators to focus resources dedicated to monitoring policies with high visibility and economic and social impact.

LSGUs can also draw on monitoring data to communicate their results to citizens. To this end, LSGUs could:

  • Improve communication with citizens by updating a few key indicators on a public website or by publishing a short document on the progress made on the strategy at regular intervals (e.g. every year).

Developing clear performance indicators, together with baselines and targets, is important for monitoring policy priorities such as those in Polish LDS. One-third (14) of surveyed municipalities report that they did not define their indicators ex ante. Where indicators do exist, there is no systematic linkage between each objective contained in the LDS and the indicators chosen to monitor it. Moreover, indicators in the LDS rely mainly on statistical data. Yet administrative data and data collected as part of an intervention’s implementation are usually better suited for process, output and intermediate outcome indicators, such as those found in LDS. Thus, Polish municipalities should:

  • Clarify the indicators used to monitor each local development strategy.

  • Include indicators based on administrative data, taking advantage of the system for monitoring public services currently under development to build these indicators. The box below gives more information on the different potential sources of data for monitoring.

The uptake of evaluation practices is uneven across LSGUs, with larger, urban LSGUs devoting more time and resources to that end than their smaller counterparts outside of FUAs. Significant evaluation-oriented efforts have nonetheless been deployed over the years in the context of EU cohesion policy funding. Building on this experience, LSGUs should:

  • Draw on existing knowledge and expertise to promote systematic evaluation and adopt it as part of their working methods.

LSGUs should address constraints on data availability and resources, including analytical capacity, which are major obstacles to the development of evaluation practices at the LSGU level. In particular:

  • Assess whether additional resources could be made available to LSGUs in order to facilitate the development of evaluation practices by enhancing in-house capacity.

  • Explore avenues to improve communication and institutional co-operation across levels of government, so as to ensure that LSGUs are aware of and understand the resources already available (e.g. guidelines, data sources, training opportunities and sources of expertise).

  • Pursue institutional co-operation among LSGUs, possibly with regional level authorities in a co-ordinating or facilitating role, with a view to optimising the use of available resources. This may notably involve the networking of knowledge and expertise (e.g. development of a network of evaluation experts accessible to LSGUs).

Publicising evaluation results is crucial to promote their use and, in turn, ensure greater follow-up of recommendations, while providing accountability to citizens on the use of public funds. It could be envisaged to:

  • Improve the use of evidence through better knowledge management, for example by extending the dedicated national web page (www.ewaluacja.gov.pl) to subnational governments.

According to the OECD questionnaire, relatively few municipalities in Poland (particularly outside FUAs) have a structured or institutionalised mechanism in place to make systematic use of evaluation findings for improving their services and policies. Further institutionalising the use of evaluation findings for decision-making purposes should therefore be considered a priority. To do so, it may be valuable to:

  • Establish proportionate requirements for transparent and evidence-based decision-making, e.g. providing a public and substantiated explanation in cases where relevant evaluation findings are not taken into account.

  • Establish clear and explicit linkages between available evidence (such as evaluation findings) and resourcing. This applies to budgeting decisions on investment outlays and expenditure, as well as to staff incentives and remuneration. While applicable across the board, this recommendation should be considered a priority for the municipalities where such linkages are weakest, i.e. smaller ones, often outside of FUAs.

  • Promote the use of evaluation results in decision-making by including such use among the criteria that are considered for employees’ performance assessments.

References

[10] Colombian Government Department for National Planning (n.d.), Territorial Portal, https://portalterritorial.dnp.gov.co (accessed on 15 June 2020).

[14] DG NEAR (2016), Guidelines on linking planning/programming, monitoring and evaluation, https://ec.europa.eu/neighbourhood-enlargement/sites/near/files/pdf/financial_assistance/phare/evaluation/2016/20160831-dg-near-guidelines-on-linking-planning-progrming-vol-1-v-0.4.pdf.

[16] EC (2020), Total Allocations of Cohesion Policy 2014-2020: Breakdown by Member States, European Structural and Investment Funds Data, European Commission, https://cohesiondata.ec.europa.eu/d/n6pb-g38m (accessed on 15 July 2020).

[7] EC (2015), The Programming Period 2014-2020: Guidance Document on Monitoring and Evaluation, European Regional Development Fund and Cohesion Fund, European Commission, https://ec.europa.eu/regional_policy/en/information/publications/evaluations-guidance-documents/2013/the-programming-period-2014-2020-guidance-document-on-monitoring-and-evaluation-european-regional-development-fund-and-cohesion-fund (accessed on 30 June 2020).

[17] EC (2009), Building the Evaluation System in Poland, European Commission, https://ec.europa.eu/regional_policy/en/projects/best-practices/Polska/1926 (accessed on 18 July 2020).

[6] European Parliament and Council (2013), The Common Provisions Regulation (CPR) for the 2014-2020 European Structural and Investment Fund, https://eur-lex.europa.eu/eli/reg/2013/1303/oj (accessed on 30 June 2020).

[21] Haynes, A. et al. (2018), “What can we learn from interventions that aim to increase policy-makers’ capacity to use research? A realist scoping review”, Health Research Policy and Systems, Vol. 16/1, p. 31, https://doi.org/10.1186/s12961-018-0277-1.

[33] Innovations for Poverty Action (2016), Using Administrative Data for Monitoring and Evaluation.

[11] Italian Government (n.d.), Open Cohesion Project, https://opencoesione.gov.it/en/ (accessed on 15 September 2020).

[12] Jarczewski, W. (2010), “Regional observatories of development policy as a tool for monitoring the efficiency of the cohesion policy”, Europa, Vol. 21, pp. 155-172, https://doi.org/10.7163/eu21.2010.21.12.

[19] Kulesza, M. and D. Sześciło (2012), Local government in Poland, INAP.

[13] Maśloch, P. and L. Piotrowska (2017), “An overview of regional territorial observatories – common elements and substantial differences. Territorial observatories and their role in shaping the defence capability and security of a region”, World Scientific News, http://psjd.icm.edu.pl/psjd/element/bwmeta1.element.psjd-4aeb1116-bbef-4036-ac3a-22b815245839/c/WSN_72__2017__194-203.pdf (accessed on 2 July 2020).

[3] Mexican Government (2013), Federal Planning Law.

[8] Meyer, W., R. Stockmann and L. Taube (2020), “The institutionalisation of evaluation: Theoretical background, analytical concept and methods”, in The Institutionalisation of Evaluation in Europe, Springer International Publishing, https://doi.org/10.1007/978-3-030-32284-7_1.

[29] Neuhoff, A. et al. (2015), The What Works Marketplace: Helping Leaders Use Evidence to Make Smarter Choices, The Bridgespan Group, http://www.results4america.org.

[20] OECD (2020), “Best practice principles for regulatory impact analysis”, in Regulatory Impact Assessment, OECD Publishing, Paris, https://dx.doi.org/10.1787/663f08d9-en.

[22] OECD (2020), Improving Governance with Policy Evaluation: Lessons From Country Experiences, OECD Public Governance Reviews, OECD Publishing, Paris, https://doi.org/10.1787/89b1577d-en.

[1] OECD (2019), Open Government in Biscay, OECD Public Governance Reviews, OECD Publishing, Paris, https://dx.doi.org/10.1787/e4e1a40c-en.

[26] OECD (2018), OECD Regulatory Policy Outlook 2018, OECD Publishing, Paris, https://doi.org/10.1787/9789264303072-en.

[34] OECD (2016), Open Government: The Global Context and the Way Forward, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264268104-en.

[5] OECD (2009), OECD Territorial Reviews: Poland 2008: (Polish version), Ministry of Regional Development, Poland, Warsaw, https://dx.doi.org/10.1787/9788376100814-pl.

[15] OECD (2008), Ireland 2008: Towards an Integrated Public Service, OECD Public Management Reviews, OECD Publishing, Paris, https://doi.org/10.1787/9789264043268-en.

[27] OECD (n.d.), Building Capacity for Evidence Informed Policy Making: Towards a Baseline Skill Set, OECD, Paris, http://www.oecd.org/gov/building-capacity-for-evidence-informed-policymaking.pdf (accessed on 10 July 2020).

[18] OECD (n.d.), Indicators of Regulatory Policy and Governance Surveys 2014 and 2017, OECD, Paris, https://doi.org/10.1787/888933816003.

[23] OECD (forthcoming), “Towards a sound monitoring and evaluation system for long-term planning in Nuevo León”, OECD, Paris.

[4] Polish Government (2017), The Strategy for Responsible Development for the Period up to 2020.

[30] Results for America (2017), 100+ Government Mechanisms to Advance the Use of Data and Evidence in Policymaking: A Landscape Review.

[2] Scottish Government (2020), National Performance Framework, https://nationalperformance.gov.scot/ (accessed on 6 March 2020).

[28] Sobiech, R. and K. Kasianiuk (2017), “Improving interventions, influencing policy process. What do evaluation reports recommend?”, Paper submitted at the 2017 EGPA International Conference.

[24] Stockmann, R., W. Meyer and L. Taube (eds.) (n.d.), The Institutionalisation of Evaluation in Europe, CEval.

[31] UK Civil Service (n.d.), What is a Rapid Evidence Assessment?, http://www.civilservice.gov.uk/networks/gsr/resources-and-guidance/rapidevidence-assessment/what-is (accessed on 12 August 2019).

[32] UN Global Pulse (2016), Integrating Big Data into the Monitoring and Evaluation of Development Programmes.

[9] Vági, P. and E. Rimkute (2018), “Toolkit for the preparation, implementation, monitoring, reporting and evaluation of public administration reform and sector strategies: Guidance for SIGMA partners”, SIGMA Papers, No. 57, OECD Publishing, Paris, https://doi.org/10.1787/37e212e6-en (accessed on 18 June 2019).

[25] Wojtowicz, D. and T. Kupiec (n.d.), “How do the regional authorities use evaluation? Comparative study of Poland and Spain”, https://www.ippapublicpolicy.org/file/paper/1434656324.pdf (accessed on 29 July 2020).

Notes

← 1. Based on the information provided by the Polish Association of Cities.

← 2. Although this OECD questionnaire question referred to regulatory decisions only (“Do you conduct an ex post assessment of the impacts (including costs and benefits) of regulatory decisions?”), supplemental examples provided suggest that responses ought to be interpreted as referring to both regulatory and policy decisions. This also applies to subsequent references to “some”, “major” and “all” regulations in the present section.

← 3. This sub-section focuses on stakeholder engagement practices in the context of evidence-based decision-making. For a broader perspective on the topic, please refer to Chapter 8 on open government, which the OECD defines as “a culture of governance that promotes the principles of transparency, integrity, accountability and stakeholder participation in support of democracy and inclusive growth” (OECD, 2016[34]).

← 4. It must be noted that, in many cases, stakeholder engagement activities reported by LSGUs are likely to refer, at least in part, to mandatory consultation provisions in the context of strategic and spatial planning.

← 5. Here and in subsequent paragraphs, “policy evaluations” ought to be understood as potentially including laws and regulations.

← 6. Powiats or counties also referred to the underestimation of local development strategies’ implementation costs, which in some cases may have to do with insufficient quality of ex ante assessments.


© OECD 2021

The use of this work, whether digital or print, is governed by the Terms and Conditions to be found at http://www.oecd.org/termsandconditions.