9. Initiatives to strengthen evidence-based policy making for social services in Spain

A more sophisticated monitoring and evaluation policy can benefit the planning and continuous improvement of social service provision. The lack of harmonised indicators at the regional level on key characteristics such as accessibility or waiting times complicates the monitoring and measurement of service quality. Although formal evaluations are not the only possible source of information, they are very important. Unfortunately, they are often carried out on an ad hoc basis and are not results-focused, making it difficult to analyse the effectiveness of strategic social service plans.

This chapter presents ideas for (i) strengthening the monitoring and evaluation system, and (ii) increasing the use of evidence in policy design.

With the exception of the sometimes partial information collected within the framework of the Concerted Plan and the long-term care system (through the Information System for Autonomy and Care for Dependent People, SISAAD), there is currently no national database with consolidated statistics on the social services system in Spain. Statistics on primary social services and long-term care are relatively consistent because state laws and agreements set out obligations for the production of this information and because the state co-funds these services. In contrast, with the exception of services falling within the framework of the Long-term Care Act, specialised services are entirely funded by the autonomous communities and are not subject to any such obligation.

The Concerted Plan Report is an annual publication by the Ministry of Social Rights and the 2030 Agenda. It collects the information reported by the autonomous communities (which, in turn, collect it from local entities) that have signed an agreement to participate in the Concerted Plan for Basic Social Services in Local Corporations, with the exception of the foral communities of the Basque Country and Navarre. This information – which includes, for example, the funding of centres and the number of users broken down by group and sex – is collected using an evaluation sheet and entered into an information system. The regions can send the information through a web-based tool, which can be “fed” by the national Social Services Users Information System (SIUSS) in the regions that use its web version.

The homogeneous data collection through the SIUSS constitutes an important source of information on the reality of social services in participating communities, but it does not prevent information gaps. In many autonomous communities, there are projects funded exclusively at the autonomous and/or local level without state co-funding. These projects are often not included in the Concerted Plan Report statistics: some autonomous communities include information on their financial contributions in these cases, whereas others do not. This creates problems when comparing primary social services expenditure between autonomous communities. Furthermore, other indicators that would be useful for a complete perspective, such as waiting times, could also be included. Moreover, the way that the report presents summary statistics makes it difficult to cross-reference different indicators and track users or providers over time.

The 2006 Long-term Care Act created a system with a requirement to publish more detailed statistical indicators on applications, assessments, beneficiaries and services for autonomy and care for dependent people than is the case for (other) primary care services. These indicators are published every month (and not annually, as is the case for social services in certain autonomous communities). The SISAAD is a tool that autonomous communities can use to manage these services (Fernández, Kups and Llena-Nozal, 2022[1]). More recently, the Commission to Analyse the Dependency Situation was set up to prepare a technical report on the state of the dependency system.

Some autonomous communities publish statistics on their websites to provide an overview of their social service systems. For example, the Basque Country’s profile of users of different services distinguishes not only between men and women, but also between age groups. The indicators published by Castile-León include various statistics on family-care and child-protection services. In addition to the national Concerted Plan Report, several autonomous communities also publish their own reports, but these sometimes provide only a partial overview of specialised care services. They generally do not include indicators on waiting times or the quality of services. An exception is the set of quality indicators for secondary care centres promoted by the Institut Català d’Assistència i Serveis Socials [the Catalan Institute for Assistance and Social Services] (Centro de Documentación y Estudios, 2011[2]).

Although many strategic plans include budget items dedicated to evaluating social services provision, the availability and scope of these evaluations differ among the autonomous communities. For example, the Basque Country evaluation, under the responsibility of the Inter-agency Body for Social Services, included statistics on deadlines, expenditure, users and the extent to which the actions foreseen in the plan were carried out. The Interim Monitoring Plan for the 2017-20 Strategic Plan also rates the extent to which actions were carried out. Castile-León’s strategic plan foresees an annual evaluation. The social services catalogue is also evaluated. Other evaluations include the Social Reality Observatory’s interim evaluation of Navarre’s strategic plan, and the evaluation report of the Agency of Social Care of Madrid’s strategic plan. In other autonomous communities, for example, in the Canary Islands, the law provides for the evaluation of plans, but these evaluations are not necessarily made public.

Some communities are planning to strengthen their evaluation systems. For example, the Cantabrian “Horizon 2030 Strategy” plans to create an evaluation system for the public social services system, with measures that include launching the Social Reality Observatory and developing an evaluation plan. Catalonia’s new strategic plan focuses on evidence-based decisions and includes the design and implementation of a system for evaluating service quality. In some cases, demand forecasts supplement evaluations. An example is the analysis of social needs and potential demand by Aragon’s Planning and Evaluation Service, which studies met and unmet demand, for example through the number of users with open case files in the SIUSS. The Asturian Social Services Observatory prepared a demand forecast for residential care for older people.

At the national level, evaluation bodies and evaluation culture are still less developed than in other OECD countries. Individual institutions and ministries are responsible for evaluating the initiatives under their jurisdiction. As the autonomous communities are responsible for social services, there is no national instrument that systematically evaluates and reports on the provision of these services, except for the aforementioned Concerted Plan Reports. Between 2007 and 2017, the State Agency for the Evaluation of Public Policies and Quality of Services of Spain, under the Office of the President, assisted responsible ministries or institutions in conducting evaluations. Since then, responsibility has fallen to the Institute for the Evaluation of Public Policies, under the supervision of the Ministry for Regional Policy and the Civil Service, which carries out evaluations and offers training and methodological guidance. However, its scope of action is rather limited (de la Fuente et al., 2021[3]).

Defining relevant and stable indicators. In Spain, there are currently few spatially and temporally uniform indicators in the area of primary care services, and none in the area of specialised services, with the exception of long-term care. This complicates the planning of social services policies at all levels (national, regional and local) because it makes it difficult to compare the functioning and performance of the systems of similar localities and regions, and thus to identify good practices. A set of common indicators could help solve this problem. A working group of the Inter-Regional Social Services Council, supported by university researchers, could for example be tasked with identifying these indicators.

The working group should seek to identify the indicators that are most relevant to the majority of stakeholders and that can be collected without excessive burden. The two-decade experience of the Australian Review of Government Service Provision provides an example of a process for selecting indicators that fulfil these criteria. The review evaluates the performance of 15 public services, including social services, with the aim of measuring the equity of access to, and impact of, services for different population groups; the effectiveness of achieving the established objectives; and the efficiency of providing cost-effective services. The selected indicators are meant to reflect the impact of services, be sufficiently comparable over time and space, be easily understood by all, and ideally exist in other countries. If a given indicator does not yet exist countrywide, efforts are made to incorporate the missing regions over time (Centro de Documentación y Estudios, 2011[2]). For Spain, the selection should ideally include mid-term indicators, which measure the scope of services, and impact indicators, which assess how services affect users. The selection could take inspiration from a recent inventory of social impact evaluation practices for different social services in the European Union, which identified a number of mid-term and impact indicators. Mid-term indicators often include statistics on the number of beneficiaries of different programmes, while impact indicators generally focus on long-term results, such as the number of previous users who are still receiving services or benefits five years later, who have improved their quality of life, or whose financial or professional situation has stabilised (EU, forthcoming[4]).
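As a purely illustrative sketch, the example below shows how one mid-term indicator (distinct beneficiaries per programme and year) and one impact indicator (the share of users from five years earlier who are still receiving a service) could be computed once anonymised longitudinal user records are available. The `records` table and its columns (`user_id`, `programme`, `year`) are hypothetical and do not correspond to any existing Spanish data system; the actual indicator definitions would be those agreed by the working group.

```python
import pandas as pd

# Hypothetical anonymised longitudinal records: one row per user, programme and year.
records = pd.DataFrame({
    "user_id":   [1, 1, 2, 3, 3, 4, 5, 5],
    "programme": ["home_help"] * 4 + ["day_centre"] * 4,
    "year":      [2017, 2022, 2017, 2017, 2022, 2022, 2017, 2022],
})

# Mid-term indicator: number of distinct beneficiaries per programme and year.
beneficiaries = (records.groupby(["programme", "year"])["user_id"]
                        .nunique()
                        .rename("beneficiaries"))

# Impact indicator: share of 2017 users who are still receiving services in 2022.
users_2017 = set(records.loc[records["year"] == 2017, "user_id"])
users_2022 = set(records.loc[records["year"] == 2022, "user_id"])
still_receiving = len(users_2017 & users_2022) / len(users_2017)

print(beneficiaries)
print(f"Share of 2017 users still receiving services in 2022: {still_receiving:.0%}")
```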

Strengthening administrative-data infrastructure and survey-data collection. Ideally, the information systems used by the various providers should be configured in a way that allows indicators to be extracted from the management systems without additional work. Likewise, it would be desirable for certain researchers to have access to anonymised microdata in order to carry out impact evaluations. The 2019 Finnish Act on the Secondary Use of Social and Health Data, which gives the Findata Agency the power to grant researchers with a legitimate interest access to health and social sector data, could be used as a model. However, administrative data alone are not enough. For example, the single national indicator system in Britain relies heavily on administrative data, but supplements them with data obtained through surveys. Among these surveys is the Adult Social Care Survey, through which each local authority must establish the degree of satisfaction of all home and residential service users who are able to respond (Centro de Documentación y Estudios, 2011[2]). A study of the quality of nursing homes in Denmark also combined administrative data with survey data (Hjelmar et al., 2018[5]).

Examining the possibility of establishing minimum data requirements for autonomous regions to report to the central authorities. The autonomous communities and local entities participating in the Concerted Plan are required to report some basic statistics on their primary care systems’ professionals, users and expenditure. As mentioned previously, the autonomous communities differ with regard to whether these statistics also cover projects in this sector that do not use national funds. The Inter-Regional Council could also agree to make it compulsory to report part of the indicators for primary and specialised care social services to the relevant state authorities, regardless of their sources of funding.
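To make the idea of a minimum reporting requirement more concrete, the sketch below shows one possible, hypothetical shape for such a record and a basic completeness check. The field list, labels and validation rules shown here are assumptions for illustration only; the actual specification would have to be agreed by the Inter-Regional Council.

```python
from dataclasses import dataclass

# Hypothetical minimum reporting record; the real field list would be agreed
# by the Inter-Regional Social Services Council.
@dataclass
class ServiceReport:
    region: str             # autonomous community
    service_type: str       # "primary" or "specialised"
    funding_source: str     # "state_cofunded", "regional", "local", ...
    users: int              # number of users in the reference year
    expenditure_eur: float  # total expenditure, in euros
    median_wait_days: int   # median waiting time for access

def validate(report: ServiceReport) -> list[str]:
    """Return a list of problems; an empty list means the record can be consolidated."""
    problems = []
    if report.service_type not in {"primary", "specialised"}:
        problems.append("unknown service_type")
    if report.users < 0 or report.expenditure_eur < 0 or report.median_wait_days < 0:
        problems.append("negative value in users, expenditure or waiting time")
    return problems

# Example: a regionally funded specialised service would be reported like any other.
example = ServiceReport("Galicia", "specialised", "regional", 1250, 3_400_000.0, 45)
print(validate(example))  # [] -> record is complete and consistent
```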

Strengthening impact evaluations. The monitoring of strategic plans is a good tool for ascertaining whether planned activities have been implemented correctly and within the planned deadlines and budgets. However, this monitoring does not reveal the impact of these measures in general, nor of any changes in the supply and accessibility of services in particular. Impact evaluations could uncover information that leads to policy improvements and a more efficient use of resources, provided their results are robust and reflected in decision making (see Section 9.2 below). Strengthening the data infrastructure and defining common indicators can facilitate the implementation of impact evaluations. This requires that researchers inside or outside administrations have access to the relevant data in a secure and anonymised manner. In addition, evaluations require a budget, which may be provided for within the strategic plans or come from other sources. Investing in evaluation skills in the different government institutions can also foster a culture of evaluation. For example, the French region of Auvergne-Rhône-Alpes produced an evaluation manual for health and social service managers.

Evidence-based policy making can be defined as the consultation of various sources of information, including statistics and research results, before decision making (OECD, 2020[6]). Although some critics question whether this approach leads to better results (Howlett and Craft, 2013[7]), it is generally considered to be a critical step towards a government capable of addressing complex policy challenges in a more effective and efficient manner. As the use of evidence is a challenge common to various policy areas, a significant proportion of the ideas presented in this section have already been elaborated in the recent OECD report on supporting families (OECD, 2022[8]).

Ensuring that data are of sufficient quality and sufficiently accessible. As already mentioned, the availability and accessibility of data can influence their use in policy planning. Their quality has a significant influence on the accuracy of the resulting evaluation: data must be accurate, verifiable and documented. Finally, the ease or difficulty of entering information into management systems can affect the quality and completeness of the information that service providers supply. Research based on the OECD OURdata Index suggests that the countries with the best results in evidence-based policy making are those that clearly assign responsibility for co-ordinating open-data policies (OECD, 2019[9]; 2020[10]).

Strengthening the role of a government institution in disseminating the practice and use of evaluations, or equipping all agencies with the required expertise. An evaluation agency with more competencies and staff could strengthen knowledge of evaluation methods within the public service, increasing the likelihood that senior officials consult relevant research results and data before designing or adapting a programme. A successful example is the United States Foundations for Evidence-Based Policymaking Act of 2018. Through this act, the federal government sought to increase the use of evidence in policy making across all federal agencies, acknowledging that some were already excellent in this area while others lacked the necessary skills. The act pushes agencies to adopt more robust evaluation practices in order to generate more evidence on what works and what needs to be improved.

Increasing the demand for evidence from decision makers. Evidence from a number of countries suggests that senior policy makers and public officials often do not base policy making on evidence, including research commissioned by their own ministries or agencies. An important step in increasing the use of evidence in policy making is to ensure that policy makers know where and how to find the information. Knowledge brokers and self-assessment tools on access to and use of research can strengthen the availability of, and demand for, information. For example, Australia and Canada offer self-assessment tools that help agencies or individuals, respectively, to gauge their ability to use research. Training programmes can increase policy makers’ confidence in interpreting evidence. Examples include the UK Alliance for Useful Evidence Masterclass and the Finnish Innovation Fund’s training module on putting lessons learned from experiments into practice.

Creating (a network of) institutions responsible for the dissemination of good practices. To embed a culture of using evidence in their public administrations, a number of OECD countries have established institutions or teams responsible for evaluating public policies and/or for disseminating evaluation results within and outside the administration (OECD, 2022[8]). For example, the What Works Network was established in the United Kingdom in 2013. This network brings together ten What Works centres, each specialising in a different policy area, for example the Centre for Well-being, which focuses on housing, culture and employment policies and programmes. The centres seek to identify knowledge gaps on programmes and policies; synthesise existing evidence and present it in a way that is easily understood by non-specialists; disseminate the evidence; and assist professionals and decision makers in understanding and applying the evidence (The What Works Network, 2018[11]).

Ensuring that all evaluations implemented or commissioned by public entities are published and accessible. In Poland, all evaluations commissioned by public institutions must be available to the public. A national database has been created and all evaluations are published on a dedicated website. This platform shares the results of over 1 000 studies conducted since 2004, as well as methodological tools for evaluators. The Norwegian Directorate of Financial Management and the National Library of Norway maintain a public web service that brings together all the findings of evaluations conducted by the central government. It contains evaluations commissioned by government agencies from 2005 to today, as well as a selection of evaluations from previous years.

Tailoring the communication of results to the audience. Evidence should be presented and disseminated in a strategic way that is driven by the purpose of the evaluation and the information needs of its intended users (Patton, 1978[12]). Tailored communication and dissemination strategies that increase access to clearly presented research results are very important. These strategies include the use of infographics and webinars and the dissemination of parts of the narrative through social media.

References

[2] Centro de Documentación y Estudios (2011), Cuadros de Mando de Indicadores de Calidad en el Ámbito de los Servicios Sociales.

[3] de la Fuente, A. et al. (2021), “La evaluación de políticas públicas en España: antecedentes, situación actual y propuestas para una reforma”, fedea Policy Papers, No. 2021/9, fedea.

[4] EU (forthcoming), Study on social services.

[1] Fernández, R., S. Kups and A. Llena-Nozal (2022), Information technologies for social systems in Spain.

[5] Hjelmar, U. et al. (2018), “Public/private ownership and quality of care: Evidence from Danish nursing homes”, Social Science & Medicine, Vol. 216, pp. 41-49, https://doi.org/10.1016/j.socscimed.2018.09.029.

[7] Howlett, M. and J. Craft (2013), “Policy Advisory Systems and Evidence-Based Policy: The Location and Content of Evidentiary Policy Advice”, in Young, S. (ed.), Evidence-based policy making in Canada, Oxford University Press, Ontario.

[8] OECD (2022), Evolving Family Models in Spain: A New National Framework for Improved Support and Protection for Families, OECD Publishing, Paris, https://doi.org/10.1787/c27e63ab-en.

[6] OECD (2020), Mobilising Evidence for Good Governance: Taking Stock of Principles and Standards for Policy Design, Implementation and Evaluation, OECD Public Governance Reviews, OECD Publishing, Paris, https://doi.org/10.1787/3f6f736b-en.

[10] OECD (2020), “The OECD 2019 Open Useful Reusable Data (Ourdata) Index”, OECD, Paris, https://www.oecd.org/gov/digital-government/ourdata-index-policy-paper-2020.pdf.

[9] OECD (2019), The Path to Becoming a Data-Driven Public Sector, OECD Digital Government Studies, OECD Publishing, Paris, https://doi.org/10.1787/059814a7-en.

[12] Patton, M. (1978), Utilization-focused evaluation, Sage, Beverly Hills.

[11] The What Works Network (2018), The What Works Network - Five Years On, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/677478/6.4154_What_works_report_Final.pdf (accessed on 11 March 2022).
