4. Evaluating public communication

Evaluation is a core building block of evidence-driven communication. In addition to its role in informing design and delivery processes as outlined in Chapter 3, it helps build an understanding of how intended audiences interpret messages. Among its many benefits, evaluation can contribute to professionalising the field of public communication by providing evidence on its impact and lessons on what works and what does not. It can also aid governments in identifying trends that can inform the design of future communication strategies and initiatives.

Evaluating the impact of public communication is especially important in the context of governments’ efforts to ensure accountability and transparency in an environment characterised by growing citizen expectations. The COVID-19 pandemic in particular has highlighted the need for improved evaluation mechanisms to help practitioners understand the impact of communication initiatives to inform future actions, reflect on lessons learned, and build country resilience for future crises.

The OECD defines the practice of evaluation as “the systematic and objective assessment of an ongoing or completed project, programme or policy, its design, implementation and results […] to determine the relevance and achievement of objectives, efficiency, effectiveness, impact and sustainability” (OECD, 2009[1]; OECD, 2020[2]). It differs from the concept of monitoring, which is the systematic gathering of data to measure the progress of an ongoing initiative, in that it occurs at different stages, is issue-specific and customised (OECD, 2020[2]). Evaluation can provide information that is credible and useful, enabling the incorporation of lessons learned into the decision-making process.

Despite progress to date, there is a need to build evidence on best practices for evaluating public communication. A wide gap remains between theory and practice due to the lack of standards and evidence of what works and what does not in this field. Indeed, policy makers and experts have not yet reached a consensus on “basic evaluation measures and standards”, placing the field in a state of “stasis” (Macnamara, 2020[3]). These factors, together with the need to make public communication more data-driven, make building evidence on this field all the more important.

This chapter will explore different approaches used by governments to evaluate public communication. It will focus on an in-depth assessment of this particular area given the ample literature and country experience on monitoring and the need to address mounting challenges with regard to evaluation. As such, the first section will reflect on the growing importance of evaluation through an overview of practices in OECD and partner countries. In doing so, it will identify the main challenges inhibiting evaluation in this field. Acknowledging there is no silver bullet, the remaining sections will explore potential avenues to strengthen its institutionalisation, shift its focus from outputs to impact, and link metrics to organisational goals.

Evaluation mechanisms and their systematic application are indispensable for governments to adopt a more strategic communication approach. Notably, they ensure that the function is efficient, achieves concrete impact, and contributes to policy objectives and government priorities. Evaluation can also provide legitimacy to the work of public communicators and better demonstrate its value to garner commitments from senior executives. Beyond the assessment of effectiveness being an end in itself, evaluation is a means of promoting a culture of openness by facilitating transparent government decision-making processes, encouraging the continuous monitoring of progress and promoting accountability for resources used (Macnamara, 2020[3]).

Indeed, evaluation is an essential tool for public communicators to assess the effectiveness of their initiatives, reduce risks of policy failure and promote learning. First, it can foster effectiveness by analysing whether initiatives reached the desired audiences and achieved their intended objectives. In doing so, it can provide timely insights on challenges or unintended consequences, enabling adjustments to given courses of action. Second, evaluation supports learning by building evidence of what works and what does not, helping to avoid failure as well as to inform the design of future strategic communications (OECD, 2020[2]). When applied consistently, this contributes to a better quality of decision making by providing insights on the links between policies, their communication and the impact of messages. Finally, evaluation can reinforce the open government principle of accountability by providing information about whether key communication efforts – and their allocated resources – are generating the expected results and delivering on “value for money” (Macnamara, 2020[3]).

In this regard, governments across OECD and partner countries generally recognised the importance of evaluating public communication. Figure 4.1 illustrates its widespread use at both the central level (34 out of 38 CoGs) and the sectoral level (17 out of 24 MHs). Its status as a core communication competency is also reflected by the fact that 27 of the CoGs that conducted evaluation have an established unit, team or individual for this task.

While there is a rich diversity of evaluation practices across OECD and partner countries, several commonalities exist. By examining how evaluation frameworks define units of analysis (what is being measured), timeframes (ex ante or ex post), methods (how evaluation is carried out) and evaluating entities (internal, external or hybrid), patterns of best practice begin to emerge.

In terms of what is evaluated, communication can be assessed across several core competencies, or “levels of analysis” (Gregory, 2020[4]). As Table 4.1 illustrates, a large share of efforts in OECD and partner countries focused on the evaluation of campaigns (30 out of 33 CoGs) and their impact on the media (31 out of 33 CoGs). With the wide adoption of new technologies and their low entry costs, the assessment of digital and social media campaigns through impressions, likes and shares has also become a popular practice (32 out of 33 CoGs). This was the case for the “Belgium. Uniquely phenomenal” campaign, which saw an increase in international press coverage (75% in 2019, compared to 38% in 2018 and 19% in 2017) and an 88% increase in the number of Facebook followers.1 The least frequent evaluations in CoGs seem to concern the assessment of strategies (21 out of 33 CoGs) and internal communication (23 out of 33 CoGs), which suggests countries could focus more on systematically assessing organisational and strategic objectives beyond programmatic outcomes.

Due in part to the wide range of competencies and activities public communicators evaluate, this field is also characterised by a variety of methods used in both OECD and partner countries, as indicated in Figure 4.2. The most common include surveys, web metrics (e.g. Google Analytics), tracking of news coverage and social media metrics (likes, reach, etc.). Governments rarely deployed methods such as randomised controlled trials or A/B testing, and when adopted, they were mainly applied in the context of campaigns. Given the diversity of methods, governments may find advantages in ensuring the selected mix of methods is fit for purpose against the intended degree of rigour, the availability of data and the relevance of objectives for each evaluation (O’Neil, 2020[5]; Gregory, 2020[4]). Doing so will require an enabling environment for practitioners to ensure technical capabilities, guiding frameworks, adequate resources and high-level political support are in place.
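To give a sense of what the more rigorous end of this methods mix involves, an A/B test comparing two campaign message variants can be assessed with a standard two-proportion z-test. The sketch below is purely illustrative: the sample sizes and click counts are hypothetical, and a real evaluation would typically rely on an established statistics package rather than a hand-rolled test.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing response rates of message variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical data: clicks on a call-to-action for two message variants
p_a, p_b, z, p = two_proportion_z(conv_a=240, n_a=5000, conv_b=310, n_b=5000)
print(f"variant A: {p_a:.1%}, variant B: {p_b:.1%}, z = {z:.2f}, p = {p:.4f}")
```

A p-value below a pre-agreed threshold (conventionally 0.05) would indicate that the difference between variants is unlikely to be due to chance alone, which is the kind of evidence output-only metrics such as raw reach cannot provide.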

The timeframe of evaluation is also a core criterion, notably whether these practices take place at the design or experimentation phase (ex ante), at the end of a communication initiative (ex post) or at both points. In practice, only four surveyed CoGs explicitly mentioned that they conducted evaluation at both the beginning and end of a given campaign. This is consistent with the fact that most public communication evaluations in CoGs were conducted on an ad hoc basis, and in most cases lacked an institutional framework for frequent application (see Table 4.1 above). Overall, findings concurred with general insights from the literature pointing to a general lack of ex ante evaluations within the field of public communication and the need to promote their use to set realistic baselines from which objectives can be more accurately assessed (O’Neil, 2020[5]; Gregory, 2020[4]; Macnamara, 2020[3]).

In terms of who evaluates, practices involve either internal actors (the institution itself or another government entity on its behalf) or external actors (stakeholders outside of government). According to the survey, core communication functions within CoGs were primarily evaluated internally across both OECD and partner countries. Dedicated evaluation teams within CoGs were the primary responsible actors. Where these did not exist, a project lead or another entity within the institution conducted the evaluation. Engaging internal actors can be attractive as they may build on organisational knowledge, access data more easily than external actors and link results with organisational goals (OECD, 2020[2]; O’Neil, 2020[5]).

In addition, some CoGs use hybrid approaches by commissioning evaluations from external actors. Canada, Chile, the Czech Republic, Estonia, the Netherlands and Paraguay are some examples of countries that, in addition to using their own staff, may contract the private sector or external experts to conduct these exercises. Commissioning external actors may help the results of evaluations be perceived as independent from political influence (OECD, 2020[2]; O’Neil, 2020[5]). However, contracting external actors may present additional concerns in terms of costs and limited knowledge of internal organisational processes. Governments may benefit from engaging with external stakeholders throughout the evaluation process, which will be discussed in further depth over the next sections of the chapter.

While a large majority of countries recognised evaluation as a core communication competency, 14 out of 38 CoGs and 12 out of 24 MHs considered it as one of the three most challenging competencies within their mandate (see Figure 4.3). According to OECD survey results, this was due to insufficiently skilled staff (10 CoGs and 8 MHs), co-ordination difficulties (5 CoGs and 5 MHs) and insufficient resources (5 CoGs and 4 MHs). Interestingly, five CoGs and three MHs mentioned both human and financial resources as key challenges.

Beyond these institutional challenges, survey findings also revealed a series of factors constraining the application of evaluation in this field across OECD and partner countries. First, a majority of CoGs evaluated communications albeit infrequently and in a non-institutionalised manner, which impeded the consistent application and collection of quality data over time. Second, CoGs faced challenges in showing the contribution of the function to broader policy objectives given the prevalent measurement of outputs (i.e. awareness levels, perception, reach) over impact. Third, evaluations were not generally linked to broader organisational goals, as most CoGs created indicators on an ad hoc basis (i.e. for each communication activity) and only a handful outlined these metrics ex ante within their communication strategy or plan. Together, these challenges were consistent with the broader institutional, technical, legal/ethical and cultural factors inhibiting evaluation in many countries, as identified by research in this field (see Table 4.2).

Interestingly, the aforementioned challenges raised by survey respondents reinforce one another. Insufficient human resources (i.e. lack of skilled staff) exacerbate technical barriers to conducting robust evaluations, for example by inhibiting the use of advanced methods yielding higher quality or impact-related data. Moreover, the lack of an institutionalised framework or an overarching evaluation strategy may be one reason for the difficulty in linking the contribution of public communication to broader policy goals and assessing its impact. In turn, difficulties in evaluating impact and showing causality may lower institutional incentives to invest in evaluation and hire additional staff. These findings align with those concerning the field of policy evaluation more broadly, where such interdependencies underline the need for a sound approach through setting a systemic model, promoting its use and ensuring the quality of results (OECD, 2020[2]).

Against this backdrop, the next section will explore opportunities for governments to address the core challenges of evaluating public communication, namely their lack of institutionalisation and synergies with strategic objectives, and general focus on outputs rather than impact. In doing so, it will first explore the extent to which evaluation practices are institutionalised, and identify existing de jure and de facto mechanisms adopted by countries in this regard. It will then discuss the importance of linking evaluation with organisational goals to reap their full learning and accountability benefits. It will lastly reflect on what evaluations examine in principle, arguing for the measurement of policy impact metrics to showcase the value of public communication.

Institutionalisation is understood as the establishment of evaluation practices within government entities in a systematic way for their regular and consistent application (OECD, 2020[2]). It can take different forms, from the use of regulations, formal procedures or official mandates to policy instruments including practical frameworks, principles and guidelines.

Institutionalising evaluation is at the core of ensuring that public communication practices are fit for purpose. Establishing a systemic framework with a clear methodology, guidelines and templates for evaluation can contribute to aligning siloed efforts and promote the effective application of methodologies across the communications cycle. In doing so, it can help teams assess campaign and staff performance in a consistent manner to ensure the efficient allocation of human and financial resources. Formalising evaluation through a common approach can help simplify its implementation and encourage uptake. At the same time, an institutional approach can also support the role of evaluation in providing policy makers with high-quality evidence with comparable results across time, institutions, and disciplines (OECD, 2020[2]).

In practice, the low level of institutionalisation in both OECD and partner countries is one of the main reasons why evaluation is underutilised in the field of public communication. In fact, survey results revealed that 53% of CoGs carried out evaluations in an ad hoc manner without an established methodology to ensure their consistent and regular application (see Figure 4.4).2 Most countries conducting evaluations in this manner reported carrying out these practices infrequently, whenever there were available resources, appetite from the political leadership or specific programmatic needs. At the health sector level, evidence suggested that issues go beyond the ad hoc nature of practices in place, as 29% of MHs did not conduct evaluation in the first place.

Given the technical and challenging nature of evaluating public communication, countries are recognising the importance of formalising processes and establishing shared methodologies. In fact, survey results revealed that practices in this regard were emerging in CoGs (37%) and MHs (42%) across OECD and partner countries. Among other things, these efforts took the form of legal and policy frameworks or expert communities of practice.

Indeed, several OECD countries have developed legal frameworks with concrete evaluation procedures for communication campaigns, for example, in government directives. The Government of Canada’s Directive on the Management of Communications (2019) required the ex post evaluation of every campaign exceeding CAD 1 million (Canadian dollars) through a standardised reporting tool (see Box 4.1). Similarly, in the Netherlands, evaluations were mandatory for campaigns conducted on behalf of the central government with a media budget of EUR 150 000. An annual report was made public and shared with the Parliament with an overview of the total media expenses, the ex ante and ex post effects of each campaign and a comparative assessment of performance in relation to previous years. In the Australian state of South Australia, the Marketing Communication Guidelines (2020[6]) required the submission of ex ante evaluation criteria for the approval of all communication activities, regardless of their budget, as well as an ex post evaluation against these set objectives.

Other practices to institutionalise the evaluation of public communication included the establishment of whole-of-government policy frameworks and guidelines recommending a “theory of change or programme theory logic” to evaluate inputs, outputs, outcomes, outtakes and impact (Macnamara and Likely, 2017[7]). In the United Kingdom, the Government Communication Service (GCS) Evaluation Framework 2.0 provided guidance for major paid-for behavioural campaigns by setting a shared methodology, common metrics and practical implementation templates (see Box 4.2). According to survey responses, the Korean Ministry of Culture, Sports and Tourism developed a dedicated strategy on communication policy evaluation (2020) with the aim of aligning the application of this function across sectors and ensuring quality control through established criteria. The Government of Belgium developed a set of guidelines for federal communicators outlining different methodologies and processes to conduct regular evaluations (Government of Belgium, 2014[8]). In addition, the Government of the state of New South Wales in Australia developed a dedicated framework, together with guidelines and an implementation matrix to support good practices (Government of New South Wales, 2017[9]).

Lastly, efforts in some countries benefit from formal and informal bodies building expertise and debating practices, frameworks and evaluation criteria. In the United Kingdom, the GCS established an evaluation council made of internal and external experts to review campaigns prior to their approval and following their implementation (see Box 4.3). In Canada, the Communications Community Office established a thematic network of public sector employees working in evaluation across different sectors.3 The existence of public sector academies with dedicated curricula on the evaluation of public communication in countries such as the Netherlands and the United Kingdom has also helped advance debate among policy makers and support champions to push for innovation in this field (see Chapter 2).

Against this backdrop, advancing the development of international standards in this field will be useful for public communicators to institutionalise evaluation practices. At present, the lack of consensus on appropriate methodologies, tools and principles of good practice for evaluating communication within government has inhibited the adoption of internationally recognised standards in this field (Macnamara, 2020[3]; Macnamara, 2018[11]). The OECD is currently developing a set of international standards through the upcoming OECD Recommendation on Policy Evaluation, which will provide general insights that the public communication profession can build on and further adapt for its own purposes. Governments may also look to build on several other efforts from private and public initiatives to develop relevant guidance, including:

  • The Barcelona Principles (2.0): The Barcelona Principles are a set of seven principles providing a framework for effective public relations and communications measurement, adopted by public and private sector stakeholders from over 30 countries. They serve as a guide for practitioners to incorporate the changing media landscape and communication field into a reliable, consistent and transparent framework (AMEC, 2015[12]). The seven principles are:

    1. “Goal setting and measurement are fundamental to communication and public relations”.

    2. “Measuring communication outcomes is recommended versus only measuring outputs”.

    3. “The effect on organisational performance can and should be measured where possible”.

    4. “Measurement and evaluation require both qualitative and quantitative methods”.

    5. “AVEs [advertising value equivalents] are not the value of communications”.

    6. “Social media can and should be measured consistently with other media channels”.

    7. “Measurement and evaluation should be transparent, consistent and valid”.

  • The Association for Measurement and Evaluation of Communication (AMEC) Integrated Evaluation Framework: AMEC’s interactive Integrated Evaluation Framework aims to guide professionals through the process from aligning objectives to developing a plan, establishing targets and measuring outputs, outtakes, outcomes and impact of communications (Bagnall, n.d.[13]). This framework shows how to implement the Barcelona Principles by providing a tool that allows users to input data at each stage. In addition to providing definitions and examples, the tool also allows users to create reports based on the data submitted.

  • European Commission’s Toolkit for the evaluation of communication activities: The European Commission Directorate-General for Communication developed a framework, guidelines and a code of conduct for European Union institutions to evaluate communication campaigns. The toolkit aims to guide the planning and implementation phase of communication activities deployed by Directorates in the European Commission. It also elaborates on different methods and types of evaluation metrics and indicators, and offers principles for effective planning (European Commission, 2017[14]).

Encouraging a systemic approach to demonstrate the value of public communication requires evaluation models that have a clear rationale tied to core institutional priorities. Reflecting on the direct contribution of this function to broader government objectives can help build an understanding of the “bigger picture” and nurture the strategic foresight that makes communication effective. While efforts to professionalise evaluation in this regard have been widespread, OECD survey data suggested that only a small share of OECD and partner countries linked these assessments with strategic organisational goals. In fact, only 8 CoGs and 4 MHs that conducted evaluation outlined metrics from the outset, in communication strategies or plans linked to government policy priorities, to assess activities (Figure 4.5). In addition, 4 CoGs included a description of how public communication activities should be evaluated, without specific indicators being outlined.

In practice, OECD survey results indicated that 22 out of 33 CoGs and 14 out of 17 MHs created evaluation indicators on an ad hoc basis for each communication activity (see Figure 4.6). The informal and reactive nature of practices can be explained in part by the lack of institutionalisation discussed in the previous section, which in turn raises issues in setting baselines, monitoring progress, collecting comparable data and defining impactful targets. The lack of established metrics and their ad hoc creation in most countries also suggested difficulties in setting SMART objectives4 from the outset. These challenges were exacerbated by the low application of ex ante evaluations, data from which can help provide a baseline for future exercises. Considering that 25 CoGs and 9 MHs claimed to have strategies and/or plans, these findings revealed potential gaps in ensuring congruence at all stages of the communication cycle, from strategic planning to feedback processes for the design of new initiatives.
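One way to make the link between ex ante baselines and SMART targets concrete is to record, for each indicator, the baseline measured before the initiative and the target set in the strategy, and then express the ex post result as the share of the baseline-to-target distance actually covered. The sketch below is a hypothetical illustration, not a prescribed methodology: the indicator names and all figures are invented.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KPI:
    name: str
    baseline: float              # measured ex ante, before the campaign
    target: float                # SMART target set out in the strategy
    result: Optional[float] = None  # measured ex post

    def progress(self) -> float:
        """Share of the baseline-to-target distance covered by the result."""
        if self.result is None:
            raise ValueError(f"{self.name}: no ex post measurement yet")
        return (self.result - self.baseline) / (self.target - self.baseline)

# Hypothetical indicators tied to a strategy objective
kpis = [
    KPI("service uptake (%)", baseline=42.0, target=55.0, result=50.0),
    KPI("awareness (%)",      baseline=61.0, target=70.0, result=72.0),
]
for k in kpis:
    print(f"{k.name}: {k.progress():.0%} of target distance covered")
```

Without the ex ante baseline field, the same ex post figure of 50% uptake would be uninterpretable, which is precisely the gap the survey findings point to.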

The development of evaluation metrics in synergy with a broader strategy or plan can provide a roadmap to show the value of this work in a consistent and credible manner. In the United Kingdom, for example, the 2019-20 Government Communication Plan sets clear professional standards for all departments through its comprehensive Evaluation Framework 2.0 and the GCS Modern Communications Operating Model (MCOM) (Aiken et al., 2019[15]). Within its strategic plan (2020-2024), the Government of Turkey sets out “objective cards” that include the rationale for each goal of all departments at the Directorate of Communications, KPIs with yearly targets based on a 2019 baseline, and related evaluation and reporting elements. Ecuador grounded communication evaluations in a broader results management framework (or “Gestión por resultados”), examining the outcomes of campaigns, the strategic management of this function and its contribution to government priorities. The country also outlined clear evaluation metrics for each of the government’s objectives in its communication strategy.

Overall, OECD survey results suggested there is general agreement on the fact that demonstrating the value of public communication is not straightforward in practice and should be seen as an ongoing process. Five out of the eight CoGs that had established metrics linked with their communication strategy underlined the importance of remaining flexible and evaluating additional metrics depending on emerging needs as initiatives are rolled out. Monitoring processes informing evaluation also revealed the need to adapt courses of action and, at times, the strategic direction to achieve intended policy objectives. This is especially important as governments must engage with increasingly fragmented audiences through multiple channels and pursue various goals simultaneously amid a rapidly evolving media landscape (Ansgar and Volk, 2020[16]).

In this regard, OECD and partner countries are adopting tools to support the robust evaluation of organisational objectives linked to public communication activities. The Government of Canada developed the Advertising Campaign Evaluation Tool (ACET) to assess post-campaign outcomes in a database shared across departments (see Box 4.4). Similarly, Colombia uses its Integrated Management System (SIGEPRE) platform to track indicators and provide an integrated dataset for policy makers on communication and its management (Presidency of the Government of Colombia, n.d.[17]). Beyond the direct inclusion of metrics in strategies, countries such as Australia, Korea and the United Kingdom have established Key Performance Indicators (KPIs) and suggested data collection methods in cross-government guiding frameworks as detailed in the previous sections.

Concerning the elements that governments evaluate, survey results revealed that a majority of OECD and partner countries focused on the examination of outputs rather than outcomes and impact. As illustrated in Table 4.3, CoGs tended to evaluate first-order metrics focused on quantifying a communication initiative’s reach (79%) and its effect on awareness levels (66%). This is consistent with the most popular evaluation methods employed in CoGs, which include ex post surveys, media monitoring, and review of social media impressions or other online analytics (see Figure 4.2 above).5 Notably, data confirmed previous findings from the 2017 WPP Leader’s Report, which reported that only 40% of respondents claimed to measure the impact of communication against set policy objectives (WPP Government & Public Sector Practice, 2016[19]). While the evaluation of outputs can be helpful to measure the design and implementation aspects of a given initiative, on their own they do not provide sufficient insights on the broader effects of a communication activity.

As outlined in Chapter 3, with the increase in the creation of behavioural units across governments, over half of surveyed CoGs (53%) and MHs (46%) claimed to evaluate the impact of communication through analysing behaviour change in populations (see Table 4.3 above). This can be a key means to examine whether campaigns are achieving their intended objectives to “improve knowledge, change individual attitudes, or modify degrees of social support for a given policy” (Wundersitz, 2019[20]). In Italy, the Presidency of the Council of Ministers implemented a campaign to promote the use of masks, social distancing and hand washing and evaluated its impact to understand success factors and areas for improvement (see Box 4.5).

In the Netherlands, the government developed the Communication Activation Strategy Instrument (CASI) to support the measurement and evaluation of policy outcomes in campaigns based on behaviour change objectives (see Box 4.6). While governments are increasingly adopting social listening techniques, OECD survey results reveal that existing practices tend to focus on general perceptions and impressions, which on their own are insufficient to substantiate links between behaviour change and policy outcomes, as argued in Chapter 3.

The emphasis on outputs and broad perceptions may help explain why OECD and partner countries are facing challenges in linking the contribution of the communication function to broader policy goals. In fact, only a small share of CoGs evaluated the impact of communication initiatives beyond awareness of a given policy, for instance through analysing changes in service uptake (42%) and stakeholder participation levels (16%). This is consistent with the fact that, even among the few countries that have established frameworks, most tend to omit the perspective of stakeholders to design and evaluate the effectiveness of communication activities (Macnamara, 2018[11]). Examining the effect of public communication on perception, satisfaction and engagement in public life, for example, can provide powerful insights to design policy interventions that are responsive to the needs of different population groups, in particular marginalised segments.

Assessing impact at the organisational level is also critical to ensuring the strategic direction of communication and its contribution to broader government priorities. In practice, OECD survey results revealed that only a small share of CoGs evaluated potential reasons for underachieving goals (34%) and unintended consequences (21%), which may inhibit opportunities for institutional learning. The evaluation of these elements is indispensable in providing governments with a complete picture of how effectively a communication initiative is achieving policy goals, delivering on organisational objectives and justifying its costs (OECD, 2020[2]).

Beyond metrics, the over-reliance on outputs is also visible in the most important reasons for conducting evaluations cited by OECD and partner countries. According to OECD survey results, 55% of CoGs considered tracking performance through the development of quantitative data as the main reason for conducting these exercises. A moderate share considered the examination of behavioural change (45%) and perceptions of general policies (42%), but impact on stakeholder participation (13%) and public service uptake (26%) were less prioritised. While these results suggest a recognition of the benefit of evaluation in ensuring the effectiveness of campaigns, this framing can reinforce the conception of communications as a “one-way” mechanism to share government information without considering its effect on stakeholders more broadly (Macnamara, 2020[3]).

While there is no one-size-fits-all approach for how communicators can or should evaluate policy-oriented impact, several OECD countries are using specific outcome and impact metrics to ensure high-quality insights. In the United Kingdom, the guide for the GCS Evaluation Framework 2.0 provided a series of metrics to evaluate the behaviour change, awareness, recruitment and stakeholder engagement aspects of a given campaign, along with suggested measurement methods (GCS, 2018[10]). These included the proportion of the target audience that modified their behaviour, audience sentiments about campaign messages, attitudinal changes, expressions of interest, responses to calls to action, and return on investment across all campaign aspects. In Australia, the framework of New South Wales and its implementation matrix provided a roadmap for evaluating outcomes⁶ as well as impact⁷ through a set of proposed metrics, milestones and data collection methods (Government of New South Wales, n.d.[21]). The framework suggested examining the impact of initiatives with metrics such as complying behaviour, quality of life, cost savings and policy buy-in (see Box 4.7).

Various OECD countries have also begun to evaluate the short-, medium- and long-term effects of campaigns and their specific contributions to broader policy aims (see Box 4.8). Sharing these types of results not only supports accountability, but also shows the value of public communication and helps make the case for future investments in this field. Other examples of the impact of campaigns that governments were able to demonstrate through rigorous evaluation can be found in chapters 3 and 7.

Evaluations should also seek to assess the impact of public communication on the ability of citizens to contribute to public life. Anchoring evaluations in an end-user perspective is important given how this function can enable stakeholder participation by ensuring optimal flows of information, effective state-citizen interfaces, and two-way dialogue mechanisms. Moreover, including trusted voices from non-government stakeholders in the evaluation process can help improve its design, relevance, transparency and independence (OECD, 2020[2]). In fact, OECD survey results suggested that only 9 out of 34 CoGs engaged with civil society and academic institutions in evaluating campaigns. Examples of integrating these actors took different forms, from commissioning evaluations from universities, as in Thailand, to the formal inclusion of civil society in the Government Communication Service Strategy and Evaluation Council in the United Kingdom. Additional research could help build a better understanding of how to include the perspective of external actors across evaluations to measure changes in stakeholder engagement in the policy-making process and beyond. The Korean Ministry of Culture, Sports and Tourism’s 2020 Strategy on Policy Communication Evaluation provides an example of initial efforts to measure impact related to stakeholder participation and citizen satisfaction (see Box 4.9).

Evaluating impact alone, however, will not contribute to strategic communication if the results are not ultimately used. According to OECD surveys, using evaluation results to inform communications is not a common practice: only a quarter of CoGs in OECD and partner countries make use of data associated with evaluating the impact of public services, for example. Given the high costs of these exercises, adopting an end-user perspective during their design and subsequently selecting an appropriate methodology are important to ensure they are fit for purpose, in particular to yield high-quality data that feeds into the strategic design of communications. The OECD also underlines the importance of building capabilities, developing standards (e.g. for data collection or wider evaluative processes), establishing advisory panels, involving external stakeholders and facilitating access to results as ways to ensure the quality and utility of policy evaluations more broadly (OECD, 2020[2]).

  • While the importance of evaluating public communication is widely recognised, OECD member and partner countries have scope to expand its application. Evidence points to the lack of institutionalisation, the limited integration of evaluation within strategic planning processes and the predominant focus on outputs over impact as the main inhibiting factors. The lack of adequate human and financial resources compounds these challenges.

  • Institutionalising evaluations in the field of public communication can ensure they are more consistently used, help instil methodological rigour, and facilitate the comparability of data across institutions, activities and time. CoGs are critical in embedding a systemic approach through the dedicated use of de jure or de facto mechanisms. Existing practices include the use of government directives, regulations, models, guidelines and communities of practice, among others.

  • Evaluation cannot contribute to strategic communication if it is not linked to the policy priorities of the given institution. Integrating evaluation from the outset of the planning process of a given communication strategy or initiative is also essential to promote a culture of accountability and enable evidence-driven communication.

  • Going beyond the evaluation of communication outputs to measure changes in behaviour, stakeholder participation levels and/or uptake of services, for example, can help show the contribution of communication activities to broader policy goals. Examining the reasons for underperformance and unintended consequences may also provide valuable insights for public sector organisations to learn from. Evaluating the impact of communication activities can also provide evidence in support of future investments in the profession and position it as a key lever of government activity.

  • Anchoring evaluations in an end-user perspective and including trusted voices from outside government in such endeavours can help improve their design, relevance, transparency and independence. Given the low level of involvement of civil society and academia in this practice, further research could help identify opportunities to integrate the perspective of these actors in relevant processes, in particular in the initial stages of evaluation.

  • Difficulties in evaluating public communication also stem from the lack of internationally recognised standards and principles of good practice adapted to governments in this field. To address this, building on existing international efforts, including the forthcoming OECD Recommendation on Policy Evaluation, and sharing successful country-level examples will be a valuable way forward.

  • Further research could be conducted with the aim of mapping evaluation processes at the country level to understand the impact of existing evaluation models and in turn, inform the development of principles of good practice specific to the profession. Codifying successful practices could also help illustrate how governments can better evaluate the impact of public communication on broader policy objectives and stakeholder engagement. Such research could also support governments in moving beyond establishing robust monitoring and evaluation processes toward adopting a culture of ongoing learning within public institutions.

References

[15] Aiken, A. et al. (2019), Government Comms Plan 2019/20, UK Government Communication Service, https://communication-plan.gcs.civilservice.gov.uk/wp-content/uploads/2019/04/Government-Communication-Plan-2019.pdf (accessed on 26 February 2021).

[12] AMEC (2015), Barcelona Principles 2.0, International Association for Measurement and Evaluation of Communication, https://amecorg.com/barcelona-principles-2-0/ (accessed on 4 March 2021).

[16] Ansgar and Volk (2020), “Aligning and linking communication with organizational goals”, in The Handbook of Public Sector Communication, Wiley.

[13] Bagnall, R. (n.d.), AMEC Integrated Evaluation Framework, International Association for Measurement and Evaluation of Communication, https://amecorg.com/amecframework/ (accessed on 4 March 2021).

[22] Bird, C. (2017), Why evaluation is GREAT, https://quarterly.blog.gov.uk/2017/08/16/why-evaluation-is-great/.

[18] Edelman (2019), Review of International Practices in Government Communications, https://www.ops.gov.ie/app/uploads/2019/11/3.4.5-Resources-document-Review-of-International-Practices_Edelman-SENT-1308.pdf.

[14] European Commission (2017), Toolkit for the Evaluation of the Communication Activities, https://ec.europa.eu/info/sites/info/files/communication-evaluation-toolkit_en.pdf (accessed on 4 March 2021).

[10] GCS (2018), Evaluation Framework 2.0, UK Government Communication Service, https://gcs.civilservice.gov.uk/blog/improving-campaigns-with-the-aggregated-outcomes-benchmarking-database/ (accessed on 16 February 2021).

[8] Government of Belgium (2014), Évaluer des actions de communication : Guide pour les communicateurs fédéraux, Government of Belgium, http://www.fedweb.belgium.be (accessed on 5 March 2021).

[9] Government of New South Wales (2017), Guidelines for Implementing the NSW Government Evaluation Framework for Advertising and Communications, https://www.nsw.gov.au/sites/default/files/2020-03/Guidelines%20for%20Implementing%20the%20NSW%20Government%20Evaluation%20Framework.pdf (accessed on 15 February 2021).

[21] Government of New South Wales (n.d.), NSW Evaluation Framework Implementation Matrix, https://www.nsw.gov.au/sites/default/files/2020-03/Evaluation%20Framework%20Implementation%20Matrix.pdf (accessed on 16 February 2021).

[6] Government of South Australia (2020), Marketing Communications Guidelines, https://www.dpc.sa.gov.au/__data/assets/pdf_file/0009/294804/Marketing-Communications-Guidelines.pdf.

[4] Gregory, A. (2020), “The Fundamentals of Measurement and Evaluation of Communication”, in The Handbook of Public Sector Communication, Wiley, https://doi.org/10.1002/9781119263203.ch24.

[3] Macnamara, J. (2020), “New Developments in Best Practice Evaluation”, in The Handbook of Public Sector Communication, Wiley, https://doi.org/10.1002/9781119263203.ch28.

[11] Macnamara, J. (2018), “A Review of New Evaluation Models for Strategic Communication: Progress and Gaps”, International Journal of Strategic Communication, Vol. 12/2, pp. 180-195, https://doi.org/10.1080/1553118X.2018.1428978.

[7] Macnamara, J. and F. Likely (2017), “Revisiting the disciplinary home of evaluation: New perspectives to inform PR evaluation standards”, https://www.researchgate.net/publication/336769009_Revisiting_the_disciplinary_home_of_evaluation_New_perspectives_to_inform_PR_evaluation_standards (accessed on 15 February 2021).

[2] OECD (2020), Improving Governance with Policy Evaluation: Lessons From Country Experiences, OECD Public Governance Reviews, OECD Publishing, Paris, https://dx.doi.org/10.1787/89b1577d-en.

[1] OECD (2009), Guidelines for Project and Programme Evaluations, OECD, Paris, https://www.oecd.org/development/evaluation/dcdndep/47069197.pdf.

[5] O’Neil, G. (2020), “Measuring and Evaluating Audience Awareness, Attitudes and Response”, in Luoma-aho, V. and M. Canel (eds.), The Handbook of Public Sector Communication, Wiley Blackwell.

[17] Presidency of the Government of Colombia (n.d.), SIGEPRE, https://dapre.presidencia.gov.co/dapre/sigepre/que-es-sigepre (accessed on 25 February 2021).

[19] WPP Government & Public Sector Practice (2016), The Leaders’ Report: The future of government communication, WPP Government & Public Sector Practice, https://govtpracticewpp.com/report/the-leaders-report-the-future-of-government-communication-2/ (accessed on 23 February 2021).

[20] Wundersitz, L. (2019), Evaluating behaviour change communication campaigns in health and safety: A literature review, University of Adelaide, https://www.researchgate.net/publication/334364335_Evaluating_behaviour_change_communication_campaigns_in_health_and_safety_A_literature_review (accessed on 15 February 2021).

Notes

1. Belgium's answer to the OECD CoG Survey.

2. According to the survey data, 52.6% of CoGs conduct evaluations in an ad hoc manner, 36.8% on an institutional basis and 10.5% do not conduct evaluations at all.

3. Information retrieved from the 2019-2020 Communications Community Office Annual Report available online at https://www.canada.ca/en/privy-council/services/communications-community-office/reports/annual-2019-2020.html.

4. SMART objectives refer to those that are specific, measurable, attainable, realistic and time-bound.

5. As of 2020, Ireland conducts significant evaluation of communication effectiveness, particularly with respect to COVID-19 public health advice, assessing the number of people reached, awareness levels, behaviour change in populations, unintended consequences and possible reasons for underachieving goals.

6. Outcomes cover the short and long term, asking what the target audience(s) take out of the communication and their initial responses, as well as what sustained effects the communication has on them.

7. Impact refers to the full effects and results that are caused, in full or in part, by the communication activity.

Metadata, Legal and Rights

This document, as well as any data and map included herein, are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area. Extracts from publications may be subject to additional disclaimers, which are set out in the complete version of the publication, available at the link provided.

© OECD 2021

The use of this work, whether digital or print, is governed by the Terms and Conditions to be found at http://www.oecd.org/termsandconditions.