3. How behavioural insights can help foster a culture of safety

This chapter presents the scoping work compiled at the beginning of the safety culture project. It is based on a desk review of academic and policy/practitioner literature and examines the behavioural and organisational barriers to a culture of safety, scopes indicators used to measure changes in these barriers, and identifies potential areas for testing and experimentation with behaviourally-informed solutions.


The previous chapter argued that BI can be successfully expanded from influencing individual decision making to influencing organisational decision making via the individuals within organisations. This includes the organisational policies and procedures set in place that influence individual decisions. Similarly, culture influences the decisions and behaviours of people in an organisation, and these behaviours ultimately drive safety outcomes and performance. As will be discussed in this chapter, human and organisational factors are related to culture, though the literature offers no clear conclusions and often calls for additional research. This provides a logical avenue for the application of BI to add value. It is for this reason that culture, and in particular safety culture in the energy sector, was chosen as an avenue for testing the application of BI to changing organisational behaviour.

Safety culture, or lack thereof, has had a real and recent impact on society. There is clear evidence from an analysis of global incidents that safety culture is – at least in part – a key factor in most high consequence accidents, such as the nuclear safety system failure at the Fukushima Daiichi plant in Japan in 2011 (OECD/NEA, 2013[1]), and the organisational and cultural lapses that contributed to the BP Deepwater Horizon oil spill in 2010 (Reader and O’Connor, 2014[2]); (Corkindale, 2010[3]). Just as significantly, many smaller and more frequent accidents matter as well: both minor and major incidents are usually the product of simple (i.e. minor) decisions or behaviours interacting with system complexity. Safety culture is about changing these simple decisions and behaviours to prevent incidents (both minor and major).

Preventing incidents like these strongly supports further research and action on safety culture so that regulators can better serve the public interest. Regulators have a role to play in advancing safety culture across the industries that they oversee. A key aspect of this duty requires them to lead the way by understanding their own organisational cultures and behaviours, their cultural strengths and vulnerabilities, and how these factors influence the broader safety and regulatory system. Equally important is to understand the cultural and organisational changes and behaviours of regulated entities and industry, to ensure that a safety culture is effectively implemented and has an impact on the sector as a whole.

This chapter builds off that work by presenting the scoping work compiled at the beginning of this safety culture project. This is based on a non-exhaustive desk review of the academic and policy/practitioner literature on developing safety oversight culture in regulatory agencies and a culture of safety compliance in regulated entities. The aims and objectives are as follows:

  • To outline behavioural and organisational barriers to developing a regulator safety culture and a culture of safety compliance in regulated entities;

  • To scope existing indicators of regulator safety oversight culture, their strengths and weaknesses, and the methodology used to derive them; and

  • To identify potential areas for testing and experimentation of behaviourally-informed solutions aimed at developing a culture of safety within regulators and regulated entities.

The first section outlines findings from academic and practitioner literature reviews on organisational barriers to developing a safety culture, as well as indicators used to measure changes associated with these barriers and/or removing them. The second section describes how the findings from the literature reviews can be used to test behavioural insights in real world policy and regulatory contexts.

Academic and practitioner literature

There exists a significant amount of academic and practitioner work addressing the role of safety culture in high risk industries. This section provides an overview of literature assessing organisational and behavioural barriers to developing a safety culture, along with an overview of safety culture indicators. The scoping work is an initial scan and was built upon iteratively as the project moved forward.

Understanding safety culture and associated challenges

The following paragraphs provide an overarching understanding of safety culture as well as reasons as to why and when it can be difficult to foster. They outline the key organisational and behavioural barriers, in addition to an overview of indicators used to measure safety culture.

There is no concise, internationally agreed upon definition of ‘safety culture’. However, at its core, safety culture is an aspect of the larger organisational culture, including the organisation’s values, beliefs, attitudes, norms and practices (TRB, 2016[4]) (Cooper, 2002[5]). In the literature, there is a clear understanding that safety culture impacts safety performance (Smith, Emma and Wadsworth, 2009[6]). For instance, one study that analysed 15 major petrochemical accidents between 1980 and 2010 found that poor safety culture contributed to 12 of the 15 accidents (Fleming and Scott, 2012[7]).

It is widely acknowledged that regulators have an important role in promoting safety culture. However, responsibility also lies with the industries in combination with regulators to promote a safety culture, acknowledging limits of regulation and that regulators cannot create a safety culture on their own (TRB, 2016[4]).

A recent review acknowledged that there are common sets of practices and processes across the different conceptualisations (TRB, 2016[4]). It found that the elements described by the US Bureau of Safety and Environmental Enforcement mirror those identified in major academic reviews of safety culture research, as well as leading frameworks in different industries. The elements are as follows (BSEE, 2013[8]):

  1. leadership commitment to safety values and actions;

  2. hazard identification and risk management;

  3. personal accountability;

  4. work processes;

  5. continuous improvement;

  6. environment for raising concerns;

  7. effective safety and environmental communication;

  8. respectful work environment; and

  9. enquiring attitude.

To provide another framework for comparison, the following elements were generated by a review using similar methods, although collapsed into fewer categories (NEB, 2017[9]). The dimensions include:

  1. safety leadership commitment;

  2. vigilance;

  3. empowerment and accountability; and

  4. resiliency.

Organisational barriers and enablers to a strong safety culture

There are a number of characteristics of high risk industries which make it challenging to measure safety culture. These include the sometimes unclear concept of safety culture itself (Cooper, 2002[10]), employers being unaware of issues, a myriad of influencing/reciprocal factors (personal commitment, perceived risk, competencies, safety knowledge, job satisfaction, etc.) and challenges in measurement. Additionally, there are often challenges at an organisational level, in terms of understanding and receiving feedback (TRB, 2016[4]).

Elements of strong safety culture are described below and discussed briefly in terms of ways in which there may be barriers and enablers to strong safety culture. The descriptions are derived largely from recent reviews (TRB, 2016[4]); (NEB, 2017[9]). This is not intended to be a definitive summary, but to help frame discussions with regulators. The barriers and enablers below were categorised according to initial discussions with country representatives.

Barriers

  1. Production pressure: An organisation’s primary goals (e.g. production) compete, or may be perceived as competing, with safety (Carroll and Edmondson, 2002[11]); (NEB, 2014[12]). Strategies to combat these opposing incentives include leadership prioritising the strengthening of safety culture and critical thinking, both of which are vital to the maintenance of a dynamic safety culture (Berglund, 2016[13]). Presenting awards to ‘champions’ acknowledging strong safety culture is another way to potentially ameliorate this issue (TRB, 2016[4]).

  2. Personal accountability and enquiring attitude: A strong safety culture depends on employees’ readiness to reveal errors and near misses, and to share their ideas and concerns with more senior people (Carroll, 2002; Stern, 2008); (Tangirala and Ramanujam, 2008[14]). When the right conditions are present, speaking up and listening have an impact on safety and reflect a strong safety culture (TRB, 2016[4]).

  3. Hazard identification and risk management: Accurate hazard identification and risk management can result from pooling diverse viewpoints (Weick and Westley, 1996[15]). In particular, storytelling can be used to address gaps and inconsistencies that pose threats to safety (Weick and Browning, 1986[16]). Also important is mindful organising, which is an enquiring attitude concentrated on hazard identification, risk management and personal accountability (TRB, 2016[4]).

  4. Work processes: Work processes are key to strong safety culture and yet challenging because they often involve substantial resource commitments, such as equipment and training. Building workforce knowledge, skills, and abilities has been demonstrated to foster safety culture and safety performance (TRB, 2016[4]).

  5. Organisational changes and leadership: Top-level processes – such as management, existing safety systems, etc. – have an influence on safety culture. When organisational management changes or leadership fails to uphold these standards, safety culture can be affected. Organisations may have long standing records of safety and risk management that could come under pressure when changes happen at the organisational or management level. This was seen as a contributing factor to Deepwater Horizon (Reader and O’Connor, 2014[2]); (Corkindale, 2010[3]).

Enablers

  1. Leadership commitment to safety values and actions: Leader commitment to safety (such as through safety practices and procedures), the priority they place on safety compared to other goals, and their dissemination of safety information all influence employee views on safety (Katz-Navon, Naveh and Stern, 2005[17]); (NEB, 2014[12]). Employees are more likely to focus on safety when leaders single it out and give it attention (TRB, 2016[4]).

  2. Respectful work environment: Certain ways of working can increase the focus on safety and foster a respectful work environment and the ability to raise safety concerns without fear of disciplinary action. Particularly key are safety rounds, or visits by leaders to front-line facilities to talk about safety issues and concerns (Singer and Tucker, 2014[18]); (TRB, 2016[4]).

  3. Atmosphere for raising concerns: Academic literature describes a key way in which leaders empower employees to raise concerns – by creating psychological safety, or the belief that it is safe to take interpersonal risks (Edmondson, 1999[19]). An empowering leader enables employees to think, speak up and learn by doing (Yun, Faraj and Sims, 2005[20]); (Ely and Meyerson, 2010[21]) (TRB, 2016[4]).

  4. Effective safety and situational communication: Employees are more likely to report errors and incidents when leaders make safety a priority and adequately disseminate safety information (Naveh, Katz-Navon and Stern, 2006[22]), (Weingart et al., 2004[23]). Conversely, when there is a poor safety climate (e.g. one with high production pressure and little widely disseminated safety material), employees deviate from safety procedures and do not speak up when they see unsafe activity (Hofmann and Stetzer, 1996[24]); (TRB, 2016[4]).

  5. Continuous improvement: involves exerting constant effort to identify often understated details. ‘After event reviews’ (AERs) have been shown to be a helpful practice. They are guided discussions of past experience leading to an understanding of the causes of failures and successes, and lessons learned for improvement (Popper and Lipshitz, 1998[25]); (Ellis and Davidi, 2005[26]); (TRB, 2016[4]). This is further enshrined in the OECD Recommendation of the Council on the Governance of Critical Risks, which encourages the continuous sharing of knowledge, including lessons learned from previous events, research and science through post-event reviews, to evaluate the effectiveness of prevention and preparedness activities, as well as response and recovery (OECD, 2014[27]). A culture of continuous improvement goes hand in hand with the concept of psychological safety – the belief that one will not be punished or humiliated for speaking up with ideas, questions, concerns, or mistakes (Edmondson and Lei, 2014[28]).

The barriers and enablers presented above provide an overview from the perspective of mainly one framework (BSEE, 2013[29]) and are adapted from a review (TRB, 2016[4]). It is worth mentioning that when assessing safety culture in a variety of contexts, the way in which other frameworks are organised may be more useful in isolating behaviours, as well as identifying barriers and enablers to that behaviour. For example, another way to think about these elements is to arrange them in terms of “cultural threats” (production pressure, complacency, normalisation of deviance) and “cultural defences” (committed safety leadership, vigilance, empowerment and accountability, resiliency) (NEB, 2014[12]).

What works and what can regulators do to promote safety culture

Regarding what works to promote safety culture, the academic literature addresses the conceptual underpinnings of behaviour and safety. In general, reviews cite the complexity involved, the fact that culture cannot be shaped overnight, and the importance of shared responsibility between regulators and industries (Swiss Federal Nuclear Safety Inspectorate (ENSI), 2015[30]).

The literature looks at the role of human and organisational factors in promoting safety and suggests further research to understand them in a detailed, context-specific manner. Differences in safety culture in certain industries across national cultures have been explored, and the importance of understanding practical lessons learned across industries is acknowledged. These findings about behaviour and the importance of context provide an opportune opening for the use of behavioural insights to add value.

Considering the importance of behaviour, the previous section aimed to outline some of the key ways in which barriers and enablers to strong safety culture can be addressed using evidence from the literature. Building upon this, select high-level recommendations for regulators that have come out of reviews of evidence include creating memoranda of understanding around safety culture, designating safety culture champions, and assessing and improving safety culture using safety management principles (TRB, 2016[4]).

With respect to the oil and gas industry, a recent comprehensive study based on an extensive literature review as well as interviews with oil and gas executives and regulators assessed how a regulator’s culture influences safety outcomes in high hazard industries. It identified “six cultural vulnerabilities” that require control: a politicised mission, a punitive culture, the presence of bureaucratic inertia, the tolerance of inadequate capacity and competency, compliance mentality and a preoccupation with active failures (Bradley, 2017[31]). The work also highlighted ‘six cultural strengths’ that require nurturing: the presence of leadership and political independence, a learning culture, innovation, technical excellence, risk consciousness and systems thinking (Bradley, 2017[31]). These are areas to keep in mind when thinking about promoting safety culture from a regulator point of view.

Additionally, a review carried out by the nuclear power regulator in Switzerland on safety and oversight culture highlights important areas of focus, including competence and professionalism, overarching collaboration and the framework for oversight. In assessing oversight, the study mentions key challenges, including the active involvement of staff, confidentiality in handling data and results, and experiences with respect to project organisation and project management (Swiss Federal Nuclear Safety Inspectorate (ENSI), 2015[30]).

This section has provided background information on the elements of safety culture, as well as what the literature indicates are good ways of improving upon each of the elements. It has also addressed some recommendations on what works, as well as lessons learnt for regulators in specific industry contexts.1 The following section provides practical examples of where barriers to strong safety culture resulted in failures.

Safety culture lessons learnt from incidents

Documents addressing lessons learnt from failures provide important details and nuances. Examples of such reports include debriefs on lessons learnt from the Fukushima nuclear accident (OECD/NEA, 2013[32]); (INPO, 2012[33]); (NRSB, 2014[34]) and the Deepwater Horizon oil spill (NAE and NRC, 2011[35]) in the Gulf of Mexico. This section addresses some of the major implications for understanding the promotion of a culture of safety in high risk industries, illustrating the concepts through practical examples.

Fukushima Daiichi Nuclear Accident

On March 11, 2011, an earthquake off the coast of Japan’s main island knocked out power to the Fukushima Daiichi plant, and a tsunami then inundated portions of the plant site. Flooding of critical plant equipment resulted in the extended loss of power, with the consequent loss of reactor monitoring, control and cooling functions in multiple units. The accident prompted widespread evacuations of local populations and subsequent distress, large economic losses and the eventual shutdown of all nuclear power plants in Japan (National Research Council, 2014[36]).

Since this incident, a significant number of reports have been published gathering lessons learnt, for Japan as well as for other countries. It is important to note that these reports address a range of findings; however, in this review, only the role of safety culture and compliance in the disaster is discussed (INPO, 2012[33]).

Prior to the incident, the Japanese regulator put a number of measures in place which were found to have strengthened nuclear safety culture. These included: “alert” reports issued to share safety culture implications learnt from events; an annual safety seminar; an annual employee safety culture survey; and an annual assessment of the state of safety culture by the chief reactor engineer on each site. It is acknowledged that these, as well as other actions, served to strengthen nuclear safety culture.

Although effort was put into promoting a safety culture, failures ultimately occurred. A few of the key safety culture lessons learnt are addressed below.

  • As described above, safety culture is about cultivating a questioning attitude and challenging assumptions. It was found that the regulator would have greatly benefited from doing this (i.e. by entertaining the possibility that a large tsunami could occur). An important element to consider here is how to avoid ‘group think’ (INPO, 2012[33]). Some solutions for groupthink include incorporating scenario planning – even for ‘black swan’ events, such as a tsunami – paying attention to information/intelligence, creating an atmosphere that minimises self-censorship, paying attention to dissenting views, being vigilant of “mindguards” (bottlenecks in the flow of information), being ready to be wrong and not ridiculing dissent. However, the research is mixed and care needs to be taken when assessing aspects of cultivating a questioning attitude and challenging assumptions.

  • The accident went beyond prior safety experience. Emergency response personnel did not have the procedures, equipment and training needed. It is acknowledged that international learning would have been important, as senior managers indicated that knowing about lessons learnt in other contexts in advance would have greatly helped. Going forward, regulatory authorities should consider including in their guidance both prevention and mitigation measures at each level (OECD/NEA, 2013[32]).

  • Organisational factors, including the independence, technical capability and transparency of the regulator in Japan were found to have contributed to the accident and the emergency response challenges confronted (OECD/NEA, 2013[32]); (National Research Council, 2014[36]).

Deepwater Horizon oil spill in the Gulf of Mexico

On 20 April 2010 an explosion occurred on the Deepwater Horizon drill rig, resulting in an uncontrolled release of 4.9 million barrels of oil into the Gulf of Mexico. The well was eventually capped on 15 July 2010; however, the oil spread through the adjacent waters, shorelines and lands – areas where ecological health is critical to the economic wellbeing of the neighbouring communities (OEPC, 2010[37]).

A key safety culture lesson from this incident is the importance of a deep understanding of organisational systems (through tools such as root-cause/first-cause analysis teams, incident reviews, and other forms of self-analysis). The absence of such practices for systematically and holistically assessing risk and managing identified hazards was associated with the incident (NAE and NRC, 2011[35]); (Reader and O’Connor, 2014[2]); (Corkindale, 2010[3]).

The following section discusses how to measure safety culture – a practically important element to all that has been discussed thus far.

Indicators used to measure safety culture

There are a number of ways to measure safety culture. The following section provides an overview of key challenges, methods and overall considerations to keep in mind when designing and measuring indicators of safety culture. When thinking about applying behavioural insights, a key initial consideration is how to measure outcomes. Appreciating that context plays an important role in not only what is tested, but also how it is measured, this review gives an overview of potential methods to inform further work designing indicators for the right behaviour and context.

Challenges

Measuring safety culture can be challenging for a variety of reasons (Beukes, 2015[38]). Practically, it can be time consuming, and the expertise needed to carry out technically sound analysis may not exist among current employees. It can be complex to weigh multiple factors simultaneously (i.e. person, behaviour and situation), and there may not be data from which to readily draw safety culture indicators. Additionally, it is important that there is trust in the results, and that they are used in a practical way.

Before dedicating time and effort to measuring safety culture, it is key to be clear about why safety culture is being assessed. Reviews of evidence have demonstrated that it should be completed for one of several reasons related to an organisation’s strategic goals: to shift the discussion from the imprecise to the precise; to permit the monitoring of progress; to give motivation and feedback (including communication); to recognise strengths, weaknesses and gaps, and potential improvements; and to offer leading indicators (TRB, 2016[4]). When applying behavioural insights and determining how to test a principle, it is important to acknowledge not only which outcome measures are practical to assess and technically appropriate over the study period, but also what is useful in the real-world setting.

Methods

To measure safety culture, it is important to begin with a clear concept, and then determine a set of appropriate assessment procedures, including potentially both quantitative and qualitative methods. These will often vary based on organisation sizes, resources, and work activities (TRB, 2016[4]).

Each of these assessment methods has strengths and limitations (see reviews by (IAEA, 2002[39]); (Sackman, 2006[40]); (TRB, 2016[4])), and a balance between practicality, accuracy and scientific validity is important. Some of the methods commonly used to measure safety culture (TRB, 2016[4]) are as follows:

  • Ethnography uses methods originally coming from field anthropologists to give understanding of underlying assumptions and meaning.

  • Episodic field work includes a mixture of observation, interviews and document analysis.

  • Safety culture and climate surveys give measurable ratings of cultural characteristics (an illustrative scoring sketch is provided after this list).

  • Guided self-analysis involves cultural insiders analysing their own culture via workshops and/or meetings.

  • Finally, multiple methods can be used to account for strengths and limitations of the other methods described above.
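
As an illustration of how survey responses of this kind might be summarised, the sketch below aggregates hypothetical Likert-scale ratings into an average score per safety culture dimension. It is a minimal sketch in Python: the dimension names, data and scoring approach are assumptions for illustration, not drawn from any of the instruments cited above.

    # Minimal sketch: aggregating hypothetical safety-climate survey responses
    # into per-dimension scores. Dimension names and ratings are illustrative only.
    from statistics import mean

    # Each response maps a safety culture dimension to a 1-5 Likert rating.
    responses = [
        {"leadership_commitment": 4, "raising_concerns": 3, "continuous_improvement": 5},
        {"leadership_commitment": 2, "raising_concerns": 4, "continuous_improvement": 3},
        {"leadership_commitment": 5, "raising_concerns": 2, "continuous_improvement": 4},
    ]

    def dimension_scores(survey_responses):
        """Return the mean rating for each dimension across all respondents."""
        dimensions = survey_responses[0].keys()
        return {dim: mean(r[dim] for r in survey_responses) for dim in dimensions}

    if __name__ == "__main__":
        for dimension, score in dimension_scores(responses).items():
            print(f"{dimension}: {score:.2f} / 5")

In practice, a survey instrument would include multiple items per dimension and would need to be validated; the point here is only to show how raw ratings can be turned into comparable dimension-level indicators.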

Considerations

It is important to reflect upon differences between leading and lagging indicators (Reiman and Pietikäinen, 2010[41]), between process and outcome indicators, and between indicators intended for short-term and/or long-term use. It is acknowledged in the literature that cultural change can take a significant amount of time, so it is important to be able to collect data over a long enough period to detect measurable change, and to interpret accurately what the data represent over time.

Practical application of behavioural insights

As discussed above, the field of behavioural insights centres on how context shapes our decisions. It leverages powerful lessons from diverse academic disciplines such as psychology, economics and anthropology, and takes an evidence-based approach grounded in rigorous methods. After isolating a specific behaviour to change, along with the barriers and triggers to that behaviour, next comes the application of behavioural insights concepts.

The diversity in the application of behavioural insights in published policy work is highlighted by a recent review of case studies across OECD countries (OECD, 2017[42]). Although this is not an exhaustive list of how behavioural insights have been applied, this chapter addresses three main ways in which behavioural insights can be and have been applied in policy settings: 1) literature review/behavioural lens; 2) lab/online experiment; and 3) randomised controlled trial/field experiment.

  1. Literature review/behavioural lens: The simplest and often least time-consuming application of behavioural insights is through a literature review/behavioural lens. Following a thorough understanding of the context around a certain behaviour, as well as the relevant policy and academic literature, behavioural insights may be applied and the impact measured. If using this technique, it may not be possible to know with a high degree of certainty whether any observed changes in behaviour can be attributed to the application(s). It is worth noting that there are a number of other forms of assessment which may be beneficial to gaining an understanding in addition to literature review. These include strategic reviews, needs assessments, process evaluations, business case assessments, etc. (Glennerster and Takavarasha, 2014[43]).

  2. Lab/online experiments: Governments may be hesitant to scale a certain behaviourally-informed policy, potentially spending a significant amount of resources, without knowing whether it will work in real world contexts. A lab/online experiment can be a way of testing behaviourally-informed interventions in a highly controlled environment to generate evidence. This methodology is often used when it is not possible to carry out more rigorous methods in the field (i.e. to randomise, as addressed in the next section), but a fundamental idea or concept needs to be tested. These methods are often used in marketing and psychology research.

  3. Randomised controlled trials/field experiments: Testing using a randomised controlled trial (RCT) in a field experiment allows for a nuanced understanding of the complexities of context. Randomised controlled trials use a control group to compare the effectiveness of an intervention against what would have happened if nothing had changed/the intervention was not applied. RCTs are attempted where possible. The methods involved are discussed in the next section and the rest of the report; a minimal illustrative sketch of the basic RCT logic follows this list.
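
To make that logic concrete, the sketch below randomly assigns a hypothetical staff list to treatment and control groups and compares mean outcomes between the two arms with a t-test. It is a purely illustrative sketch in Python (requiring SciPy): the intervention, the outcome measure (simulated counts of near-miss reports) and all numbers are assumptions, not data from any of the studies cited in this chapter.

    # Minimal sketch of RCT logic: random assignment followed by a
    # difference-in-means comparison. All data here are simulated/illustrative.
    import random
    from scipy import stats

    random.seed(42)

    staff = [f"employee_{i}" for i in range(200)]
    random.shuffle(staff)
    treatment, control = staff[:100], staff[100:]

    # Simulated outcome, e.g. near-miss reports filed per person over the trial.
    # Assume (hypothetically) the intervention raises reporting slightly.
    outcome = {p: random.gauss(5.5, 2.0) for p in treatment}
    outcome.update({p: random.gauss(5.0, 2.0) for p in control})

    t_stat, p_value = stats.ttest_ind(
        [outcome[p] for p in treatment],
        [outcome[p] for p in control],
    )
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

In a real field experiment, the design would also need to handle clustering (e.g. randomising by site rather than by individual), baseline covariates and pre-registration of the analysis; the sketch only shows the core comparison between randomised arms.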

Applying behavioural insights in practice

A major part of the value added of applying behavioural insights through literature reviews, lab/online experiments and/or RCTs is that these applications are embedded in a culture of experimentation and evaluation. In 2019, the OECD released a toolkit for applying BI from start to finish on any policy problem, which is centred on a systematic process to understand the root of the policy problem, gather evidence of what works and ultimately improve policy outcomes. This framework involves five steps that are abbreviated to “BASIC” (OECD, 2019[44]):

  1. Behaviour: Identify and better understand the behaviours driving the policy problem, and consider how these behaviours could be targeted given the desired policy outcomes and context.

  2. Analysis: Examine the psychological and cognitive factors that are causing the targeted behaviours to identify the behavioural drivers of the problem. Complementary existing BI frameworks can be used to aid this analysis (see Box 3.1).

  3. Strategy: Translate the analysis into behaviourally-informed strategies that could effectively address the behavioural problem.

  4. Intervention: Design and implement an intervention to test which strategies are most effective at addressing the behavioural problem.

  5. Change: Develop plans to scale what works into a full policy intervention, communicate what did not work so that the community can learn, and seek to sustain behaviour change over time.

Box 3.1. Some key BI frameworks

With the rise of BI around the world, a number of useful frameworks have been developed by both government and non-government agencies. Similar to ABCD, all of these frameworks use a simple mnemonic to establish an analytical tool aimed at helping a policymaker think about behavioural issues within a policy problem. While ABCD can be seen as “another framework”, it was designed and optimised to be used as the analytical heart of the BASIC process framework rather than a standalone tool.

Below is a non-exhaustive list of widely referenced frameworks that complement ABCD and could be a resource for policymakers looking for different ways to analyse a behavioural problem.

  • MINDSPACE (The Behavioural Insights Team, 2010): Provided an early checklist for thinking about how nine well-evidenced behavioural insights may inform public policy development, design and delivery.

  • Test, Learn, and Adapt (The Behavioural Insights Team, 2013): Gave an accessible introduction to the basics of using randomised controlled trials in policy evaluation.

  • EAST (The Behavioural Insights Team, 2013): Provided a simple framework considering how behavioural insights may help design policies based on leveraging convenience, social aspects of decision-making and the attractiveness and timeliness of policies.

  • World Development Report: Mind, Society, and Behavior (World Bank, 2015): Gave a comprehensive overview of how the BI perspective on human decision-making is of relevance to development policy.

  • Define, Diagnose, Design, Test (ideas42, 2017): Provided a practical framework for thinking through a problem and identifying behaviourally informed solutions.

  • US Internal Revenue Service Behavioral Insights Toolkit (IRS, 2017): Created to be a practical resource for use by IRS employees and researchers who are looking to use BI in their work.

  • Assess, Aim, Action, Amend (BEAR, 2018): Presented a playbook developed for applying BI in organisations outlining four steps for applying BI.

Note: Box originally created for (OECD, 2019[44]), Tools and Ethics for Applied Behavioural Insights: The BASIC Toolkit, OECD Publishing, Paris, https://doi.org/10.1787/9ea76a8f-en.

Source: The Behavioural Insights Team (2013[45]), Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials, https://38r8om2xjhhl25mw24492dir-wpengine.netdna-ssl.com/wp-content/uploads/2015/07/TLA-1906126.pdf (accessed on 6 November 2018); The Behavioural Insights Team (2010[46]), MINDSPACE, https://www.behaviouralinsights.co.uk/publications/mindspace/ (accessed on 6 November 2018); World Bank (2015[47]), The World Development Report 2015: Mind, Society and Behaviour, http://www.worldbank.org/content/dam/Worldbank/Publications/WDR/WDR%202015/WDR-2015-Full-Report.pdf (accessed on 6 November 2018); ideas42 (2017[48]), Define, Diagnose, Design, Test, http://www.ideas42.org/blog/first-step-towards-solution-beta-project/ (accessed on 6 November 2018); IRS (2017[49]), Behavioral Insights Toolkit, https://www.irs.gov/pub/irs-soi/17rpirsbehavioralinsights.pdf; BEAR (2018[50]), How Should Organisations Best Embed and Harness Behavioural Insights? A Playbook, http://www.rotman.utoronto.ca/-/media/Files/Programs-and-Areas/BEAR/White-Papers/BEAR_BIinOrgs.pdf?la=en (accessed on 6 November 2018).

Complementary fields of work

Although behavioural insights as a field can provide a great deal of understanding, data analytics and fields that take a deep-dive perspective, such as qualitative research and design thinking, are likely to play an important role in facilitating, augmenting/strengthening and evaluating the impact of behavioural insights.

Criteria for a randomised controlled trial/main challenges with field experiments

Although at first glance the theory behind conducting a randomised controlled trial may seem straightforward, a significant amount of effort and time is usually put into setting up field experiments. Simple things can become complex rather quickly. There are practical and logistical elements to consider, as well as a number of technical aspects that should be kept in mind. A few select challenges in carrying out field experiments are described below: 1) whether randomisation is possible; 2) managing challenges in the field; and 3) measuring outcomes.

  • Randomisation is possible: A defining feature of RCTs is the ability to effectively and accurately randomise individuals into representative treatment and control groups. This makes it possible to understand how the treatment compares to doing nothing. Along with thinking about the ability to randomise, sample size should be considered – there must be a large enough number of people in the sample to obtain statistically meaningful results (OECD, 2017[42]); a rough sample-size illustration follows this list.

  • Managing challenges in the field: It is important to have a detailed understanding of the dynamic situation throughout the intervention period and to identify meaningful confounding variables and threats to understanding and interpreting the data. Careful attention should be paid to ensuring that there is no cross-contamination between treatment and control arms. This may be a challenge, especially if dealing with individuals working in close proximity in the same environment. Ensuring that an experiment is well designed and uses rigorous methods will lead to results worthy of publication – it is good practice to publish results to promote transparency and accountability.

  • Measurable outcomes: As emphasised above, it is critical to collect meaningful outcome data. It is important to identify short-term and long-term effects, particularly when aiming to create sustained behaviour change and habit formation (OECD, 2017[42]).
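
As a rough illustration of the sample size point in the first bullet above, the sketch below uses the standard normal-approximation formula for a two-arm comparison to estimate how many participants per arm would be needed to detect a given standardised effect size at conventional significance and power levels. It is a simplified sketch (requiring SciPy); the effect sizes shown are illustrative assumptions, and a real study would use a proper power analysis tailored to its design and outcome measure.

    # Rough illustration: approximate sample size per arm for a two-sample
    # comparison, using the normal-approximation formula
    #   n per arm ~ 2 * ((z_(1-alpha/2) + z_(1-beta)) / d)^2
    # where d is the standardised effect size and beta = 1 - power.
    from math import ceil
    from scipy.stats import norm

    def n_per_arm(effect_size, alpha=0.05, power=0.80):
        z_alpha = norm.ppf(1 - alpha / 2)   # critical value for a two-sided test
        z_power = norm.ppf(power)           # z-score corresponding to desired power
        return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

    # Smaller effects require much larger samples (illustrative effect sizes).
    for d in (0.2, 0.5, 0.8):
        print(f"standardised effect size {d}: ~{n_per_arm(d)} participants per arm")

This is one reason the ability to randomise and the size of the available population need to be assessed together before committing to a field experiment.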

Ethical considerations

Behavioural insights can be a powerful tool and it is therefore important to consider the ethical implications of influencing behaviour change. As discussed in (OECD, 2019[44]), this is especially true because applying BI involves collecting and using data on individual and group behaviours, as well as using experimental methods to test theories of human behaviour. As a result, issues related to privacy, consent or the ethics of applying certain solutions to only some groups arise. (OECD, 2017[42]) reviews the application of BI across 60 government units and finds that, although ethical considerations are considered a challenge, ethics is not the most common opposition/criticism to applying behavioural insights (‘scepticism on effectiveness and acceptability’ is the most frequently cited).

Recognising the challenge of ethics and the lack of an internationally recognised standard way of dealing with the ethical implications of applying behavioural insights to policy, the BASIC toolkit includes a set of ethical guidelines for policymakers to follow to ensure they are always “nudging for good”. These guidelines are broken down by the pre-BI project stage as well as by each of the five stages of BASIC. Three general principles to discuss and consider before starting a behaviourally-informed intervention are (OECD, 2019[44]):

  • Consider establishing an ethical review board from day one. If time and resources do not allow it, then outline the ethical issues associated with the project, how to address them and continuously consider where ethical approval may be required. A university ethical review board may be consulted for expert advice, and its established ethical approval processes can be used.

  • Appoint ethical supervision of data collection, use and storage. BI often involves data collection and analysis that goes beyond what is standard in public policymaking. Consider appointing at least one member – either a member of the ethical review board or the team working on the behaviourally informed intervention – to supervise ethical aspects of data collection, use and storage.

  • Observe existing ethical guidelines and codes of conduct. Make sure all team members observe ethical guidelines and codes of conduct, which are often already present in public institutions. Where existing standards are not sufficient for BI, flag these issues and establish procedures for these instances. Ensure appropriate procedures are in place to protect whistleblowing and ensure anonymity is respected.

Potential areas for testing and experimentation

The sections above outline the stages involved in applying behavioural insights in practice. It is first important to understand the context, define the behavioural problem(s), and then generate an analysis, set of strategies and interventions to test before scaling up what works into a policy for change. The following section will draw upon the findings of the literature reviews as well as literature from published case studies applying behavioural insights in real world contexts to facilitate the identification of potential areas for testing and experimentation of behaviourally-informed solutions aimed at developing a culture of safety within regulators and regulated entities.

Stage 1: Behaviour – Identifying and defining the problem

The literature reviews above provide an understanding of certain organisational and behavioural barriers to creating a culture of safety. They help in potentially identifying specific behaviours to which behavioural insights could be applied, as well as barriers and triggers to such behaviours.

As described above, across high risk industries, key organisational barriers to a strong safety culture include: production pressure, lack of personal accountability and enquiring attitude, hazard identification and risk management and work processes. Key enablers of a strong safety culture include leadership commitment, respectful work environment, environment for raising concerns, effective safety and environmental communication and continuous improvement. These constitute specific behaviours that are likely to lead to a change in culture, as well as illustrate potential touchpoints where it is possible to influence the barriers and triggers to such behaviours.

Stage 2: Analysis – Understand why people act as they do

Given the importance of context in decision making and behaviour change, it is critical to have a detailed understanding of why people act as they do in the prospective areas for behavioural insights application.2 In order to obtain this nuanced understanding, methods such as data analytics, site visits, qualitative research and design thinking can be used. This stage also involves looking at what has been achieved in relevant policies in other jurisdictions as well as the academic literature, and conferring with academic experts. This step is critical to understanding whether specific behavioural insight(s) is/are applicable in a certain way, in a certain place and at a certain time. Although this report provides ideas and an initial understanding, ultimately, this work will need to be done outside of this report to understand the details.

Stage 3: Strategies – Identifying solutions for behaviour change

Once an understanding of the problem and the context has been developed, strategies for effectively changing the behaviour can be identified. Although not a comprehensive list, the following behavioural insights principles draw upon certain barriers and triggers identified in the literature reviews and in discussion with country representatives, in this case both through the NER and with country contact points inside the regulatory authorities. The principles which may be relevant when thinking about designing behaviourally-informed policies to create a culture of safety are accompanied by examples of application in real world policy contexts (OECD, 2017[42]); (JPAL, 2017[51]); (ideas42, 2017[52]).

  • Social benchmarking/feedback: comparing a person’s behaviour to that of their peers. Showing doctors prescription rates relative to those of their peers made them less likely to prescribe unnecessary medications (Behavioural Insights Team, 2015); household water consumption was reduced after households received comparative information about usage by neighbours (ideas42); feedback comparing employees’ walking habits to those of their colleagues encouraged them to walk more (University of Pennsylvania). An illustrative sketch of a peer-comparison feedback message follows this list.

  • Communication of risk: a trial analysed the most effective way to communicate risk information and motivate subsequent action related to safety of food borne illnesses (Social and Behavioural Sciences Team, 2015).

  • Other areas for testing could include group think, availability cascades and risk regulation, choice architecture (default, anchoring), optimism bias/discounting, omission and status quo bias, authority bias and commitment, salience.
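
To illustrate how a social benchmarking message of the kind described in the first bullet might be generated, the sketch below compares each hypothetical employee’s number of safety observation reports against the peer median and composes a short feedback line. The data, names and message wording are assumptions for illustration only, not drawn from any of the trials cited above.

    # Illustrative sketch: peer-comparison (social benchmarking) feedback
    # based on hypothetical monthly safety observation reports per employee.
    from statistics import median

    reports_per_month = {"Alex": 2, "Bo": 7, "Chris": 4, "Dana": 1, "Eli": 5}

    def feedback(name, counts):
        """Compose a short peer-comparison message for one employee."""
        peer_median = median(counts.values())
        own = counts[name]
        if own >= peer_median:
            return (f"{name}: you submitted {own} safety observations last month, "
                    f"at or above the team median of {peer_median}. Keep it up.")
        return (f"{name}: you submitted {own} safety observations last month; "
                f"the team median was {peer_median}. Even small observations help.")

    for employee in reports_per_month:
        print(feedback(employee, reports_per_month))

How such messages are framed, and whether they could inadvertently discourage high performers or encourage gaming, would itself need to be tested, which is why pairing this kind of intervention with one of the evaluation methods described earlier matters.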

Behavioural insights concepts can be applied in isolation or in combination. It is important to note that if testing multiple concepts at the same time, it will not necessarily be possible to know the degree to which each behavioural insight was effective.

Another important point to consider in the design of the intervention is whether the behavioural insight will likely lead to a short-term or a long-term change, perhaps even habit formation. This has been discussed by a variety of academic studies and should be considered when designing the intervention and evaluation (Duhigg, 2012[53]). A recent paper outlines a number of reasons why applications of behavioural insights may fail. In addition to some nudges producing only short-term effects, the application of behavioural insights can produce confusion and compensating behaviour. To avoid these issues, it is important to be critical at all stages (Sunstein, 2017[54]).

Stage 4: Intervention – Testing strategies to inform public policy development

The next stage is to design and implement an intervention to test and evaluate the behavioural insights identified in Stage 3. How this is carried out depends heavily on stages 1, 2 and 3, on the resources available, and on what type of evaluation is technically and practically best suited to the context (i.e. literature review/behavioural lens, lab/online experiment, RCT).

Stage 5: Change

The final stage is to take what worked – and what did not – to inform public policy development and ensure society benefits from the insights gained. Here, the policy maker is encouraged to revisit the political context or policy challenge that motivated the project, as well as the work done in stages 1 and 2 to define the approach and scope the project. This is to ensure the policy situation and interests of the project still align with the current situation, as changes can occur. The best way to implement and scale up the behaviourally-informed policy should then be considered: this could be a new law or regulation, or perhaps a new code or guideline would suffice. Structures need to be put in place to monitor the long-term and potential side effects of the policy intervention to ensure the quality of the new policy is maintained over time. Finally, communicating the results to the community, even when they are null results, is essential to ensure peer learning.

References

[50] BEAR (2018), How Should Organizations Best Embed and Harness Behavioural Insights? A Playbook, http://www.rotman.utoronto.ca/-/media/Files/Programs-and-Areas/BEAR/White-Papers/BEAR_BIinOrgs.pdf?la=en (accessed on 6 November 2018).

[13] Berglund, J. (2016), Why Safety Cultures Degenerate And How To Revive Them, Routledge, London, https://doi.org/10.4324/9781315547206.

[38] Beukes, H. (2015), Improving Safety Culture: Barriers, Challenges and Potential Solutions, CIRPD Webinar, EPCOR Utilities, https://www.wwdpi.org/SiteCollectionDocuments/WebinarPowerpoints/2015-03-26_BeukesH_ImprovingSafetyCultureBarriers.pdf (accessed on 5 November 2019).

[31] Bradley (2017), Regulator safety (oversight) culture: How a regulator’s culture influences safety outcomes in high hazard industries, Fielding Graduate University, ProQuest Dissertations Publishing.

[29] BSEE (2013), BSEE Announces Final Safety Culture Policy Statement, Bureau of Safety and Environmental Enforcement, https://www.bsee.gov/newsroom/latest-news/statements-and-releases/press-releases/bsee-announces-final-safety-culture (accessed on 2 March 2020).

[8] BSEE (2013), Final Safety Culture Policy Statement, Bureau of Safety and Environmental Enforcement, https://www.bsee.gov/newsroom/latest-news/statements-and-releases/press-releases/bsee-announces-final-safety-culture (accessed on 5 November 2019).

[11] Carroll, J. and A. Edmondson (2002), “Leading organisational learning in health care”, Quality and Safety in Health Care, Vol. 11/1, pp. 51-56, https://doi.org/10.1136/qhc.11.1.51.

[10] Cooper, D. (2002), “Safety culture: a model for understanding and quantifying a difficult concept”, Professional Safety, http://behavioral-safety.com/articles/safety_culture_understanding_a_difficult_concept.pdf (accessed on 5 November 2019).

[5] Cooper, D. (2002), “Safety culture: A model for understanding and quantifying a difficult concept”, Professional Safety, http://www.behavioural-safety.com/articles/safety_culture_understanding_a_difficult_concept.pdf (accessed on 2 March 2020).

[3] Corkindale, G. (2010), Five Leadership Lessons from the BP Oil Spill, https://hbr.org/2010/06/five-lessons-in-leadership-fro (accessed on 19 March 2020).

[53] Duhigg, C. (2012), The power of habit: Why we do what we do in life and business, Random House, New York, NY, https://psycnet.apa.org/record/2012-09134-000 (accessed on 6 November 2019).

[19] Edmondson, A. (1999), Psychological Safety and Learning Behavior in Work Teams, http://web.mit.edu/curhan/www/docs/Articles/15341_Readings/Group_Performance/Edmondson%20Psychological%20safety.pdf (accessed on 6 November 2019).

[28] Edmondson, A. and Z. Lei (2014), “Psychological Safety: The History, Renaissance, and Future of an Interpersonal Construct”, Annual Review of Organizational Psychology and Organizational Behavior, Vol. 1/1, pp. 23-43, https://doi.org/10.1146/annurev-orgpsych-031413-091305.

[26] Ellis, S. and I. Davidi (2005), “After-event reviews: Drawing lessons from successful and failed experience”, Journal of Applied Psychology, Vol. 90/5, pp. 857-871, https://doi.org/10.1037/0021-9010.90.5.857.

[21] Ely, R. and D. Meyerson (2010), “An Organizational Approach to Undoing Gender: The Unlikely Case of Offshore Oil Platforms”, Research in Organizational Behavior, Vol. 30/2010, pp. 3–34, https://www.hbs.edu/faculty/Pages/item.aspx?num=39818 (accessed on 6 November 2019).

[7] Fleming, M. and N. Scott (2012), “Cultural Disasters: Learning From Yesterday’s Failures To Be Safe Tomorrow”, Oil and Gas Facilities, https://pubs.spe.org/en/ogf/ogf-article-detail/?art=402 (accessed on 5 November 2019).

[43] Glennerster, R. and K. Takavarasha (2014), Running randomized evaluations : a practical guide, Princeton University Press.

[24] Hofmann, D. and A. Stetzer (1996), “A cross-level investigation of factors influencing unsafe behaviors and accidents”, Personnel Psychology, Vol. 49/2, pp. 307-339, https://doi.org/10.1111/j.1744-6570.1996.tb01802.x.

[39] IAEA (2002), Self-assessment of safety culture in nuclear installations Highlights and good practices, International Atomic Energy Agency, https://www-pub.iaea.org/MTCD/Publications/PDF/te_1321_web.pdf (accessed on 2 March 2020).

[52] ideas42 (2017), B-HUB, http://www.bhub.org/ (accessed on 2 March 2020).

[48] ideas42 (2017), Define, Diagnose, Design, Test, http://www.ideas42.org/blog/first-step-towards-solution-beta-project/ (accessed on 6 November 2018).

[33] INPO (2012), Lessons Learned from the Nuclear Accident at the Fukushima Daiichi Nuclear Power Station, Institute of Nuclear Power Operations, https://www.nrc.gov/docs/ML1221/ML12219A131.pdf (accessed on 2 March 2020).

[49] IRS (2017), Behavioral Insights Toolkit, Internal Revenue Service, Washington, D.C., https://www.irs.gov/pub/irs-soi/17rpirsbehavioralinsights.pdf (accessed on 2 March 2020).

[51] JPAL (2017), Evaluations, The Abdul Latif Jameel Poverty Action Lab, https://www.povertyactionlab.org/evaluations (accessed on 7 November 2019).

[17] Katz-Navon, T., E. Naveh and Z. Stern (2005), “Safety climate in health care organizations: A multidimensional approach”, Academy of Management Journal, Vol. 48/6, pp. 1075-1089, https://doi.org/10.5465/AMJ.2005.19573110.

[35] NAE and NRC (2011), Macondo Well Deepwater Horizon Blowout: Lessons for Improving Offshore Drilling Safety, National Academies Press, Washington, D.C., https://doi.org/10.17226/13273.

[36] National Research Council (2014), Lessons learned from the Fukushima nuclear accident for improving safety of U.S. nuclear plants, National Academies Press, Washington, D.C., https://doi.org/10.17226/18294.

[22] Naveh, E., T. Katz-Navon and Z. Stern (2006), “Readiness to report medical treatment errors: the effects of safety procedures, safety information, and priority of safety”, Medical care, Vol. 44/2, pp. 117-23, https://doi.org/10.1097/01.mlr.0000197035.12311.88.

[9] NEB (2017), Safety Culture Indicators Research Project: A Regulatory Perspective, National Energy Board of Canada, https://www.cer-rec.gc.ca/sftnvrnmnt/sft/sftycltr/sftcltrndctr-eng.html (accessed on 5 November 2019).

[12] NEB (2014), Advancing Safety in the Oil and Gas Industry - Statement on Safety Culture, National Energy Board of Canada, https://www.cer-rec.gc.ca/sftnvrnmnt/sft/sftycltr/sftycltrsttmnt-eng.html (accessed on 5 November 2019).

[34] NRSB (2014), Lessons learned from the Fukushima nuclear accident for improving safety of U.S. nuclear plants, The National Academies Press, https://doi.org/10.17226/18294.

[44] OECD (2019), Tools and Ethics for Applied Behavioural Insights: The BASIC Toolkit, OECD Publishing, Paris, https://dx.doi.org/10.1787/9ea76a8f-en.

[42] OECD (2017), Behavioural Insights and Public Policy: Lessons from Around the World, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264270480-en.

[27] OECD (2014), Recommendation on the Governance of Critical Risks, Meeting of the OECD Council at Ministerial Level, Paris, https://www.oecd.org/governance/recommendation-on-governance-of-critical-risks.htm (accessed on 2 March 2020).

[1] OECD/NEA (2013), The Fukushima Daiichi Nuclear Power Plant Accident OECD/NEA Nuclear Safety Response and Lessons Learnt, OECD/Nuclear Energy Agency (NEA), Paris, https://www.oecd-nea.org/pub/2013/7161-fukushima2013.pdf (accessed on 19 March 2020).

[32] OECD/NEA (2013), The Fukushima Daiichi Nuclear Power Plant Accident: OECD/NEA Nuclear Safety Response and Lessons Learnt, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264205048-en.

[37] OEPC (2010), Deepwater Horizon Oil Spill Lessons Learned Workshop: Nontraditional Response and Emergency Restoration Projects, Department of Interior, Office of Environmental Policy and Compliance, https://www.doi.gov/sites/doi.gov/files/uploads/Final%20DHW%20NTR%20Report%203.22.2016.pdf (accessed on 7 November 2019).

[25] Popper, M. and R. Lipshitz (1998), “Organizational Learning Mechanisms: A Structural and Cultural Approach to Organizational Learning”, Journal of Applied Behavioral Science, Vol. 34/3, pp. 161-179, https://doi.org/10.1177/0021886398342003.

[2] Reader, T. and P. O’Connor (2014), “The Deepwater Horizon explosion: non-technical skills, safety culture, and system complexity”, Journal of Risk Research, Vol. 17/3, pp. 405-424, https://doi.org/10.1080/13669877.2013.815652.

[41] Reiman, T. and E. Pietikäinen (2010), Indicators of safety culture-selection and utilization of leading safety performance indicators, Swedish Radiation Safety Authority, https://www.vtt.fi/inf/julkaisut/muut/2010/SSM-Rapport-2010-07.pdf (accessed on 7 November 2019).

[40] Sackman, S. (2006), Assessment, Evaluation, Improvement: Success through Corporate Culture: Recommendations for the Practice, Bertelesmann Stiftung, Gutersloh, Germany, https://www.dgfp.de/hr-wiki/Assessment__Evaluation__Improvement_-_Success_through_Corporate_Culture.pdf (accessed on 2 March 2020).

[18] Singer, S. and A. Tucker (2014), “The evolving literature on safety WalkRounds: emerging themes and practical messages.”, BMJ quality & safety, Vol. 23/10, pp. 789-800, https://doi.org/10.1136/bmjqs-2014-003416.

[6] Smith, A., D. Emma and J. Wadsworth (2009), Safety culture, advice and performance, Report for IOSH Research Committee, http://www.iosh.co.uk/researchanddevelopmentfund, (accessed on 5 November 2019).

[54] Sunstein, C. (2017), “Nudges that fail”, Behavioural Public Policy, Vol. 1/1, pp. 4-25, https://doi.org/10.1017/bpp.2016.3.

[30] Swiss Federal Nuclear Safety Inspectorate (ENSI) (2015), Oversight Culture: ENSI Report on Oversight Practice, https://www.ensi.ch/wp-content/uploads/sites/5/2016/07/ENSI_Aufsichts_Sicherheitskultur_EN_WEB.pdf (accessed on 6 November 2019).

[14] Tangirala, S. and R. Ramanujam (2008), “Employee silence on critical work issues: The cross level effects of procedural justice climate”, Personnel Psychology, Vol. 61/1, pp. 37-68, https://doi.org/10.1111/j.1744-6570.2008.00105.x.

[45] The Behavioural Insights Team (2013), Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials, The Behavioural Insights Team, https://38r8om2xjhhl25mw24492dir-wpengine.netdna-ssl.com/wp-content/uploads/2015/07/TLA-1906126.pdf (accessed on 6 November 2018).

[46] The Behavioural Insights Team (2010), MINDSPACE, The Behavioural Insights Team, https://www.behaviouralinsights.co.uk/publications/mindspace/ (accessed on 6 November 2018).

[47] The World Bank (2015), The World Development Report 2015: Mind, Society and Behaviour, The World Bank, Washington, http://www.worldbank.org/content/dam/Worldbank/Publications/WDR/WDR%202015/WDR-2015-Full-Report.pdf (accessed on 6 November 2018).

[4] Transportation Research Board (ed.) (2016), Strengthening the Safety Culture of the Offshore Oil and Gas Industry, The National Academies Press, Washington DC, https://www.nap.edu/catalog/23524/strengthening-the-safety-culture-of-the-offshore-oil-and-gas-industry.

[16] Weick, K. and L. Browning (1986), “Argument and Narration in Organizational Communication”, Journal of Management, Vol. 12/2, pp. 243-259, https://doi.org/10.1177/014920638601200207.

[15] Weick, K. and F. Westley (1996), “Organizational Learning: Affirming an Oxymoron”, in Clegg, S., C. Hardy and W. Nord (eds.), Handbook of Organizational Studies, Sage Publications, London.

[23] Weingart, S. et al. (2004), “Using a multihospital survey to examine the safety culture.”, Joint Commission journal on quality and safety, Vol. 30/3, pp. 125-132, https://doi.org/10.1016/S1549-3741(04)30014-6.

[20] Yun, S., S. Faraj and H. Sims (2005), “Contingent leadership and effectiveness of trauma resuscitation teams”, Journal of Applied Psychology, Vol. 90/6, pp. 1288-1296, https://doi.org/10.1037/0021-9010.90.6.1288.

Notes

← 1. Going forward, if there are industries, aspects or areas for which further study is merited, this review can be refined.

← 2. BASIC did not exist at the time of writing this scoping note in 2017. Therefore, the ABCD model used in BASIC was not applied here. The work accomplished is more a description of how the behaviours were analysed with regard to the context, which helps support analytical thinking about behavioural problems. This is in line with the purpose of the Analysis stage, which allows for many complementary models to be used in lieu of ABCD.
