Chapter 8. Public acceptance and emerging production technologies

David E. Winickoff
Directorate for Science, Technology and Innovation, OECD

Public acceptance of technology is a key factor in how innovation impacts society, and its consideration should therefore figure in policy making around the next production revolution. There is a persistent but misguided view that resistance to technology mostly stems from public ignorance about the true benefits of particular technologies or of innovation in general. Social science research shows that more important reasons for such resistance might be basic value conflicts, distributive concerns, and failures of trust in governing institutions such as regulatory authorities and technical advice bodies. In general, countries and innovators should take into account, to the greatest extent possible, social goals and concerns from the beginning of the development process. While it remains a challenge to realise this goal, best practices have emerged that can serve as a guide. These include funding social science and humanities in an integrated fashion with natural and physical science, using participatory forms of foresight and technology assessment to chart out desirable futures, and engaging stakeholders in communicative processes with clear linkages into policy. All of the above will help build trust and trustworthiness into innovation systems.

  

Introduction

Public acceptance of technology is a key component of innovation policy (OECD, 2016a) and should be an important consideration in policy making around the next production revolution. Strong public concerns can shape the direction, pace and diffusion of innovation, and even block its progress (Gupta, Fischer and Frewer, 2012). This is the case even where technical and economic feasibility have been demonstrated, the rationale for adoption appears sound, and large investments have been undertaken. In particular, emerging technologies have sometimes been frustrated because of social and ethical concerns (EC, 2013). At the same time, public resistance to technologies can give rise to regulations that promote trust and confidence, and steer innovation along acceptable pathways (Rodricks, 2006; Packer, 2008; Davis, 2014).

The consideration of public acceptance of the technologies of the next production revolution might be especially important today. The use and uptake of technologies can be affected by the social and political contexts into which they are placed (Gupta, Fischer and Frewer, 2012). The development and adoption of production technologies are poised to affect labour markets in significant ways (The Economist, 2016), raising serious questions about public attitudes and acceptance of these new technologies. The stakes might be high: some see a number of the political events of 2016 as a popular reaction against prevailing manufacturing policies and the labour-market effects of technology.

Historically, public opposition has mounted in a number of fields of emerging technology, including nuclear power, genetically modified organisms (GMOs), and other areas of biotechnology. In Europe, for example, negative public sentiment on GMOs has resulted in lower funding levels, high regulatory rejection rates, and lower levels of innovation than in other jurisdictions (Currall et al., 2006). Public investment can also become “stranded” (i.e. unable to be exploited). For example, many countries invested in the construction of nuclear reactors in the 1960s and 1970s. Even in the face of expert opinion attesting to their safety, political protests around the world halted their broad diffusion (Winner, 1977).

This is not to say that publics are anti-technology. General attitudes of European citizens towards technology are regularly assessed by the Eurobarometer, a set of surveys conducted on behalf of the European Commission since 1973. While general public attitudes about emerging technologies are hard to gauge, there is evidence that societies are generally optimistic about technological development, although this is tempered by concerns. In a recent major survey in Europe, at least half of the respondents expected that, 15 years from now, science and technological development would have a positive impact on health and medical care (65%), education and skills (60%), transport and transport infrastructure (59%), energy supply (58%), protection of the environment (57%), the fight against climate change (54%) and quality of housing (50%) (EC, 2014a).

An assessment of public acceptance, however, must go beyond the measurement of attitudes and aim for a better understanding of the sources and drivers of acceptance. A first step is to understand that there are multiple “publics” in public acceptance. Recent work on public acceptance in the context of renewable energy usefully illustrates the need to avoid a concept of public acceptance that is too thin. This work emphasises that acceptance depends not just on broad political acceptance by the public and key stakeholders, but also on acceptance by consumers and investors, and by communities in which new technologies are sited. Some academics term this the “triangle of acceptance” (Wüstenhagen, Wolsink and Bürer, 2007; Reith et al., 2013).

This chapter draws lessons from work in other science and research-intensive fields, such as health, while addressing concerns specific to a number of next production revolution technologies, particularly artificial intelligence (AI), industrial biotechnology and nanotechnology. Prior experience with the societal reception of emerging technology should help inform policy makers and other key actors as they push these technologies forward. There is a persistent but misguided view that resistance to technology mostly stems from public ignorance about the true benefits of particular technologies or of innovation in general. Social science research shows that basic value conflicts, distributive concerns, and failures of trust in governing institutions such as regulatory authorities and science advice bodies might be more important.

In general, countries and innovators should incorporate, to the greatest extent possible, social goals and concerns from the beginning of the development process. While it remains a challenge to realise this goal, best practices have emerged that can serve as a guide. These include funding social science and humanities in integrated co-streams with natural and physical science, using participatory forms of foresight and technology assessment to chart out desirable futures, and engaging stakeholders in communicative processes with clear linkages into policy. All of the above will help build trust and trustworthiness into innovation systems, which are critical factors in public acceptance.

Key technologies

Some of the technologies addressed in this report have already raised public concerns of various kinds, and are likely to continue to do so (EC, 2013). This section offers a brief review of public acceptance issues in biotechnology, nanotechnology, big data and AI. Some public concerns with emerging production technology have to do with risk, such as how new technologies might affect the health and safety of humans and the environment, and the idea that existing oversight is inadequate to anticipate potential harms. Other concerns have to do with issues of controlling life processes, or decision-making power over technology itself, such as through the control of intellectual property or market dominance. A major source of uncertainty about the path of these technologies lies in the fact that they are converging in unexpected ways, creating yet other new technologies. An example might be the convergence of information and communication technology (ICT) and biotechnology to produce synthetic biology approaches which form a platform for many other kinds of biological entities and tools.

Industrial biotechnology

The use of biotechnology on an industrial scale for fuels, chemicals, and other products is a likely element in the remaking of the production system (see Chapter 9). But, of course, biotechnology has also been the subject of persistent public conflicts over societal risks, especially in the context of GMOs and synthetic biology. In both developed and developing countries, GMOs have raised concerns around health and safety risks and the capacity to contain and reverse their release.

Negative perception has also centred on a linkage between biotechnology, seed patenting, and industrial concentration in the agro-food sector (Jasanoff, 2005). Such concerns have been resolved differently across countries, with some countries adopting genetically modified (GM) crops at a much slower rate than others. Starkly different regulatory approaches growing out of distinct public receptions of biotechnology have resulted in disruptions to international trade and have even triggered dispute settlement at the World Trade Organization (WTO) (Pollack and Shaffer, 2009).

The biotechnology case suggests that government efforts to meet public concerns about technology by emphasising risk-assessment science may be only partially successful. In biotechnology, conflicts ostensibly about health and environmental risk reside, at least in part, in deeply held beliefs about the human-environment relationship, the ethics of human manipulation of “nature”, and concerns about the corporate appropriation of biology (Jasanoff, 2005). However, because society may lack other outlets for deliberation on the moral implications of technology, environmental and health risks become a primary locus of concern (Winickoff et al., 2005).

Box 8.1. Gene editing in society

With gene-editing techniques, especially those using the CRISPR-Cas9 system (named by the journal Science as the breakthrough discovery of 2015), scientists are now able to change a DNA sequence at precise locations on a chromosome. These techniques are successfully being applied to manipulate genomes for a wide range of applications. Gene editing will make the design and construction of organisms with desired traits easier and cheaper. It has been successfully used with organisms of commercial importance such as crop plants and farm animals, raising the possibility of developing new methods for the control of pests and diseases as well as improving the efficiency of plant and animal breeding. Recently, CRISPR has been used in the People’s Republic of China to edit genomes of non-viable human embryos, and similar experiments have been approved in the United Kingdom (Callaway, 2016).

Certain scientific communities have taken a proactive approach to engaging in public discourse about CRISPR, which could be used in an array of settings including medicine, animal breeding, and environmental management. The technique has suddenly made potentially controversial applications of biotechnology more plausible, such as the precise editing of the human genome. In March 2015, a group of scientists and ethicists, including Nobel laureates David Baltimore of Caltech and Paul Berg of Stanford, proposed a worldwide moratorium on altering the human genome to produce changes that could be passed on to future generations. In December 2015, the National Academies of Sciences in the United States, along with the Chinese Academy of Sciences and the United Kingdom’s Royal Society, convened a summit of experts from around the world to discuss the scientific, ethical and governance issues associated with human gene-editing research (Reardon, 2015).

Bioproduction does not depend on agricultural feedstocks that are GM, but it certainly does involve sophisticated biochemical approaches to break down and reformulate organic material on a large scale. Governments still have to anticipate public concerns around recent biotechnological advances that make this possible. Recent developments in genetic engineering, particularly so-called “gene editing”, have already spurred public debate about the potential benefits and harms of that technology, particularly in the context of human germline engineering (Box 8.1). Synthetic biology, especially the de novo synthesis of novel DNA sequences, has also provoked public controversy. Public discourse about these technologies, both within and across countries, is likely to have a large impact on industrial biotechnology (McNutt, 2015).

Nanotechnology

Engineering at the molecular scale through nanotechnology is anticipated to play an important role in the next production revolution (see Chapter 4). Beginning in the 1990s, governments and the private sector promoted nanotechnology as a key to future economic growth, and as an emerging tool to address societal problems. Industry, government, and academia invested significantly in nanotechnology and its commercialisation (Barben et al., 2007). Optimism about the potential of nanotechnologies to positively transform society spurred growth in nanotechnology innovation, but this enthusiasm co-existed alongside concern and protest. Prominent individuals such as Bill Joy and Prince Charles raised alarm bells, as did activist groups, including Greenpeace (Arnall, 2003) and the Erosion, Technology and Concentration Group (ETC Group, 2003). Joy (2000), for example, put forth a catastrophic “grey goo” scenario in Wired magazine, in which out-of-control self-propagating nanobots could obliterate life. Others were concerned with environmental hazards and unintended consequences (Tenner, 2001), shifts in privacy and security (MacDonald, 2004) and possibly greater economic inequality (Meridian Institute, 2005).

Such public concerns about nanotechnology intersected with existing antagonism towards biotechnology: the ETC Group (a civil society organisation focused on the socio-economic and ecological impacts of new technologies), which had organised action in opposition to agricultural biotechnology, repeatedly called for a moratorium on some forms of nanotechnology research and development (R&D) because of concerns about environmental health and safety (Barben et al., 2007).

Informed by the experience with public opposition to GM foods in Europe, policy makers grew concerned that nanotechnology would draw broad public resistance. Policy makers in a number of countries took measures to promote broader societal considerations, integrating such considerations into nanotechnology R&D at early stages. Steps such as funding co‐streams of social science research and various forms of public engagement were meant to ensure that science was responsive to societal needs and could more effectively support decision making.

In the United States, for example, and in contrast with earlier efforts, legislation passed in 2003 sought to integrate social research and public input “upstream” in nanotechnology R&D policy. This focus on early or simultaneous integration of work on social concerns was similar to the approach adopted in the Human Genome Project’s Ethical, Legal, and Social Implications Research Program in the United States. Similarly, the European Union, the Netherlands, Brazil and Colombia have established social science research on nanotechnologies and linked such work to decision making (Barben et al., 2007). A recent survey conducted by the OECD (2013) found that 11 of 25 countries surveyed have a specific policy with regard to the responsible development of nanotechnology, with several other countries having policies under development.

Unknowns about the health and environmental impacts of nanoparticles remain, which continues to raise concern among publics and regulators. Manufactured nanomaterials are found in more than 1 300 products currently on the market, including medical equipment, fabrics, fuel additives, cosmetics and plastics (US EPA, 2016). Regulatory approaches are still evolving, even as nanomaterials are entering waste streams. A recent OECD review of the literature on waste treatment processes – recycling, incineration, landfilling and wastewater treatment – found that significant knowledge gaps remain regarding the final disposal of nanomaterials (OECD, 2016a).

Big data

The next production revolution will be driven in part by digitalisation, and it is possible that large bodies of personal information will be collected and used in new production processes. Large-scale government programmes to collect and use big data for purposes of surveillance and national security have drawn major public concerns, but other areas have also become the subject of intense public debate. For example, health policy makers across the world are seeking to aggregate diverse health data from millions of people to enable comparative effectiveness research (CER) and help produce an innovative big-data architecture for research and discovery (Institute of Medicine, 2014). A central goal is to integrate population level and personal health data across the public and private sector to advance the evidence base for clinical care, monitor quality, and aid the discovery of biomarkers for the development of better diagnostics and drugs (Krumholz, 2014).

The challenges of integrating diverse health data sets and information architectures are technical, ethical and social. Collecting health data for research as it is generated in the clinic blurs the line between clinical care and research in new ways. Conducting predictive analysis to stratify populations raises concerns of justice, as certain populations may be included or excluded from desirable clinical trials or therapeutic interventions on that basis. Furthermore, obtaining traditional informed consent for the range and scale of potential uses is impossible (Faden, Beauchamp and Kass, 2014). In the United Kingdom, failure to address privacy and access questions triggered a major public controversy among clinical physicians, disease advocacy groups, and the larger public, undermining trust in central health authorities (Kirby, 2014). These social uncertainties are pressing many governments to develop partnerships and public dialogue with patients, health institutions, and other stakeholders in order to find acceptable solutions to questions of privacy, control and justice. OECD countries have addressed some of the challenges of managing health data in the recent Recommendation of the Council on Health Data Governance (OECD, 2017).

Artificial intelligence

AI technologies have the potential to transform society. But AI also raises a range of ethical, regulatory and social issues (United States, 2016). From automated assistants to driverless cars, AI stands poised for rapid growth, a view shared widely in science ministries (G7, 2016). Some are optimistic about this innovation: advocates have argued that AI can both stimulate innovation and boost economic productivity, and perhaps improve the human condition more broadly. OECD research suggests that “big data used to feed machine-learning algorithms can boost industries including advertising, health care, utilities, logistics, transport, and public administration” (Bradbury, 2016). However, concerns about the risks, benefits, and ethical issues associated with these technologies appear to be growing. Professor Stephen Hawking has stated provocatively that “the development of full AI could spell the end of the human race” (Cellan-Jones, 2014). Among the general public, concerns about the potential for AI to displace certain types of work are salient (Smith and Anderson, 2014), as are safety issues (Marks, 2016). A recent poll in Britain found that one in three people believe that the rise of AI is a threat to humanity (British Science Association, 2016).

Professor Dan Sarewitz, an expert in science policy, called for an informed, global public dialogue about AI and its potential impacts in a June 2015 article in Nature (Sarewitz, 2015). Still, robust mechanisms for addressing risks, benefits and ethical issues are not yet institutionalised (Calo, 2014). This is, in part, because AI is still being developed, and because wide and diverse applications make a comprehensive regulatory framework difficult. Moreover, some view policy interventions around AI with scepticism, arguing that it is too early for AI policy (McAfee, 2015), and that intervention could hamper technological development and the potential benefits to society (Brundage and Bryson, forthcoming). Others disagree, holding that regulation can itself enable innovation, and that AI already impacts our daily lives. To this end, the US White House Office of Science and Technology Policy and the European and British parliaments are conducting, or have conducted, public workshops on AI technology and policy. Some have called for national commissions on robotics (Calo, 2014). Importantly, many scholars have called for funding of early research into the human and social dimensions of AI technologies, integrated alongside technical research. Ensuring public acceptance of AI R&D will be critical to the future of this field.

Understanding public acceptance

Public acceptance or rejection of technology is a complex phenomenon that defies easy explanation. What follows is a discussion of social science literature and existing practices that suggest how technology can best be brought into society in an acceptable fashion.

Risk perception and the fallacy of the public deficit model

For some time, the leading idea on public resistance to technology was that it resulted from lack of information or education. This theory stems in part from classic studies that show a divergence between the risk assessments of lay people and those of experts (Slovic, 1987). These differences are patterned, revealing heightened public sensitivity to certain technological characteristics. Technologies that are perceived to be irreversible, out of human control, and/or capable of catastrophic failure tend to raise the public perception of risk relative to expert appraisals. Similarly, if technologies are novel and less well-known, outside of human perception (e.g. nanoparticles invisible to the human eye), and delayed in their manifestation of harm, they also tend to be of higher public concern (Slovic, 1987). A number of next production revolution technologies have some of these characteristics. For example, biotechnology and nanotechnology have fast-evolving frontiers, novel physical properties, and their constructs are usually invisible to the human eye.

Studies of risk perception of this kind have led some governments to pursue education campaigns as a primary way of addressing public acceptance of technology. However, reviews of the correlation between education and technological acceptance are at best inconclusive. On controversial issues, there is no correlation at all and, in the words of one scholar, “well-informed and less well-informed citizens are to be found on either side of the controversy” (Bauer, 2009). This finding comports with other social science work showing that where deeply held values and personal identities are at stake, science-based accounts are dismissed even by the most literate. It has been shown in one large study, for example, that religious people with even the highest levels of science literacy tend to reject some core precepts of evolution (Kahan, 2015).

While education and information are important for shaping and framing public discourse on technology, public attitudes depend heavily on social and political contexts, and cultures of trust between citizens, regulatory agencies and firms. The following sections expand on this insight.

Trust in institutions

There is a close connection between public resistance to novel technologies and the disruption of trust in public regulatory authorities. In an important study of factors contributing to negative public opinion of GMOs in many parts of Europe, Gaskell et al. point out that “in an increasingly complex world, trust functions as a substitute for knowledge” (Gaskell et al., 1999). These authors argue that resistance to GMOs in Europe was closely tied to a lack of trust in regulatory procedures.

Box 8.2. The HFEA’s public consultation on animal DNA and embryonic research: Hybrids and mitochondrial replacement

The United Kingdom’s Human Fertilisation and Embryology Authority (HFEA) was established in 1990 to license and monitor in vitro fertilisation (IVF) and insemination clinics throughout the country, as well as institutions conducting embryonic research and the storage of gametes and embryos (Jasanoff, 2005). In 2007, the HFEA launched a public consultation to explore the public’s views on whether or not scientists should be allowed to create embryos containing animal DNA in embryo research (HFEA, 2007; Blackburn-Starza, 2007). The programme, entitled Hybrids and Chimeras, involved a public consultation to facilitate engagement about the issue, and was supported by Sciencewise, a programme run by the Office of Science and Innovation which aims to assist policy makers in conducting public engagement activities.

The consultation ran from April to July of 2007, and involved a range of approaches to consultation. A public opinion poll sought to gather the views of a representative sample of the public in a general fashion. Public deliberations expanded upon these general findings and opened up new questions, focusing on the effect that deliberation and new information had on participants’ views. A written consultation and a public meeting also took place. The results of the public consultation were analysed as evidence by the HFEA, which then decided that cytoplasmic hybrid research should be allowed to move forward, with caution and careful scrutiny (HFEA, 2007).

More recently, the HFEA gathered public views and made a proposal to Parliament on whether to allow mitochondrial replacement in embryos intended for implantation. Parliament accepted the recommendation, with high public approval.

Other work on regulatory trust corroborates the above point. For example, in the late 1990s in the United Kingdom, public controversy erupted over regulators’ poor handling of uncertainties and contingencies in the management of bovine spongiform encephalopathy (BSE, or “mad cow disease”). Many commentators think that the BSE crisis – especially the disruption of trust in the food safety oversight system – laid the groundwork for the broad resistance to GMO foods in the United Kingdom, even though regulators had deemed them safe (Pidgeon, Kasperson and Slovic, 2003). This case suggests that once trust is lost, it is hard to regain, even in other contexts.

This outcome stands in stark contrast to the acceptance of reproductive medicine and technology in the United Kingdom, where a dedicated regulatory institution, the Human Fertilisation and Embryology Authority (HFEA), was established in 1990, before many of the controversial advances that have since occurred. The HFEA has been successful at anticipating difficult oversight questions, and airing issues in public (Box 8.2). Resulting decisions regarding research and applications in embryology and reproductive medicine have garnered significant acceptance.

Technological hype – the over-promotion of the benefits of technology – can undermine trust in governmental and scientific institutions. Emphasising novelty and near-term benefits can lead to disappointment and scepticism among publics (Rayner, 2004). For example, in the fields of stem cell research and clinical translation, there has been a sustained pattern of inflated predictions by scientific communities, funding agencies and the media (Kamenova and Caulfield, 2015). This has increased controversy in California, where a USD 3 billion public initiative on stem cell research begun in 2004 has delivered scientific advances, but has failed to deliver the tangible health benefits it advertised.

Values and uncertainties in risk governance and science advice

Building trust in regulatory institutions requires building trust in their underlying analytic approaches and procedures, chief among which is risk-benefit analysis. Social scientists have learned lessons about where agencies can go wrong with respect to risk-based decision making and science advice.

Regulatory or technical advice bodies need to be transparent about how uncertainties are dealt with and what kinds of value-based assumptions are built into risk and benefit models. Controversies like the BSE outbreak mentioned above, and the Fukushima nuclear accident in Japan, indicate the need to better recognise, across expert communities and the public, how risk models necessarily have limitations and science-based regulatory decisions unavoidably carry value judgements (Pfotenhauer et al., 2012). Value judgements operate, for example, in the choice of which facts or kinds of expertise are relevant, in setting thresholds of sufficient evidence, in deciding how to cope with dissent, and in decisions to act in the face of uncertainty.

Box 8.3. Value choices in science and technology advice: Examples and lessons learned

Research in the field of science and technology (S&T) studies has described the interplay of science and values in decisions at the intersection of science and policy. In particular, this research has demonstrated a demarcation process where science and society meet, sometimes known as “boundary work”. Boundary work can be defined as a method of distinguishing policy-relevant knowledge from pseudo-science, politics or values. It is a demarcation process through which policy decisions regarding relevant evidence are placed on the “good science” side of the divide that separates objective knowledge from illegitimate, politicised or false science (Jasanoff, 1990).

Boundary work is considered necessary to accomplish at least two goals: to ensure that research responds to the needs of users (often policy makers) and that the credibility of science itself is maintained. One prominent example of boundary work involves attempts by governments to make a clear delineation between risk assessment and risk management, with social and economic factors entering only during the management stage. This distinction also features prominently, for example, within international trade law and how it recognises valid versus invalid forms of plant health and safety regulation (Winickoff et al., 2005). Another involves the regulation of chemical carcinogens: establishing a cancer risk to humans based on direct evidence is often impossible, so regulatory decisions often rely on, for example, animal tests, which are interpreted with a great degree of uncertainty and disagreement, even within expert circles. As a result, the resolution of controversies about whether or not to regulate chemical compounds depends at least as much on the procedures and institutions used to resolve conflicts as on the objective science itself (Jasanoff, 1990).

The existence of values-based disputes in science policy does not call into question the validity of technology assessments: rather, it argues for active boundary management by institutions tasked with governing technological risk, and suggests that appeals to scientific objectivity alone are unlikely to quell concern about the impacts of emerging technologies.

Recently, countries have acknowledged the importance of openness, integrity, transparency, and accountability in the establishment of trustworthy science advice (OECD, 2015). For example, in contrast to framing questions of science advice in exclusively technocratic terms, countries have begun to open up the process of science advice to make it more inclusive, and have been more scrupulous in characterising uncertainties and identifying questions that science alone cannot answer. In the United States, new ground was broken in this regard in the 1980s when acquired immune deficiency syndrome (AIDS) activists gained the necessary technical knowledge and political standing to participate in expert groups tasked to determine things like scientific criteria for inclusion in clinical trials (Epstein, 1996). Since then, patient groups, as “lay experts” in their sphere, have often been included on health policy task forces. Furthermore, policy questions are increasingly developed and framed in multi-stakeholder settings (OECD, 2015).

OECD (2015) described how a number of scientific advisory bodies have adopted new procedures and practices that might help to limit controversies over scientific advice and increase public trust in advisory systems. These procedures and practices include:

  • Clarified responsibilities. If asked to address an issue, advisory bodies need to ensure that such a task is compatible with their mandate and expertise.

  • Increased transparency. Potential or substantiated conflicts of interest have been responsible for much of the diminution of trust among citizens towards established structures and science-based policies. Experts are likely to have had previous contacts, and often contractual relationships, with some of the stakeholders involved in issues they have to examine. Better standardised definitions of “interests”, and transparent rules to identify such interests, are therefore needed.

  • Stakeholder consultation. Stakeholders are usually understood as people and organisations likely to be affected by decisions taken as a consequence of scientific advice, which can include those with economic interests as well as civil society groupings (e.g. NGOs, trade unions, patient organisations). To take into account the potential impact of their advice, an increasing number of advisory bodies are integrating some sort of consultation process with stakeholders alongside their traditional expert assessments.

  • Direct involvement of civil society. A number of advisory bodies have gone one step further and included within their expert committees some representatives of civil society, including stakeholder groups (industry organisations, consumer associations) and lay persons. Although there are concerns that involvement of non-scientists in scientific advisory committees may dilute the quality of the science advice, it has been noted that, in many cases, these individuals have acquired a level of knowledge in the field sufficient to allow a good understanding of the issues at stake.

  • Public reporting and open communication. To communicate scientific advice in a way that more fully engages society, science advisory bodies will need to make more effective use of social media.

Cross-national differences in regulatory style

There is no one-size-fits-all approach for achieving a robust and trustworthy system of technical advice and regulatory oversight. Ultimately, societies have different modes of risk-based decision making and different ways of providing reasoning about S&T in public (Jasanoff, 2005). A significant body of social science comparing the treatment of risk-based decision making across national political systems demonstrates how differences in issue framing and science policy can lead to systematic transnational variations in the assessment of health, safety and environmental risk. Despite these differences, it is clear that across many countries transparency builds credibility as a general matter.

Laying the groundwork for public acceptance

Decades of work in the sociology of technology have shown how the path of technological development is not set in stone or predetermined, but can depend on human agency at the individual or policy level, as well as historical contingency (Bijker, Pinch and Hughes, 2012). It is true that the transformation of the production system will entail a large number of possible relevant research and technological choices made in unco-ordinated ways by people ranging from the staff of funding bodies, to managers of institutions that support innovation, to entrepreneurs and workers. But it is also true that national investments and strategies will exert an influence on the direction of technological change. Can strategy and innovation policy address the issue of public acceptance from the beginning? This section reviews a number of strategies and mechanisms that could help create the conditions for technological acceptance where this is appropriate.

Foresight

Next production revolution-relevant technologies, from industrial biotechnology to 3D printing, appear poised to transform markets and, potentially, societies more broadly. But different futures are clearly possible. If an aim of policy is to increase public acceptance of next production revolution technologies, a reliable first step is to engage in foresight activities to identify trends in innovative fields and to co-ordinate, as far as possible, towards a range of socially optimal outcomes. While foresight exercises cannot predict the future, they can help to systematically and transparently identify and assess social, technological, economic, environmental and policy conditions that shape some aspect of the future (see Chapter 9). Good innovation policy can help steer technological trajectories towards agreed objectives, such as broad energy transitions or certain visions of medicines and human health. One benefit of engaging in foresight activities is process-related, including strengthening stakeholder networks and public engagement with technologies.

Examples of foresight processes might include the development of technology roadmaps, the use of bibliometric and patent data to consider technology futures, and expert elicitations. With regard to nanotechnology, for example, the United Kingdom’s Economic and Social Research Council (ESRC) commissioned scenarios for converging technologies, to inform the council’s research strategy (Barben et al., 2007). Mapping the potential futures of technological developments will be important to better understand social implications, and to identify possibilities for getting public buy-in during the innovation process. Some work to institutionalise this longer-term policy thinking is ongoing. For example, the German Federal Ministry for Economic Affairs and Energy and Federal Ministry of Education and Research created a co-ordinating body to bring together stakeholders to assess a long-term strategy for the future of industry.

Participatory technology assessment

Another mechanism to understand and enhance public acceptance of technology is to engage in processes of societal technology assessment. Having emerged in the 1960s, technology assessment has been increasingly adopted in many countries, and has evolved over time based on lessons learned. Innovation policy in many OECD countries is now guided by forms of societal technology assessment carried out by a mix of actors, including national ethics committees and other government bodies tasked with assessing broader social effects as well as health and safety risks. Some of these assessments are more broadly participatory and include procedures involving stakeholder and public input (Durant, 1999).

This broad set of societal technology assessment processes involves formal risk analysis but can also consider the longer-term social implications of technological adoption that may not easily be reduced to immediate health and safety risks. Questions to consider relate to the distribution of the possible benefits and costs; the consequences of intellectual property in the field; whether there are particular pathways of greatest social benefit; and sources of uncertainty in assessing the technology. These processes must also consider the potential benefits of innovation.

Generally speaking, there has been a shift from more expert-based forms of assessment to more participatory models (see below). In the United States, technology assessment, born out of controversies around technologies like nuclear energy, initially focused rather narrowly on the provision of objective, probabilistic knowledge about future trajectories of emerging technologies. Over time, there has been increased recognition that framing assumptions (e.g. problem definitions, scope and methodologies) shape the conclusions of technology assessment (Ely, van Zwanenberg and Stirling, 2011). In particular, an overemphasis on technical consequences can overshadow important issues associated with the social, ethical and political impacts of technologies. For these reasons, countries began to shift to more inclusive, open and deliberative forms of technology assessment.

Some mechanisms of technology assessment involve formal public procedures that feed directly into innovation policy and governance decisions, particularly through the use of expert advisory bodies. One approach to technology assessment is the use of scientific academies or regulatory authorities to assess the most technical aspects of emerging technologies. Another is the establishment of public advisory bodies. Examples of these approaches include the Danish Board of Technology Foundation, the Nuffield Council on Bioethics in the United Kingdom, and presidential bioethics committees in the United States. Such groups might be charged with writing reports on particular technologies that gather evidence through research and public testimony and can inform public reasoning. Public surveys and stakeholder interviews on emerging technologies might also be employed to assess technologies and gauge current opinion. Hearings which seek to collect input from various publics might also be used to inform regulatory agencies.

As mentioned above, recent efforts at technology assessment have taken a more participatory form. These approaches have variously been termed “constructive technology assessment” (Schot and Rip, 1996), “participatory technology assessment” (Guston and Sarewitz, 2002), and “real-time technology assessment”, among others. These approaches emphasise the value of engaging citizens and stakeholders alongside expert analysis for effective technology appraisal. One reason for this shift is that, given that technology assessment is inherently value-laden, citizens should have a voice in these processes. In addition, there is a growing recognition that non-experts and other stakeholders possess knowledge relevant to technology assessment that would otherwise be missed. Toxicological risks are a good example: the users of potentially toxic substances in their places of work are well positioned to provide knowledge of, for instance, how workers might become exposed in particular workplaces, given normal working habits. To give another obvious example, an assessment of the risks of pesticides would have to take into account the everyday practices of field workers, e.g. whether protective clothing is in fact routinely used.

More participatory modes of technology assessment recognise that the public is more likely to accept assessments of which they have been a part, and that the knowledge these assessments produce is likely to be more robust if diverse stakeholders are engaged. These approaches might include things like socio-technical mapping, which combines stakeholder analysis with plotting of recent technical innovations, early experimentation to identify and manage unanticipated impacts, greater dialogue between the public and innovators, public opinion polling, focus groups and scenario development, among others (Guston and Sarewitz, 2002).

Public engagement and public deliberation

In addition to formal technology assessment processes, engagement with stakeholders and publics more broadly on issues of science, technology, and innovation is increasingly recognised as an important feature of robust science and innovation policy. In their study of the acceptance of renewable energy technologies, Reith et al. (2013) identified three interventions that can enhance social acceptance of emerging technologies: greater information provided to the public (e.g. advertising, newspapers, websites, and excursions to sites), enhanced co-operation and participation (in decision processes and in financial arrangements), and public consultation and engagement (e.g. public meetings and dialogues). These approaches hold promise for the analysis and implementation of other emerging technologies (Reith et al., 2013).

Public engagements might be defined as “participatory processes through which members of diverse publics express their views, concerns, and recommendations about a techno-scientific issue. Such efforts frame publics not as passive recipients of expert knowledge, but as important actors shaping technologies and their trajectories” (Winickoff, Flegal and Asrat, 2015). Mechanisms of public engagement range from public consultation (e.g. surveys) to more dialogue-oriented public participation exercises (e.g. citizens’ consultations and participatory technology assessment). Public engagement can help steer science and innovation towards socially desirable objectives, build a more scientifically literate, supportive and engaged citizenry, and broaden the range of perspectives considered in the development and conduct of research.

Box 8.4. Scenario workshops for technology assessment

The Parliaments and Civil Society in Technology Assessment (PACITA) project: Scenario workshops in Norway, Denmark, Austria, Bulgaria, Catalonia (Spain), Wallonia (Belgium), the Czech Republic, Ireland and Hungary.

As articulated in the Lund Declaration (a European declaration concluding that European research should be issue-oriented, and focused on meeting society’s grand challenges), coping with ageing societies is considered a central challenge in Europe. In what is referred to as the “double demographic challenge”, the ageing population’s need for health care services is increasing as the size of the workforce declines. As a consequence, new technologies will be important for the provision of health care in the European Union.

To address the challenges and opportunities associated with a range of technologies (e.g. including smart houses, tracking devices and robotics), and to provide policy makers with a set of policy options, the European Union developed a project involving scenario workshops with stakeholders and publics, culminating in a policy report.

A stakeholder group was established with a variety of experts. Descriptions of technologies and an overview of potential future development were gathered, and a set of scenarios regarding the potential use of these technologies in an ageing society were produced. Scenario workshops were then organised in the countries listed above, aimed at assessing the similarities and differences across Europe with regard to the expectations and preferences on the technological challenges in question. Each workshop involved stakeholders, patients and users, technology developers and researchers, and decision makers at multiple levels. Feedback, general responses, issues and ideas were gathered during these deliberative workshops, recorded, and provided to policy makers.

The range of motivations for greater public engagement with S&T can usefully be considered in three categories: normative, instrumental, and substantive (Fiorino, 1990; Stirling, 2007). From a normative perspective, the argument is that the governance of science and innovation without meaningful participation from interested stakeholders is contrary to democratic ideals. Citizens should have a say in whether and how S&T affect their lives. The instrumentalist argument is concerned with public acceptance of S&T: engaging the public upfront on questions of controversial S&T policy may stave off public outcry, and enhance trust between scientists and lay publics. Finally, substantive arguments state that public engagement, and in particular the incorporation of non-expert views, can enhance the quality and relevance of the knowledge produced, as well as the utility of technologies.

“Public engagement” in innovation policy often encompasses a wide range of instruments. A typology of public engagement mechanisms derived from Rowe (2005), with examples, can be found in Table 8.1. One form of engagement might be considered “communication,” and encompasses instruments which convey information from policy makers (or other sponsors) to the public. In these efforts, information is unidirectional. Still, well-crafted communication can have significant implications for responsible innovation, in part because transparency can help foster public trust in science advice. Examples of different forms of relevant communication include, for example, making strategic research plans accessible to the public, either in hard copy or online, or “open science”, defined as “an approach to research based on greater access to public research data, enabled by ICT tools and platforms, and broader collaboration in science, including the participation of non-scientists, and finally, the use of alternative copyright tools for diffusing research results” (OECD, 2016b).

Table 8.1. Typology of public engagement mechanisms and some country examples

Key policy features | Key policy instruments | Some country examples
Communication | Online notice: publishing research plans/regulatory actions on a website accessible to the public | Lithuania's public e-platforms; Poland's Public Information Bulletin
Communication | Open science: open access to academic research | South Africa's Scientific Electronic Library Online; Turkey's Ankara Statement on Open Access and National Open Science Committee
Consultation | Public input on agenda setting: surveys, online feedback, bottom-up sourcing, etc. | Colombia's Ideas for Change Program; Turkey's Technology Roadmaps; Netherlands' National Research Agenda; Argentina's Argentina Innovadora 2020; The Great New Zealand Science Project
Participation | Anticipatory governance: foresight activities regarding technology assessment | Czech Republic's PACITA; Germany's BMBF Foresight Process
Participation | Dialogue for identifying research priorities: workshops with publics to identify key societal questions | Germany's dialogue on future technologies; Denmark's INNO+ Catalogue
Participation | Citizen science | Austrian Centre for Citizen Science

Another form of engagement is “public consultation”, in which policy makers (or other sponsors) initiate the collection of input from the public. Public consultation does not generally entail formal dialogue between publics and policy makers. Nevertheless, information elicited by policy makers from the public can help guide socially responsive innovation activities. Examples of public consultation include formal requests for public input regarding research priorities and the conduct of surveys on public views of S&T.

Unlike the aforementioned forms of engagement, “public participation” entails a formal dialogue between policy makers and publics. Of central importance in participatory exercises is the act of deliberation. Information is exchanged across experts and lay publics, which can facilitate mutual learning and even changes in opinions of both policy makers and public participants. One example of public participation includes participatory technology assessment methods.

The trend towards greater adoption of public engagement mechanisms in innovation policy suggests that they are perceived by countries as beneficial. But some challenges exist to their effective implementation. First, constructing representative publics through such exercises can prove challenging. Some public engagement processes are only viewed as legitimate for those publics directly engaged in them. This has been termed a “fundamental problem of scale” (Lövbrand et al., 2015; Stilgoe, Lock and Wilsdon, 2014) and points to the need to consider engagement exercises as only one element of more responsible innovation policy. Another challenge relates to making STI policy responsive to the outputs of public engagement efforts. There is some risk that weak public engagement does not facilitate true deliberation, and instead serves to legitimate existing policies. Furthermore, public engagement is most likely to be impactful when technologies are further “upstream,” or before they are locked in (Collingridge, 1980). This means that, while especially effective in cases of emerging technologies, public engagement can be more challenging for technologies that are already deeply entrenched.

Sweden’s nuclear waste programme provides a good example of a deliberative process that successfully bridged expert and lay divides to produce a societally acceptable decision on the future of a technology. In the 2000s, in response to social concerns about the siting of nuclear waste, Swedish officials conducted and presented a “safety case” as a primary tool in developing a public deliberation on the topic, and the process resulted in a publicly approved, licensed facility (Long and Scott, 2013; European Nuclear Society, 2009). The case materials conveyed technical arguments in lay language about why the proposed repository was thought to be safe. They clearly described what was thought to be the quality of the information used in the case. They also described plans for what would be done to improve understanding, the expected outcome of these efforts, and how previous efforts to improve understanding had performed. At follow-up stages, the results of recent experiments were compared with previously predicted results. Over time, the transparency of this process enabled all parties to develop an increasingly accurate understanding of the repository’s performance (Long and Scott, 2013).

Experience in the field of health innovation shows how patients, research participants, and lay publics – if consulted in the course of R&D – can foster innovation and steer innovation towards real needs. For example, in the arena of rare diseases, disease advocacy organisations have organised their own biobanks, recruited researchers to work on their diseases, co-invented tools for interventions, and served as key advisers in shaping the regimes of research ethics for clinical trials.

Integrating ethical, legal and social issues upstream

Potential social concerns and the issue of public acceptance should not be left to the very end of the technology development process. It is increasingly recognised that such issues should be considered throughout research funding decisions, the practice of science, technology development and commercialisation. How can this be done?

The first generation of approaches to integrating broader social concerns in the development and assessment of technology involved attention to ethical, legal, and social issues (ELSI). Since the Human Genome Project (HGP) in the early 1990s, science funders in many OECD countries have sought to implement ELSI. The planners of the HGP recognised that the information gained from mapping and sequencing the human genome would have profound implications for individuals, families and society, and so they allocated over 3% of the budget to research on the ethical, legal and social implications. In the realm of nanotechnology, 2.4% of the National Nanotechnology Initiative budget in the United States was dedicated to ELSI research, and in the Netherlands 25% of the national research programme on nanotechnology was dedicated to risk research and technology assessment (OECD, 2013). Since this pioneering approach, efforts have been made to mainstream social science and humanities work into funding streams, and this is taking root in many OECD countries.

New mechanisms seek to integrate social considerations not at the end of technology pipelines, but in the course of technology development, to support innovation rather than constrain it. Examples of such comprehensive approaches include the US National Nanotechnology Initiative and the Horizon 2020 programme at the European Commission (Box 8.5).

Box 8.5. Anticipatory governance

The US National Nanotechnology Initiative (NNI) began in 2003 and co-ordinates over USD 1 billion of research per year. The NNI emphasises the need for commercialisation for competitiveness on one hand, and the need to better understand societal impacts on the other. The NNI established two centres to investigate “nanotechnology in society.” These centres have developed an “anticipatory governance” approach that aims to build societal capacity to engage with innovations in nanotechnology. Anticipatory governance has at least three components and is intended to achieve:

  1. Consideration of human values in deliberations about technology, often through the direct engagement of stakeholders and the lay public. The Nanoscale Informal Science Education (NISE) Network in the United States has featured public engagement as a major theme.

  2. Scenario development and foresight to help develop understanding of the social dimensions of scientific and technical change. Illustrative programmes include the Scenarios of Converging Technologies programme at the University of Oxford, and the open-source scenario development initiative under the NanoFutures project at the Centres on Nanotechnology and Society.

  3. Integration of engagement and foresight with scientific and technical work to increase the ability of natural scientists to understand the societal aspects of their own work, and to inform the perspectives of social scientists on cutting-edge technology (Guston, 2008).

Growing out of the aforementioned efforts, from ELSI to technology assessment and public deliberation, RRI has gained traction in the EU policy context. RRI combines elements of upstream assessment, public engagement, open access, gender equality, science education, ethics and governance. It aims to open up issues related to S&T innovation, anticipate their consequences, and involve society in deliberating over how S&T can be responsive to societal goals and concerns. RRI, as a concept and set of tools, has evolved substantially since its introduction into EU policy discourse in 2011.

Box 8.6. Understandings of RRI

Definitions of RRI vary across government and academic communities, and treat RRI variously as an approach to governance, a policy framework, and a process. Salient definitions include:

  • An approach that anticipates and assesses potential implications and societal expectations with regard to research and innovation, with the aim to foster the design of inclusive and sustainable research and innovation. It implies that societal actors (e.g. researchers, citizens, policy makers, businesses and third sector organisations) work together during the whole research and innovation process in order to better align both the process and its outcomes with the values, needs and expectations of society (EC, 2017).

  • “A science policy framework that attempts to import broad social values into technological innovation processes while supporting institutional decision making under conditions of uncertainty and ambiguity. In this respect, RRI refocuses technological governance from standard debates on risks to discussions about the ethical stewardship of innovation” (Schroeder and Ladikas, 2015).

  • “A transparent, interactive process by which societal actors and innovators become mutually responsive to each other with a view to the (ethical) acceptability, sustainability and societal desirability of the innovation process and its marketable products (in order to allow a proper embedding of scientific and technological advances in our society)” (Schomberg, 2013).

One thrust of RRI is the desire “to connect the practice of research and innovation in the present to the futures that it promises and helps bring about” (Owen, Bessant and Heintz, 2013). “Prediction is impossible,” as one academic has stated, “but anticipation of possible, plural futures is vital” (Stilgoe, Bessant and Heintz, 2013).

RRI is not conceived as a means of imposing liability, accountability or stronger regulation, nor as another form of ethical review. Instead, stakeholders are encouraged to discuss collectively avenues for advancing societal goals through technology, considering the full range of moral, ethical, legal and social implications of research and innovation (Owen et al., 2013). As part of the European Union’s Horizon 2020 programme, RRI forms a key action of the European Commission’s “science with and for society” objective. Governments should ensure that policies, regulatory frameworks and funding initiatives embody the principles of RRI in order to deliver on the promise of smart, inclusive and sustainable solutions to the societal challenges set out in the Rome Declaration (EC, 2014b).

Conclusion

The current transformation of the production system will entail a large number of research and technological choices across value chains and sectors. Yet national investments and strategies can and will exert a profound influence on the direction of technological change. There are important technological precedents for policy makers, industry and society to consider in the context of public acceptance. The biotechnology case suggests that government efforts to address public concerns about next production revolution technologies could run into problems if they focus on immediate physical risks rather than longer-term social concerns. In the case of nanotechnology, science funders invested in social science and social outreach through the creation of Centres on Nanotechnology and Society, and little public resistance has developed. Big data and AI are areas in which societal dialogue has begun in earnest, but in which few institutionalised fora exist for communication and learning.

Social science literature on public acceptance carries a number of key points for policy makers:

  • Public understanding of science. While education and information are important for shaping and framing public discourse on technology, public attitudes depend heavily on social and political contexts, and cultures of trust between citizens, regulatory agencies and firms.

  • Trust. There is a close connection between public resistance to novel technologies and the disruption of trust in public regulatory authorities. The logics, value choices and uncertainties underlying analytic approaches such as risk-benefit analysis should be transparent. Hype around near- and long-term benefits can ultimately undermine trust in government, the private sector and scientific institutions.

  • Science advice. Trust begins with the trustworthiness of regulatory and expert advice bodies, which should be characterised by openness, integrity, transparency and accountability. There is no single, one-size-fits-all approach for achieving a robust and trustworthy system of technical advice and regulatory oversight. Ultimately, societies must draw on the best of their own institutional traditions for public reasoning on technical issues.

A number of mechanisms and good practices exist for building societal capacity to cope with, and engage well in, technological choices:

  • Anticipation. A reliable first step is to engage in anticipatory activities – such as foresight – to identify trends in innovative fields, imagine possible futures, and to co-ordinate social actors, as far as possible, towards a range of socially optimal outcomes. While foresight exercises cannot predict the future, they can help to systematically and transparently identify and assess a range of conditions shaping the future.

  • Participatory technology assessment. Different forms of participatory technology assessment are now carried out by a mix of actors, including national ethics committees and other government bodies tasked with taking a view of broader social effects, alongside health and safety risk assessment. Questions to consider should relate to: the distribution of the possible benefits and costs associated with a particular technology; the consequences of intellectual property in the field; whether there are particular pathways of greatest social benefit; and sources of uncertainty in assessing the technology. These processes must also consider the potential benefits of innovation.

  • Public engagement. Public engagement can help steer science and innovation towards socially desirable objectives, create a more scientifically literate, supportive and engaged citizenry, and broaden the range of perspectives considered in the development and conduct of research. Public engagement is most likely to be impactful when technologies are further “upstream”, before they become locked in, and good practices for such engagement are emerging.

  • Integrating ethical, legal and social issues in upstream R&D. It is important to integrate the consideration of such issues throughout research funding decisions, the practice of science, and technology development and commercialisation. Approaches such as “anticipatory governance” and “RRI” provide possible frameworks for doing so, but mechanisms require further development and experimentation.

References

Arnall, A.H. (2003), Future Technologies, Today’s Choices: Nanotechnology, Artificial Intelligence and Robotics: A Technical, Political and Institutional Map of Emerging Technologies, Greenpeace Environmental Trust, London.

Barben, D. et al. (2007), “Anticipatory governance of nanotechnology: Foresight, engagement, and integration”, The Handbook of Science and Technology Studies, E. Hacket et al. (eds.), MIT Press, Cambridge, MA.

Bauer, M. (2009), “The evolution of public understanding of science – Discourse and comparative evidence”, Science Technology & Society, Vol. 14, No. 2, pp. 221-40.

Bijker, W., T. Pinch and T. Hughes (eds.) (2012), The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology (Anniversary edition), MIT Press, Cambridge, MA.

Blackburn-Starza, A. (2007), “HFEA launches public consultation on ‘hybrid’ embryos”, BioNews, Vol. 405, 30 April, www.bionews.org.uk/page_13053.asp (accessed 17 January 2017).

Bradbury, M. (2016) “There’s an algorithm for that. Or there soon will be”, OECD Insights Blog, http://oecdinsights.org/2016/05/18/theres-an-algorithm-for-that-or-there-soon-will-be/ (accessed 17 January 2017).

British Science Association (2016), “One in three believe that the rise of artificial intelligence is a threat to humanity”, British Science Association News, www.britishscienceassociation.org/news/rise-of-artificial-intelligence-is-a-threat-to-humanity (accessed 17 January 2017).

Callaway, E. (2016), “UK scientists gain licence to edit genes in human embryos”, Nature, Vol. 530, No. 7588, p. 8.

Calo, R. (2014), “The case for a federal robotics commission”, Brookings Institute Project on Civilian Robotics, www.brookings.edu/research/the-case-for-a-federal-robotics-commission/ (accessed 17 January 2017).

Cellan-Jones, R. (2014), “Stephen Hawking warns artificial intelligence could end mankind”, BBC News, 2 December, www.bbc.com/news/technology-30290540 (accessed 17 January 2017).

Collingridge, D. (1980), The Social Control of Technology, Open University Press, Milton Keynes.

Currall, S.C. et al. (2006), “What drives public acceptance of nanotechnology?”, Nature Nanotechnology, Vol. 1, No. 3, pp. 153-55, https://doi.org/10.1038/nnano.2006.155.

Davis, F.R. (2014), Banned: A History of Pesticides and the Science of Toxicology, Yale University Press, New Haven, https://doi.org/10.1093/envhis/emv178.

Durant, J. (1999), “Participatory technology assessment and the democratic model of the public understanding of science”, Science and Public Policy, Vol. 26, No. 5, pp. 313-19, https://doi.org/10.3152/147154399781782329.

Ely, A., P. van Zwanenberg and A. Stirling (2011), “New models of technology assessment for development”, http://steps-centre.org/publication/new-models-of-technology-assessment-for-development/.

Epstein, S. (1996), Impure Science: AIDS, Activism, and the Politics of Knowledge, University of California Press, Berkeley.

ETC (Erosion, Technology and Concentration) Group (2003), “The big down: From genomes to atoms – Atomtech: Technologies converging at the nano-scale”, www.etcgroup.org/sites/www.etcgroup.org/files/thebigdown.pdf.

EC (European Commission) (2017), “Horizon 2020” website, https://ec.europa.eu/programmes/horizon2020/en/h2020-section/responsible-research-innovation.

EC (2014a), “Special Eurobarometer 419: Public perceptions of science, research and innovation”, European Commission, Brussels, http://ec.europa.eu/public_opinion/archives/ebs/ebs_419_en.pdf.

EC (2014b), Rome Declaration, European Commission, Brussels, https://ec.europa.eu/research/swafs/pdf/rome_declaration_RRI_final_21_November.pdf.

EC (2013), “Options for strengthening responsible research and innovation”, European Commission, Brussels, http://ec.europa.eu/research/science-society/document_library/pdf_06/options-for-strengthening_en.pdf.

European Nuclear Society (2009), ENS News, Issue 25, www.euronuclear.org/e-news/e-news-25/forsmark.htm.

Faden, R.R., T.L. Beauchamp and N.E. Kass (2014), “Informed Consent, Comparative Effectiveness and Learning Health Care”, New England Journal of Medicine, Vol. 370, February 20, pp. 766-768, https://doi.org/10.1056/NEJMhle1313674.

Fiorino, D.J. (1990), “Citizen participation and environmental risk: A survey of institutional mechanisms”, Science, Technology & Human Values, Vol. 15, No. 2, pp. 226-243.

G7 (2016), “Communiqué”, Science and Technology Ministers’ Meeting in Tsukuba, Ibaraki, Japan, www.g8.utoronto.ca/science/2016-tsukuba.html.

Gaskell, G. et al. (1999), “Worlds apart? The reception of genetically modified foods in Europe and the US”, Science, Vol. 285, No. 5426, pp. 384-87, https://doi.org/10.1126/science.285.5426.384.

Gupta, N., A.R.H. Fischer and L.J. Frewer (2012), “Socio-psychological determinants of public acceptance of technologies: A review”, Public Understanding of Science, Vol. 21, No. 7, pp. 782-95, https://doi.org/10.1177/0963662510392485.

Guston, D.H. (2008), “Innovation policy: Not just a jumbo shrimp”, Nature, Vol. 454, No. 7207, pp. 940-41, https://doi.org/10.1038/454940a.

Guston, D.H. and D. Sarewitz (2002), “Real-time technology assessment”, Technology in Society, Vol. 24, No. 1, pp. 93-109.

HFEA (Human Fertilisation and Embryology Authority) (2007), “Hybrids and chimeras: A report on the findings of the consultation”, Human Fertilisation and Embryology Authority, www.hfea.gov.uk/docs/Hybrids_Report.pdf.

Jasanoff, S. (2005), Designs on Nature: Science and Democracy in Europe and the United States, Princeton University Press, Princeton, NJ.

Jasanoff, S. (1990), The Fifth Branch: Science Advisers as Policymakers, MIT Press, Cambridge, MA.

Joy, B. (2000), “Why the future doesn’t need us”, Wired Magazine, 1 April, www.wired.com/2000/04/joy-2/.

Kahan, D. (2015), “Climate science communication and the measurement problem”, Advances in Political Psychology, Vol. 36, pp. 1-43.

Kamenova, K. and T. Caulfield (2015), “Stem cell hype: Media portrayal of therapy translation”, Science Translational Medicine, Vol. 7, No. 278, 278ps4, https://doi.org/10.1126/scitranslmed.3010496.

Kirby, T. (2014), “Controversy surrounds England’s new NHS database”, The Lancet, Vol. 383, No. 9918, p. 681, https://doi.org/10.1016/S0140-6736(14)60230-0.

Long, J. and D. Scott (2013), “Vested Interests and Geoengineering Research”, Issues in Science and Technology, No. 29, http://issues.org/29-3/long-4/.

Lövbrand, E. et al. (2015), “Who speaks for the future of Earth?: How critical social science can extend the conversation on the anthropocene”, Global Environmental Change, Vol. 32, pp. 211-18.

MacDonald, C. (2004), “Nanotechnology, privacy and shifting social conventions”, Health Law Review Vol. 12, No. 3, pp. 37-40.

Marks, P. (2016), “AI needs oversight – Time to set standards for autonomous tech”, New Scientist, 21 July, www.newscientist.com/article/2098277-ai-needs-oversight-time-to-set-standards-for-autonomous-tech/.

@amcafee (2015), “@nathanielkoloc I think it’s way too early for explicit AI policy/regulation, but check out futureoflife.org/home @bfeld”, Twitter post, 17 May, https://twitter.com/amcafee/status/599937227834044416.

McNutt, M. (2015), “Breakthrough to genome editing”, Science, Vol. 350, No. 6267, pp. 1445-1445, https://doi.org/10.1126/science.aae0479.

Meridian Institute (2005), “Nanotechnology and the poor: Opportunities and risks”, www.merid.org/~/media/Files/Projects/nano-waterworkshop/NanoWaterPaperFinal.ashx.

OECD (Organisation for Economic Co-operation and Development) (2017), “Recommendation of the Council on Health Data Governance”, OECD, Paris, www.oecd.org/health/health-systems/Recommendation-of-OECD-Council-on-Health-Data-Governance-Booklet.pdf.

OECD (2016a), Nanomaterials in Waste Streams: Current Knowledge on Risks and Impacts, OECD Publishing, Paris, https://doi.org/10.1787/9789264249752-en.

OECD (2016b), Science, Technology and Innovation Outlook 2016, OECD Publishing, Paris, https://doi.org/10.1787/sti_in_outlook-2016-en.

OECD (2015), “Scientific advice for policy making: The role and responsibility of expert bodies and individual scientists”, OECD Science, Technology and Industry Policy Papers, No. 21, OECD Publishing, Paris, https://doi.org/10.1787/5js33l1jcpwb-en.

OECD (2013), “Responsible development of nanotechnology: Results from a survey activity”, OECD, Paris, www.oecd.org/officialdocuments/publicdisplaydocumentpdf/?cote=dsti/stp/nano(2013)9/final&doclanguage=en.

Owen, R., J. Bessant and M. Heintz (eds.) (2013), Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, John Wiley & Sons, Chichester, https://doi.org/10.1002/9781118551424.

Owen, R., P. Macnaghten and J. Stilgoe (2012), “Responsible research and innovation: From science in society to science for society, with society”, Science and Public Policy, Vol. 39, No. 6, pp. 751-60, https://doi.org/10.1093/scipol/scs093.

Packer, J. (2008), Mobility without Mayhem: Safety, Cars, and Citizenship, Duke University Press, Durham, NC.

Pfotenhauer, S.M. et al. (2012), “Learning from Fukushima”, Issues in Science and Technology, Vol. 28, No. 3, pp. 79-84.

Pidgeon, N.F., R.E. Kasperson and P. Slovic (eds.) (2003), The Social Amplification of Risk, Cambridge University Press, Cambridge.

Pollack, M.A. and G.C. Shaffer (2009), When Cooperation Fails: The International Law and Politics of Genetically Modified Foods, Oxford University Press, Oxford.

Rayner, S. (2004), “The novelty trap: Why does institutional learning about new technologies seem so difficult?”, Industry and Higher Education, Vol. 18, No. 6, pp. 349-355.

Reardon, S. (2015), “Global summit reveals divergent views on human gene editing”, Nature, Vol. 528, No. 7581, p. 173, https://doi.org/10.1038/528173a.

Reith, S. et al. (2013), “Public acceptance of geothermal electricity production”, GEOELEC, Deliverable No. 4.4, www.geoelec.eu/wp-content/uploads/2014/03/D-4.4-GEOELEC-report-on-public-acceptance.pdf.

Rodricks, J.V. (2006), Calculated Risks: The Toxicity and Human Health Risks of Chemicals in Our Environment, 2nd ed., Cambridge University Press, Cambridge.

Rowe, G. (2005), “A typology of public engagement mechanisms”, Science, Technology & Human Values, Vol. 30, No. 2, pp. 251-90, https://doi.org/10.1177/0162243904271724.

Sarewitz, D. (2015), “CRISPR: Science can’t solve it”, Nature, Vol. 522, No. 7557, www.nature.com/news/crispr-science-can-t-solve-it-1.17806.

Schomberg, R. (2013), “A vision of responsible research and innovation”, in R. Owen, J. Bessant and M. Heintz (eds.), Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, John Wiley & Sons, Chichester, https://doi.org/10.1002/9781118551424.

Schot, J. and A. Rip (1996), “The past and future of constructive technology assessment”, Technological Forecasting and Social Change, Vol. 54, pp. 251-68.

Schroeder, D. and M. Ladikas (2015), “Towards principled Responsible Research and Innovation: Employing the difference principle in funding decisions”, Journal of Responsible Innovation, Vol. 2, No. 2, pp. 169-183, https://doi.org/10.1080/23299460.2015.1057798.

Slovic, P. (1987), “Perception of Risk”, Science, Vol. 236, No. 4799, pp. 280-85, https://doi.org/10.1126/science.3563507.

Smith, A. and J. Anderson (2014), “AI, robotics, and the future of jobs”, Pew Research Center, Internet, Science & Tech, www.pewinternet.org/2014/08/06/future-of-jobs/.

Stilgoe, J., S.J. Lock and J. Wilsdon (2014), “Why should we promote public engagement with science?”, Public Understanding of Science, Vol. 23, No. 1, pp. 4-15.

Stirling, A. (2007), “‘Opening up’ and ‘closing down’: Power, participation, and pluralism in the social appraisal of technology”, Science, Technology & Human Values, Vol. 33, No. 2, pp. 262-94, https://doi.org/10.1177/0162243907311265.

Tenner, E. (2001), “Unintended consequences and nanotechnology”, in Social Implications of Nanoscience and Nanotechnology, pp. 241-45, National Science Foundation, Arlington, VA.

The Economist (2016), “Britain’s manufacturing sector is changing beyond all recognition”, The Economist, 5 November, www.economist.com/node/21709597/print.

United States (2016), “Preparing for the future of artificial intelligence”, US Executive Office of the President, https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf.

US EPA (United States Environmental Protection Agency) (2016), “Research on evaluating nanomaterials for chemical safety”, US Environmental Protection Agency, www.epa.gov/chemical-research/research-evaluating-nanomaterials-chemical-safety (accessed 4 February 2016).

Winickoff, D.E. et al. (2005), “Adjudicating the GM food wars: Science, risk, and democracy in world trade law”, Yale Journal of International Law, Vol. 30, No. 1, pp. 81-123.

Winickoff, D.E., J.A. Flegal and A. Asrat (2015), “Engaging the global south on climate engineering research”, Nature Climate Change, Vol. 5, No. 7, pp. 627-34, https://doi.org/10.1038/nclimate2632.

Winner, L. (1977), Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought, MIT Press, Cambridge, MA.

Wüstenhagen, R., M. Wolsink and M.J. Bürer (2007), “Social acceptance of renewable energy innovation: An introduction to the concept”, Energy Policy, Vol. 35, No. 5, pp. 2683-91, https://doi.org/10.1016/j.enpol.2006.12.001.