Chapter 7. Technology outlook

The technology ecosystem that drives digital transformation is composed of many core technologies and is continuously evolving. This chapter explores the characteristics, opportunities and challenges raised by two of the currently most promising technological developments: machines performing human-like cognitive functions, also known as artificial intelligence, and blockchain, a distributed and tamper-proof database technology.


Introduction

Reflecting on the past 30 or 40 years of information and communication technology (ICT) innovation, each decade has seen a new form of technological revolution: “personal computers” in the 1980s, the Internet in the 1990s, mobile computing and smartphones in the 2000s, and the Internet of Things (IoT) in the current decade. Basic computing and networking technologies continue to improve over time through, for example, the continued miniaturisation of devices, increased processing power and storage capacity at declining cost, and the availability of higher speeds on fixed and wireless networks.

However, future potential economic and social benefits increasingly depend on more recent technologies that, in turn, rely on these existing and more mature fundamental building blocks, including the IoT, cloud computing, big data analytics, artificial intelligence (AI) and blockchain. This set of technologies forms an ecosystem in which each technology both exploits and fosters the development of the others. Cloud computing is based on always-on, ubiquitously available, high-speed Internet connectivity and is essential to big data analytics, which relies on cheap and massive processing power and storage capacity. Big data also critically depends on sophisticated algorithms that, in turn, form the basis of AI. To comprehend their – virtual or physical – environment and take appropriate decisions, machines such as robots and drones rely on AI that often uses big data to identify patterns. The characteristics of each of these technologies create a specific set of opportunities and challenges and, as such, can be considered separately. However, it is increasingly necessary to also analyse them within the broader context of the digital ecosystem without which they could not thrive, and to which they contribute.

This chapter explores the characteristics of, opportunities offered by and challenges raised by two of the currently most promising technological developments: machines performing human-like cognitive functions, also known as AI; and blockchain, a distributed and tamper-proof database technology that can be used to store any type of data, including financial transactions, and has the ability to create trust in an untrustworthy environment. The key findings from this chapter are:

  • AI is going mainstream, driven by machine learning, big data and cloud computing, which empower algorithms to identify increasingly complex patterns in large data sets and, in some cases, to outperform humans in certain cognitive functions. Beyond improving efficiency and resource allocation, and thus driving productivity gains, AI also promises to help address complex challenges in many areas such as health, transport and security.

  • Blockchain does not need any central authority or intermediary operator to function, as illustrated by Bitcoin, a virtual currency and one of the first successful blockchain applications, which operates independently of any central bank. Beyond Bitcoin, blockchain applications provide many opportunities, including in the financial sector, the public sector, education and the IoT, notably by reducing market friction and transaction costs, by facilitating transparency and accountability, and by enabling guaranteed execution through smart contracts.

This chapter also discusses policy challenges that could be amplified by the proliferation of AI and blockchain, as well as new challenges that the use of these technologies may bring. Policy makers need to be aware of AI’s potential impacts, for example on the future of work and skill development, and of its potential implications for transparency and oversight, responsibility, liability, and safety and security. Challenges raised by some blockchain applications include, for example, the difficulty of shutting down a blockchain application whose network is transnational, or the challenge of enforcing the law in the absence of a central intermediary, which also raises the important question of how – and to whom – to impute legal liability for torts caused by blockchain-based systems.

Artificial intelligence

This section first describes the distinctive characteristics of AI and how over the past few years it has become mainstream, rapidly permeating and transforming our economies and societies. Many are surprised by the speed of AI’s diffusion compared to that of other technological developments, but views vary widely on both the likelihood and time horizon of developments such as artificial general intelligence (AGI) or the technological singularity.

The potential benefits and opportunities offered by AI are introduced in the following subsection, along with examples of applications in various areas. AI promises to generate productivity gains, improve the efficiency of decision making and lower costs, since it allows data processing at enormous scales and accelerates the discovery of patterns. By helping scientists to spot complex cause and effect relationships, AI is expected to contribute to solving complex global challenges, such as those related to the environment, transportation or health. AI could radically enhance quality of life, impacting healthcare, transportation, education, security, justice, agriculture, retail commerce, finance, insurance and banking, among others. Indeed, AI could find valuable application wherever intelligence must be deployed.

The final subsection introduces some of the main policy questions that AI raises. AI is expected to replace and/or augment components of human labour in both skilled and unskilled jobs, requiring policies to facilitate professional transitions and to help workers develop the skills to benefit from, and to complement, AI. AI could also impact economic concentration and income distribution. Another issue is that of ensuring transparency and oversight of AI-powered decisions that impact people and of preventing algorithmic biases and discrimination and privacy abuses. AI also raises new liability, responsibility, security and safety questions.

Artificial intelligence is going mainstream, driven by recent advances in machine learning

Artificial intelligence is about machines performing human-like cognitive functions

There is no universally accepted definition of AI. AI pioneer Marvin Minsky defined AI as “the science of making machines do things that would require intelligence if done by men”. The present volume uses the definition provided by Nils J. Nilsson (2010): “Artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment”. Machines understanding human speech, competing in strategic game systems, driving cars autonomously or interpreting complex data are currently considered to be AI applications. Intelligence in that sense intersects with autonomy and adaptability through AI’s ability to learn from a dynamic environment.

It is important to note that the boundaries of AI are not always clear, and evolve over time. For example, in some cases techniques developed by AI researchers to analyse large volumes of data are identified as “big data” algorithms and systems (The White House, 2016a). Optical character recognition, for example, has become a widespread technology and is no longer considered to be AI. A core objective of AI research and applications over the years has been to automate or replicate intelligent behaviour.

Machine learning, big data and cloud computing have enabled artificial intelligence’s recently accelerated progress

Despite fluctuations in public awareness, AI has made significant progress since its inception in the 1950s. The principle was conceptualised by John McCarthy, Allen Newell, Arthur Samuel, Herbert Simon and Marvin Minsky in the Dartmouth Summer Research Project, the summer 1956 workshop that many consider to be the start of AI. While AI research has steadily progressed over the past 60 years, the promises of early AI promoters proved to be overly optimistic, leading to an “AI Winter” of reduced funding and interest in AI research during the 1970s. More recently, the availability of scalable supercomputing capabilities on the cloud, together with the growing flows and stocks of data produced by connected humans and machines, has enabled breakthroughs in an AI technology called “machine learning” (Chen et al., 2012), dramatically increasing the power, availability, growth and impact of AI. In 2016, an AI programme won the game of Go against one of the world’s best players – a feat that experts thought would take at least ten more years to accomplish.

Machine-learning algorithms can identify complex patterns in large data sets

With machine learning, algorithms identify complex patterns in large data sets. For example, Google’s AI learns how to translate content into different languages based on translated documents that are available online, and Facebook learns how to identify people in images based on its existing large database of known users. In particular, progress in deep learning and reinforcement learning, both branches of machine learning, has led to impressive results since 2011-12.

The efficiency of AI systems also relies on the use of specific microprocessors, often in the cloud. The learning phase of deep neural networks relies on “graphics processing unit” (GPU) processors, such as those made by Nvidia, that were initially designed for video games. For the response (inference) phase, large AI companies often develop dedicated processors, such as Google’s “tensor processing unit” or Intel’s Altera “field-programmable gate arrays”.
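To make this division of labour concrete, the short Python sketch below runs one training step of a toy neural network on a GPU when one is available, falling back to the CPU otherwise. It is a minimal illustration only; the use of PyTorch, the toy architecture and the random stand-in data are assumptions made here for the example, not details from this chapter.

    import torch
    import torch.nn as nn

    # Run the compute-heavy learning phase on a GPU if one is available.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # A toy classifier standing in for a deep neural network.
    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # One illustrative training step on random stand-in data.
    inputs = torch.randn(32, 784, device=device)
    labels = torch.randint(0, 10, (32,), device=device)
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()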

While artificial intelligence is about cognitive functions, robotics is generally concerned with motor functions

AI is mostly intangible in its manifestations. Robotics, which operates at the intersection between mechanical engineering, electrical engineering and computer sciences, is mostly physical in its manifestations. In an “autonomous machine”, AI can be characterised as the intelligence or cognitive functions, while robotics refers to the motor functions. However, the distinction between cognitive and motor functions is porous and evolving since mobility requires the ability to sense and analyse the environment. For example, machine-learning AI plays a key role in computer vision. Nonetheless, the physical nature of robotics differentiates it from AI and has industrial consequences for autonomous machines: developing complex motor functions is typically more difficult, expensive and time-consuming than developing complex cognitive functions. Popular examples of the convergence between AI and robotics are self-driving cars and humanoid robots. It is important to highlight that autonomous machines combining advanced AI and robotics techniques still struggle to reproduce many basic non-cognitive motor functions (Box 7.1).

Box 7.1. “Supervised” and “unsupervised” machine-learning algorithms

Machine-learning technology powers web searches, content filtering on social networks and recommendations on e-commerce websites, and is increasingly present in consumer products such as cameras and smartphones. Machine-learning systems are used to identify objects in images; transcribe speech into text; match news items, posts or products with users’ interests; and select relevant search results.

Unsupervised learning presents a learning algorithm with an unlabelled set of data – that is, with no predetermined “right” or “wrong” answers – and asks it to find structure in the data, perhaps by clustering elements together, for example examining a batch of photographs of faces and learning to identify how many different people they show. Google News uses this technique to group similar news stories together, as do researchers in genomics looking for differences in the degree to which a gene might be expressed in a given population, or marketers segmenting a target audience.

Supervised learning involves using a labelled data set to train a model, which can then be used to classify or sort a new, unseen set of data (for example, learning how to spot a particular person in a batch of photographs). This is useful for identifying elements in data (perhaps key phrases or physical attributes), predicting likely outcomes, or spotting anomalies and outliers. Essentially this approach presents the computer with a set of “right answers” and asks it to find more of the same. Deep learning is a form of supervised learning.

Source: UK Government Office for Science (2016), “Artificial intelligence: Opportunities and implications for the future of decision-making”, https://www.gov.uk/government/publications/artificial-intelligence-an-overview-for-policy-makers.
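The distinction drawn in Box 7.1 can be illustrated with a short Python sketch. It is illustrative only: the use of scikit-learn and the two-dimensional toy data standing in for photographs are assumptions made for the example.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Two loose groups of points standing in for, e.g., photos of two people.
    group_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
    group_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(50, 2))
    data = np.vstack([group_a, group_b])

    # Unsupervised learning: no labels are given; the algorithm finds
    # structure on its own by clustering similar elements together.
    clusters = KMeans(n_clusters=2, n_init=10).fit_predict(data)

    # Supervised learning: a labelled training set (the "right answers")
    # trains a model that can then classify new, unseen examples.
    labels = np.array([0] * 50 + [1] * 50)
    model = LogisticRegression().fit(data, labels)
    prediction = model.predict([[2.8, 3.1]])  # classify a new, unseen point

Run on the same data, the unsupervised algorithm recovers the two groups without ever seeing a label, while the supervised model needs the labels but can then classify new points.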

Artificial intelligence outperforms humans in certain complex cognitive functions but still requires huge data sets

Neuroscience is important for understanding both the state of AI today and its future possibilities. The renaissance of AI since about 2011 is largely attributed to the success of the branch of machine learning called “deep artificial neural networks”, also known as deep learning, supported by another branch of AI known as “reinforcement learning”. Both deep learning and reinforcement learning claim to loosely emulate the neuronal layers that the brain uses to process information and learn through pattern recognition, although machine learning currently operates mostly in the realm of statistics. More meaningful convergence between AI and neuroscience is expected in the future as understanding of the human brain improves and technologies converge (OECD, forthcoming).

AI algorithms are able to perform complex computations on large data sets in parallel and are therefore faster than biological human intelligence. Beyond computationally intensive tasks, AI increasingly outperforms humans in certain complex cognitive functions such as image recognition in radiology (Wang et al., 2016; Lake et al., 2016).

Today’s narrow artificial intelligence focuses on specific tasks, while a hypothetical future artificial general intelligence could carry out general intelligent action, like humans

Existing artificial narrow intelligence (ANI) or “applied” AI is designed to accomplish a specific problem-solving or reasoning task. This is the current state of the art. The most advanced AI systems available today, such as IBM Watson or Google’s AlphaGo, are still “narrow”. While they can generalise pattern recognition to some extent, for example by transferring knowledge learnt in the area of image recognition into speech recognition, the human mind is far more versatile.

Applied AI is often contrasted to a (hypothetical) AGI, in which autonomous machines would become capable of general intelligent action, like a human being, including generalising and abstracting learning across different cognitive functions. AGI would have a strong associative memory and be capable of judgment and decision making, multifaceted problem solving, learning through reading or experience, creating concepts, perceiving the world and itself, inventing and being creative, reacting to the unexpected in complex environments, and anticipating.

With respect to a potential AGI, views vary widely and experts caution that discussions should be realistic in terms of time scales. Projections from the few computer scientists active in AGI research on the time frame for the realisation of AGI range from a decade to a century or more (Goertzel and Pennachin, 2006). Some highlight that AI, like biological intelligence, is necessarily constrained by what computer scientists term combinatorics – the inconceivably vast number of things an intelligent system might think or do (OECD, 2016). In addition, because AI is an artefact, AI systems are constructed using architectures that limit AI to the knowledge and potential actions that make sense for a given application. The convergence of machine learning and neurosciences over the next decades is expected to have a significant impact.

Experts broadly agree that ANI will generate significant new opportunities, risks and challenges. They also agree that the possible advent of an AGI, perhaps sometime during the 21st century, would greatly amplify these consequences.

“Technological singularity” is a speculative future “super” artificial intelligence scenario

The term “technological singularity” refers to a speculative but consequential long-term scenario popularised by Ray Kurzweil, an inventor and futurist who is now Director of Engineering at Google. In this scenario, the emergence of an AGI would lead to an “intelligence explosion” and, within a few decades or less, to an artificial super intelligence (ASI). Such an ASI would be exponentially self-improving and could reportedly threaten mankind.

Both the AGI and ASI scenarios are excluded from the following discussion. The term “artificial intelligence” is used to refer to machine-learning algorithms that are associated with sensors and other computer programmes to sense, comprehend and act on the world; learn from experience; and adapt over time. Computer vision and audio processing algorithms, for example, actively perceive the world around them by acquiring and processing images, sounds and speech, and are typically used for applications like facial and speech recognition. A typical application of natural language processing and inference engines is language translation. AI systems can also carry out cognitive actions like taking decisions, for example to accept or to reject an application for credit, or undertake actions in the physical world, for example assisted braking in a car.

Successful artificial intelligence platforms leverage vast amounts of data

Digital giants as well as start-ups are active in AI. Multinationals are reorienting their business models towards data and predictive analytics to improve productivity through the use of AI, particularly in the People’s Republic of China (hereafter “China”), France, Israel, Japan, Korea, the Russian Federation, the United Kingdom, and the United States. The marketplace for AI is dominated by a dozen multinationals from the United States, known collectively as GAFAMI – for “Google, Apple, Facebook, Amazon, Microsoft and IBM” – and from China, known as BATX – for “Baidu, Alibaba, Tencent and Xiaomi” (OECD, 2017). Commercialising AI technology via “software-as-a-service” business models seems to be popular, as done for example by Google and IBM, who provide access to centrally hosted AI on a subscription basis.

In the global competition between these platforms, a key success factor is the amount of data that firms have access to. Machine-learning algorithms currently require vast amounts of data to recognise patterns efficiently. For example, image recognition requires millions of images of a particular animal or car. Data generated by users, consumers and businesses help to train AI systems. Facebook relies on the nearly 10 billion images published daily by its users to continuously improve its visual recognition algorithms. Similarly, Google DeepMind uses user-uploaded YouTube video clips to train its AI software to recognise video images.

The start-up landscape is also vibrant. Research from CB Insights (2017) reported that funding raised by AI start-ups increased from USD 589 million in 2012 to over USD 5 billion in 2016. In 2016, nearly 62% of the deals went to start-ups from the United States, down from 79% just four years before. Start-ups from the United Kingdom, Israel, and India followed. By 2020, the “AI market” is projected to be worth up to USD 70 billion.

Artificial intelligence promises to improve efficiency and productivity and to help address complex challenges

Artificial intelligence can improve efficiency, save costs and enable better resource allocation

AI is expected to dramatically improve the efficiency of decision making, save costs and enable better resource allocation in basically every sector of the economy by enabling the detection of patterns in enormous volumes of data. Algorithms mining data on the operations of complex systems enable optimisation in sectors as diverse as energy, agriculture, finance, transport, healthcare, construction, defence or retail. AI enables public or private actors to optimise the use of production factors – land/environment, labour, capital or information – and to optimise the consumption of resources such as energy or water. Using its AI algorithms, Google was able to reduce the energy consumption of its data centres in ways that human intuition and engineering had not envisaged (Evans and Gao, 2016). In a two-year experiment, Google’s DeepMind artificial neural network analysed over 120 parameters in a data centre and identified a more efficient and adaptive overall method of cooling and power usage, enabling the company to reduce the energy consumption of already energy-efficient data centres by a further 15% (Evans and Gao, 2016). DeepMind foresees applications to improve the efficiency of power plant conversion or to reduce the amount of energy and water needed for semiconductors.

AI decreases the cost of making predictions in tasks such as assessing risk profiles, managing inventory and forecasting demand. AI-assisted predictions in banking and insurance, preventive patient healthcare, maintenance, logistics, or meteorology are increasingly accessible and accurate. Firms like Ocado and Amazon use AI to optimise their storage and distribution networks, plan the most efficient routes for delivery and make the best use of their warehouses. In the healthcare sector, data from smartphones and fitness trackers can be analysed to improve the management of chronic conditions and predict and prevent acute episodes. IBM Watson is looking into using automated speech analysis tools on mobile devices to detect the development of diseases such as Huntington’s, Alzheimer’s or Parkinson’s earlier.

Artificial intelligence can help identify suspicious activity, people or information

Machine learning is being used to detect criminal and fraudulent behaviour and ensure compliance in innovative ways. In fact, fraud detection was one of the first uses of AI in banking. Account activity patterns are monitored and anomalies trigger a review, with advances in machine learning now starting to enable near real-time monitoring. Banks are paying attention: in 2016, Credit Suisse Group AG launched an AI joint venture with a Silicon Valley surveillance and security firm whose solutions help banks to detect unauthorised trading (Voegeli, 2016).
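As a simplified illustration of this kind of monitoring, the Python sketch below trains an anomaly detector on a stand-in history of account activity and flags an unusual transaction for review. The use of scikit-learn’s IsolationForest and the invented activity data are assumptions made for the example; production systems use far richer features and near real-time pipelines.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Stand-in account history: [amount, hour of day] for past transactions.
    normal_activity = np.column_stack([
        rng.normal(50, 15, size=500),   # typical amounts around 50
        rng.normal(14, 3, size=500),    # typically daytime transactions
    ])

    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

    # New transactions: one ordinary, one unusually large at an unusual hour.
    new_transactions = np.array([[55.0, 15.0], [4800.0, 3.0]])
    flags = detector.predict(new_transactions)  # -1 marks an anomaly for review
    print(flags)                                # e.g. [ 1 -1 ]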

AI technologies are also increasingly being used in counter-terrorism and police activities. The US Intelligence Advanced Research Projects Activity is working on several programmes to process large volumes of multi-dimensional footage captured by “media in the wild” and identify individuals. Its programmes use AI to move beyond largely two-dimensional image-matching methods, and even to identify individuals in, and automatically geolocate, suspicious untagged videos published online.

The veracity of news and the detection of “fake news” is another area where AI can help, by analysing large volumes of data across trillions of user-provided posts. Social networking giant Facebook is reportedly training a system to identify fake news based on the types of articles that users have flagged as misinformation in the past.

Artificial intelligence is expected to generate a new wave of productivity gains

AI is expected to contribute to productivity gains across domains, both through the automation of activities previously carried out by people and through machine autonomy, whereby systems are able to operate and adapt to changing circumstances with reduced or no human control (OECD, 2017). The best-known example of machine autonomy is that of driverless cars, but other applications include automated financial trading, automated content curation systems, or systems that can identify and fix security vulnerabilities.

Productivity gains could take place in areas ranging from factories to service centres and offices, as AI enables complex cognitive and physical tasks to be automated. AI can automate and prioritise routine administrative and operational tasks through trained conversational robot software (“bots”). Google’s Smart Reply software proposes draft responses based on previous responses to similar messages. Newsrooms increasingly use machine learning to produce reports and to draft articles. These applications keep a human in the final approval process and hence increase the productivity of that individual. Robots using lasers, 3D depth sensors and advanced computer-vision deep neural networks can now work safely alongside warehouse and factory workers. AI can also improve productivity by reducing the cost of searching large data sets. In the legal sector, companies such as ROSS, Lex Machina, H5 or CaseText rely on natural language processing AI to search through legal documents for case-relevant information, reviewing thousands of documents in days rather than months.

Several market research firms have recently attempted to project AI’s impact on economic growth and productivity. Purdy and Daugherty (2016) analysed 12 developed economies and claimed that AI could double these countries’ annual growth rates and increase the productivity of labour by up to 40% by 2035. The McKinsey Global Institute estimated that automation through both AI and robotics could raise global productivity by 0.8% to 1.4% annually.

Artificial intelligence promises to help people address complex challenges in areas like health, transport and security

Artificial intelligence helps detect health conditions early, deliver preventive services and discover new treatments

Advances in AI in healthcare are expected to improve the treatment of human diseases, both by helping to detect conditions early and – in combination with rapidly increasing flows of available medical data – by enabling precision and preventive medical treatments. AI helps detect medical conditions early notably through the use of image recognition on radiography, ultrasonography, computed tomography and magnetic resonance imaging. IBM Watson and doctors from the University of Tokyo were able to diagnose a rare form of leukaemia in a Japanese patient that doctors had not detected. In breast cancer detection radiology, deep-learning algorithms combined with inputs from human pathologists lowered the error rate to 0.5%, an 85% reduction in error compared to the rates achieved by human pathologists alone (3.5%) or machines alone (7.5%) (Nikkei, 2015).

Advances in machine learning are also expected to facilitate drug invention and discovery through the mining of data and research publications. Personalised healthcare services and life coaches on smartphones are already beginning to understand and integrate various personal health data sets. In the area of elderly care, natural language processing AI applications and visual and hearing assistance devices, such as exoskeletons or intelligent walkers, are expected to play an increasing role.

Artificial intelligence-powered autonomous driving and optimised traffic routes facilitate transportation and save lives

AI is already impacting transportation significantly with the introduction of itinerary mapping based on traffic data and autonomous driving capabilities. Advances in deep neural networks are one of the main drivers behind the impressive progress achieved in autonomous vehicles over the past decade, particularly thanks to computer vision. In combination with many other types of algorithms, deep neural networks are able to make the most out of complex sensors used for navigation and learn how to drive in complex environments. Benefits include fewer road accidents and enabling people to use commuting time for productive activity, leisure or rest. While the shape and timeline of the restructuring of the car industry is still unclear, many believe that connected and autonomous vehicles could help avoid many of the 1.3 million deaths per year on roads globally. Disrupted by the arrival of new actors such as Google, Baidu, Tesla or Uber, traditional automobile actors such as Ford Motors or Honda are now investing in promising AI start-ups, forging alliances or developing in-house capabilities.

Artificial intelligence helps identify and combat both cybersecurity threats and real-world security threats

AI is effective against cyberattacks and identity theft through the analysis of trends and anomalies. It is used as a defence against hackers as well as in proactive, just-in-time responses to hacking attempts. The Defense Advanced Research Projects Agency (DARPA) Cyber Grand Challenge competition in August 2016, with on-the-fly attacks and defence using AI cyber reasoning systems, was an important milestone that, according to DARPA, validated the concept of automated cyber defence. AI has a wide range of security applications beyond cybersecurity. AI is used as a powerful identification method in policing (for example with facial recognition that harnesses large networks of surveillance cameras) and increasingly to predict where and when crime will happen. University-based research start-ups have also used AI to detect lying in written text, with potential applications, among others, to enhance online child safety (Dutton, 2011). For emergency and disaster management, AI applications can optimise planning and resource deployment by aid agencies, international organisations and non-governmental organisations.

The rise of artificial intelligence amplifies existing policy challenges and raises new ones

While policy makers are starting to focus on artificial intelligence, more awareness of its potential impacts is needed

Increasingly, countries are developing national AI strategies or including AI as a significant part of wider national digital agendas. China, France, Germany, Japan, Korea, the United Kingdom and the United States have developed or are developing AI-related plans and strategies that intersect with robotics and other complementary sectors. Overall, however, the likely impact of AI in the years ahead is only beginning to be explored by policy makers and by the public at large, and the speed at which AI is permeating our economies and society may sometimes be underestimated.

At the G7 ICT Ministers’ Meeting in Takamatsu, Japan in 2016, participating countries agreed on a proposal made by Minister Takaichi of the Japanese Ministry of Internal Affairs and Communications to convene stakeholders to consider the social, economic, ethical and legal issues of AI and formulate principles for AI development (Box 7.2).

Box 7.2. Expert discussions on artificial intelligence networking in Japan

Throughout the first half of 2016, the Japanese Ministry of Internal Affairs and Communications convened discussions with experts in science and technology, the humanities, and social sciences on issues associated with the development of “artificial intelligence networking”, i.e. of interconnected artificial intelligence (AI) systems that co-operate with each other.

These discussions advocated the notion of a “Wisdom Network Society”, a human-centric society built through AI in which humans could create, distribute and connect data, information and knowledge freely and safely. The wisdom networks would harmoniously combine humans and AI via AI networking and allow complex challenges to be addressed. The expert discussions focused on the social and economic impacts and challenges of AI networking in 16 different areas, looking ahead to the 2040s.

Since October 2016, the ministry has been co-ordinating expert discussions in Japan to consider guiding principles for AI research and development (R&D) and to discuss detailed impacts and risks of AI. The ministry is now actively encouraging international co-operation on AI, with the involvement of all stakeholders.

In the AI R&D context, the ministry has identified the importance of considering: 1) transparency, i.e. the ability to explain and verify the operation of AI networks; 2) user assistance, i.e. ensuring that AI networks assist users and provide users with appropriate opportunities to make choices; 3) controllability by humans, i.e. enabling people to control the safe use of AI, to take over control from AI smoothly if needed, particularly in case of an emergency, and to determine how much AI is used in decisions or actions; 4) security, i.e. ensuring the robustness and dependability of AI networks; 5) safety, i.e. ensuring that AI networks do not cause danger to the lives/bodies of users or third parties; 6) privacy, i.e. not infringing on the privacy of users or third parties; 7) ethics, i.e. ensuring the respect of human dignity and personal autonomy; 8) accountability; and 9) interoperability or linkage, i.e. ensuring interoperability between AIs or AI networking.

Based on these discussions, the Japanese government is considering whether guidelines concerning AI usage and applications are also needed.

Source: OECD (2016), “Summary of the CDEP Technology Foresight Forum: Economic and Social Implications of Artificial Intelligence”, http://oe.cd/ai2016.

In addition, the Japanese Cabinet Office Council for Science, Technology and Innovation helped to co-ordinate a human-centred “Society 5.0” strategy, released in March 2017, designed to help Japan benefit from the opportunities AI creates while minimising risks and setting the limits of automated decision making.

As a result of an inter-agency initiative in the United States, a public report was published in 2016 on AI (“Preparing for the future of artificial intelligence”), which was accompanied by a “National Artificial Intelligence Research and Development Strategic Plan”. These documents detail steps that the US federal government could take to use AI to advance social good and improve government operations; adapt regulations in a way that encourages innovation while protecting the public; ensure that applications of AI, including those that are not regulated, are fair, safe and governable; develop a skilled and diverse AI workforce; and address the use of AI in weapons.

In May 2016, the Chinese government unveiled a three-year national AI plan formulated jointly by the National Development and Reform Commission, the Ministry of Science and Technology, the Ministry of Industry and Information Technology, and the Cyberspace Administration of China. The government envisions creating a USD 15 billion market by 2018 by investing in research and supporting the development of the Chinese AI industry. In 2016, China surpassed the United States in terms of the number of papers published annually on “deep learning”, reflecting the increasing research priority that AI has become for China.

Several partnerships and initiatives are being formed to promote ethical AI and to try to prevent adverse effects of AI. For example, the non-profit AI research company OpenAI was founded in late 2015 and now employs 60 full-time researchers with the mission to “build safe AGI, and ensure AGI’s benefits are as widely and evenly distributed as possible”. In April 2016, the IEEE Standards Association launched its “Global Initiative for Ethical Considerations in the Design of Autonomous Systems” to bring together multiple voices in the AI and autonomous systems communities to “make sure that [AI and autonomous systems] technologies are aligned to humans in terms of our moral values and ethical principles”. In September 2016, Amazon, DeepMind/Google, Facebook, IBM and Microsoft launched the “Partnership on Artificial Intelligence to Benefit People and Society” to advance public understanding of AI technologies and formulate best practices on their challenges and opportunities.

Artificial intelligence will change the future of work, replacing and/or augmenting human labour in expert and high-wage occupations

A widely discussed set of policy challenges is the impact of AI on jobs. AI is expected to greatly exacerbate the displacement trends caused by automation discussed in Chapter 5, as AI-enabled machines augment or replace humans in numerous occupations across domains and value chains. Whether this raises incomes and generates new types of jobs to replace those that are automated, or leads to unemployment, is uncertain and results of different studies on the overall impacts of job automation conducted over the past five years differ in their assessment and projections (Arntz, Gregory and Zierahn, 2016; Frey and Osborne, 2013; Citibank, 2016).

AI’s impact will also depend on the speed of the development and diffusion of AI technologies in different sectors over the coming decades. According to the International Transport Forum (ITF), for example, driverless trucks could be a regular presence on many roads within the next ten years, leading to large-scale job displacement of truck drivers if driverless trucks are deployed quickly. Driverless trucks could improve road safety, lower emissions and reduce operating costs for road freight in the order of 30%, notably due to savings in labour costs that currently account for 35% to 45% of costs and to more intensive use of vehicle fleets (ITF, 2017). The White House estimated in 2016 that 2.2 million to 3.1 million existing part-time and full-time jobs in the United States may be threatened by automated vehicles over the next two decades (The White House, 2016b).

The jobs that are potentially at risk are not only those in low-skill occupations or in manufacturing. Rather, many jobs involving medium or higher level cognitive skills are also potentially at risk. Early research suggests AI could impact employment that relies on general cognitive skills such as literacy and numeracy, which are a primary focus of development during compulsory education (Elliot, 2014). Machine-learning technologies in particular seem to have the potential to affect highly educated professions (Box 5.1). For example, image processing and pattern recognition algorithms are reportedly beginning to impact radiologists: as described earlier, pattern recognition applications are increasingly able to detect health conditions by identifying anomalies on radiography, ultrasonography or magnetic resonance imaging. Machine-learning applications in the areas of speech recognition, natural language processing and machine translation are expected to impact demand for services such as translation, legal services and accounting services.

Policy discussions underway to address AI’s impact on jobs include the merits of adapting tax policies to rebalance the shift from labour to capital and protect vulnerable people from socio-economic exclusion (with some even proposing to tax robots); adapting social security and redistributive mechanisms; developing education and skill systems that facilitate repeated and viable professional transitions; and considering how to ensure fair access to credit, healthcare or retirement benefits to a more mobile and less secure workforce.

Developing the skills to benefit from, and to complement, artificial intelligence

Paradoxically, AI and other digital technologies also enable innovative and personalised approaches to job-search and hiring processes and enhance the efficiency of matching labour supply and demand. The LinkedIn platform, for example, uses AI to help recruiters find the right candidates and to connect candidates to the right jobs, based on data about the profile and activity of the platform’s 470 million registered users (Wong, 2017). AI-based tools can also support skills development and retraining through AI-based personalised tutoring tools that provide quality education at scale.

AI could be expected, as with ICTs more generally (Chapter 4), to enhance the need for new skills along three lines: 1) specialist skills, to programme and develop AI applications, e.g. through AI-related fundamental research, engineering and applications, as well as data science and computational thinking; 2) generic skills, to be able to leverage AI; and 3) complementary skills, to enable, for example, critical thinking; creativity, innovation and entrepreneurship; and the development of human skills such as empathy.

Artificial intelligence’s business dynamics pose new questions

The anticipated business dynamics of AI pose questions of wealth and power distribution, as well as of competition and barriers to entry. The rapid evolution of AI technology could challenge existing competition policies and raises questions about the potential impacts of AI on income distribution and about who will control AI technology. On the economic side, a few technology companies with access to large amounts of data and funding could end up controlling AI technology, enjoying access to its super-human intelligence and gathering most of the benefits it yields. AI may also imply that companies will rely less on their human workforce in the future.

As with some other digital and data markets, the AI market may exhibit “winner-takes-most” characteristics because of network effects and scale effects. With the disruptive business models of highly innovative digital multinationals unfolding transnationally, the accumulation of wealth and power by a limited number of private AI actors could cause tensions within and between countries. Some stakeholders highlight the risk of digital giants acquiring start-ups before they can become potential competitors, and the consequent risk of resource concentration in the field of AI.

Ensuring transparency and oversight of artificial intelligence-powered decisions that impact people

Another set of AI-related policy questions relates to the governance of AI systems. What oversight and accountability mechanisms do machine-learning algorithms require and what balance is needed between productivity and access on the one hand, and values such as justice, fairness and accountability on the other? The question is already manifest in critical areas such as determining priorities in line of care at hospitals, automatic vehicles’ emergency response procedures, citizen risk-profiling in criminal justice procedures, preventive policing, and access to credit and insurance.

The challenge of governing the use of AI algorithms is compounded, in the case of advanced machine-learning techniques, by the fact that tracing and understanding the decision-making mechanisms of AI algorithms becomes increasingly difficult as their complexity increases, even for those who design and train them (OECD, 2016). Researchers have started working on potential solutions but results are still immature and uncertain. It should be noted that Articles 13-15 of the European Union’s (EU’s) new General Data Protection Regulation mandate that data subjects receive meaningful information about the logic involved, the significance and the envisaged consequences of automated decision-making systems. The regulation also includes, in Article 22, the “right not to be subject to automated decision making”. The protections actually afforded to data subjects under the regulation and its implications for AI researchers and practitioners are still under discussion (Wachter, Mittelstadt and Floridi, 2016).

Developing and implementing algorithmic accountability solutions at scale is expected to be complex and costly, raising the question of who should bear the costs. If actors pursue a lower cost solution, abuses could arise. Policy makers will have to work closely with AI researchers and engineers to develop mechanisms that balance competing needs for transparency and legitimate commercial confidentiality. In some cases, technical designs and business models may be well aligned with socially established value hierarchies, and technical standardisation agencies and independent authorities can play a key role. The Institute of Electrical and Electronics Engineers has launched the Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. The goal of the initiative is to make sure that technology and technologists work to advance humanity under principled disciplines. The initiative harnesses the institute’s experience in complex standardisation processes and the strength and inclusion potential of its transnational reach, with a global community of over 400 000 practitioners and experts in 160 countries.

Because machine learning requires vast amounts of data, the governance of AI intersects with the regulation of data collection, storage, processing, ownership and monetisation. Enabling the potential of AI for growth, development and public good will require agreeing on technical standards and governance mechanisms that maximise the free flow of data and promote investments in data-intensive services (OECD, 2015). The difficulty of governing data is compounded by uncertainty over how present and future AI technologies can help create, analyse and use data in radically new ways not previously imagined by consumers, firms and governments.

Applications such as facial recognition and personalised services offer convenience and improved safety, but may raise risks to civil liberties if people are monitored and inferences made by machines are not transparent, or if individuals cannot access their underlying personal information.

Preventing algorithmic biases and discrimination

Concerns that machine-learning algorithms could amplify social biases and cause discrimination have increased as algorithms leveraging big data become more complex, autonomous and powerful. AI learns from data, but if the data are incomplete or biased, AI can exacerbate biases. The case of “Tay”, the teenage AI conversational bot developed by Microsoft, illustrated these risks: the bot was released on Twitter in March 2016 as an experiment to improve the company’s understanding of the language used by 18-24 year-olds online. Within hours, the bot had to be shut down as it had started to use racial slurs, defend white supremacist propaganda and support genocide. Another widely cited illustration of the risk of AI discrimination is the racial bias found in some “risk prediction” tools used by judges in criminal sentencing and bail hearings. Some have questioned the fairness and efficacy of predictive policing tools, credit scoring tools and hiring tools, raising questions about how to ensure that algorithms protect diversity and fairness.
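A minimal Python sketch can illustrate how bias in training data propagates into model decisions. The data below are invented for the example: historical approval decisions disadvantage one group at equal qualification levels, and a model trained on those decisions reproduces the disadvantage.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 1000
    qualification = rng.normal(size=n)   # a legitimate predictor
    group = rng.integers(0, 2, size=n)   # a protected attribute
    # Biased historical outcomes: group 1 was approved less often
    # at equal qualification levels.
    approved = (qualification - 0.8 * group + rng.normal(scale=0.3, size=n)) > 0

    model = LogisticRegression().fit(np.column_stack([qualification, group]), approved)
    # The trained model reproduces the historical bias: at the same
    # qualification, group 1 receives a lower predicted approval probability.
    print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])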

Responsibility and liability, security and safety

AI-driven automated decision making raises questions of responsibility and liability, for example when accidents involve autonomous cars. The authored, machine nature of AI makes it difficult to treat AI as a legal person that could be held responsible for its decisions. As with human drivers, insurance is widely viewed as a way forward to deal with uncertain, probabilistic risks. New safety and security risks are also emerging: for example, malware could subvert an AI network system or an autonomous weapon.

Blockchain

Blockchain is a distributed and tamper-proof database technology that can be used to store any type of data, including financial transactions, and has the ability to create trust in an untrustworthy environment.

This section first describes the distinctive characteristics of blockchain technology and how it contributes to the establishment of a trusted technical environment for “trustless” economic and social interactions. Taking Bitcoin as a starting point – the first and most widely deployed blockchain network in the financial context – it looks at the technical features of existing blockchains, as well as their limitations. The main benefits and opportunities offered by this new technology are introduced in the following subsection, together with examples of applications in various areas. Finally, the section concludes with a description of the policy challenges raised by blockchain technology, including how, if not appropriately regulated, blockchain usage could escape the purview of the law.

Transactions enabled by blockchain technology can be carried out without any trusted party

A blockchain is a tamper-proof distributed database that is capable of storing any type of data, including financial transactions. Because of its distinctive characteristics (described below), a blockchain can be regarded as a source of “trustless trust” (Werbach, 2016): trust is shifted away from the centralised intermediaries towards the developers of the underlying technical infrastructure, which enables trusted transactions between nodes that are not necessarily trustworthy. Nodes in a blockchain network co-ordinate themselves through a specific protocol that stipulates the rules by which data can be recorded into the distributed database. In most cases, blockchains are implemented in such a way that there is no single party capable of controlling the underlying infrastructure or undermining the system (Brakeville and Perepa, 2016).

Traditional databases are maintained by centralised operators, responsible for hosting the data on their own servers or in data centres. In contrast, a blockchain relies on a distributed peer-to-peer (P2P) infrastructure network for the storage and management of data and on a distributed network of peers to maintain and secure a distributed ledger. The distributed character of a blockchain raises new legal and policy challenges. Indeed, in the absence of a centralised operator in charge of managing the network, it is difficult for regulators or other governmental authorities to influence the operations of many of these blockchain networks.

Compared to traditional databases, blockchains exhibit several unique characteristics that make them particularly suitable for registering records and transferring value in contexts where people cannot or do not want to rely on a trusted third party:

  • A blockchain is highly resilient and operates independently of any central authority or intermediary operator. As such, blockchains are characterised by a strong degree of disintermediation.

  • A blockchain is an append-only database, which is also tamper-resistant. It relies on cryptographic primitives and game theoretical incentives to ensure that, once data have been recorded on the decentralised database, they cannot be subsequently deleted or modified by any single party.

  • Data recorded on a blockchain are signed by the originating party and stored in chronological order into a new block of transactions, which are securely time-stamped by the underlying network.

In addition, some blockchains also come with the capability to execute software logic in a decentralised manner. Because there is no central operator responsible for running the code, such blockchain-based applications are guaranteed to execute in a strict and deterministic manner, providing users with a significant level of security assurance.

Bitcoin

Bitcoin is one of the first financial applications of blockchain technology. Bitcoin is a virtual currency (or “cryptocurrency”) and decentralised payment system that operates independently of any central bank. Launched in 2009 by a pseudonymous entity called Satoshi Nakamoto, the Bitcoin blockchain relies on a set of pre-existing technologies that, combined, allow for the establishment of a decentralised and largely incorruptible database on which to record the history of all transactions performed on the network.

In just a few years, the Bitcoin network has experienced significant adoption. The network has grown from processing fewer than 100 transactions per day in 2009 to over 250 000 confirmed transactions daily in Q1 2017 (Figure 7.1). Despite its volatility, the price of Bitcoin has also grown significantly, from a fraction of a US dollar in 2009 to over USD 1 200 in March 2017.

Figure 7.1. Confirmed Bitcoin transactions per day (moving averages)

Source: Blockchain.info, https://blockchain.info/charts/n-transactions?timespan=all (accessed 24 April 2017).
StatLink: https://doi.org/10.1787/888933586749

At its core, Bitcoin is a decentralised database replicated across a P2P network (Nakamoto, 2008). A P2P network is a collection of computers (or nodes) that work together to achieve a common goal – be it the swapping of files, as in the case of BitTorrent, or anonymous communications, as in the case of The Onion Router (Tor). In contrast to traditional client-server infrastructures, these networks are not managed by any centralised operator; they are operated by a distributed network of peers that interact and co-ordinate themselves via a common computer protocol.

In the case of Bitcoin, nodes are responsible for maintaining and updating the state of the blockchain database according to a particular protocol known as Proof of Work. This protocol is designed to help nodes reach consensus as to the state of the blockchain at periodic intervals, while simultaneously protecting the decentralised database from malicious actors who may seek to manipulate the data or inject fraudulent information.

These nodes voluntarily lend their processing power to the P2P network in order to validate transactions and ensure compliance with the underlying protocol. Valid transactions are stored into a block of transactions, which is appended, in chronological order, to the previous chain of blocks – hence the name “blockchain”.

The Bitcoin blockchain relies on public-private key cryptography to ensure that only authorised transactions will go through. Each Bitcoin account is identified by a given address (or public key), which is uniquely and mathematically associated with a particular password (or private key). In order to be regarded as valid, every Bitcoin transaction needs to be signed by the private key of the account holder. The system will assess the legitimacy of the transaction by checking that it is algorithmically correct (i.e. that there are sufficient funds in the account to execute the transaction) and that the funds have not been spent more than once (i.e. that the transaction passes the “double-spending” test).
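The signing and verification step can be sketched in a few lines of Python using the third-party ecdsa package and the secp256k1 curve used by Bitcoin. This is a conceptual illustration only; real Bitcoin transactions involve additional serialisation, address derivation and script rules not shown here.

    from ecdsa import SigningKey, SECP256k1

    private_key = SigningKey.generate(curve=SECP256k1)  # the account "password"
    public_key = private_key.get_verifying_key()        # basis of the address

    transaction = b"pay 0.5 BTC from A to B"
    signature = private_key.sign(transaction)

    # Any node can verify the signature with the public key alone, proving
    # that the transaction was authorised by the private key's holder.
    assert public_key.verify(signature, transaction)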

The “double-spending” problem is a common issue in the context of decentralised virtual currencies. Indeed, in the absence of a centralised clearing house, it is possible for malicious parties to try to spend the same unit of virtual currency twice, by submitting two different and conflicting transactions at the same time, hoping that the network will not synchronise fast enough to detect the conflict. The problem has traditionally been resolved through the introduction of a centralised middleman in charge of clearing transactions.
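Conceptually, what a clearing house (or, in Bitcoin’s case, the consensus-agreed ledger) enforces can be sketched as a simple record of spent funds against which every new transaction is checked. The sketch below is illustrative only; the string identifiers stand in for transaction outputs.

    # Once a unit of currency has been spent, any later transaction that
    # tries to spend it again is rejected.
    spent_outputs: set[str] = set()

    def validate(transaction_input: str) -> bool:
        """Accept a transaction only if its input has not been spent before."""
        if transaction_input in spent_outputs:
            return False          # conflicting spend of the same funds
        spent_outputs.add(transaction_input)
        return True

    print(validate("coin-42"))    # True: first spend is accepted
    print(validate("coin-42"))    # False: second, conflicting spend is rejected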

Bitcoin introduces a novel solution to the double-spending problem through the Proof of Work protocol. Before a particular block of transactions can be recorded to the Bitcoin blockchain, the network nodes, generally referred to as miners, must first find the solution to a mathematical problem that is inherently related to that block. The problem, based on the SHA-256 hash function, is computationally difficult to solve but easy to verify once a solution is found (Bonneau et al., 2015). Once found, the solution is publicly broadcast to the whole network so that the other network participants can verify that it is correct. Only then will that particular block of transactions become an integral part of the Bitcoin blockchain.

The Bitcoin protocol adjusts the difficulty of this mathematical problem depending on the amount of computational resources (i.e. hashing power) currently invested in the network: the greater the resources available to the network, the harder the problem becomes, so as to ensure that a new block of transactions is added, on average, every ten minutes. The Bitcoin network creates incentives for miners to do the heavy computational lifting of the proof of work by rewarding the first miner to solve each block’s mathematical problem with a particular amount of bitcoins, plus the right to collect all transaction fees associated with that block of transactions. The Bitcoin system is designed so that no more than 21 million bitcoins can ever be created. Therefore, as time progresses, block rewards will eventually cease to compensate miners for the work required by the proof of work. However, it is anticipated that, given the number of Bitcoin users by that point, transaction fees alone will be sufficient to incentivise this effort.
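A toy version of this mechanism can be written in a few lines of Python using the standard hashlib library: a nonce is incremented until the SHA-256 digest of the block data falls below a target, and the target shrinks as the difficulty rises. The difficulty value and block contents below are invented for the illustration; real Bitcoin mining operates at vastly higher difficulty.

    import hashlib

    def proof_of_work(block_data: bytes, difficulty_bits: int = 16) -> int:
        """Find a nonce whose hash with the block data falls below a target."""
        target = 2 ** (256 - difficulty_bits)  # smaller target = harder problem
        nonce = 0
        while True:
            digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce  # costly to find ...
            nonce += 1

    nonce = proof_of_work(b"block of transactions")
    # ... but trivial for any other node to verify once broadcast:
    digest = hashlib.sha256(b"block of transactions" + nonce.to_bytes(8, "big")).digest()
    assert int.from_bytes(digest, "big") < 2 ** (256 - 16)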

Unlike traditional databases, a blockchain is append-only: data can be added to it but, once recorded, cannot be unilaterally deleted or modified by anyone (Narayanan et al., 2016). In the case of Bitcoin, the information recorded on the blockchain can only be altered if one or more parties were to capture more than half of the overall computational power invested in the network – the so-called 51% attack. Given the current size of the Bitcoin network, such an attack, albeit possible,5 would be extremely difficult and costly to achieve.

The Bitcoin blockchain can therefore be regarded as a certified and chronological log of transactions, whose authenticity and integrity are ensured by cryptographic primitives. Because every transaction must be digitally signed with the private key of the account holder, the blockchain represents verifiable proof that one party transferred a particular amount of bitcoins to another party at a particular point in time. And given that every block incorporates a reference (i.e. a cryptographic hash) to the previous block, any attempt at tampering with the data recorded in a block is immediately detected by the network: modifying any transaction changes the hash of the block that contains it, invalidating the reference stored in the following block and breaking the chain in a way that is visible to all other network participants.
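The sketch below illustrates this chaining property: each toy block stores the hash of its predecessor, so altering an old transaction breaks every subsequent reference. The block structure and data are illustrative only.

```python
# A minimal sketch of hash-chained blocks and tamper detection.
import hashlib

def block_hash(block: dict) -> str:
    return hashlib.sha256(
        (block["prev_hash"] + block["data"]).encode()
    ).hexdigest()

# Build a three-block chain; the genesis block has no predecessor.
chain = []
prev = "0" * 64
for data in ["tx: a->b 1.0", "tx: b->c 0.4", "tx: c->a 0.1"]:
    block = {"prev_hash": prev, "data": data}
    prev = block_hash(block)
    chain.append(block)

def is_valid(chain: list) -> bool:
    """Check every stored reference against the recomputed hash
    of the preceding block."""
    for prev_block, block in zip(chain, chain[1:]):
        if block["prev_hash"] != block_hash(prev_block):
            return False
    return True

print(is_valid(chain))               # True
chain[0]["data"] = "tx: a->b 99.0"   # tamper with an old transaction
print(is_valid(chain))               # False - the chain is broken
```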

Governance mechanisms

Different blockchains implement different governance mechanisms. As a general rule, all blockchains can be situated on a continuum ranging from entirely public and permissionless blockchains, such as Bitcoin, to fully private and permissioned blockchains. Public and permissionless blockchains do not implement any restrictions on who can read or write on the decentralised database. They are generally pseudonymous as the network nodes do not need to disclose their real-world identity. Most of the early blockchain-based networks that emerged after Bitcoin, including Litecoin, Namecoin, Peercoin and Ethereum, rely on a public blockchain.

By contrast, on the other end of the continuum, a private and permissioned blockchain includes a built-in access control mechanism that can limit the number of parties allowed to perform basic tasks on the blockchain. Private blockchains rely on closed and more carefully managed networks, access to which can be limited to pre-approved individuals, and permission to validate a transaction can be restricted to only some actors in the network.

For instance, permissioned blockchains such as Ripple and Corda (see below) have been developed with a focus on financial services. Instead of relying on an open network, only the parties of a consortium are entitled to participate in the consensus and execute transactions on these blockchains.

The decision whether to use a permissionless or a permissioned blockchain ultimately boils down to a question of trust, scalability and transparency. On the one hand, public and permissionless blockchains are more “trustless” because they distribute trust over a large number of individual nodes and rely on Proof of Work to ensure that it is computationally difficult, and expensive, for any of these nodes to manipulate the network. Yet, because of these design choices, public blockchains can be very expensive to maintain and have limited performance, and – despite their pseudonymity – the transparency inherent in these networks can impact the privacy of their users. On the other hand, private and permissioned blockchains are more scalable because, given that there is already some inherent trust among the actors, they can use computationally less expensive protocols to verify transactions. They also offer a more controlled environment, giving differentiated access to their actors and keeping some transactions private. For example, a consortium of banks can share one permissioned blockchain ecosystem without having to divulge all transactions within their own institutions to the other institutions in the consortium. Yet, private and permissioned blockchains require a higher degree of trust in the parties managing the network and, as a result, can be more easily manipulated if one of these parties is hacked or otherwise compromised.

Additionally, tools are currently being developed to enable different blockchains to interact with one another, in an interoperable way. For example, the company Blockstream is building tools for the Bitcoin blockchain to serve as a backbone for a variety of other, more specialised permissioned and permissionless blockchains.

Limitations of blockchain technology

Despite the resilience and tamper-resistance of blockchains, the consensus protocol adopted by many public and permissionless blockchains has inherent limitations. Proof of Work is grounded on the premise that no party controls more than 50% of the computational power invested in the network. Once that threshold is crossed, the controlling party can manipulate the network, creating conflicting records (see the discussion above on the “double-spending” problem) and preventing some transactions from being added to the database (Narayanan et al., 2016).

While the 51% attack is a problem common to all types of blockchain, it is all the more critical in the case of permissionless blockchains, because it is difficult to determine who effectively controls the hashing power invested in these networks. While the collusion of multiple nodes in a permissioned blockchain would be easily identifiable, and sanctionable, the takeover of a public blockchain by a group of unidentified individuals would be much harder to detect. And the vulnerability is real: in 2017, after eight years of operation, over 50% of the hashing power operating the Bitcoin network was controlled by five large mining pools (Blockchain, n.d. a), and in a few instances a single pool of Bitcoin miners controlled more than half of the network’s computational power.

Beyond these security issues, and because blockchains rely on public-private key cryptography, one of the major hindrances to the mainstream adoption of blockchain technology is the lack of a standard key management system, including recovery and revocation mechanisms. Without a proper recovery mechanism, the loss of a private key precludes the account holder from performing any operation on the account. Similarly, without a proper key revocation system, if a private key is compromised, anyone in possession of that key can execute unauthorised transactions on behalf of the account holder.

Another important limitation of blockchain technology is performance, which again is most critical for public and permissionless blockchains. Existing public blockchains can only handle a limited number of transactions. For instance, the Bitcoin network processes fewer than 300 000 transactions per day (Blockchain, n.d. b), compared to the roughly 150 million transactions processed by Visa every day. Bitcoin transactions are validated roughly every ten minutes (Blockchain, n.d. c), much longer than the time it normally takes for a database to store and record information.
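A back-of-the-envelope computation based on the figures cited above illustrates the size of the throughput gap; both daily volumes are approximate.

```python
# Throughput implied by the approximate daily volumes cited above.
seconds_per_day = 24 * 60 * 60

bitcoin_tps = 300_000 / seconds_per_day        # ~3.5 transactions/second
visa_tps = 150_000_000 / seconds_per_day       # ~1 736 transactions/second

print(f"Bitcoin: ~{bitcoin_tps:.1f} tx/s, Visa: ~{visa_tps:.0f} tx/s")
print(f"ratio: ~{visa_tps / bitcoin_tps:.0f}x")  # Visa handles ~500x more
```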

For blockchain technology to reach mainstream adoption, these systems will need to mature so as to handle a far greater volume of transactions. Yet, solving scalability issues will be no simple task. Because a blockchain is an append-only database, each new transaction causes the blockchain to grow. The larger the blockchain, the greater the requirements in terms of computational power, storage and bandwidth, all of which entail significant energy consumption. If these requirements become too onerous, fewer actors will contribute to supporting the network, increasing the likelihood that a few large mining pools will control it (James-Lubin, 2015). While there are already many proposals for bringing blockchains to scale, they are for the most part still experimental. They include, for example, the use of alternative consensus protocols such as proof of stake (Buterin, 2015; Iddo et al., 2014).6 International standardisation efforts, such as the establishment in 2016 of the International Organization for Standardization (ISO) Technical Committee 307 on “Blockchain and distributed ledger technologies”, can take the development of these technologies to the next stage, in particular by stimulating greater interoperability, speedier acceptance and enhanced innovation in their use and application.

Blockchain applications provide many new opportunities

Bitcoin was the first application to exploit the new opportunities provided by blockchain technology in the realm of finance, but the benefits of blockchain technology extend to many other types of applications, both within finance and beyond. These potential benefits are introduced below, together with examples of how the technology is currently being experimented with in various areas. It is worth noting that, given the recent history and current immaturity of blockchain technology, the examples listed below are, for the most part, pilots and proofs of concept conducted by early-stage businesses and start-ups.

Reduced market friction and transaction costs

Blockchain technology can reduce market friction and transaction costs in specific sectors of activity. While there are significant costs involved in maintaining a blockchain infrastructure, one of the greatest promises of blockchain technology is to increase the efficiency of existing information systems by eliminating paperwork and reducing the overhead costs stemming from interactions between multiple layers of intermediaries.

For instance, one sector that suffers from significant market friction and transaction costs is the remittance sector. Today, international remittances can take up to seven days to clear, with fees of up to 10% of the amount transferred. Blockchains can drive down the costs of remittances, giving people the ability to send money abroad quickly and cheaply through mobile devices. Launched in November 2013 in Nairobi, BitPesa was the first remittance company to use the Bitcoin blockchain for sending money in African countries. Since then, many other start-ups have been experimenting with the technology. Today, Abra appears to be the leader in the field. Launched in early 2017, the company is the only one addressing the “first and last mile” problem, i.e. how to exchange fiat money into Bitcoin and vice versa.

On a more general level, blockchains can act as a backbone for depository institutions to conduct inter-bank transfers and convert funds. For instance, in 2012, the company Ripple released the Ripple Transaction Protocol, giving banks the ability to convert funds into different currencies, in a matter of seconds and at little to no cost. The protocol creates a series of trades between foreign exchange traders who have agreed to participate on the Ripple network, calculating the fastest and most cost-effective way to convert funds from one currency to another, and then settling those trades instantaneously via a blockchain. The system has recently been adopted by Santander to set up a trial for international remittances and cross-border payments.

Blockchain technology can also contribute to reducing transaction costs, helping banks settle transactions more quickly and efficiently. Instead of each bank maintaining its own record of transactions, a blockchain-based system can update all records simultaneously, removing the need to reconcile transactions between different banks. This is what motivated the creation of the R3 consortium in 2014. With membership from over 70 banks and financial institutions, the consortium is currently geared towards the development of a distributed ledger technology, called Corda, designed to support and facilitate inter-bank transactions.

Blockchain technology also brings the potential to expedite the trading of securities, by combining clearing and settlement into one single operation. Experiments of this kind are already under way. For instance, in October 2015, Nasdaq partnered with Chain to explore the use of blockchain technology for the exchange of shares in private companies. A few months later, the publicly traded company Overstock, the first major online retailer to accept payments in Bitcoin, started offering its own stock on a blockchain-based trading platform (t0) specifically built for that purpose.

In the derivatives market, blockchains are ushering in a new era of financial engineering that could add security, efficiency and precision to risk management. With a blockchain, the terms of a derivative instrument can be encoded directly into software, so that they are processed and automatically executed by the underlying blockchain network. A successful trial was performed in 2016 by the Depository Trust & Clearing Corporation, together with Wall Street firms – Bank of America Merrill Lynch, Citi, Credit Suisse and JPMorgan – encoding the terms of credit default swaps into a blockchain-based system in order to manage all post-trade events. Shortly afterwards, in early 2017, the Depository Trust & Clearing Corporation announced its plan to move USD 11 trillion worth of credit derivatives to a blockchain infrastructure specifically built for that purpose. The goal is to improve the processing of derivatives through automated record-keeping and to reduce reconciliation costs.

Transparency and accountability

By providing a global, transparent and tamper-resistant database on which to record and time-stamp information, a blockchain can serve as a global registry of certified and authenticated records. Important data can be registered on a blockchain in such a way that it becomes available to all, and that it cannot be retroactively modified or repudiated by the party recording it.

In many cases, however, information needs to be kept private. Instead of storing data directly on a blockchain, the data can be hashed7 into a short string that acts as a unique identifier for the data at hand. This is useful to certify the source and integrity of specific records without disclosing any sensitive information to the public. Indeed, while no one can retrieve any information by simply looking at the hash, anyone in possession of the original data can verify that it has not been tampered with by comparing its hash with the one stored on the blockchain.
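The sketch below illustrates this pattern: only the hash of a document is recorded (here simply printed), and anyone holding the original can later check it against the recorded value. The document content is, of course, illustrative.

```python
# Certifying a private document via its hash: only the 64-character
# digest is published on-chain; the document never leaves its holder.
import hashlib

document = b"confidential contract between A and B, signed 2017-01-01"

# This value would be recorded and time-stamped on the blockchain.
recorded_hash = hashlib.sha256(document).hexdigest()

# Later, anyone holding the original can prove its integrity...
print(hashlib.sha256(document).hexdigest() == recorded_hash)   # True

# ...while even a one-character alteration is detectable.
altered = b"confidential contract between A and B, signed 2017-01-02"
print(hashlib.sha256(altered).hexdigest() == recorded_hash)    # False
```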

Various governments are exploring blockchains in the context of providing more transparent and reliable governmental records. For instance, in 2015, the government of Estonia announced a partnership with the start-up Bitnation to provide blockchain-based notarisation services to all its electronic residents. These include, for example, marriage records, birth certificates and business contracts. In 2016, the Estonian eHealth Authority partnered with the software security company Guardtime to set up a blockchain-based infrastructure to preserve the integrity and improve the auditability of health records and other sensitive data. In May 2016, Ghana announced a partnership with the Bitland organisation to implement a blockchain-based land registry intended to operate as a complement to the official governmental registry. In January 2017, the government of Georgia partnered with the company Bitfury to store real estate information in a blockchain-based system. In April 2017, the start-up Civic Ledger received funding from the Australian government to improve the transparency and reliability of water market information through the use of blockchain.

Opportunities have also arisen in the education and arts sectors. For instance, the MIT Digital Certificates Project, launched in October 2016, relies on the Bitcoin blockchain for the issuance of educational certificates or attestations indicating that a particular student has attended a class or passed an exam. A similar initiative has been undertaken by the French engineering school Léonard de Vinci, which partnered with the French Bitcoin start-up Paymium to certify diplomas on the Bitcoin blockchain. The company Verisart, founded in 2015, is using a blockchain to help artists and collectors generate certificates of authenticity for their works. When a work is sold, the sale is recorded on a blockchain so that others can verify the existence of a legitimate chain of custody. The goal is to create a global registry to facilitate the authentication and tracking of art worldwide.

Blockchain technology also provides new ways for companies to prove the source and authenticity of products. Various initiatives already exist to prevent the counterfeiting of luxury goods. For instance, the company Blockverify uses blockchain and distributed ledger technologies to offer supply chain transparency and anti-counterfeiting solutions with applications to pharmaceuticals, luxury items, diamonds and electronics. Similarly, since 2015, the company Everledger has been using a blockchain to assign unique identifiers to diamonds in order to track them as they are traded on the secondary market. The technology can also assist in the reduction of fraud, black markets and trafficking, particularly in regard to “blood” diamonds sourced from war zones.

The same principle applies to other types of goods. In the fair-trade market, the social enterprise Provenance, founded in 2013, relies on blockchain technology to prove the provenance of food products, along with all the steps they go through before reaching the consumer. Thus far, the company has run a successful pilot using blockchain technology and smart tagging to track the provenance of tuna in Indonesia, with verified social sustainability claims. Similar pilots have been carried out by other start-ups to track the delivery of products across oceans (TBSx3) or to help agricultural businesses better manage supply chains and ensure the provenance of the products they use (Agridigital).

Guaranteed execution through smart contracts

A blockchain can also store software programmes – commonly referred to as smart contracts (Szabo, 1997)8 – which are executed in a distributed manner by the miners of a blockchain-based network. Smart contracts differ from existing software programmes in that they run autonomously, i.e. independently of any centralised operator or trusted third party. Smart contracts are thus often described as self-executing, with a guarantee of execution (Buterin, 2013). They incorporate several computing steps as well as “if this, then that” conditions, whose execution can be verified by anyone on the blockchain network. Because they rely on a decentralised network that is not controlled by any single operator, smart contracts are guaranteed to run in a predefined and deterministic manner, free from any third-party intervention.

By far the most prominent platform for the deployment of smart contract code is Ethereum. Launched in August 2015, Ethereum is currently the second-largest blockchain network after Bitcoin, with a market capitalisation of over USD 4 billion and a daily trading volume of more than USD 100 million. The Ethereum blockchain implements a Turing-complete9 programming language, called Solidity, combined with a shared virtual machine, which has become the de facto standard for the development of a large variety of blockchain applications. Once deployed, the code of a smart contract is stored – in a pre-compiled form – on the Ethereum blockchain and is assigned an address. In order to interact with the smart contract, parties send a transaction to the relevant address, thereby triggering the execution of the underlying code. As such, Ethereum can be regarded as a global and distributed computing layer, which constitutes the backbone for decentralised systems and applications. While Ethereum was the first of its kind, similar functionalities have since been implemented in other blockchain-based platforms, such as Rootstock, Monax, Lisk and Tezos.

Smart contracts generally implement only basic functionalities, such as a conditional transaction performed according to a set of predefined conditions. They are often used to implement escrow systems that execute a transaction whenever a particular condition is met. For instance, an asset can be transferred to a programme that runs at specified times, validates the agreed conditions and decides whether the asset should be transferred to another party, refunded to the original owner, or split between the two. Smart contracts can also be used to automate recurrent payments. For example, a rental agreement can be executed through a smart contract whereby the renter and the owner agree to certain rules, including the rental amount, the day the keys will be transferred and the day the apartment will be vacated. By aggregating multiple smart contracts and having them interact with each other, it is possible to create complex systems providing more advanced functionalities.
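The following sketch expresses the logic of such an escrow contract in Python rather than in an actual smart contract language such as Solidity; the class, names and delivery signal are illustrative assumptions, and a real contract would read its condition and clock from the blockchain itself.

```python
# A sketch of typical escrow logic: funds are locked in the contract
# and released or refunded depending on a predefined condition and a
# deadline. All names are illustrative.
import time

class EscrowContract:
    def __init__(self, buyer: str, seller: str, amount: float, deadline: float):
        self.buyer, self.seller = buyer, seller
        self.amount = amount          # funds locked in the contract
        self.deadline = deadline      # Unix timestamp
        self.delivered = False

    def confirm_delivery(self) -> None:
        # In a real deployment this would be an on-chain signal,
        # e.g. the buyer's signed confirmation.
        self.delivered = True

    def settle(self, now: float = None):
        """'If this, then that': pay the seller on delivery,
        refund the buyer after the deadline."""
        now = now or time.time()
        if self.delivered:
            return (self.seller, self.amount)   # release funds
        if now > self.deadline:
            return (self.buyer, self.amount)    # refund
        return None                             # keep funds locked

escrow = EscrowContract("alice", "bob", 1.5, deadline=time.time() + 86_400)
escrow.confirm_delivery()
print(escrow.settle())    # ('bob', 1.5) - execution guaranteed by the network
```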

Attention should be drawn to the fact that no software is bug-free, and smart contracts are no exception. In fact, the guaranteed execution of smart contract code, combined with the interdependency of multiple smart contract transactions, can generate significant risk, especially when deployed in a context that lacks a formalised conflict resolution or arbitration system. Such risk was clearly illustrated by the TheDAO hack (Box 7.3), where a vulnerability in the code of a smart contract led to a potential loss of over USD 150 million.

Box 7.3. What decentralised applications exist today?

Thus far, although a large number of smart contracts have been deployed on a blockchain, there are only a few usable decentralised applications. Although most of them are still in an experimental phase, they clearly illustrate the potential of blockchain technology. For instance, Akasha and Steem.it are distributed social networks that operate without a central platform such as Facebook. Instead of relying on a centralised organisation to manage the network, these platforms are run in a decentralised manner by aggregating the contributions of a distributed network of peers, who co-ordinate themselves through a common set of rules encoded in a blockchain-based platform.

OpenBazaar is a decentralised marketplace, much like eBay, but that operates independently of any intermediary operator. The platform relies on blockchain technology to enable buyers and sellers to interact directly with one another without passing through any centralised middleman. Once a buyer requests a product from a seller, an escrow account is created on the Bitcoin blockchain to ensure that the funds are only released once the buyer has received the product.

A few decentralised carpooling platforms have also been launched, such as Lazooz or ArcadeCity. These platforms are not administered by any trusted third party, such as Uber; they are governed by the code deployed on a blockchain-based infrastructure, which manages peer-to-peer interactions between drivers and users.

Perhaps the most notorious example of a decentralised application was TheDAO, a blockchain-based investment fund deployed on the Ethereum blockchain in April 2016. TheDAO enabled people to invest money into the fund and vote on the proposals they wanted to fund. As such, it was described as the first decentralised organisation using blockchain technology to co-ordinate the activity of people who do not know, and therefore do not trust, each other. After just one month of operation, TheDAO had raised over USD 150 million worth of Ether (Ethereum’s native digital currency). Unfortunately, the experiment was short-lived. TheDAO was forced to shut down after an attacker exploited a vulnerability in the code, draining more than one-third of its funds. Given the size of the attack and its potential impact on the Ethereum ecosystem as a whole, the Ethereum community collectively intervened to revert the transaction and recover the funds that had been illegitimately taken by the attacker. This required a “hard fork” of the Ethereum network – a decision that was severely criticised by some members of the Ethereum community on the grounds that it violated the immutability guarantees of the Ethereum blockchain. The incident contributed to raising awareness of the responsibility issues inherent in fully decentralised applications.

The Internet of Things

The opportunities of blockchain technologies are not limited to the digital world; they also extend into the physical world, offering new capabilities to the objects surrounding us. With the advent of the IoT, we are witnessing the emergence of connected devices that can communicate with each other and interact with the people around them in order to better adapt to their needs. These devices incorporate the core characteristics of digital technologies: connectivity and programmability.

When these devices are connected to a blockchain, they assume additional functionalities, in that they can interact directly with one another – without passing through an intermediary operator – and exchange value in a decentralised manner.

For example, Samsung recently partnered with IBM to create a proof of concept of a blockchain-enabled IoT device: a washing machine capable of detecting when it is running out of detergent and of initiating a transaction with a smart contract on the retailer’s side to order and pay for new detergent (IBM, 2015). In addition to reducing transaction costs, the advantage of this model is that the consumer does not need to communicate any payment information to Samsung or any other trusted third-party operator; the consumer only needs to top up the device’s account whenever the money runs out.

This is, of course, a very simple example, but the model could be applied to many other types of connected devices. The integration of blockchain technology with the IoT makes it possible to activate or deactivate connected devices through a simple blockchain transaction. Just as a prepaid phone can only be used if the account has enough credit, one could imagine a prepaid car that only turns on if the driver has purchased a sufficient number of kilometres. Or even a rental car whose right of use is represented by a token on a blockchain and can thus be transferred, at any moment, with a simple blockchain transaction, without passing through any centralised operator.
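The prepaid-car idea can be sketched as follows, with a simple Python dictionary standing in for the on-chain token balance; the device identifier and credit model are purely illustrative.

```python
# A toy sketch of a blockchain-gated device: the car only activates
# if its (here simulated) on-chain credit covers the requested trip.
ledger = {"car-123": 40.0}    # kilometres of credit recorded on-chain

class PrepaidCar:
    """A connected car that runs only while its credit suffices."""
    def __init__(self, device_id: str):
        self.device_id = device_id

    def drive(self, km: float) -> None:
        balance = ledger.get(self.device_id, 0.0)
        if balance < km:
            raise RuntimeError("insufficient credit: purchase kilometres first")
        # The debit would be recorded as a simple blockchain transaction.
        ledger[self.device_id] = balance - km

car = PrepaidCar("car-123")
car.drive(35.0)
print(ledger["car-123"])      # 5.0 kilometres of credit left
try:
    car.drive(10.0)           # exceeds the remaining credit
except RuntimeError as err:
    print(err)
```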

Although these cases are currently only speculative, initiatives of this kind are already under way. For instance, since 2015 the German company Slock.it has been developing Internet-connected locks that can be controlled by smart contracts. The owner of these blockchain-enabled locks can set a price at which a third party may open the lock for a specific period of time. Once the amount is deposited, a smart contract grants the paying party permission to use the lock for the whole rental period. While the product is still at an early stage of development, the company envisions that its technology could be used to rent out bikes, storage lockers, homes and even automobiles. Another company, Filament, has been working since 2012 on the implementation of secure wireless networks of connected devices, and is currently focusing on the use of a blockchain that lets devices exchange sensor data and other information, as well as enter into smart contract transactions with each other.

Disintermediated blockchain applications raise policy challenges

The most common policy challenges associated with blockchain technology relate to the issues of tax evasion, money laundering, terrorist financing and the facilitation of other criminal activities, such as the sale of illegal drugs and weapons, as illustrated by the decentralised marketplace Silk Road.10

Most of these challenges are due, in part, to the transnationality of existing blockchain networks. Because they rely on a decentralised P2P network, the large majority of blockchain applications implemented thus far challenge the enforcement of national laws. These applications are difficult to ban or regulate, because individual users can easily bypass the regulatory constraints imposed by a particular government or state. Due to their decentralised nature, blockchain networks are also difficult to shut down, because that would require shutting down every node in the network. Other decentralised Internet technologies have raised similar challenges, such as the anonymised P2P communication system Tor, and P2P file-sharing technologies such as BitTorrent or eMule.

But what makes the challenges raised by blockchain technology unique, and different from those of previous Internet technologies, is that blockchain-based applications generally operate independently of any centralised intermediary or trusted authority. As such, they can potentially raise concerns similar to those raised by AI with respect to employment, although the possible impact on jobs is particularly difficult to assess given the very early stage of blockchain’s deployment. They also eliminate the possibility for governments to rely on a centralised operator or middleman to enforce national laws on the Internet.

Indeed, as described earlier, permissionless blockchain technology facilitates the creation of decentralised payment systems, such as Bitcoin, that operate without a central clearinghouse, raising fears of loss of monetary control (Blundell-Wignall, 2014). They also enable the creation of decentralised marketplaces – where securities can be issued and traded without the need to resort to regulated intermediaries – or the emergence of decentralised applications that operate independently of any centralised authority. As opposed to existing applications – which are run from a server, owned and controlled by a particular operator – blockchain-based applications are run in a distributed manner by a decentralised network of peers. They operate, therefore, outside of the control of any given operator.

This can be problematic in the context of pseudonymous systems, where parties identify themselves only through their cryptographic keys. In a centralised model, the intermediary that executes a transaction also has the power to revert it. In the context of a permissionless blockchain, once a transaction has been accidentally or maliciously executed, it cannot be reverted by any single party. The theft or loss of a private key could therefore have dramatic consequences for the account holder.

Moreover, because they are pseudonymous, permissionless blockchains make it difficult (but not impossible) to enforce laws aimed at preventing illegal practices. This raises the important policy question of how – and to whom – to impute legal liability for the torts caused by blockchain-based systems. Who should be held responsible for these torts and how can damages be recovered from a blockchain-based system when there is no central authority in charge of managing it?

The disintermediated nature of blockchains, combined with the self-executing character of smart contracts, means that these blockchain-based systems can be designed to be largely immune to the coercive power of the state. If so desired, they can ignore a court order, in that they can be programmed in such a way as to make it impossible for anyone to seize their assets.

Of course, in theory, the government could hold parties responsible for creating and deploying blockchain-based systems, insofar as these systems are used to engage in reckless or unlawful activity. For example, blockchain developers could be held responsible, under product liability laws, for any foreseeable damages that these systems might cause to a third party. However, such liability laws could significantly deter innovation in this area and, even if the developers of an unlawful blockchain-based system were to be incriminated for their work, this would in no way affect the way the system operates.

Because of the resilience and tamper-resistance of smart contracts, once a transaction has been executed and validated by the underlying blockchain network, it cannot be retroactively modified by any single party. And because of the guarantee of execution that these systems enjoy, once deployed it becomes extremely difficult for anyone to modify the code and operations of a blockchain-based application – and even more difficult to shut it down. The only way for a blockchain transaction to be reverted or for a smart contract application to be brought to a halt is through a co-ordinated action of the network as a whole, as the Ethereum network did following the TheDAO hack. While this can be easily achieved in the context of permissioned blockchains, where only a small number of identified parties are responsible for establishing the consensus on the blockchain network, this is much harder to achieve in the context of permissionless blockchains, due to the extensive co-ordination costs required to reach consensus between a large number of unidentified parties.

Finally, important policy challenges emerge from the transparency and censorship-resistance of these systems. While the pseudonymity provided by permissionless blockchain environments could promote freedom of expression and ultimately increase the availability of information, it can also make it more difficult to enforce laws aimed at restricting the flow of information, such as copyright, hate speech and defamation laws. For example, when combined with decentralised file-sharing networks, the ability to record information on a tamper-resistant database could facilitate the exchange of illicit or indecent material, such as child pornography, revenge porn or content used for public shaming. Nevertheless, some blockchain experts consider that criminal activities are unlikely to be hosted on blockchains, as transactions leave too many traces that can identify their authors. These risks are mitigated in the context of permissioned blockchains, where it is possible to tie an individual’s physical identity to their online persona. This notwithstanding, the fact that data, once incorporated into a blockchain, cannot be unilaterally deleted makes it difficult to implement laws such as the right to be forgotten, enshrined in European law.

Blockchain-based systems, even those specifically designed to ignore the law, do not exist in a vacuum. A number of intermediaries still sit at the intersection between these systems and the rest of society: the miners in charge of verifying and validating transactions; the virtual currency exchanges responsible for trading blockchain-based tokens against fiat money, and vice versa; and the various commercial or non-commercial operators that interact with these systems. It is at these choke points that the law can still exert influence and thereby, albeit indirectly, regulate these systems.

References

Arntz, M., T. Gregory and U. Zierahn (2016), “The risk of automation for jobs in OECD countries: A comparative analysis”, OECD Social, Employment and Migration Working Papers, No. 189, OECD Publishing, Paris, https://doi.org/10.1787/5jlz9h56dvq7-en.

Blockchain (n.d. a), “Hashrate distribution: An estimation of hashrate distribution amongst the largest mining pools”, webpage, https://blockchain.info/en/pools.

Blockchain (n.d. b), “Confirmed transactions per day”, webpage, https://blockchain.info/charts/n-transactions.

Blockchain (n.d. c), “Median confirmation time”, webpage, https://blockchain.info/charts/median-confirmation-time.

Blundell-Wignall, A. (2014), “The Bitcoin question: Currency versus trust-less transfer technology”, OECD Working Papers on Finance, Insurance and Private Pensions, No. 37, OECD Publishing, Paris, https://doi.org/10.1787/5jz2pwjd9t20-en.

Bonneau, J. et al. (2015), “Research perspectives and challenges for Bitcoin and cryptocurrencies”, Proceedings of IEEE Symposium on Security and Privacy, 17-21 May 2015.

Brakeville, S. and B. Perepa (2016), “Blockchain basics: Introduction to distributed ledgers”, IBM, 9 May, www.ibm.com/developerworks/cloud/library/cl-blockchain-basics-intro-bluemix-trs.

Buterin, V. (2015), “Slasher: A punitive proof-of-stake algorithm”, Ethereum Blog, 14 August.

Buterin, V. (2013), “Ethereum white paper”, https://github.com/ethereum/wiki/wiki/White-Paper.

CB Insights (2017), “The 2016 AI Recap: Startups See Record High In Deals And Funding”, Research Briefs, www.cbinsights.com/research/artificial-intelligence-startup-funding/ (accessed 16 August 2017).

Chen, K. et al. (2012), “Building high-level features using large scale unsupervised learning”, July, v5, https://arxiv.org/abs/1112.6209.

Citibank (2016), Technology at Work v2.0: The Future is Not What it Used to Be, Citigroup, www.oxfordmartin.ox.ac.uk/downloads/reports/Citi_GPS_Technology_Work_2.pdf.

Dutton, J. (2011), “Raging bull: The lie catcher!”, Mental Floss, http://mentalfloss.com/article/28568/raging-bull-lie-catcher.

Elliot, S.W. (2014), “Anticipating a Luddite revival”, Issues in Science and Technology, Vol. XXX/3, Spring, http://issues.org/30-3/stuart.

Evans, R. and J. Gao (2016), “DeepMind AI reduces Google Data Centre cooling bill by 40%”, DeepMind blog, 20 July, https://deepmind.com/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-40.

Frey, C.B. and M.A. Osborne (2013), “The future of employment: How susceptible are jobs to computerisation?”, Oxford Martin School, 17 September, www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf.

Goertzel, B. and C. Pennachin (2006), Artificial General Intelligence, Springer, Berlin, Heidelberg, https://doi.org/10.1007/978-3-540-68677-4.

IBM (2015), “Empowering the edge: Practical insights on a decentralized Internet of Things”, IBM Institute for Business Value, Somers, New York, https://www-935.ibm.com/services/multimedia/GBE03662USEN.pdf.

Iddo, B. et al. (2014), “Proof of activity: Extending Bitcoin’s proof of work via proof of stake”, ACM SIGMETRICS Performance Evaluation Review, Vol. 42/3, pp. 34-37.

ITF (International Transport Forum) (2017), “Managing the Transition to Driverless Road Freight Transport”, International Transport Forum Policy Papers, No. 32, OECD Publishing, Paris, https://doi.org/10.1787/0f240722-en.

James-Lubin, K. (2015), “Blockchain scalability”, O’Reilly Media, 21 January, www.oreilly.com/ideas/blockchain-scalability.

Lake, B. et al. (2016), “Building machines that learn and think like people”, Behavioral and Brain Sciences, 2 November, http://cims.nyu.edu/~brenden/1604.00289v3.pdf.

Nakamoto, S. (2008), “Bitcoin: A peer-to-peer electronic cash system”, https://bitcoin.org/bitcoin.pdf.

Narayanan, A. et al. (2016), Bitcoin and Cryptocurrency Technologies, Princeton University Press.

Nikkei (2015), “IBM’s Watson to help doctors devise optimal cancer treatment”, Asian Review, 30 July, http://asia.nikkei.com/Tech-Science/Science/IBM-s-Watson-to-help-doctors-devise-optimal-cancer-treatment.

Nilsson, N. (2010), The Quest for Artificial Intelligence: A History of Ideas and Achievements, Cambridge University Press, Cambridge, United Kingdom.

OECD (Organisation for Economic Co-operation and Development) (forthcoming), “Neurotechnology and society: Strengthening responsible innovation in brain science”, Science, Technology and Industry Policy Papers, OECD, Paris.

OECD (2017), The Next Production Revolution: Implications for Governments and Business, OECD Publishing, Paris, https://doi.org/10.1787/9789264271036-en.

OECD (2016), “Summary of the CDEP Technology Foresight Forum: Economic and Social Implications of Artificial Intelligence”, presentation materials of Professor Dr Susumu Hirano and Associate Professor Tatsuya Kurosaka, OECD, Paris, http://oe.cd/ai2016.

OECD (2015), Data-Driven Innovation: Big Data for Growth and Well-Being, OECD Publishing, Paris, https://doi.org/10.1787/9789264229358-en.

Poon, J. and T. Dryja (2016), “The Bitcoin Lightning Network: Scalable off-chain instant payments”.

Purdy, M. and P. Daugherty (2016), “Why artificial intelligence is the future of growth”, Accenture, October, www.accenture.com/futureofAI.

Szabo, N. (1997), “The idea of smart contracts”, www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/idea.html.

UK Government Office for Science (2016), “Artificial intelligence: Opportunities and implications for the future of decision-making”, Government Office for Science, London, https://www.gov.uk/government/publications/artificial-intelligence-an-overview-for-policy-makers.

Voegeli, J. (2016), “CIA-funded Palantir to target rogue bankers”, Bloomberg, 22 March, https://www.bloomberg.com/news/articles/2016-03-22/credit-suisse-cia-funded-palantir-build-joint-compliance-firm.

Wachter, S., B. Mittelstadt and L. Floridi (2016), “Why a right to explanation of automated decision-making does not exist in the general data protection regulation”, 28 December, International Data Privacy Law, https://ssrn.com/abstract=2903469.

Wang, D. et al. (2016), “Deep learning for identifying metastatic breast cancer”, 18 June, https://arxiv.org/pdf/1606.05718v1.pdf.

Werbach, K.D. (2016), “Trustless trust”, https://ssrn.com/abstract=2844409.

The White House (2016a), “Preparing for the future of AI”, Executive Office of the President, National Science and Technology Council, Washington, DC, October, https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf.

The White House (2016b), “Artificial intelligence, automation, and the economy”, Executive Office of the President, Washington, DC, December, https://obamawhitehouse.archives.gov/sites/whitehouse.gov/files/documents/Artificial-Intelligence-Automation-Economy.PDF.

Wong, Q. (2017), “At LinkedIn, artificial intelligence is like ‘oxygen’”, The Mercury News, 6 January, www.mercurynews.com/2017/01/06/at-linkedin-artificial-intelligence-is-like-oxygen.

Notes

← 1. The statistical data for Israel are supplied by and under the responsibility of the relevant Israeli authorities. The use of such data by the OECD is without prejudice to the status of the Golan Heights, East Jerusalem and Israeli settlements in the West Bank under the terms of international law.

← 2. OpenAI is co-chaired by Sam Altman and Elon Musk and the entities donating to support OpenAI include Amazon Web Services (AWS), Infosys and YC Research.

← 3. Since that time, new partners have joined the partnership, including for-profit companies (eBay, Intel, McKinsey & Company, Salesforce, SAP, Sony, Zalando, and Cogitai), and non-profits (Allen Institute for Artificial Intelligence, AI Forum of New Zealand, Center for Democracy & Technology, Centre for Internet and Society – India, Data & Society Research Institute, Digital Asia Hub, Electronic Frontier Foundation, Future of Humanity Institute, Future of Privacy Forum, Human Rights Watch, Leverhulme Centre for the Future of Intelligence, UNICEF, Upturn, and the XPRIZE Foundation). They join the founding companies and existing non-profit partners (AAAI, ACLU and OpenAI). The partnership’s tenets include a commitment to open research and dialog on the ethical, social, economic and legal implications of AI and to developing AI research and technology that is robust, reliable, trustworthy and operates within secure constraints.

← 4. Public-private key cryptography enables parties to exchange information securely without ever needing to share a secret key. Each user holds a key pair: a public key, which can be freely distributed, and a private key, which is kept secret. A message encrypted with the recipient’s public key can only be decrypted with the recipient’s private key; conversely, a message signed with the sender’s private key can be verified by anyone using the sender’s public key.

← 5. It is worth noting that, although unlikely, such a scenario already occurred in 2014, when a large mining pool (Ghash.io) captured 55% of the mining capacity over the Bitcoin network. Rather than attacking the network, Ghash.io immediately reduced its capacity to avoid compromising the credibility of the network.

← 6. “Proof of stake” is a method by which a blockchain network aims to achieve distributed consensus by asking users to prove ownership of a certain amount of an asset. It offers many advantages, such as significantly increasing the number of transactions when implemented through the Casper protocol. Other approaches include payment channels like the Bitcoin Lightning network (Poon and Dryja, 2016) or mechanisms such as sharding (Iddo et al., 2014).

← 7. Hashing consists in generating a short string (or hash) from a particular piece of digital content. The hash is generated by a mathematical formula which is such that even the smallest modification to the content would generate a completely different string. A hash is often used as the unique identifier of the content that generated it, because it is extremely unlikely that another piece of content would produce the same hash value. Hashes play an important role in security systems, where they are used to ensure that transmitted messages have not been tampered with.

← 8. Szabo (1997) defined smart contracts as “a set of promises, specified in digital form, including protocols within which the parties perform on the other promises”.

← 9. A programming language is said to be Turing-complete if it can be shown that it is computationally equivalent to a Turing machine. That is, any problem that can be solved on a Turing machine using a finite amount of resources can be solved with that programming language using a finite amount of resources.

← 10. The Silk Road marketplace relied on Bitcoin and the Tor network to create anonymous transactions between its users in order to facilitate the trading of illicit goods, such as drugs and weapons. Yet, it was ultimately doomed by the failure of its founder, Ross Ulbricht, to obscure his own Bitcoin withdrawals from the site.