5. Artificial intelligence, digital technology and advanced production

Alistair Nolan

Directorate for Science, Technology and Innovation, OECD

Chapter 5 examines a selection of policies to enable the use of digital technology in advanced production. The first part looks at individual technologies, their uses and specific policy implications, namely artificial intelligence (AI), blockchain and 3D printing, as well as new materials and nanotechnology (development of which involves complex digital processes). The second part addresses cross-cutting policy issues relevant to digital technology and production. These are technology diffusion, connectivity and data, standards-setting processes, digital skills, access to and awareness of high-performance computing, intellectual property systems and public support for research and development. With respect to public research, particular attention is given to research on computing and AI, as well as the institutional mechanisms needed to enhance the impact of public research.


Introduction

New digital technologies are essential to raising living standards and countering the declining labour productivity growth in many OECD countries that has occurred over recent decades. Rapid population ageing – the dependency ratio in the OECD is set to double over the next 35 years – makes raising labour productivity more urgent. Digital technologies can increase productivity in many ways. For example, they can reduce machine downtime, as intelligent systems predict maintenance needs. They can also perform work more quickly, precisely and consistently with the deployment of increasingly autonomous, interactive and inexpensive robots. New digital technologies in production will also benefit the natural environment in several ways by, for instance, making zero-defect production a possibility in some industries.

Digital production technologies: Recent developments and policy implications

Artificial intelligence in production

The Oxford English Dictionary defines artificial intelligence (AI) as “the theory and development of computer systems able to perform tasks normally requiring human intelligence”. Expert systems – a form of AI drawing on pre-programmed expert knowledge – have been used in industrial processes for close to four decades (Zweben and Fox, 1994). The development of deep learning using artificial neural networks1 has been the main source of recent progress in the field. As a result, AI can be applied to most industrial activities – from optimising multi-machine systems to enhancing industrial research (Box 5.1). Furthermore, the use of AI in production will be spurred by automated machine learning (ML) processes that help businesses, scientists and other users employ the technology more readily. With respect to AI that uses deep learning techniques and artificial neural networks, the greatest commercial potential for advanced manufacturing is expected in supply chains, logistics and process optimisation (Chui et al., 2018). Survey evidence also suggests that the transportation and logistics, automotive and technology sectors lead in terms of the share of early AI-adopting firms (Küpper et al., 2018).

Box 5.1. Recent applications of artificial intelligence in production

A sample of recent uses of AI in production illustrates the breadth of the industries and processes involved:

  • In pharmaceuticals, AI is set to become the “primary drug-discovery tool” by 2027, according to Leo Barella, Global Head of Enterprise Architecture at AstraZeneca. AI in preclinical stages of drug discovery has many applications. They range from compound identification and managing genomic data to analysing drug safety data and enhancing in-silico modelling (AI Intelligent Automation Network, 2018).

  • In aerospace, Airbus deployed AI to identify patterns in production problems when building its new A350 aircraft. A worker might encounter a difficulty that has not been seen before, but the AI, analysing a mass of contextual information, might recognise a similar problem from other shifts or processes. Because the AI immediately recommends how to solve production problems, the time required to address disruptions has been cut by one-third (Ransbotham et al., 2017).

  • In semiconductors, an AI system can assemble circuitry for computer chips, atom by atom (Chen, 2018); Landing.ai has developed machine-vision instruments to identify defects in manufactured products – such as electronic components – at scales that are invisible to the unaided eye.

  • In the oil industry, General Electric’s camera-carrying robots inspect the interior of oil pipelines, looking for microscopic fissures. If laid side by side, this imagery would cover 1 000 square kilometres every year. AI inspects this photographic landscape and alerts human operators when it detects potential faults (Champain, 2018).

  • In mining, AI is used to explore for mineral deposits and optimise use of explosives at the mine face (even considering the cost of milling larger chunks of unexploded material). It is also used to operate autonomous drills, ore sorters, loaders and haulage trucks. In July 2017, BHP switched to completely autonomous trucks at a mine in Western Australia (Walker, 2017).

  • In construction, generative software uses AI to explore every permutation of a design blueprint. It suggests optimal building shapes and layouts, including the routing of plumbing and electrical wiring. Furthermore, it can link scheduling information to each building component.

  • AI is exploring decades of experimental data to radically shorten the time needed to discover new industrial materials, sometimes from years to days (Chen, 2017).

  • AI is enabling robots to take plain-speech instructions from human operators, including commands not foreseen in the robot’s original programming (Dorfman, 2018).

  • Finally, AI is making otherwise unmanageable volumes of Internet of Things (IoT) data actionable. For example, General Electric operates a virtual factory, permanently connected to data from machines, to simulate and improve even highly optimised production processes. To permit predictive maintenance, AI can process combined audio, video and sensor data, and even text on maintenance history. This can greatly surpass the performance of traditional maintenance practices; a minimal sketch of this approach appears below.

Beyond its direct uses in production, the use of AI in logistics is enabling real-time fleet management, while significantly reducing fuel consumption and other costs. AI can also lower energy consumption in data centres (Sverdlik, 2018). In addition, AI can assist digital security. For example, the software firm Pivotal has created an AI system that recognises when text is likely to be part of a password, helping to avoid accidental online dissemination of passwords. Meanwhile, Lex Machina is blending AI and data analytics to radically alter patent litigation (Harbert, 2013). Many social-bot start-ups also automate tasks such as meeting scheduling (X.ai), business-data and information retrieval (butter.ai), and expense management (Birdly). Finally, AI is being combined with other technologies – such as augmented and virtual reality – to enhance workforce training and cognitive assistance.

AI could also create entirely new industries based on scientific breakthroughs enabled by AI, much as the discovery of deoxyribonucleic acid (DNA) structure in the 1950s led to a revolution in industrial biotechnology and the creation of vast economic value – the global market for recombinant DNA technology has been estimated at USD 500 billion.2
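
The predictive-maintenance application noted in Box 5.1 can be made concrete. The sketch below is a hypothetical illustration, not drawn from any of the firms cited above: an unsupervised anomaly detector is trained on simulated vibration and temperature readings from healthy operation, then flags departures that may warrant inspection. All values are invented for illustration.

```python
# Minimal sketch: unsupervised anomaly detection for predictive maintenance.
# Hypothetical simulated data; real systems ingest live sensor streams.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulate normal operation: vibration (mm/s) and bearing temperature (deg C)
normal = np.column_stack([
    rng.normal(2.0, 0.3, 1000),   # vibration around 2.0 mm/s
    rng.normal(65.0, 2.0, 1000),  # temperature around 65 C
])

# Fit the detector on historical "healthy" readings only
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New readings: one healthy, one showing elevated vibration and heat
new_readings = np.array([[2.1, 66.0],
                         [4.8, 81.0]])
flags = detector.predict(new_readings)  # +1 = normal, -1 = anomaly

for reading, flag in zip(new_readings, flags):
    status = "ALERT: possible fault" if flag == -1 else "normal"
    print(reading, status)
```

In practice, such a detector would combine far richer inputs (audio, video, maintenance text) and be retrained as new data accumulate.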

Adopting AI in production: main challenges

To date, despite AI’s potential, its adoption in manufacturing has been limited. By one estimate, even among AI-aware firms, only around 20% use one or more AI technologies in core areas of business or at scale (Bughin et al., 2017). A more recent survey of 60 US manufacturers with annual turnovers of between USD 500 million and USD 10 billion yielded still more striking evidence of the limited diffusion of AI, finding that:

“Just 5% of respondents have mapped out where AI opportunities lie within their company and [are] developing a clear strategy for sourcing the data AI requires, while 56% currently have no plans to do so” (Atkinson and Ezell, 2019).

The challenges in using AI in production relate to its application in specific systems and the collection and development of high-quality training data.3 The highest-value uses of AI often combine diverse data types, such as audio, text and video. In many uses, training data must be refreshed monthly or even daily (Chui et al., 2018). Furthermore, many industrial applications are still somewhat new and bespoke, limiting data availability. By contrast, sectors such as finance and marketing have used AI for a longer time (Faggella, 2018). Without large volumes of training data, many AI models are inaccurate. A deep learning supervised algorithm may need 5 000 labelled examples per item and up to 10 million labelled examples to match human performance (Goodfellow, Bengio and Courville, 2016).

In the future, research advances may make AI systems less data-hungry. For instance, AI may learn from fewer examples, or generate robust training data (Simonite, 2016). In 2017, the computer program AlphaGo Zero famously learned to play Go using just the rules of the game, without recourse to external data. In rules-based games such as chess and Go, high performance can be achieved with simulated data. For industry, however, training data must come from real-world processes and machines.4

Data scientists usually cite data quality as the main barrier to successfully implementing AI. Industrial data might be wrongly formatted, incomplete, inconsistent, or lack metadata. Data scientists will often spend 80% of their time cleaning, shaping and labelling data before AI systems can be put to work. The entire process requires skilled workers, and may have no a priori guarantee of success. Data might have to be drawn and unified from data silos in different parts of a company. Customer data, for instance, may be held separately from supply-chain data. Connecting data silos could also require complementary ICT investments. Moreover, some processes may simply lack the required volumes of data.
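
A minimal sketch of the cleaning and shaping steps described above, using the pandas library on a hypothetical machine-log extract (the column names and values are invented for illustration):

```python
# Illustrative data-cleaning steps for a hypothetical machine log.
import io
import pandas as pd

raw = io.StringIO(
    "machine_id,timestamp,temp,status\n"
    "M1,2019-03-01 08:00,65.2,OK\n"
    "M1,2019-03-01 08:00,65.2,OK\n"   # exact duplicate row
    "M2,2019-03-01 08:05,,running\n"  # missing temperature
    "M3,not-a-date,68.9,ok\n"         # wrongly formatted timestamp
)
df = pd.read_csv(raw)

df = df.drop_duplicates()                          # remove duplicate records
df["timestamp"] = pd.to_datetime(df["timestamp"],  # bad stamps become NaT
                                 errors="coerce")  # ...and can be reviewed
df["status"] = df["status"].str.upper()            # harmonise labels
df["temp"] = df["temp"].fillna(df["temp"].mean())  # impute missing values

print(df)
```

Steps like these precede any model training; on real industrial data they are rarely this mechanical, which is why the process consumes so much specialist time.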

Adding to the challenge, manufacturers might have accuracy requirements for AI systems much greater than those in other sectors. For instance, degrees of error acceptable in a retailer’s AI-based marketing function would likely be intolerable in precision manufacturing. Furthermore, implementing AI projects involves a degree of experimentation. Consequently, it may be difficult to determine a rate of return on investment (ROI) a priori, especially by comparison with more standardised investments in ICT hardware. Generally, small and medium-sized enterprises (SMEs) are less able to bear risk than larger firms, so uncertainty about the ROI is a particular hindrance to AI uptake in this part of the enterprise population.

The considerations described above highlight the importance of skills for firms attempting to adopt AI. However, AI skills are scarce everywhere. Even leading tech companies in Silicon Valley report high vacancy rates in their research departments, owing to acute competition for AI-related talent. The high salaries paid to capable AI researchers reflect the demand for such skills: OpenAI, a non-profit, paid its top researcher more than USD 1.9 million in 2016. AI talent is also mobile, and highly concentrated in a few countries. A recent estimate suggests that half of the entire AI workforce in Europe is found in just three countries: the United Kingdom, France and Germany (LinkedIn Economic Graph, 2019). Furthermore, AI projects often require multidisciplinary teams with a mix of skills, which can be challenging to assemble. And because many talented graduates in data science and ML are drawn to work on novel AI applications, or at the research frontier, retaining talent in industrial companies can be another difficulty. Skills shortages are unlikely to disappear in the near term, given the many years needed to fully train AI specialists (Bergeret, 2019).

Companies face the question of how best to access the expertise needed to advance AI use. For many companies, turning to universities or public research organisations might not be a first choice. Uncertainties about the match in understanding of business needs, ownership of intellectual property (IP), operational timeframes, or other concerns, can make this route unattractive to some firms. Firms might turn to providers of management consultancy services, but for SMEs these services could be excessively expensive, and might give rise to concerns regarding dependence on the service provider. Some mid-sized and larger industrial companies have decided to create their own in-house AI capabilities, but this path is generally limited to companies with significant financial and other resources. This overall environment highlights the importance of public, or public-private, institutions to help accelerate technology diffusion (see section “Technology diffusion” below).

AI: Specific policies

Perhaps the two most important areas where governments can assist in the uptake of AI concern support for the development of skills, and the funding and operational practices of institutions for technology diffusion. Both of these policy areas are discussed later in this chapter. This subsection focuses on issues relating to training data, and measures to address hardware constraints. Later subsections also refer to relevant issues in connection with rules for IP, and research support. Many other policies – not addressed here – are most relevant to the (still uncertain) consequences of AI. These include policies for competition; economic and social policies that mitigate inequality; and measures that affect public perceptions of AI. Well-designed policies for AI are likely to have high returns because AI can be widely applied and accelerate innovation (Cockburn, Henderson and Stern, 2018). Some of the policies concerned – such as those affecting skills – are also relevant to any new technology.

Governments can take steps to help firms generate value from their data

Many firms hold valuable data but do not use them effectively. Among other gaps, they may lack in-house skills and knowledge, a corporate data strategy or data infrastructure. This can be the case even in firms with enormous financial resources. For example, by some accounts, less than 1% of the data generated on oil rigs are used (The Economist, 2017).

However, non-industrial sources of expertise – including many AI start-ups, universities and other institutions – could create value from data held by industrial firms. To help address this mismatch, governments can act as catalysts and honest brokers for data partnerships. Among other measures, they could work with relevant stakeholders to develop voluntary model agreements for trusted data sharing. For example, the US Department of Transportation has prepared the draft “Guiding Principles on Data Exchanges to Accelerate Safe Deployment of Automated Vehicles”. The Digital Catapult in the United Kingdom also plans to publish model agreements for start-ups entering into data-sharing agreements (DSAs).

Government agencies can co-ordinate and steward DSAs for AI purposes

DSAs operate between firms, and between firms and public research institutions. In some cases, all data holders would benefit from data sharing. However, individual data holders are often reluctant to share data unilaterally – it might be of strategic importance to a company, for instance – or remain unaware of potential data-sharing opportunities. For example, 359 offshore oil rigs were operational in the North Sea and the Gulf of Mexico as of January 2018. AI-based prediction of potentially costly accidents on oil rigs would be improved if this statistically small number of data holders were to share their data. In fact, the Norwegian Oil and Gas Association asked all members to have a data-sharing strategy in place by the end of 2018. In such cases, government action could be helpful. Another example where DSAs might be useful relates to data in supply chains. Suppliers of components to an original equipment manufacturer (OEM) might improve a product using data on how the product performs in production. Absent a DSA, the OEM might be reluctant to share such data, even if doing so could benefit both parties.

The Digital Catapult’s Pit Stop open-innovation activity, which complements its model DSAs, is an example of co-ordination between data holders and counterparts with expertise in data analysis. Pit Stop brings together large businesses, academic researchers and start-ups in collaborative problem-solving challenges around data and digital technologies. The Data Study Group at the Turing Institute, also in the United Kingdom, enables major private- and public-sector organisations to bring data science problems for analysis. The partnership is mutually beneficial. Institute researchers work on real-world problems using industry datasets, while businesses have their problems solved and learn about the value of their data.

Governments can promote open data initiatives

Open data initiatives exist in many countries, covering diverse public administrative and research data. To facilitate AI applications, disclosed public data should be machine-readable. In addition, in certain situations, copyright laws could allow data and text mining. Such laws would need to ensure that the use of AI neither substitutes for the original works nor unreasonably prejudices the legitimate interests of copyright owners. Governments can also promote the use of digital data exchanges5 that share public and private data for the public good. Public open data initiatives usually provide access to administrative and other data that are not directly relevant to AI in industrial companies. Nevertheless, some data could be of value to firms, such as national, regional or other economic data relevant to demand forecasts. Open science could also facilitate industrial research (see Chapter 3).

Technology itself may offer novel solutions to use data better for AI purposes

Governments should be alert to the possibilities of using AI technology in public open data initiatives. Sharing data can require overcoming a number of institutional barriers. Data holders in large organisations, for example, can face considerable internal obstacles before receiving permission to release data. Even with a DSA, data holders worry that data might not be used according to the terms of an agreement, or that client data will be shared accidentally. In addition, some datasets may be too big to share in practical ways: for instance, the data in 100 human genomes could consume 30 terabytes (30 million megabytes).

Uncertainty over the provenance of counterpart data can hinder data sharing or purchase, but approaches are being developed to address this concern and incentivise secure data exchange. For example, Ocean Protocol, created by the non-profit Ocean Protocol Foundation, combines blockchain and AI (Ocean Protocol, n.d.). Data holders can obtain the benefits of data collaboration, with full control and verifiable audit. Under one use case, data are not shared or copied. Instead, algorithms go to the data for training purposes, with all work on the data recorded in the distributed ledger. Ocean Protocol is building a reference open-source marketplace for data, which users can adapt to their own needs to trade data services securely.
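
A toy sketch of this “algorithms travel to the data” pattern may help fix ideas. It illustrates the general approach only, and is not Ocean Protocol’s actual interface: the data holder runs an approved computation locally, returns only the aggregate result, and appends a hash-linked record of the operation to an audit log.

```python
# Toy "compute-to-data" pattern: the algorithm visits the data;
# only aggregate results leave, and each operation is logged.
# Illustrative only -- not Ocean Protocol's actual API.
import hashlib
import json
import statistics
from datetime import datetime, timezone


class DataHolder:
    def __init__(self, private_data):
        self._data = private_data      # raw data never leave this object
        self.audit_log = []            # hash-linked record of operations

    def run(self, func_name):
        approved = {"mean": statistics.mean, "count": len}
        result = approved[func_name](self._data)   # approved computations only
        prev = self.audit_log[-1]["hash"] if self.audit_log else "0" * 64
        entry = {"op": func_name,
                 "time": datetime.now(timezone.utc).isoformat(),
                 "prev": prev}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.audit_log.append(entry)
        return result                  # aggregate result only, not raw data


holder = DataHolder([71.2, 69.8, 73.4, 70.1])  # hypothetical sensor values
print(holder.run("mean"))                      # visiting algorithm's result
print(len(holder.audit_log), "operation(s) recorded")
```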

Governments can also help resolve hardware constraints for AI applications

AI entrepreneurs might have the knowledge and financial resources to develop a proof-of-concept for a business. However, they may lack the hardware-related expertise and hardware resources needed to build a viable AI company. To help address such issues, Digital Catapult runs the Machine Intelligence Garage programme. It works with industry partners such as the GPU manufacturer NVIDIA, the intelligence processing unit (IPU) producer Graphcore, and the cloud providers Amazon Web Services and Google Cloud Platform. Together, they give early-stage AI businesses access to computing power and technical expertise. Policies addressing hardware constraints in start-ups might not directly affect industrial companies, but they could positively shape the broader AI ecosystem in which industrial firms operate.

Blockchain in production

Blockchain – a distributed ledger technology (DLT) – has many potential applications in production (Box 5.2). Blockchain is still an immature technology, and many applications are only at the proof-of-concept stage. The future evolution of blockchain involves various unknowns, including with respect to standards for interoperability across systems. However, similar to the “software as a service” model, companies such as Microsoft, SAP, Oracle, Hewlett-Packard, Amazon and IBM already provide “blockchain as a service”. Furthermore, consortia such as Hyperledger and the Ethereum Enterprise Alliance are developing open-source DLTs in several industries (Figueiredo do Nascimento, Roque Mendes Polvora and Sousa Lourenco, 2018).

Adopting blockchain in production creates several challenges: blockchain involves fundamental changes in business processes, particularly with regard to agreements and engagement among actors in a supply chain. When many computers are involved, transaction speeds may also be lower than those of some alternative processes (although fast protocols operating on top of blockchain are under development). Blockchains are most appropriate when disintermediation, security, proof of source and establishing a chain of custody are priorities (Vujinovic, 2018). A further challenge is that much blockchain development remains atomised. Therefore, the scalability of any single blockchain-based platform – be it in supply chains or financial services – will depend on whether it can operate with other platforms (Hardjono, Lipton and Pentland, 2018).
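
The properties that suit blockchain to proof of source and chain of custody stem from hash-linking of records. The following minimal, single-party sketch conveys the idea of tamper evidence; a real DLT adds replication across many nodes and a consensus protocol:

```python
# Minimal hash-linked ledger: each block commits to its predecessor,
# so altering any earlier record invalidates all later hashes.
import hashlib
import json


def make_block(record, prev_hash):
    block = {"record": record, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block


def chain_is_valid(chain):
    for i, block in enumerate(chain):
        body = {"record": block["record"], "prev_hash": block["prev_hash"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True


# Hypothetical chain-of-custody records for a manufactured part
chain, prev = [], "0" * 64
for record in ["part 123 cast", "part 123 machined", "part 123 shipped"]:
    block = make_block(record, prev)
    chain.append(block)
    prev = block["hash"]

print(chain_is_valid(chain))          # True
chain[0]["record"] = "part 456 cast"  # attempt to rewrite history
print(chain_is_valid(chain))          # False: tampering is detectable
```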

Blockchain: Possible policies

Regulatory sandboxes help governments better understand a new technology and its regulatory implications. At the same time, they enable industry to test new technology and business models in a live environment. Evaluations of the impacts of regulatory sandboxes are sparse; one exception is FCA (2017), although this assessment covers only the first year of a scheme in the United Kingdom. Blockchain regulatory sandboxes mostly focus on Fintech. They are being developed in countries as diverse as Australia, Canada, Indonesia, Japan, Malaysia, Switzerland, Thailand and the United Kingdom (Figueiredo do Nascimento, Roque Mendes Polvora and Sousa Lourenco, 2018). The scope of sandboxes could be broadened to encompass blockchain applications in industry and other non-financial sectors. The selection of participants needs to avoid benefiting some companies at the expense of others.

By using blockchain in the public sector, governments could raise awareness of blockchain’s potential where it improves on existing technologies. However, technical issues need to be resolved, such as how to trust the data placed on the blockchain; trustworthy data may need to be certified in some way. Blockchain may also raise concerns for competition policy. Some large corporations, for example, may mobilise through consortia to establish blockchain standards, e.g. for supply-chain management.

Box 5.2. Blockchain: Potential applications in production

By providing a decentralised, consensus-based, immutable record of transactions, blockchain could transform important aspects of production when combined with other technologies. Several examples are listed below:

  • A main application of blockchain is tracking and tracing in supply chains. One consequence could be less counterfeiting. In the motor-vehicle industry alone, firms lose tens of billions of dollars a year to counterfeit parts (Williams, 2013).

  • Blockchain could replace elements of enterprise resource-planning systems. The Swedish software company IFS has demonstrated how blockchain can be integrated with enterprise resource-planning systems in the aviation industry. Commercial aircraft have millions of parts. Each part must be tracked, and a record kept of all maintenance work. Blockchain could help resolve failures in such tracking (Mearian, 2017).

  • Blockchain is being tested as a medium permitting end-to-end encryption of the entire process of designing, transmitting and printing three-dimensional (3D) computer-aided design (CAD) files. The goal is that each printed part embody a unique digital identity and memory (Figueiredo do Nascimento, Roque Mendes Polvora and Sousa Lourenco, 2018). If successful, this technology could incentivise innovation using 3D printing, protect IP and help address counterfeiting.

  • By storing the digital identity of every manufactured part, blockchain could provide proof of compliance with warranties, licences and standards in production, installation and maintenance (Figueiredo do Nascimento, Roque Mendes Polvora and Sousa Lourenco, 2018).

  • Blockchain could induce more efficient use of industrial assets. For example, a trusted record of the usage history for each machine and piece of equipment would help develop a secondary market for such assets.

  • Blockchain could help monetise the IoT, authenticating machine-based data exchanges and implementing associated micro-payments. In addition, recording machine-to-machine exchanges of valuable information could lead to “data collateralisation”. This could give lenders the security to finance supply chains and help smaller suppliers overcome working-capital shortages (Mearian, 2017). By providing verifiably accurate data across production and distribution processes, blockchain could also enhance predictive analytics.

  • Blockchain could further automate supply chains through the digital execution of “smart contracts”, which rely on pre-agreed obligations being verified automatically. Maersk, for example, is working with IBM to test a blockchain-based approach for all documents used in bulk shipping. Combined with ongoing developments in the IoT, such smart contracts might eventually lead to full transactional autonomy for many machines (Vujinovic, 2018).
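
To illustrate the last point, the toy sketch below mimics a smart contract in ordinary Python: payment held in escrow is released automatically once the pre-agreed condition (confirmation by both parties) is verified. Real smart contracts execute on a distributed ledger rather than as local code, and all names here are invented.

```python
# Toy "smart contract": escrowed payment is released automatically
# once pre-agreed obligations are verified. Illustrative only.
class ShippingContract:
    def __init__(self, buyer, seller, price):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.escrow = 0
        self.confirmations = set()
        self.settled = False

    def deposit(self, payer, amount):
        assert payer == self.buyer and amount == self.price
        self.escrow = amount                      # funds locked in escrow

    def confirm_delivery(self, signed_by):
        self.confirmations.add(signed_by)
        # Pre-agreed obligation: both parties must confirm receipt.
        if {self.buyer, self.seller} <= self.confirmations and self.escrow:
            print(f"Releasing {self.escrow} to {self.seller}")
            self.escrow, self.settled = 0, True   # automatic settlement


contract = ShippingContract("buyer_co", "carrier_co", price=1000)
contract.deposit("buyer_co", 1000)
contract.confirm_delivery("buyer_co")
contract.confirm_delivery("carrier_co")           # triggers settlement
print("settled:", contract.settled)
```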

3D printing

3D printing is expanding rapidly, thanks to falling printer and materials prices, higher-quality printed objects and innovation in methods. For example, 3D printing is possible with novel materials, such as glass, biological cells and even liquids (maintained as structures using nanoparticles). Robot-arm printheads allow objects to be printed that are larger than the printer itself, opening the way for automated construction. Touchless manipulation of print particles with ultrasound allows printing of electronic components sensitive to static electricity. Hybrid 3D printers combine additive manufacturing with computer-controlled machining and milling. Research is also advancing on 3D printing with materials programmed to change shape after printing.

Most 3D printing is used to make prototypes, models and tools. Currently, 3D printing is not cost-competitive at volume with traditional mass-production technologies, such as plastic injection moulding. Wider use of 3D printing depends on how the technology evolves in terms of the print time, cost, quality, size and choice of materials (OECD, 2017a). The costs of switching from traditional mass-production technologies to 3D printing are expected to decline in the coming years as production volumes grow. However, it is difficult to predict precisely how fast 3D printing will diffuse. Furthermore, the cost of switching is not the same across all industries and applications.

3D printing: Specific policies

OECD (2017a) examined policy options to enhance 3D printing’s effects on environmental sustainability. One priority is to encourage low-energy printing processes (e.g. using chemical processes rather than melting material, and automatic switching to low-power states when printers are idle). Another priority is to use and develop low-impact materials with useful end-of-life characteristics (such as compostable biomaterials). Policy mechanisms to achieve these priorities include:

  • targeting grants or investments to commercialise research in these directions

  • creating a voluntary certification system to label 3D printers with different grades of sustainability across multiple characteristics, which could also be linked to preferential purchasing programmes by governments and other large institutions.

Ensuring legal clarity around intellectual property rights (IPRs) for 3D printing of spare parts that are no longer manufactured could also be environmentally beneficial. For example, a washing machine that is no longer in production may be thrown away because a single part is broken. A CAD file for the required part could keep the machine in operation. However, most CAD files are proprietary. One solution would be licensing arrangements under which third parties may print replacement parts, with royalties paid to the original product manufacturers.

Government can help develop the knowledge needed for 3D printing at the production frontier

Bonnin-Roca et al. (2016) observe many potential uses for metals-based additive manufacturing (MAM) in commercial aviation. However, MAM is a relatively immature technology. The fabrication processes at the technological frontier have not yet been standardised, and aviation requires high quality and safety standards. The aviation sector would benefit if the mechanical properties of printed parts of any shape, using any given feedstock on any given MAM machine, could be accurately and consistently predicted. This would also help commercialise MAM technology. Government could help develop the necessary knowledge. Specifically, the public sector could support the basic science, particularly by funding and stewarding curated databases on materials’ properties. It could broker DSAs across users of MAM technology, government laboratories and academia. It could support the development of independent manufacturing and testing standards. And it could help quantify the advantages of adopting new technology by creating a platform documenting early users’ experiences.
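
A hedged sketch of the kind of prediction described above: a simple model trained on a curated database of process parameters and measured outcomes. All numbers are invented for illustration; real materials databases and models are far richer, covering many more variables, feedstocks and machines.

```python
# Illustrative sketch: predicting a mechanical property of a printed part
# from process parameters. Data values are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical records: laser power (W), scan speed (mm/s), layer (um)
X = np.array([
    [200, 800, 30],
    [250, 900, 30],
    [300, 1000, 40],
    [350, 1100, 40],
    [400, 1200, 50],
])
# Measured tensile strength (MPa) for each build -- invented values
y = np.array([950, 1010, 1040, 1080, 1100])

model = LinearRegression().fit(X, y)

# Predict strength for an untried parameter combination
candidate = np.array([[320, 1050, 40]])
print(f"predicted tensile strength: {model.predict(candidate)[0]:.0f} MPa")
```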

Bonnin-Roca et al. (2016) suggest such policies for the United States, which leads globally in installed industrial 3D manufacturing systems and aerospace production. However, the same ideas could apply to other countries and industries. These ideas also illustrate how policy opportunities can arise from a specific understanding of emerging technologies and their potential uses. Indeed, governments should strive to develop expertise on emerging technologies in relevant public structures. Doing so will also help anticipate possible but hard-to-foresee needs for technology regulation.

New materials and nanotechnology

Scientists are studying materials in more detail than ever before. This is due to advances in scientific instrumentation, such as atomic-force microscopes, and developments in computational simulations. Today, materials with entirely novel properties are emerging. Solids have been created with densities comparable to the density of air, for example. Composites can be super-strong and lightweight. Some materials remember their shape, repair themselves or assemble themselves into components, while others can respond to light and sound (The Economist, 2015).

The era of trial and error in material development is also ending. Powerful computer modelling and simulation of materials’ structure and properties can indicate how they might be used in products. Desired properties, such as conductivity and corrosion resistance, can be intentionally built into new materials. Better computation is leading to faster development of new and improved materials, more rapid insertion of materials into new products, and improved processes and products. In the near future, engineers will not only design products, but also the materials from which products are made (Teresko, 2008). Furthermore, large companies will increasingly compete in terms of materials development. For example, a manufacturer of automotive engines with a superior design could enjoy longer-term competitive advantage if it also owned the material from which the engine is built.

Closely related to new materials, nanotechnology involves the ability to work with phenomena and processes occurring at a scale of 1 to 100 nanometres (nm) (a standard sheet of paper is about 100 000 nm thick). Control of materials on the nanoscale – working with their smallest functional units – is a general-purpose technology with applications across production (Friedrichs, 2017). Advanced nanomaterials are increasingly used in manufacturing high-tech products, e.g. to polish optical components.

New materials and nanotechnology: Specific policies

No single company or organisation will be able to own the entire array of technologies associated with materials innovation. Accordingly, a public-private investment model is warranted, particularly to build cyber-physical infrastructure and train the future workforce (McDowell, 2017).

New materials will raise new policy issues and give renewed emphasis to a number of longstanding policy concerns. New digital security risks could arise. For example, a computationally assisted materials “pipeline” based on computer simulations could be hackable. Progress in new materials also requires effective policy in already important areas, often related to the science-industry interface. For example, well-designed policies are needed for open data and open science. Such policies could facilitate sharing or exchanges of modelling tools and experimental data, and simulations of materials’ structures, among other possibilities.

Professional societies are developing a materials-information infrastructure to provide decision support to materials-discovery processes (Robinson and McMahon, 2016). This includes databases of materials’ behaviour, digital representations of materials’ microstructures and predicted structure-property relations, and associated data standards. International policy co-ordination is needed to harmonise and combine elements of cyber-physical infrastructure across a range of European, North American and Asian investments and capabilities. It is too costly (and unnecessary) to replicate resources that can be accessed through web services. A culture of data sharing – particularly pre-competitive data – is required (McDowell, 2017).

Sophisticated and expensive tools are also needed for research in nanotechnology. State-of-the-art equipment costs several million euros and often requires bespoke buildings. It is almost impossible to gather an all-encompassing nanotechnology research and development (R&D) infrastructure in a single institute, or even a single region. Consequently, nanotechnology requires inter-institutional and/or international collaboration to reach its full potential (Friedrichs, 2017). Publicly funded R&D programmes should allow involvement of academia and industry from other countries. They should also enable flexible collaborations between the most suitable partners. The Global Collaboration initiative under the European Union’s Horizon 2020 programme is an example of this approach.

Support is needed for innovation and commercialisation in small companies. Nanotechnology R&D is mostly conducted by larger companies for three reasons. First, they have a critical mass of R&D and production. Second, they can acquire and operate expensive instrumentation. Third, they are better able to access and use external knowledge. Policy makers could improve SMEs’ access to equipment by increasing the size of SME research grants; subsidising or waiving service fees; and/or providing SMEs with vouchers for equipment use.

Regulatory uncertainties regarding risk assessment and approval of nanotechnology-enabled products must also be addressed, ideally through international collaboration. These uncertainties severely hamper the commercialisation of nano-technological innovation. Policies should support the development of transparent and timely guidelines for assessing the risk of nanotechnology-enabled products. At the same time, they should strive for international harmonisation in guidelines and enforcement. In addition, more needs to be done to properly treat nanotechnology-enabled products in the waste stream (Friedrichs, 2017).

Selected cross-cutting policy issues

This section addresses cross-cutting policies relevant to all the digital technologies described above. The issues examined are technology diffusion, connectivity and data, standards-setting processes, digital skills, access to and awareness of high-performance computing (HPC), IP systems and public support for R&D.

Technology diffusion

Most countries, regions and companies are primarily technology users, rather than technology producers. For them, technology diffusion and adoption should be priorities. Even in the most advanced economies diffusion can be slow or partial. For example, a survey of 4 500 German businesses in 2015 found that only 4% had implemented digitalised and networked production processes or planned to do so (ZEW-IKT, 2015). Similarly, a survey of SME manufacturers in the United States in 2017 found that 77% had no plans to deploy the IoT (Sikich, 2017).

Policies that broaden technology diffusion not only help to raise labour productivity growth, they might also lower inequality in rates of wage growth. Policy makers tend to acknowledge the critical importance of technology diffusion at a high level. However, they may overlook technology diffusion in the overall allocation of attention and resources (Shapira and Youtie, 2017).

Certain features of new digital technologies could make diffusion more difficult. Potential technology users must often evaluate large and growing amounts of information on rapidly changing technologies and the skills and other inputs they require. Even the initial step of collecting sensor data can be daunting. A typical industrial plant, for example, might contain machinery of many vintages from different manufacturers. In turn, these could have control and automation systems from different vendors, all operating with different communication standards. And whereas many prior digital production technologies enhanced pre-existing processes, blockchain could entail a more challenging redesign of business models.
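
The integration problem can be made concrete with a short sketch: adapters translate vendor-specific payloads into one common reading schema before any analytics run. The payload formats below are invented for illustration.

```python
# Sketch: normalising heterogeneous machine data into one schema.
# The vendor payload formats below are invented for illustration.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Reading:                      # common schema for all machines
    machine_id: str
    quantity: str
    value: float
    unit: str
    time: datetime


def from_vendor_a(payload: dict) -> Reading:
    # Vendor A: JSON payload with temperature in Fahrenheit
    return Reading(payload["id"], "temperature",
                   (payload["temp_f"] - 32) * 5 / 9, "C",
                   datetime.fromtimestamp(payload["ts"], tz=timezone.utc))


def from_vendor_b(line: str) -> Reading:
    # Vendor B: legacy CSV line "machine;celsius;iso-time"
    machine, celsius, iso = line.split(";")
    return Reading(machine, "temperature", float(celsius), "C",
                   datetime.fromisoformat(iso))


readings = [
    from_vendor_a({"id": "press-7", "temp_f": 151.3, "ts": 1554100000}),
    from_vendor_b("lathe-2;66.4;2019-04-01T07:08:00+00:00"),
]
for r in readings:
    print(r.machine_id, f"{r.value:.1f}{r.unit}", r.time.isoformat())
```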

Diffusion in SMEs involves particular difficulties

An important issue for diffusion-related institutions is that small firms tend to use key technologies less frequently than larger firms. In Europe, for example, 36% of surveyed companies with 50-249 employees use industrial robots, compared to 74% of companies with 1 000 or more employees (Fraunhofer, 2015). Only 16% of European SMEs share electronic supply-chain data, compared to 29% of large enterprises. This disparity in technology use directly reflects the availability of skills. For instance, only around 15% of European SMEs employ information and communication technology (ICT) specialists, compared to 75% of large firms (EC, 2017) (Box 5.3).

Box 5.3. Diffusing technology to SMEs: Some key considerations

Various steps can be taken to help diffuse technology to SMEs, including the following:

It is important to systematise key information for SMEs. A number of countries have developed tools to help SMEs transform technologically. Germany’s Industry 4.0 initiative has documented over 300 use cases of digital industrial technologies, and includes contact details for experts (www.plattform-i40.de). The United Kingdom’s 2017 Mayfield Commission led to the creation of an online self-assessment tool, which gives firms a benchmark against best practice, with guidelines on supporting actions (www.bethebusiness.com). Information provided through such initiatives also needs to encompass AI.

Particularly useful is information on the expected return on investment (ROI) in new technologies, as well as information on essential complementary organisational and process changes. One international survey asked 430 professionals working across industry sectors what could help them implement intelligent business strategy in their organisation. More than half (56%) wanted more information linking initiatives to ROI (AI Intelligent Automation Network, 2018). But careful thought and exposition are needed in presenting this information. Ezell (2018) notes that an ROI may be hard to calculate when the technology frontier is expanding. ROIs for some AI projects may be particularly hard to determine a priori, in part because data cleaning – which involves an element of art – is key to the outcomes of most AI investments. Investment decisions may also have to include strategic considerations, such as the need to remain viable in future supply chains.

Because the skills to absorb information are scarce in many SMEs, simply providing information on technology is not enough. Providing signposts to reliable sources of SME-specific expertise can help. For example, as part of its SMEs Go Digital Programme, Singapore’s TechDepot provides a list of pre-approved digital technology and service solutions suited to SMEs. Targeted skills development is also useful. For instance, Tooling U-SME – an American non-profit organisation owned by the Society of Manufacturing Engineers – provides online industrial manufacturing training and apprenticeships.

Test beds can also provide SMEs with facilities to test varieties and novel combinations of digital and other equipment. In this way, they can de-risk prospective investments.

Diffusion requires conditions to support the creation of growth-oriented start-ups and efficient allocation of economic resources

By ensuring conditions such as timely bankruptcy procedures and strong enforcement of contracts, governments can support the creation of businesses. Increasing new-firm entry and growth is important for diffusion. OECD research has highlighted the role of new and young firms in net job creation and radical innovation. Unconstrained by legacy systems, start-ups often introduce forms of organisation that new technologies require. Electric dynamos, for example, were first commercialised in the mid-1890s during the second industrial revolution. It took almost four decades, and a wave of start-up and investment activity in the 1920s, before suitably reorganised factories became widespread and productivity climbed (David, 1990).

Recent OECD analysis of micro-economic allocation processes highlights the importance for leading-edge production of conducive economic and regulatory framework conditions. These conditions include competitive product markets and flexible labour markets. Low costs for starting and closing a business are also important. Furthermore, openness to foreign direct investment and trade provides a vehicle for technology diffusion and an incentive for technology adoption. Such conditions all facilitate efficient resource allocation, which helps incumbent firms and start-ups adopt new technologies and grow. Andrews, Criscuolo and Gal (2016) estimate that more liberal markets, especially in services, could close up to half of the gap in multi-factor productivity between “frontier” and “laggard” firms, and accelerate the diffusion of new organisational models.

Several additional factors can aid diffusion. These include openness to internationally mobile skilled labour, and the strength of knowledge exchange within national economies. A key such exchange is the interaction between scientific institutions and businesses.

Institutions for diffusion can also be effective if well designed

In addition to enabling framework conditions, effective institutions for technology diffusion are also important. Innovation systems invariably contain multiple sources of technology diffusion, such as universities and professional societies. Shapira and Youtie (2017) provide a typology of diffusion institutions, ranging from applied technology centres (e.g. the Fraunhofer Institutes in Germany) to open technology mechanisms (e.g. the BioBricks Registry of Standard Biological Parts). Some of the institutions involved, such as technical extension services, tend to receive low priority in the standard set of innovation support measures. But they can be effective if well designed. For example, the United States’ Manufacturing Extension Partnership has recently been estimated to return USD 14.50 per dollar of federal funding.

New diffusion initiatives are emerging, some of which are still experimental. For instance, alongside established applied technology centres, such as the Fraunhofer Institutes, partnership-based approaches are increasing. An example is the US National Network for Manufacturing Innovation (NNMI). The NNMI uses private non-profit organisations as the hub of a network of company and university organisations to develop standards and prototypes in areas such as 3D printing and digital manufacturing and design.

Technology diffusion institutions need realistic goals and time horizons

Upgrading the ability of manufacturing communities to absorb new production technologies takes time. More effective diffusion is likely when technology diffusion institutions are empowered and resourced to take longer-term perspectives. Similarly, evaluation metrics should emphasise longer-run capability development rather than incremental outcomes and revenue generation.

Introducing new ways to diffuse technology takes experimentation. Yet many governments want quick and riskless results. Policy making needs better evaluation evidence and a readiness to experiment with organisational designs and practices. Concerns over governmental accountability combined with ongoing public austerity in many economies could mean that institutions will be reluctant to risk change, slowing the emergence of next-generation institutions for technology diffusion (Shapira and Youtie, 2017).

Policies on connectivity and data

Broadband networks are essential to Industry 4.0. They reduce the cost of accessing information and expand the means for sharing data and knowledge. In this way, they help develop new goods, services and business models and facilitate research. Policy priorities in this area include furthering access to high-speed broadband networks, including in rural and remote areas, and overhauling laws governing the speed and coverage of communication services (OECD, 2017b). Fibre-optic cable is of particular importance for Industry 4.0 (Box 5.4).

Policies to promote competition and private investment, as well as independent and evidence-based regulation, have helped to extend coverage. When market forces cannot fulfil all policy objectives, governments can respond with a series of tools. These could include competitive public tenders for infrastructure deployment, legal obligations on operators, and subsidies for national and municipal broadband networks.

Other measures include fostering open access arrangements and initiatives to reduce deployment costs. “Dig once” practices, for example, mandate installation of fibre conduits in publicly funded road projects (OECD, 2018b). Technological developments are also likely to expand opportunities for providing services in underserved areas. For example, broadband could be delivered through “White Spaces”, the gaps in radio spectrum between digital terrestrial television channels.

Box 5.4. The importance of fibre-optic cable for Industry 4.0

Fibre-optic connectivity is important for Industry 4.0, and has numerous advantages over copper-based cable Internet. Fibre-optic cable provides faster speeds, with a current upper range of 100 gigabits per second. It provides faster access to cloud-hosted information, along with greater reliability, signal strength and bandwidth. Its lower latency is important for many digitally controlled machines, for collaboration among employees and for accommodating new technologies such as haptics (which remotely replicate a sense of touch). It improves security, because the signal is lost if a fibre-optic cable is breached. It resists interference stemming, for example, from proximity to machinery. Moreover, 5G networks rely on fibre connectivity.

Enhancing trust in digital services is critical to data sharing and the uptake of broadband. Industry 4.0 also creates risks that could erode the perceived benefits of digital technologies. While challenging to measure, digital security incidents appear to be increasing in terms of sophistication, frequency and influence (OECD, 2017b). In one 2014 incident, hackers breached the office computers of a German steel mill and overrode the shut-off mechanisms on the steel mill’s blast furnace (Long, 2018).

Such incidents affect firms’ reputations and competitiveness. They also impose significant costs on the economy as a whole, restricting ICT adoption and business opportunities. New digital security solutions are emerging. In homomorphic encryption, for example, data are always encrypted, even when being computed on in the cloud. But the technological race between hackers and their targets is continuous. And SMEs, in particular, need to introduce or improve their digital security risk management practices.
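
The idea behind homomorphic encryption can be conveyed with a toy additively homomorphic scheme in the style of Paillier. The parameters below are tiny and insecure, chosen only to make the arithmetic visible; production systems rely on vetted libraries and far larger keys.

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# Tiny, insecure parameters for illustration only -- real systems use
# vetted libraries and keys of 2048 bits or more.
import math
import random

p, q = 293, 433                      # toy primes, far too small in practice
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
mu = pow(lam, -1, n)                 # modular inverse of lambda mod n


def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:       # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2


def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n


a, b = 42, 58
ca, cb = encrypt(a), encrypt(b)
# Multiplying ciphertexts modulo n^2 adds the underlying plaintexts:
print(decrypt((ca * cb) % n2))       # prints 100 without decrypting a or b
```

Here two values are summed while both remain encrypted, which is what allows a cloud provider to compute on data it cannot read.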

Restricting cross-border data flows should be avoided

Research is beginning to show that restricting data flows can lead to lost trade and investment opportunities, higher costs of cloud and other information technology services, and lower economic productivity and gross domestic product growth (Cory, 2017). Manufacturing creates more data than any other sector of the economy. Cross-border data flows are expected to grow faster than growth in world trade. Restricting such flows, or making them more expensive, for instance by obliging companies to process customer data locally, can raise firms’ costs and increase the complexity of doing business, especially for SMEs.

A prospective policy issue: Legal data portability rights for firms?

In April 2016, the European Union’s General Data Protection Regulation established the right to portability for personal data. A number of companies, such as Siemens and GE, are vying for leadership in online platforms for the IoT. As digitalisation proceeds, such platforms will become increasingly important repositories of business data. If companies had portability rights for non-personal data, competition among platforms could grow, and switching costs for firms could fall.

A prospective policy issue: Frameworks to protect non-personal sensor data

The protection of machine-generated data is likely to become a growing issue as Industry 4.0 advances. This is because sensors are becoming ubiquitous, more capable, increasingly linked to embedded computation, and used to stream large volumes of often critical machine data. Single machines may contain multiple component parts made by different manufacturers, each equipped with sensors that capture, compute and transmit data. These developments raise legal and regulatory questions. For instance, are special provisions needed to protect data in value chains from third parties? Which legal entities should have ownership rights of machine-generated data under what conditions? And, what rights to ownership of valuable data should exist in cases of business insolvency?

Increasing trust in cloud computing

Cloud computing is another technology where policy might be needed. Cloud use can bring efficiency gains for firms. And Industry 4.0 will require increased data sharing across sites and company boundaries.6 Consequently, machine data, data analytics and even monitoring and control systems will increasingly be situated in the cloud. The cloud will also enable independent AI projects to start small, and scale up and down as required. Indeed, Fei-Fei Li, then Chief Scientist for AI at Google Cloud, argued that cloud computing will democratise AI.7

Governments can act to increase trust in the cloud and stimulate cloud adoption. The use of cloud computing in manufacturing varies greatly across OECD countries. In Finland, 69% of manufacturers use the cloud, for example, compared to around 15% in Germany. Firms in countries where cloud use is low often cite fears over data security and uncertainty about placing data in extra-territorial servers. However, cloud use can bring increased data security, especially for SMEs. For example, Amazon Web Services, a market leader, reportedly provides more than 1 800 security controls. This affords a level of data security beyond what most firms could themselves provide. Government could take steps, for example, to help SMEs better understand the technical and legal implications of cloud service contracts. This could include providing information on the scope and content of certification schemes relevant for cloud computing customers.

Developing digital skills

Digital technologies create new skills needs. Occupational titles such as “industrial data scientist” and “bioinformatics scientist” are recent, reflecting technology-driven changes in skills demand. Individuals need basic digital skills to adopt new technologies, while a lack of generic analytical and advanced skills is hindering technology adoption. For instance, surveys show that a shortage of skilled data specialists is a main impediment to the use of data analytics in business (OECD, 2017b).

Concern is widespread regarding possible labour market disruptions from automation driven by digital technology. Data from the OECD Programme for the International Assessment of Adult Competencies highlight a lack of ICT skills among low-skilled adults in semi-skilled occupations – a group at high risk of losing jobs to automation.

Forecasting skills needs is hazardous. Just a few years ago, few would have foreseen that smartphones would disrupt, and in some cases end, a wide variety of products and industries, from notebook computers and personal organisers to niche industries making musical metronomes and hand-held magnifying glasses (functions now available through mobile applications).

Because foresight is imperfect, governments must establish systems that draw on the collective information and understanding available regarding emerging skills needs. In that regard, businesses, trade unions, educational institutions and learners can all contribute. Students, parents and employers must have access to information with which to judge how well educational institutions perform and to assess the career paths of graduates of different programmes. In turn, educational and training systems must be organised such that resources flow efficiently to the courses and institutions that best cater to the demand for skills. Institutions like Sweden’s job security councils, or the SkillsFuture Singapore agency, play such roles (Atkinson, 2018). And business and government must work together to design training schemes, with public authorities ensuring the reliability of training certification.

How learning is delivered matters greatly

Policies for improving skills for Industry 4.0 typically include fostering ICT literacy in school curricula. This literacy ranges from use of basic productivity software such as word processing programmes and spreadsheets, to coding and even digital security courses. Throughout formal education, more multidisciplinary programmes and greater curricular flexibility are often required. For instance, students should be able to select a component on mechanical engineering and combine this with data science, bio-based manufacturing, or other disciplines.

In a comprehensive review of science, technology, engineering and mathematics (STEM) education, Atkinson and Mayo (2010) identify a series of priorities. These emphasise helping students follow their interests and passions; respecting the desire of younger students to be active learners; and giving greater opportunity to explore a wide variety of STEM subjects in depth. Equally important are increasing the use of online, video-game and project-based learning, and creating options to take tertiary-level STEM courses at secondary level. Japan’s Kosen schools have proven the efficacy of many of these ideas since the early 1960s (Schleicher, 2018).

Many governments are implementing forward-looking programmes to match ICT training priorities with expected skills needs. In Belgium, for example, the government carries out prospective studies on the expected impact of the digital transformation on occupations and skills in a wide variety of fields. The results are then used to select training courses to be reinforced for emerging and future jobs (OECD, 2017b). Estonia and Costa Rica have also changed school curricula based on where they estimate jobs will be in the future.

Lifelong learning must be an integral part of work

Advancing automation and the emergence of new technologies also mean that lifelong learning must be an integral part of work. Each year, inflows to the labour force from initial education represent only a small percentage of those already in work, who in turn will bear much of the cost of adjustment to new technologies. Both considerations underscore the importance of widespread lifelong learning. Disruptive changes in production technology also highlight the importance of strong and widespread generic skills, such as literacy, numeracy and problem solving. These foundation skills are the basis for the subsequent acquisition of technical skills, whatever those turn out to be. In collaboration with social partners, governments can help spur the development of new training programmes, such as conversion courses in AI for those already in work.

Digital technology will itself affect how skills are developed

Digital technology is creating opportunities to develop skills in novel ways. For example, in 2014, Professor Ashok Goel and graduate students at the Georgia Institute of Technology created an AI teaching assistant – Jill Watson – to respond to online student questions. For months, students were unaware that the responses were non-human (Korn, 2016). iTalk2Learn is a European Union project to develop an open-source intelligent mathematics tutoring platform for primary schools. Closer to the workplace, researchers at Stanford University are developing systems to train crowdworkers using machine-curated material generated by other crowdworkers. And Upskill (www.upskill.io) provides wearable technology to connect workers to the information, equipment, processes and people they need to work more efficiently. Among other potential benefits, in a world where lifelong learning will be essential, AI could help learners understand the idiosyncrasies of how they learn best.

Participation in standards-setting processes

Advanced production operates in a vast matrix of technical standards. The semiconductor industry, for example, uses over 1 000 standards (Tassey, 2014). Standards development relevant to Industry 4.0 is underway in many fields. These range from machine-to-machine communication and data transmission to 5G (a global standard for which is expected by 2019), robotics and digital identifiers for objects. Over 100 standards initiatives exist today for the IoT and Industry 4.0 (Ezell, 2018).

Countries and firms that play primary roles in setting international standards can enjoy advantages if new standards align with their own national standards and/or features of their productive base. The public sector’s role should be to encourage industry, including firms of different sizes, to participate at early stages in international (and in some cases national) standards setting. Dedicated support could be given to include under-represented groups of firms in processes to develop standards.

The development of AI standards – particularly technical standards – is at a very early stage. Most national AI strategies refer to the development of AI ethics standards, but this oversight dimension of standards, concerning ethics and corporate governance, also requires technical underpinnings: a term like “algorithmic transparency”, for example, does not yet have a technical definition. The timing of standards setting – neither too soon nor too late – is a perennial issue when assessing how standards affect innovation. In the past, standards were often negotiated by just a few main players; today, large numbers of developers working on open-source projects will also arrive at standards solutions. In some areas of AI, who defines a standard first may therefore matter less than it did with previous technologies.

Improving access to high-performance computing

HPC is increasingly important for firms in industries ranging from construction and pharmaceuticals to the automotive sector and aerospace. Airbus, for instance, owns three of the world’s 500 fastest supercomputers. Two-thirds of US-based companies that use HPC say that “increasing performance of computational models is a matter of competitive survival” (US Council on Competitiveness, 2014). How HPC is used in manufacturing is also expanding, going beyond applications such as design and simulation to include real-time control of complex production processes. Financial rates of return to HPC use are high: by one estimate, each EUR 1 invested generates, on average, EUR 69 in profits (EC, 2016). A 2016 review observed that

“(m)aking HPC accessible to all manufacturers in a country can be a tremendous differentiator, and no nation has cracked the puzzle yet” (Ezell and Atkinson, 2016).

Box 5.5. Getting supercomputing to industry: Possible policy actions
  • Raise awareness of industrial use cases, with quantification of their costs and benefits.

  • Develop a one-stop source of HPC services and advice for SMEs and other industrial users.

  • Provide low-cost, or free, limited experimental use of HPC for SMEs, with a view to demonstrating the technical and commercial implications of the technology.

  • Establish online software libraries or clearing houses to help disseminate innovative HPC software to a wider industrial base.

  • Give incentives for HPC centres with long industrial experience, such as the Hartree Centre in the United Kingdom, or TERATEC in France, to advise centres with less experience.

  • Modify eligibility criteria for HPC projects, which typically focus on peer review of scientific excellence, to include criteria of commercial impact.

  • Engage academia and industry in the co-design of new hardware and software, as has been done in European projects such as Mont Blanc (Mont Blanc, n.d.).

  • Include HPC in university science and engineering curricula.

  • Explore opportunities for co-ordinating the purchase of commercially provided computing capacity.

As Industry 4.0 becomes more widespread, demand for HPC will rise. But like other digital technologies, the use of HPC in manufacturing falls short of potential. One estimate is that 8% of US firms with fewer than 100 employees use HPC. However, half of manufacturing SMEs could use HPC (for prototyping, testing and design) (Ezell and Atkinson, 2016). Public HPC initiatives often focus on the computation needs of “big science”. Greater outreach to industry, especially SMEs, is frequently needed. Ways forward – a number of which are described in EC (2016) – are set out in Box 5.5.

Even developing countries may be well advised to operate a backbone network of high-performance computers. Initially, a low-income economy may have few sophisticated industrial uses for HPC. However, high-performance computers can find initial applications in research and science, and later be applied in industry. Nor can cloud-based supercomputing meet all supercomputing needs: it is generally viable only when applications are needed occasionally, whereas for regular or continuous industrial or scientific applications a cloud-based service may be too expensive.
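To make this trade-off concrete, the sketch below works through the break-even arithmetic. Every figure in it is a hypothetical placeholder chosen for illustration, not an estimate from this chapter:

```python
# Illustrative break-even arithmetic: cloud vs. in-house HPC capacity.
# All figures below are hypothetical placeholders, not estimates from this chapter.
OWNED_ANNUAL_COST_EUR = 1_000_000  # assumed yearly cost of owning: amortisation, power, staff
CLOUD_EUR_PER_NODE_HOUR = 3.0      # assumed commercial cloud rate per node-hour
NODES = 100                        # assumed size of the workload

hours_per_year = 24 * 365
break_even_hours = OWNED_ANNUAL_COST_EUR / (CLOUD_EUR_PER_NODE_HOUR * NODES)

print(f"Cloud is cheaper below ~{break_even_hours:,.0f} hours of full use per year,")
print(f"i.e. below ~{break_even_hours / hours_per_year:.0%} utilisation; above that, owning wins.")
```

Under these assumed prices, occasional use (below roughly 38% utilisation) favours the cloud, while continuous workloads favour owned capacity, which is the intuition stated above.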

Intellectual property systems

Digital technologies are raising new challenges for IP systems. 3D printing, for example, might create complications in connection with patent eligibility. For instance, if 3D printed human tissue improves upon natural human tissue, it may be eligible for patenting, even though naturally occurring human tissue is not. More fundamentally, new patenting frameworks may be needed in a world where machines have the ability to invent. AI systems have already created patentable inventions (OECD, 2017a).

AI raises many complex challenges for IP systems, such as identifying patent infringements. Enforcement will be complicated by AI systems that automatically – and unpredictably – learn from many publicly available sources of information (Yanisky-Ravid and Liu, 2017). An overarching policy challenge is to strike a balance: IP protection is needed to incentivise certain types of innovation, but it should not hamper the diffusion of technologies such as AI and 3D printing.

Public support for R&D

The complexity of many emerging production technologies exceeds the research capacities of even the largest firms. In such cases, public-private research partnerships may be needed. Microelectronics, new materials and nanotechnology, among others, have arisen because of advances in scientific knowledge and instrumentation. Publicly financed basic research has often been critical. For decades, for example, public funding supported progress in AI, including during unproductive periods of research, to the point where AI today attracts huge private investment (National Research Council, 1999). Recent declines in public support for research in some major economies are therefore a concern.

Many possible targets exist for government R&D and commercialisation efforts. As discussed below, these range from quantum computing (Box 5.6) to advancing AI.

An overarching research challenge relates to computation itself

Processing speeds, memory capacities, sensor density and accuracy of many digital devices are linked to Moore’s Law. This asserts that the number of transistors on a microchip doubles about every two years (Investopedia, n.d.). However, atomic-level phenomena and rising costs constrain further shrinkage of transistors on integrated circuits.

Many experts believe a limit to miniaturisation will soon be reached. At the same time, applications of digital technologies across the economy rely on increasing computing power. For example, the computing power needed for the largest AI experiments is doubling every three-and-a-half months (OpenAI, 16 May 2018). By one estimate, this trend can be sustained for at most three-and-a-half to ten years, even assuming public R&D commitments on a scale similar to the Apollo or Manhattan projects (Carey, 10 July 2018).
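To illustrate the pace this implies, the following minimal calculation converts the doubling period into growth over time (assuming an exact 3.5-month doubling, per the OpenAI estimate cited above):

```python
# Back-of-the-envelope arithmetic for the compute trend cited above
# (assumes an exact 3.5-month doubling time, per the OpenAI estimate).
DOUBLING_MONTHS = 3.5

def growth_factor(months: float) -> float:
    """Growth in compute demand over a period, given the doubling time."""
    return 2 ** (months / DOUBLING_MONTHS)

print(f"Implied growth over 1 year:  x{growth_factor(12):,.0f}")   # ~x11
print(f"Implied growth over 5 years: x{growth_factor(60):,.0f}")   # ~x145,000
```

An eleven-fold increase in compute demand every year helps explain why, as Carey (10 July 2018) argues, the trend cannot be sustained for long even with very large public R&D commitments.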

Much, therefore, depends on achieving superior computing performance (including in terms of energy requirements). Many hope that significant advances in computing will stem from research breakthroughs in optical computing (using photons instead of electrons), biological computing (using DNA to store data and calculate) and/or quantum computing (Box 5.6).

Box 5.6. A new computing regime: The race for quantum computing

Quantum computers function by exploiting the laws of subatomic physics. A conventional transistor flips between on and off, representing 1s and 0s. However, a quantum computer uses quantum bits (qubits), which can be in a state of 0, 1 or any probabilistic combination of both 0 and 1 (for instance, 0 with 20% and 1 with 80% probability). At the same time, qubits interact with other qubits through so-called quantum entanglement (which Einstein termed “spooky action at a distance”).
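In standard notation, this is a worked restatement of the example above: a qubit state is a superposition

\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1
\]

where the probabilities are the squared magnitudes of the amplitudes. For the 20%/80% example, \(\alpha = \sqrt{0.2} \approx 0.45\) and \(\beta = \sqrt{0.8} \approx 0.89\), so measurement yields 0 with probability \(|\alpha|^2 = 0.2\) and 1 with probability \(|\beta|^2 = 0.8\).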

Fully developed quantum computers, featuring many qubits, could revolutionise certain types of computing. Many of the problems best addressed by quantum computers, such as complex optimisation and vast simulation, have major economic implications. For example, at the 2018 CogX Conference, Dr Julie Love, Microsoft’s director of quantum computing, described how simulating all the chemical properties of the main molecule involved in fixing nitrogen – nitrogenase – would take today’s supercomputers billions of years. Yet this simulation could be performed in hours with quantum technology. The results of such a simulation would directly inform the challenge of raising global agricultural productivity and limiting today’s reliance on the highly energy-intensive production of nitrogen-based fertiliser. Rigetti Computing has also demonstrated that quantum computers can train ML algorithms to a higher accuracy, using fewer data than with conventional computing (Zeng, 22 February 2018).

Until recently, quantum computing had mostly been a theoretical possibility. However, Google, IBM and others are beginning to trial practical applications with a small number of qubits (Gambetta, Chow and Steffen, 2017). For example, the IBM Quantum Experience (IBM, n.d.) offers free online quantum computing. Still, no quantum device currently approaches the performance of conventional computers.

By one estimate, fewer than 100 people globally possess the skills to write algorithms specifically for quantum computers. Azhar (2018) calculates that companies involved in any aspect of quantum computing employ fewer than 2 000 people globally. Skill constraints may be lessened by Google’s release of Cirq. This software toolkit allows developers without specialised knowledge of quantum physics to create algorithms for quantum machines (Giles, 2018a). Zapata Computing, a start-up, aims to offer a range of ready-made software that firms can use on quantum computers (Giles, 2018b).
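To illustrate the level of abstraction such toolkits offer, the sketch below uses Cirq’s public simulator API (a minimal example, assuming Cirq is installed, e.g. via pip; it is not drawn from the sources cited in this box). It prepares and samples an entangled two-qubit state without requiring any quantum-physics derivations:

```python
# Minimal Cirq sketch: prepare and sample a two-qubit entangled (Bell) state.
import cirq

q0, q1 = cirq.LineQubit.range(2)

circuit = cirq.Circuit(
    cirq.H(q0),                    # put the first qubit into an equal superposition
    cirq.CNOT(q0, q1),             # entangle the second qubit with the first
    cirq.measure(q0, q1, key="m"), # measure both qubits
)

result = cirq.Simulator().run(circuit, repetitions=1000)
# Entanglement shows up as correlated outcomes: (almost) only 00 and 11 appear,
# each roughly half the time.
print(result.histogram(key="m"))
```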

The further development of robust, scalable quantum computing involves major research and engineering challenges. Global annual public investment in quantum computing could range from EUR 1.5 billion to EUR 1.9 billion. While relatively small, venture capital funding is growing, led by D-Wave (USD 175 million), Rigetti (USD 70 million), Cambridge Quantum Computing (USD 50 million) and IonQ (USD 20 million) (Azhar, 2018). The People’s Republic of China is scheduled to open a National Laboratory for Quantum Information Sciences in 2020, with a projected investment of USD 10 billion. Chinese scientists are making major research advances. In July 2018, for instance, they broke a record for the number of qubits linked to one another through quantum entanglement (Letzer, 2018).

A need for more – and possibly different – research on AI

Public research funding has been key to progress in AI since the origin of the field. The National Research Council (1999) shows that while the concept of AI originated in the private sector – in close collaboration with academia – its growth largely results from many decades of public investments. Global centres of AI research excellence (e.g. at Stanford, Carnegie Mellon and the Massachusetts Institute of Technology) arose because of public support, often linked to US Department of Defense funding. However, recent successes in AI have propelled growth in private sector R&D for AI. For example, earnings reports indicate that Google, Amazon, Apple, Facebook and Microsoft spent a combined USD 60 billion on R&D in 2017, including an important share on AI. By comparison, total US federal government R&D for non-defence industrial production and technology amounted to around USD 760 million in 2017 (OECD, 2019).

Many in business, government and the wider public believe AI stands at an inflection point, ready to achieve major improvements in capability. However, some experts emphasise the scale and difficulty of the outstanding research challenges. Some AI research breakthroughs could be particularly important for society, the economy and public policy, yet corporate and public research goals might not fully align. Jordan (2018) notes that much AI research is not directly relevant to the major challenges of building safe intelligent infrastructures, such as medical or transport systems. He observes that, unlike human-imitative AI, such critical systems must have the ability to deal with:

“…distributed repositories of knowledge that are rapidly changing and are likely to be globally incoherent. Such systems must cope with cloud-edge interactions in making timely, distributed decisions and they must deal with long-tail phenomena whereby there is (sic) lots of data on some individuals and little data on most individuals. They must address the difficulties of sharing data across administrative and competitive boundaries.” (Jordan, 2018)

Other outstanding research challenges relevant to public policy relate to making AI explainable; making AI systems robust (image-recognition systems can easily be misled, for instance); determining how much prior knowledge will be needed for AI to perform difficult tasks (Marcus, 2018); bringing abstract and higher-order reasoning, and “common sense”, into AI systems; and inferring and representing causality. Jordan (2018) also identifies the need to develop computationally tractable representations of uncertainty. No reliable basis exists for judging when – or whether – research breakthroughs will occur. Indeed, past predictions of timelines in the development of AI have been extremely inaccurate.

Research and industry can often be linked more effectively

Government-funded research institutions and programmes should be free to combine the right partners and facilities to address challenges of scale-up and interdisciplinarity. Investment in applied research centres and pilot production facilities is often essential to take innovations from the laboratory into production. Demonstration facilities such as test beds, pilot lines and factory demonstrators are also needed. These should provide dedicated research environments with the right mix of enabling technologies and the technicians to operate them. Some manufacturing R&D challenges may need expertise from manufacturing engineers and industrial researchers, as well as designers, equipment suppliers, shop-floor technicians and users (O’Sullivan and López-Gómez, 2017).

More effective research institutions and programmes in advanced production may also need new evaluation indicators. These would go beyond traditional metrics such as numbers of publications and patents. Additional indicators might also assess such criteria as successful pilot line and test-bed demonstration, training of technicians and engineers, consortia membership, the incorporation of SMEs in supply chains and the role of research in attracting FDI.

Conclusion

New digital technologies are key to the next production revolution. Realising their full potential requires effective policy in wide-ranging fields, including skills, technology diffusion, data, digital infrastructure, research partnerships, standards and IPRs. Typically, these diverse policy fields are not closely connected in government structures and processes. Governments must also adopt long time horizons, for instance in pursuing research agendas with possibly distant payoffs. Public institutions must likewise possess specific understanding of many fast-evolving digital technologies. One leading authority argues that converging developments in several technologies are about to yield a “Cambrian explosion” in robot diversity and use (Pratt, 2015). Adopting Industry 4.0 poses challenges for firms, particularly small ones. It also challenges governments’ ability to act with foresight and technical knowledge across multiple policy domains.

References

AI Intelligent Automation Network (2018), “AI 2020: The global state of intelligent enterprise”, webpage, https://www.aiia.net/events-intelligentautomation-chicago/downloads/ai-2020-the-global-state-of-intelligent-enterprise (accessed 9 July 2019).

Andrews, D., C. Criscuolo and P.N. Gal (2016), “The global productivity slowdown, technology divergence and public policy”, Hutchins Center Working Paper # 24, September. https://www.brookings.edu/wp-content/uploads/2016/09/wp24_andrews-et-al_final.pdf.

Atkinson, R.D. and S. Ezell (2019), “The manufacturing evolution: How AI will transform manufacturing and the workforce of the future”, Information Technology and Innovation Foundation, Washington, DC, https://itif.org/publications/2019/08/06/manufacturing-evolution-how-ai-will-transform-manufacturing-and-workforce.

Atkinson, R.D. (2018), “How to reform worker-training and adjustment policies for an era of technological change”, Information Technology and Innovation Foundation, Washington, DC, http://www2.itif.org/2018-innovation-employment-workforce-policies.pdf.

Atkinson, R.D. and M. Mayo (2010), “Refueling the U.S. innovation economy: Fresh approaches to science, technology, engineering and mathematics (STEM) education”, Information Technology and Innovation Foundation, Washington, DC, https://www.itif.org/files/2010-refueling-innovation-economy.pdf.

Azhar, A. (2018), “Exponential view: Dept. of quantum computing”, The Exponential View, 15 July, www.exponentialview.co/evarchive/#174.

Bergeret, B. (2019), “AI and Europe’s medium-sized firms: How to overcome an Achilles heel”, OECD Observer, http://oecdobserver.org/news/fullstory.php/aid/6259/AI_and_Europe_92s_medium-sized_firms:_How_to_overcome_an_Achilles_heel.html.

Bonnin-Roca, J. et al. (2016), “Policy needed for additive manufacturing”, Nature Materials, Vol. 15, Nature Research, Springer, pp. 815-818, https://doi.org/10.1038/nmat4658.

Bughin, J. et al. (2017), “Artificial intelligence: The next digital frontier?” Discussion Paper, June, McKinsey Global Institute, https://www.mckinsey.com/~/media/McKinsey/Industries/Advanced%20Electronics/Our%20Insights/How%20artificial%20intelligence%20can%20deliver%20real%20value%20to%20companies/MGI-Artificial-Intelligence-Discussion-paper.ashx.

Carey, R. (10 July 2018), “Interpreting AI compute trends”, AI blog, https://aiimpacts.org/interpreting-ai-compute-trends/.

Champain, V. (2018), “Comment l’intelligence artificielle augmentée va changer l’industrie”, La Tribune, Paris, 27 March, www.latribune.fr/opinions/tribunes/comment-l-intelligence-artificielle-augmentee-va-changer-l-industrie-772791.html.

Chen, S. (2018), “Scientists are using AI to painstakingly assemble single atoms”, Wired, 23 May, Condé Nast, San Francisco, www.wired.com/story/scientists-are-using-ai-to-painstakingly-assemble-single-atoms/.

Chen, S. (2017), “The AI company that helps Boeing cook new metals for jets”, Wired, 12 June, Condé Nast, San Francisco, www.wired.com/story/the-ai-company-that-helps-boeing-cook-new-metals-for-jets/.

Chui, M. et al. (2018), “Notes from the AI frontier: Insights from hundreds of use cases”, Discussion Paper, April, McKinsey & Company, New York, April, www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-applications-and-value-of-deep-learning.

Cockburn, I., R. Henderson and S. Stern (2018), “The impact of artificial intelligence on innovation”, in A. Agrawal, J. Gans and A. Goldfarb (eds.), The Economics of Artificial Intelligence: An Agenda, University of Chicago Press.

Cory, N. (2017), “Cross-border data flows: Where are the barriers and what do they cost?”, Information Technology and Innovation Foundation, Washington, DC, https://itif.org/publications/2017/05/01/cross-border-data-flows-where-are-barriers-and-what-do-they-cost.

David, P.A. (1990), “The dynamo and the computer: An historical perspective on the modern productivity paradox”, American Economic Review, Vol. 80/2, pp. 355-361.

Digital Catapult (2018), “Machines for machine intelligence: Providing the tools and expertise to turn potential into reality”, Machine Intelligence Garage, Research Report 2018, London, www.migarage.ai.

Dorfman, P. (2018), “3 Advances changing the future of artificial intelligence in manufacturing”, Autodesk Newsletter, 3 January, www.autodesk.com/redshift/future-of-artificial-intelligence/.

EC (2017), “Europe’s digital progress report 2017”, https://ec.europa.eu/digital-single-market/en/news/europes-digital-progress-report-2017.

EC (2016), “Implementation of the Action Plan for the European High-Performance Computing Strategy”, Commission Staff Working Document, SWD(2016)106, European Commission, Brussels, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52016SC0106.

Ezell, S. (2018), “Why Manufacturing Digitalization Matters and How Countries Are Supporting It”, Information Technology and Innovation Foundation, Washington, DC, www2.itif.org/2018-manufacturing-digitalization.pdf (accessed 21 January 2019).

Ezell, S.J. and R.D. Atkinson (2016), “The vital importance of high-performance computing to US competitiveness”, Information Technology and Innovation Foundation, Washington, DC, www2.itif.org/2016-high-performance-computing.pdf.

Faggella, D. (2018), “Industrial AI applications – How time series and sensor data improve processes”, 31 May, Techemergence, San Francisco, www.techemergence.com/industrial-ai-applications-time-series-sensor-data-improve-processes/.

FCA (2017), “Regulatory sandbox lessons learned report”, Financial Conduct Authority, London, www.fca.org.uk/publication/research-and-data/regulatory-sandbox-lessons-learned-report.pdf.

Figueiredo do Nascimento, S., A. Roque Mendes Polvora and J. Sousa Lourenco (2018), “#Blockchain4EU: Blockchain for industrial transformations”, Publications Office of the European Union, Luxembourg, https://ec.europa.eu/jrc/en/publication/eur-scientific-and-technical-research-reports/blockchain4eu-blockchain-industrial-transformations.

Fraunhofer (2015), “Analysis of the impact of robotic systems on employment in the European Union”, https://ec.europa.eu/digital-single-market/news/fresh-look-use-robots-shows-positive-effect-automation.

Friedrichs, S. (2017), “Tapping nanotechnology’s potential to shape the next production revolution”, in The Next Production Revolution: Implications for Governments and Business, OECD Publishing, Paris, https://doi.org/10.1787/9789264271036-8-en.

Gambetta, J.M., J.M. Chow and M. Steffen (2017), “Building logical qubits in a superconducting quantum computing system”, npj Quantum Information, Vol. 3/2, Nature Publishing Group and University of New South Wales, London and Sydney, https://doi.org/10.1038/s41534-016-0004-0.

Giles, M. (2018a), “Google wants to make programming quantum computers easier”, MIT Technology Review, 18 July, Massachusetts Institute of Technology, Cambridge, www.technologyreview.com/s/611673/google-wants-to-make-programming-quantum-computers-easier/.

Giles, M. (2018b), “The world’s first quantum software superstore – or so it hopes – is here”, MIT Technology Review, 17 May, Massachusetts Institute of Technology, Cambridge, www.technologyreview.com/s/611139/the-worlds-first-quantum-software-superstore-or-so-it-hopes-is-here/.

Goodfellow, I., Y. Bengio and A. Courville (2016), Deep Learning, MIT Press, Cambridge, Massachusetts.

Harbert, T. (2013), “Supercharging patent lawyers with AI: How Silicon Valley’s Lex Machina is blending AI and data analytics to radically alter patent litigation”, IEEE Spectrum, 30 October, Institute of Electrical and Electronics Engineers, New York, https://spectrum.ieee.org/geek-life/profiles/supercharging-patent-lawyers-with-ai.

Hardjano, T., A. Lipton and A.S. Pentland (2018), “Towards a design philosophy for interoperable blockchain systems”, 7 July, Massachusetts Institute of Technology, Cambridge, https://hardjono.mit.edu/sites/default/files/documents/hardjono-lipton-pentland-p2pfisy-2018.pdf.

House of Lords (2018), “AI in the UK: Ready, willing and able?”, Select Committee on Artificial Intelligence – Report of Session 2017-19, HL Paper No. 100, Authority of the House of Lords, London, https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf.

IBM (n.d.), “Quantum”, webpage, www.research.ibm.com/quantum (accessed 9 July 2019).

Investopedia (n.d.), “Moore’s Law”, webpage, www.investopedia.com/terms/m/mooreslaw.asp (accessed 10 May 2019).

Jordan, M. (2018), “Artificial intelligence – The revolution hasn’t happened yet”, Medium, 19 April, San Francisco, https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7.

Korn, M. (2016), “Imagine discovering that your teaching assistant really is a robot”, The Wall Street Journal, 6 May, https://www.wsj.com/articles/if-your-teacher-sounds-like-a-robot-you-might-be-on-to-something-1462546621.

Küpper, D. et al. (2018), “AI in the factory of the future: The ghost in the machine”, 18 April, Boston Consulting Group, www.bcg.com/publications/2018/artificial-intelligence-factory-future.aspx.

Letzer, R. (2018), “Chinese researchers achieve stunning quantum entanglement record”, Scientific American, 17 July, Springer Nature, www.scientificamerican.com/article/chinese-researchers-achieve-stunning-quantum-entanglement-record/.

LinkedIn Economic Graph (2019), “AI talent in the European labour market”, LinkedIn Economic Graph, November, https://economicgraph.linkedin.com/content/dam/me/economicgraph/en-us/reference-cards/research/2019/AI-Talent-in-the-European-Labour-Market.pdf.

Long, L. (2018), “The most pressing challenge modern manufacturers face? Cybersecurity”, Engineering.com, 30 April, https://www.engineering.com/AdvancedManufacturing/ArticleID/16856/The-Most-Pressing-Challenge-Modern-Manufacturers-Face-Cybersecurity.aspx.

Marcus, G. (2018), “Innateness, AlphaZero and artificial intelligence”, arxiv.org, Cornell University, Ithaca, United States, https://arxiv.org/ftp/arxiv/papers/1801/1801.05667.pdf.

McDowell, D.L. (2017), “Revolutionising product design and performance with materials innovation”, in The Next Production Revolution: Implications for Governments and Business, OECD Publishing, Paris, https://doi.org/10.1787/9789264271036-10-en.

Mearian, L. (2017), “Blockchain integration turns ERP into a collaboration platform”, Computerworld, 9 June, IDG, Framingham, United States, www.computerworld.com/article/3199977/enterprise-applications/blockchain-integration-turns-erp-into-a-collaboration-platform.html.

Mont Blanc (n.d.), “Mont Blanc project”, webpage, http://montblanc-project.eu/ (accessed 9 July 2019).

National Research Council (1999), Funding a Revolution: Government Support for Computing Research, The National Academies Press, Washington, DC, https://doi.org/10.17226/6323.

Ocean Protocol (n.d.), Ocean Protocol website, www.oceanprotocol.com (accessed 9 July 2019).

OECD (2019), OECD Main Science and Technology Indicators (database), http://oe.cd/msti (accessed 10 July 2019).

OECD (2018a), Going Digital in a Multilateral World, Interim Report of the OECD Going Digital Project, Meeting of the OECD Council at Ministerial Level, Paris, 30-31 May 2018, OECD, Paris, www.oecd.org/going-digital/C-MIN-2018-6-EN.pdf.

OECD (2018b), “Bridging the rural digital divide”, OECD Digital Economy Papers, No. 265, OECD Publishing, Paris, https://doi.org/10.1787/852bd3b9-en.

OECD (2017a), The Next Production Revolution: Implications for Governments and Business, OECD Publishing, Paris, https://doi.org/10.1787/9789264271036-en.

OECD (2017b), OECD Digital Economy Outlook 2017, OECD Publishing, Paris, https://doi.org/10.1787/9789264276284-en.

OpenAI (16 May 2018), “AI and compute”, OpenAI blog, San Francisco, https://blog.openai.com/ai-and-compute/.

O’Sullivan, E. and C. López-Gómez (2017), “An international review of emerging manufacturing R&D priorities and policies for the next production revolution”, in The Next Production Revolution: Implications for Governments and Business, OECD Publishing, Paris, https://doi.org/10.1787/9789264271036-14-en.

Pratt, G.A. (2015), “Is a Cambrian explosion coming for robotics?”, Journal of Economic Perspectives, Volume 29/3, American Economic Association, Pittsburgh, pp. 51-60, https://doi.org/10.1257/jep.29.3.51.

Ransbotham, S. et al. (2017), “Reshaping business with artificial intelligence: Closing the gap between ambition and action”, MIT Sloan Management Review, Massachusetts Institute of Technology, Cambridge, https://sloanreview.mit.edu/projects/reshaping-business-with-artificial-intelligence/.

Robinson, L. and K. McMahon (2016), “TMS launches materials data infrastructure study,” JOM, Vol. 68/8, Springer, New York, pp. 2014-2016, https://doi.org/10.1007/s11837-016-2011-1.

Schleicher, A. (2018), “How Japan’s schools are creating a new generation of innovators”, 13 March, http://oecdeducationtoday.blogspot.com/2018/03/japan-kosen-school-innovation-technology.html.

Shapira, P. and J. Youtie (2017), “The next production revolution and institutions for technology diffusion”, in The Next Production Revolution: Implications for Governments and Business, OECD Publishing, Paris, https://doi.org/10.1787/9789264271036-11-en.

Sikich (2017), “2017 manufacturing report”, www.sikich.com/wp-content/uploads/2017/09/SKCH-Manufacturing-Report-2017-08-17.pdf.

Simonite, T. (2016), “Algorithms that learn with less data could expand AI’s power”, MIT Technology Review, 24 May, Massachusetts Institute of Technology, Cambridge, www.technologyreview.com/s/601551/algorithms-that-learn-with-less-data-could-expand-ais-power/.

Sverdlik, Y. (2018), “Google is switching to a self-driving data center management system”, 2 August, Data Center Knowledge, www.datacenterknowledge.com/google-alphabet/google-switching-self-driving-data-center-management-system.

Tassey, G. (2014), “Competing in advanced manufacturing: The need for improved growth models and policies”, Journal of Economic Perspectives, Vol. 28/1, pp. 27-48.

Teresko, J. (2008), “Designing the next materials revolution”, IndustryWeek, 8 October, Informa, Cleveland, www.industryweek.com/none/designing-next-materials-revolution.

The Economist (2017), “Oil struggles to enter the digital age”, The Economist, 6 April, London, www.economist.com/business/2017/04/06/oil-struggles-to-enter-the-digital-age.

The Economist (2015), “Material difference”, Technology Quarterly, 12 May, The Economist, London, www.economist.com/technology-quarterly/2015-12-05/new-materials-for-manufacturing.

US Council on Competitiveness (2014), “The Exascale effect: The benefits of supercomputing for US industry”, US Council on Competitiveness, Washington, DC, www.compete.org/storage/images/uploads/File/PDF%20Files/Solve_Report_Final.pdf.

Vujinovic, M. (2018), “Manufacturing and blockchain: Prime time has yet to come”, CoinDesk, 24 May, www.coindesk.com/manufacturing-blockchain-prime-time-yet-come/.

Walker, J. (2017), “AI in mining: Mineral exploration, autonomous drills, and more”, Techemergence, 3 December, www.techemergence.com/ai-in-mining-mineral-exploration-autonomous-drills/.

Williams, M. (2013), “Counterfeit parts are costing the industry billions”, Automotive Logistics, 1 January, Ultima Media, London, https://automotivelogistics.media/intelligence/16979.

Wissner-Gross, A. (2016), “Datasets over algorithms”, Edge.org, Edge Foundation, Seattle, www.edge.org/response-detail/26587.

Yanisky-Ravid, S. and X. Liu (2017), “When artificial intelligence systems produce inventions: The 3A era and an alternative model for patent law”, Cardozo Law Review, Vol. 39, pp. 2215-2263, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2931828.

Zeng, W. (22 February 2018), “Forest 1.3: Upgraded developer tools, improved stability, and faster execution”, Rigetti Computing blog, https://medium.com/rigetti/forest-1-3-upgraded-developer-tools-improved-stability-and-faster-execution-561b8b44c875.

ZEW-IKT (2015), “Industrie 4.0: Digitale (R)Evolution der Wirtschaft” [Industry 4.0: Digital (r)evolution of the economy], ZEW, Mannheim, http://ftp.zew.de/pub/zew-docs/div/IKTRep/IKT_Report_2015.pdf.

Zweben, M. and M.S. Fox (1994), Intelligent Scheduling, Morgan Kaufmann Publishers, San Francisco.

Notes

← 1. Deep learning with artificial neural networks is a technique in the broader field of machine learning (ML) that seeks to emulate how human beings acquire certain types of knowledge. The word “deep” refers to the numerous layers of data processing. The term “artificial neural network” refers to hardware and/or software modelled on the functioning of neurons in a human brain.

← 2. AI will, of course, have many economic and social impacts. In relation to labour markets alone, intense debates exist on AI’s possible effects on labour displacement, income distribution, skills demand and occupational change. However, these and other considerations are not a focus of this chapter.

← 3. In the development of improved forms of AI, increased data availability has been critical. Over the past 30 years, the length of time between data creation and the most publicised AI breakthroughs has been much shorter than between algorithmic progress and the same breakthroughs (Wissner-Gross, 2016). Using a variant of an algorithm developed 25 years earlier, for example, Google’s GoogLeNet software achieved near-human level object classification in 2014. But the software was trained on ImageNet, a huge corpus of labelled images and object categories that had become available just four years earlier (at its peak, ImageNet reportedly employed close to 50 000 people in 167 countries, who sorted around 14 million images [House of Lords, 2018]).

← 4. Many tools that firms employ to manage and use AI exist as free software in open source (i.e. their source code is public and modifiable). These include software libraries such as TensorFlow and Keras, and tools that facilitate coding such as GitHub, text editors like Atom and Nano, and development environments like Anaconda and RStudio. Machine learning-as-a-service platforms also exist, such as Michelangelo – Uber’s internal system that helps teams build, deploy and operate ML solutions.
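As a purely illustrative sketch of what using such a library involves (the toy data, model and parameter choices below are invented for illustration, assuming TensorFlow is installed; none of this is drawn from the sources cited in this note):

```python
# Toy use of an open-source ML library: fit y = 2x - 1 from six data points.
import numpy as np
import tensorflow as tf

x = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=np.float32).reshape(-1, 1)
y = 2 * x - 1  # the linear rule the model should recover

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1),  # a single linear unit suffices for this toy task
])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(x, y, epochs=500, verbose=0)

print(model.predict(np.array([[10.0]]), verbose=0))  # expect a value close to 19
```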

← 5. An example of a data exchange is datacollaboratives.org.

← 6. For example, Ezell (2018) reports that “BMW has set a goal of knowing the real-time status of all major production equipment at each company that produces key components for each of its vehicles”.

← 7. See Professor Li’s full remarks at the 2017 Global StartupGrind Conference: https://www.startupgrind.com/blog/cloud-will-democratize-ai/.

Metadata, Legal and Rights

This document, as well as any data and map included herein, are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area. Extracts from publications may be subject to additional disclaimers, which are set out in the complete version of the publication, available at the link provided.

https://doi.org/10.1787/b9e4a2c0-en

© OECD 2020

The use of this work, whether digital or print, is governed by the Terms and Conditions to be found at http://www.oecd.org/termsandconditions.