Chapter 2. Artificial intelligence and the technologies of the Next Production Revolution

Nolan Alistair

Developing and adopting new production technologies is essential to raising living standards and countering the declining labour productivity growth in many OECD countries over recent decades. Rapid population ageing – the dependency ratio in OECD countries is set to double over the next 35 years – makes raising labour productivity more urgent. Digital technologies can increase productivity in many ways. For example, they can reduce machine downtime, as intelligent systems predict maintenance needs. They can also perform work more quickly, precisely and consistently, as increasingly autonomous, interactive and inexpensive robots are deployed. New production technologies will also benefit the natural environment in several new ways. For example, nanotechnology is helping to develop materials that cool themselves to below ambient temperature without consuming energy.1

This chapter examines a selection of policies aiming to enable the Next Production Revolution. With the exceptions of artificial intelligence (AI) and blockchain, it describes only briefly some of the many transformational uses of digital technology in production, as these developments are reviewed in (among other publications) OECD (2017, 2018a). Instead, the chapter emphasises policy initiatives and policy research findings that have arisen recently, or were not addressed in OECD (2017).

This chapter has two parts. The first covers individual technologies and their specific policy implications, namely AI and blockchain in production, 3D printing, industrial biotechnology, new materials and nanotechnology. The second addresses just two of the many cross-cutting policy issues relevant to future production, namely: access to and awareness of high-performance computing (HPC), and public support for research. Particular attention is given to public research related to computing and AI, as well as the institutional mechanisms needed to enhance the impact of public research.

The Oxford English Dictionary defines artificial intelligence as “the theory and development of computer systems able to perform tasks normally requiring human intelligence”. Expert systems – a form of AI drawing on pre-programmed expert knowledge – have been used in industrial processes for close to four decades (Zweben and Fox, 1994). However, with the development of deep learning using artificial neural networks2 – the main source of recent progress in the field – AI can be applied to most industrial activities, from optimising multi-machine systems to enhancing industrial research (Box 2.1). Furthermore, the use of AI in production will be spurred by automated machine learning processes that can help businesses, scientists and other users employ the technology more readily. Currently, with respect to AI that uses deep learning techniques and artificial neural networks, the greatest commercial potential for advanced manufacturing is expected to exist in supply chains, logistics and process optimisation (McKinsey Global Institute, 2018). Some survey evidence also suggests that the transportation and logistics, automotive and technology sectors lead in terms of the share of early AI-adopting firms (Boston Consulting Group, 2018).

Beyond its direct uses in production, the use of AI in logistics is enabling real-time fleet management, while significantly reducing fuel consumption and other costs. AI can also lower energy consumption in data centres (Sverdlik, 2018). In addition, AI can assist digital security: for example, the software firm Pivotal has created an AI system that recognises when text is likely to be part of a password, helping to avoid accidental online dissemination of passwords. Meanwhile, Lex Machina is blending AI and data analytics to radically alter patent litigation (Harbert, 2013). Many social-bot start-ups also automate tasks, such as meeting scheduling (X.ai), business-data and information retrieval (butter.ai), and expense management (Birdly). Finally, AI is being combined with other technologies – such as augmented and virtual reality – to enhance workforce training and cognitive assistance (Box 2.2).

Beyond such applications, a main effect of AI on future production could be the creation of entirely new industries, based on scientific breakthroughs enabled by AI, much as the discovery of DNA structure in the 1950s led to a revolution in industrial biotechnology and the creation of vast economic value (the global market for recombinant DNA technology has been estimated at around USD 500 billion [US dollars]).3 Approximately 40 years separated the elucidation of DNA structure and the emergence of a major biotech industry, and around 100 years passed between the scientific revolution in quantum physics and the recent birth of quantum computing (Box 2.5). Such observations underscore the importance of basic research and the importance of long time horizons in some aspects of research policy.

Several types of policy affect the development and diffusion of AI. These include: regulations governing data privacy (because of the critical importance of training data for AI systems); liability rules (which particularly affect diffusion); research support (Section 3.2); intellectual property rules; and systems for skills. Other policies are most relevant to the (still uncertain) consequences of AI. These could include: competition policy; economic and social policies that mitigate inequality; policies for education and training; measures that affect public perceptions of AI; and policies related to digital security. Well-designed policies for AI are likely to have high returns, because AI can be widely applied and accelerate innovation (Cockburn et al., 2018). Some of the policies concerned – such as those affecting skills – are relevant to any important new technology. This section focuses on policies most specifically affecting AI in production, namely, policies that affect the availability of training data, measures to address hardware constraints, and the design of regulations that do not unnecessarily hinder innovation.

Wissner-Gross (2016) reviews the timing of the most publicised AI advances over the past 30 years and notes that the average length of time between significant data creation and major AI performance breakthroughs has been much shorter than the average time between algorithmic progress and the same AI breakthroughs. Among many examples, Wissner-Gross cites the performance of Google’s GoogLeNet software, which achieved near-human level object classification in 2014, using a variant of an algorithm developed 25 years earlier. But the software was trained on ImageNet, a huge corpus of labelled images and object categories that had become available just four years earlier.4

Many tools that firms employ to manage and use AI exist as free software in open source (i.e. their source code is public and modifiable). These include software libraries such as TensorFlow and Keras, code-hosting platforms such as GitHub, text editors such as Atom and Nano, and development environments such as Anaconda and RStudio. Machine learning-as-a-service platforms also exist, such as Michelangelo, Uber’s internal system that helps teams build, deploy and operate machine-learning solutions. The challenges in using AI in production relate to its application in specific systems and the creation of high-quality training data.

Without large volumes of training data, many AI models are inaccurate. A supervised deep-learning algorithm may need 5 000 labelled examples per category and up to 10 million labelled examples to match human performance (Goodfellow, Bengio and Courville, 2016). The highest-value uses of AI often combine diverse data types, such as audio, text and video. In many uses, training data must be refreshed monthly or even daily (McKinsey Global Institute, 2018). Consequently, companies with large data resources and internal AI expertise, such as Google and Alibaba, have an advantage in deploying AI. Furthermore, many industrial applications are still somewhat new and bespoke, limiting data availability. By contrast, sectors such as finance and marketing have used AI for a longer time (Faggella, 2018).
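The dependence of model accuracy on the volume of labelled data can be made concrete with a toy sketch. The example below is illustrative only – the classifier, data and function names are invented for this illustration, not drawn from the studies cited above. It trains a simple nearest-neighbour classifier on synthetic labelled points and shows accuracy rising as the number of labelled examples grows:

```python
import random
import math

random.seed(0)

def make_labelled_examples(n):
    """Synthetic two-class data: the label is 1 when a point lies above the line y = x."""
    data = []
    for _ in range(n):
        x, y = random.uniform(0, 1), random.uniform(0, 1)
        data.append(((x, y), 1 if y > x else 0))
    return data

def nearest_neighbour_label(train, point):
    """Predict by copying the label of the closest training example (1-NN)."""
    return min(train, key=lambda ex: math.dist(ex[0], point))[1]

def accuracy(n_train, n_test=500):
    """Train on n_train labelled examples, report accuracy on fresh test points."""
    train = make_labelled_examples(n_train)
    test = make_labelled_examples(n_test)
    hits = sum(nearest_neighbour_label(train, p) == lab for p, lab in test)
    return hits / n_test

# Accuracy improves with the volume of labelled training data.
for n in (10, 100, 1000):
    print(n, round(accuracy(n), 2))
```

Real deep-learning systems need vastly more data than this toy boundary, but the qualitative pattern – accuracy climbing with the number of labelled examples – is the same.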

In the future, research advances may make AI systems less data-hungry. For instance, AI may learn from fewer examples, or generate robust training data (Simonite, 2016). In December 2017, the computer program AlphaZero famously achieved a world-beating level of performance in chess by playing against itself, using just the rules of the game, without recourse to external data. However, such self-generated (simulated) data suffice only in rules-based games such as chess and Go. For the time being, external training data must still be cultivated for real-world applications.

Many firms hold valuable data which they do not use effectively (whether for lack of in-house skills and knowledge, a corporate data strategy, data infrastructure, or other reasons). This can be the case even in firms with enormous financial resources. For example, by some accounts, less than 1% of the data generated on oil rigs are used (The Economist, 2017). However, many AI start-ups, and other businesses using AI, could create value from data they cannot easily access. To help address this mismatch, governments can act as catalysts and honest brokers for data partnerships. Among other measures, they could work with relevant stakeholders to develop voluntary model agreements for trusted data sharing. For example, the US Department of Transportation has prepared the draft “Guiding Principles on Data Exchanges to Accelerate Safe Deployment of Automated Vehicles”. The Digital Catapult in the United Kingdom also plans to publish model agreements for start-ups entering into data-sharing agreements (DSAs).

DSAs operate between firms, and between firms and public research institutions. Co-ordination could be helpful in cases where all data holders would benefit from data sharing, but individual data holders are reluctant to share data unilaterally, or are unaware of potential data-sharing opportunities. For example, a total of 359 offshore oil rigs were operational in the North Sea and the Gulf of Mexico as of January 2018. AI-based prediction of potentially costly accidents on oil rigs would be improved if this statistically small number of data holders were to share their data (in fact, the Norwegian Oil and Gas Association has asked all members to have a data-sharing strategy in place by the end of 2018).

The Digital Catapult’s Pit Stop open-innovation activity (which complements the Catapult’s model DSAs mentioned earlier) is an example of co-ordination aiming to foster DSAs. Pit Stop brings together large businesses, academic researchers and start-ups in collaborative problem-solving challenges around data and digital technologies. Also in the United Kingdom, the Turing Institute operates the Data Study Group, to which major private and public-sector organisations bring data-science problems for analysis: Institute researchers are thereby able to work on real-world problems using industry datasets, while businesses have their problems solved and learn about the value of their data. In a model that promotes data sharing without DSAs, Japan has developed the Industrial Value Chain Initiative, a collaborative cloud-based platform/repository where member firms share data to help implement digital applications.

Open-data initiatives exist in many countries, covering diverse public administrative and research data (Chapter 6). To facilitate AI applications, disclosed public data should be machine-readable. A further measure to encourage AI could consist in ensuring that copyright laws allow data and text mining, providing this does not lead to substitution of the original works or unreasonably prejudice legitimate interests of the copyright owners. Governments can also promote the use of digital data exchanges5 that share public and private data for the public good.

Sharing data can require overcoming a number of institutional barriers. Data holders in large organisations can face considerable internal bureaucracy before receiving permission to release data. Even with a DSA, data holders worry that data might not be used according to the terms of an agreement, or that client data will be shared accidentally. In addition, some datasets may be too big to share in practical ways: for instance, the data in 100 human genomes could consume 30 terabytes (30 million megabytes). Uncertainty over the provenance of counterpart data can also hinder data sharing or purchase. Ocean Protocol,6 an open-source protocol built by the non-profit Ocean Protocol Foundation, is pioneering a system linking blockchain and AI, to address such concerns and incentivise secure data exchange. By combining blockchain and AI, data holders can obtain the benefits of data collaboration, with full control and verifiable audit. Under one use case, data are not shared or copied. Instead, algorithms go to the data for training purposes, with all work on the data recorded in the distributed ledger. Ocean Protocol is currently building a reference open-source marketplace for data, which users can adapt to their own needs to trade data services securely. Governments should be alert to the possibilities of using such technology in public open-data initiatives.
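The "algorithms go to the data" pattern described above can be sketched in a few lines. The sketch below is illustrative only – the class and method names are invented for this example and bear no relation to Ocean Protocol's actual interfaces – but it shows the core idea: the data holder runs submitted algorithms locally, returns only aggregate results, and records each run in a hash-linked audit trail:

```python
import hashlib
import json

class AuditedDataHolder:
    """Holds a private dataset; runs submitted algorithms locally and records each
    run in a hash-linked audit trail, so the raw data never leave the holder."""

    def __init__(self, data):
        self._data = data          # never exposed directly
        self.audit_log = []        # each entry links to the hash of the previous one

    def _last_hash(self):
        if not self.audit_log:
            return "0" * 64        # genesis marker before any runs are logged
        return hashlib.sha256(
            json.dumps(self.audit_log[-1], sort_keys=True).encode()
        ).hexdigest()

    def run(self, requester, algorithm_name, algorithm):
        """Apply the algorithm to the private data; return only its result."""
        result = algorithm(self._data)
        self.audit_log.append({
            "requester": requester,
            "algorithm": algorithm_name,
            "prev_hash": self._last_hash(),
        })
        return result

# A start-up computes an aggregate over data it cannot copy or inspect.
holder = AuditedDataHolder(data=[3.1, 2.7, 4.4, 3.9])
mean = holder.run("startup-a", "mean", lambda d: sum(d) / len(d))
```

In a production system the audit trail would live on a distributed ledger shared among the parties, rather than in the holder's own memory; the sketch captures only the control-and-audit logic.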

As AI projects move from concept to commercial application, specialised and expensive cloud-computing and graphics-processing unit (GPU) resources are often needed. Trends in AI experiments show extraordinary growth in the computational power required. According to one estimate, the largest recent experiment, AlphaGo Zero, required 300 000 times the computing power needed for the largest experiment just 6 years before (OpenAI, 2018). Indeed, the achievements of AlphaGo Zero in Go, and of its successor AlphaZero in chess, involved computing power estimated to exceed that of the world’s ten most powerful supercomputers combined (Digital Catapult, 2018).
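The growth estimate above implies a strikingly short doubling time for AI compute. A quick back-of-the-envelope calculation, assuming steady exponential growth over the period (variable names are illustrative):

```python
import math

growth_factor = 300_000      # increase in compute over the period (OpenAI, 2018)
period_months = 6 * 12       # roughly six years, in months

doublings = math.log2(growth_factor)        # about 18 doublings in total
doubling_time = period_months / doublings   # months per doubling
print(round(doubling_time, 1))              # roughly 4 months per doubling
```

A doubling time of a few months far outpaces the historical Moore's-law cadence of roughly two years, which is why specialised hardware access has become a bottleneck for AI start-ups.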

An AI entrepreneur might have the knowledge and financial resources to develop a proof-of-concept for a business, but lack the necessary hardware-related expertise and hardware resources to build a viable AI company. To help address such issues, Digital Catapult runs the Machine Intelligence Garage programme, which works with industry partners – such as GPU manufacturer NVIDIA, intelligence processing unit (IPU) producer Graphcore, and cloud providers Amazon Web Services and Google Cloud Platform – to give early-stage AI businesses access to computing power and technical expertise.

Algorithmic transparency, explainability and accountability are among the key concerns in discussions on AI regulation (OECD, 2018b). While this chapter does not examine these questions, a few overarching observations are relevant. First, economy-wide regulation of AI may not be optimal at this time: the technology is still young, and many of its impacts are still unclear (Chapter 10). While international experience with AI regulation is still limited, there are grounds for thinking that regulation should target identified harms arising in particular sectors and applications, and be administered by the agencies already responsible for regulating the relevant sectors. A broad trade-off exists between the accuracy of algorithms and their scrutability. This trade-off highlights the risk that universal regulation of transparency and explainability would dampen innovation. New and Castro (2017) argue that an overall approach emphasising algorithmic accountability might best protect society’s needs, while also encouraging innovation. The impacts of any adopted regulation, whatever its form, should be closely monitored. Finally, regulatory reviews should be frequent, because AI technology is changing rapidly.7

Blockchain – a distributed ledger technology – has many potential applications in production (Box 2.3). Blockchain is still an immature technology, and many applications are only at the proof-of-concept stage. The future evolution of blockchain involves various unknowns, for example with respect to standards for interoperability across systems. However, similar to the “software as a service” model, “blockchain as a service” is already provided by companies such as Microsoft, SAP, Oracle, Hewlett-Packard, Amazon and IBM. Furthermore, consortia such as Hyperledger and the Enterprise Ethereum Alliance are developing open-source distributed ledger technologies for several industries (European Commission, 2018).
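The core mechanism of a distributed ledger – blocks chained by cryptographic hashes, so that tampering with any earlier entry invalidates the rest of the chain – can be sketched in a few lines. The example below is a minimal illustration using only the Python standard library, not a production design (which would add consensus, peer replication and digital signatures):

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash of a block's contents (the value its successor must store as a link)."""
    payload = {"timestamp": block["timestamp"],
               "records": block["records"],
               "prev_hash": block["prev_hash"]}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(records, prev_block=None):
    """Bundle records with the hash of the previous block, extending the chain."""
    prev_hash = block_hash(prev_block) if prev_block else "0" * 64
    return {"timestamp": time.time(), "records": records, "prev_hash": prev_hash}

def chain_is_valid(chain):
    """Each block must store the hash of its predecessor's contents, so altering
    any earlier record breaks every later link."""
    return all(cur["prev_hash"] == block_hash(prev)
               for prev, cur in zip(chain, chain[1:]))

genesis = make_block(["ledger opened"])
b1 = make_block(["part 42 received from supplier"], prev_block=genesis)
chain = [genesis, b1]
print(chain_is_valid(chain))                  # True
genesis["records"] = ["forged provenance"]    # tamper with an earlier entry
print(chain_is_valid(chain))                  # False
```

It is this tamper-evidence, combined with replication of the ledger across many parties, that underpins the supply-chain and proof-of-source applications discussed below.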

Adopting blockchain in production poses several challenges. Blockchain involves fundamental changes in business processes, particularly with regard to agreements and engagement among the many actors in a supply chain. When many computers are involved, transaction speeds may also be slower than those of some alternative processes, at least with current technology (fast protocols operating on top of blockchain are under development). Blockchains are most appropriate when disintermediation, security, proof of source and establishing a chain of custody are priorities (Vujinovic, 2018). A further challenge is that much blockchain development remains atomised: the scalability of any single blockchain-based platform – be it in supply chains or financial services – will depend on whether it is interoperable with other platforms (Hardjono et al., 2018).

Regulatory sandboxes are designed to help governments better understand a new technology and its regulatory implications, while at the same time giving industry an opportunity to test new technology and business models in a live environment (Chapter 10). Evaluations of the impacts of regulatory sandboxes are sparse (Financial Conduct Authority (2017) is an exception8). Blockchain regulatory sandboxes mostly focus on Fintech, and are being developed in countries as diverse as Australia, Canada, Indonesia, Japan, Malaysia, Switzerland, Thailand and the United Kingdom (European Commission, 2018). Subject to proper impact assessment of such schemes, and provided selection processes are designed so as not to benefit some companies at the expense of others, the scope of sandboxes could be broadened to encompass blockchain applications in industry and other non-financial sectors.

By using blockchain in the public sector, governments could also raise awareness of blockchain’s potential where it improves on existing technologies. Technical issues also need to be resolved, such as how to trust the data placed on the blockchain; trustworthy data may need to be certified in some way. Blockchain may also raise competition-policy concerns, as some large corporations begin to mobilise through consortia to establish blockchain standards, e.g. for supply-chain management.

3D printing is expanding rapidly, thanks to falling printer and materials prices, higher-quality printed objects and innovation in methods. Recent innovations include 3D printing with novel materials, such as glass, biological cells and even liquids (maintained as structures using nanoparticles); robot-arm printheads that allow printing objects larger than the printer itself (opening the way for automated construction); touchless manipulation of print particles with ultrasound (allowing printing of electronic components sensitive to static electricity); and hybrid 3D printers, combining additive manufacturing with computer-controlled machining and milling. Research is also advancing on so-called 4D printing, in which printed materials are programmed to change shape after printing.

Most 3D printing is used to make prototypes, models and tools. Currently, 3D printing is not cost-competitive at volume with traditional mass-production technologies, such as plastic injection moulding. Wider use of 3D printing depends on how the technology evolves in terms of the print time, cost, quality, size and choice of materials (OECD, 2017). The costs of switching from traditional mass-production technologies to 3D printing are expected to decline in the coming years as production volumes grow, although it is difficult to predict precisely how fast 3D printing will diffuse. Furthermore, the cost of switching is not the same across all industries and applications.

OECD (2017) examined policy options to enhance 3D printing’s effects on environmental sustainability. One priority is to encourage low-energy printing processes (e.g. using chemical processes rather than melting material, and automatic switching to low-power states when printers are idle). Another priority is to use and develop low-impact materials with useful end-of-life characteristics (such as compostable biomaterials). Policy mechanisms to achieve these priorities include:

  • targeting grants or investments to commercialise research in these directions

  • creating a voluntary certification system to label 3D printers with different grades of sustainability across multiple characteristics, which could also be linked to preferential purchasing programmes by governments and other large institutions.

Ensuring legal clarity around intellectual property rights for 3D printing of spare parts for products that are no longer manufactured could also be environmentally beneficial. For example, a washing machine that is no longer in production may be thrown away because a single part is broken; a CAD file for the required part could keep the machine in operation. However, most CAD files are proprietary. One solution would be to grant third parties the right to print replacement parts for such products, with royalties paid to the original product manufacturers as appropriate.

Bonnin-Roca et al. (2016) describe another possible policy area. They observe that metals-based additive manufacturing (MAM) has many potential uses in commercial aviation. However, MAM is a relatively immature technology – the fabrication processes at the technological frontier have not yet been standardised – and aviation requires high safety standards. The aviation sector – and the commercialisation of MAM technology – would benefit if the mechanical properties of printed parts of any shape, using any given feedstock on any given MAM machine, could be accurately and consistently predicted. Government could help develop the necessary knowledge. Specifically, the public sector could:

  • support the basic science, particularly by funding and stewarding curated databases on materials’ properties, and by brokering DSAs across users of MAM technology, government laboratories and academia

  • support the development of independent manufacturing and testing standards

  • help quantify the advantages of adopting the new technology, by creating a platform documenting early users’ experiences.

Bonnin-Roca et al. (2016) suggest such policies for the United States, which leads globally in installed industrial 3D manufacturing systems and aerospace production. However, the same ideas could apply to other countries and industries. These ideas also illustrate how policy opportunities can arise from a specific understanding of emerging technologies and their potential uses. Indeed, governments should strive to develop expertise on emerging technologies in relevant public structures, which will also help anticipate hard-to-foresee needs for technology regulation.

As part of the bioeconomy, industrial biotechnology involves the production of goods from renewable biomass – i.e. wood, food crops, non-food crops or even domestic waste – instead of finite fossil-based reserves. Much progress has taken place in the tools and achievements of industrial biotechnology (OECD, 2018c). For example, several decades of research in biology have yielded gene-editing technologies and synthetic biology (which aims to design and engineer biologically based parts, devices and systems, and redesign existing natural biological systems). When combined with other scientific and technological advances – for instance in materials science and robotics – the tools are in place to begin a bio-based production revolution. Bio-based batteries, artificial photosynthesis and micro-organisms that produce biofuels are just some examples of recent advances in biotechnology. Notwithstanding these advances, the largest positive medium-term environmental impacts of industrial biotechnology hinge on the development of advanced biorefineries, which transform sustainable biomass into marketable products (food, animal feed, materials, chemicals) and energy (fuel, power, heat) (OECD, 2017).

Strategies to expand biorefining must address the sustainability of the biomass used. Governments should urgently support efforts to develop standard definitions of sustainability (as regards feedstocks), tools for measuring sustainability, and international agreements on the indicators required to drive data collection and measurement. Furthermore, environmental performance standards are essential: regulators often impose sustainability criteria for bio-based products, most of which are not currently cost-competitive with petrochemicals.

Demonstrator biorefineries operate between pilot and commercial scales, and are critical to answering technical and economic questions about production before costly investments are made at full scale. However, biorefineries and demonstrator facilities are high-risk investments, and some aspects of the technologies are not fully proven. Additional study is also required of the economics of large bio-production facilities. Financing through public-private partnerships is needed to de-risk private investments and demonstrate governments’ commitment to long-term, coherent policies on energy and industrial production.

Public initiatives for bio-based fuels have existed for decades, but little policy support has been extended to producing bio-based chemicals, which could substantially reduce greenhouse gas emissions and preserve non-renewable resources (OECD, 2018c).

With respect to regulations, governments should focus on boosting the use of instruments – particularly standards – to reduce barriers to trade in bio-based products; addressing regulatory hurdles that hinder investment; and establishing a level playing field between bio-based products and biofuels. Better waste regulation could also boost the bioeconomy. For example, governments could promote less proscriptive and more flexible waste regulations, allowing the use of agricultural and forestry residues and domestic waste in biorefineries.

Governments could also lead in supporting the bioeconomy and industrial biotechnology through public procurement. Bio-based materials are not always amenable to public procurement, as they sometimes form only part of a product (e.g. a bio-based screen on a mobile phone), but public purchasing of biofuels (e.g. for public vehicle fleets) is easier (OECD, 2017).

Advances in scientific instrumentation, such as atomic-force microscopes, and developments in computational simulations have allowed scientists to study materials in more detail than ever before. Today, materials with entirely novel properties are emerging: solids with densities comparable to that of air; super-strong lightweight composites; materials that remember their shape, repair themselves or assemble themselves into components; and materials that respond to light and sound. All are now realities (The Economist, 2015).

The era of trial and error in material development is also coming to an end. Powerful computer modelling and simulation of materials’ structure and properties can indicate how they might be used in products. Desired properties, such as conductivity and corrosion resistance, can be intentionally built into new materials. Better computation is leading to faster development of new and improved materials, more rapid insertion of existing materials into new products, and the ability to improve existing processes and products. In the near future, engineers will not just design products, but will also design the materials from which the products are made (Teresko, 2008). Furthermore, large companies will increasingly compete in terms of materials development. For example, a manufacturer of automotive engines with a superior design could enjoy longer-term competitive advantage if it also owned the material from which the engine is built.

No single company or organisation will be able to own the entire array of technologies associated with a materials-innovation ecosystem. Accordingly, a public-private investment model is warranted, particularly to build cyber-physical infrastructure and train the future workforce (Chapter 6 in OECD, 2017).

New materials will raise new policy issues and give renewed emphasis to longstanding policy concerns. For example, new digital-security risks could arise because in a medium-term future, a computationally assisted materials “pipeline” based on computer simulations could be hackable. Progress in new materials also requires effective policy in already important areas, often related to the science-industry interface. For example, well-designed policies are needed for open data and open science (e.g. for sharing simulations of materials’ structures or sharing experimental data in return for access to modelling tools).

Policy co-ordination is needed across the materials-innovation infrastructure at the national and international levels. Major efforts are under way in professional societies to develop a materials-information infrastructure – such as databases of materials’ behaviour, digital representations of materials’ microstructures and predicted structure-property relations, and associated data standards – to provide decision support to materials-discovery processes (Robinson and McMahon, 2016). International policy co-ordination is necessary to harmonise and combine elements of cyber-physical infrastructure across a range of European, North American and Asian investments and capabilities, as it is too costly (and unnecessary) to replicate resources that can be accessed through web services. A culture of data sharing – particularly pre-competitive data – is required (Chapter 6 in OECD, 2017).

Closely related to new materials, nanotechnology involves the ability to work with phenomena and processes occurring at a scale of 1 to 100 nanometres (a standard sheet of paper is about 100 000 nanometres thick). Control of materials on the nanoscale – working with their smallest functional units – is a general-purpose technology with applications across production (Chapter 4 in OECD, 2017). Advanced nanomaterials are increasingly used in manufacturing high-tech products, e.g. to polish optical components. Recent innovations include nano-enabled artificial tissue, biomimetic solar cells and lab-on-a-chip diagnostics.

Sophisticated and expensive tools are needed for research in nanotechnology. State-of-the-art equipment costs several million euros and often requires bespoke buildings. It is almost impossible to assemble an all-encompassing nanotechnology research and development (R&D) infrastructure in a single institute, or even a single region. Consequently, nanotechnology requires interinstitutional and/or international collaboration to reach its full potential. Publicly funded R&D programmes should allow the involvement of academia and industry from other countries, and enable targeted collaborations between the most suitable partners. The Global Collaboration initiative under the European Union’s Horizon 2020 programme is one example of this approach.

Support is also needed for innovation and commercialisation in small companies. Nanotechnology R&D is mostly conducted by larger companies, thanks to their critical mass of R&D and production; their ability to acquire and operate expensive instrumentation; and their ability to access and use external knowledge. Policy makers could improve the access to equipment of small and medium-sized enterprises (SMEs) by: 1) increasing the size of SME research grants; 2) subsidising or waiving service fees; and/or 3) providing SMEs with vouchers for equipment use.

Regulatory uncertainties regarding risk assessment and approval of nanotechnology-enabled products must also be addressed, ideally through international collaboration. These uncertainties severely hamper the commercialisation of nanotechnological innovation. Products awaiting market entry are sometimes shelved for years before a regulatory decision is taken. This has sometimes led to promising nanotechnology start-ups failing, and to large companies terminating R&D projects and innovative products. Policies should support the development of transparent and timely guidelines for assessing the risk of nanotechnology-enabled products, while also striving for international harmonisation in guidelines and enforcement. In addition, more needs to be done to treat nanotechnology-enabled products properly in the waste stream.

Developing a productive base that masters the technologies of the Next Production Revolution involves diverse policy challenges, from implementing the types of technology-specific policies discussed above, in Section 2, to developing cross-cutting policies relevant to all of these technologies. Figure 2.1 depicts the types and scope of the policies involved. Cross-cutting policies must address issues as diverse as designing micro-economic framework conditions that promote technology diffusion; building the fibre-optic networks needed to carry 5G traffic; increasing trust in cloud computing; and designing education and training systems that respond efficiently to changing skills needs. OECD (2017) examines many of these issues in detail. This section covers only two cross-cutting policy issues: improving access to and awareness of high-performance computing (HPC); and ensuring public support for R&D. It includes subjects, such as the race to achieve quantum computing and possible public research agendas for AI, that were not addressed in OECD (2017).

HPC – which involves computing performance far beyond that of general-purpose computers – is increasingly important to firms in industries ranging from construction to pharmaceuticals, the automotive sector and aerospace. Airbus, for instance, owns 3 of the 500 fastest supercomputers in the world. Two-thirds of US-based companies that use HPC say that “increasing performance of computational models is a matter of competitive survival” (US Council on Competitiveness, 2014). The applications of HPC in manufacturing are also expanding beyond design and simulation, to include real-time control of complex production processes. Among European companies, the financial rates of return for HPC use are reportedly extremely high (European Commission, 2016). A 2016 review observed that “[m]aking HPC accessible to all manufacturers in a country can be a tremendous differentiator, and no nation has cracked the puzzle yet” (Ezell and Atkinson, 2016).

As Industry 4.0 becomes more widespread, demand for HPC will rise. But like other digital technologies, the use of HPC in manufacturing falls short of its potential. According to one estimate, only 8% of US manufacturers with fewer than 100 employees use HPC, yet one-half of manufacturing SMEs could potentially use it for prototyping, testing and design (Ezell and Atkinson, 2016). Public HPC initiatives often focus on the computation needs of “big science”. Greater outreach to industry, especially SMEs, is frequently needed. Box 2.4 sets out some possible ways forward, several of which are described in European Commission (2016).

The technologies discussed in this chapter ultimately emerge from science. Microelectronics, synthetic biology, new materials and nanotechnology, among others, have arisen from advances in scientific knowledge and instrumentation. Publicly financed research in universities and public research institutions has often been critical to AI. Furthermore, because the complexity of many emerging production technologies exceeds even the largest firms’ research capacities, public-private research partnerships are essential. Hence, the declining public support for research in some major economies is a concern (Chapter 8).

Public R&D and commercialisation efforts have many possible targets, from advancing the use of data analytics and digital technologies in metabolic engineering, to developing bio-friendly feedstocks for 3D printers. One interesting possibility is shaping research agendas to alleviate shortages of economically critical materials (as proposed by the Ames Laboratory’s Critical Materials Institute in the United States).

The processing speeds, memory capacities, sensor density and accuracy of many digital devices are linked to Moore’s Law. However, atomic-level phenomena and rising costs constrain further shrinkage of transistors on integrated circuits. Many experts believe a limit to miniaturisation will soon be reached. At the same time (as noted earlier), the computing power needed for the largest AI experiments is doubling every 3.5 months (OpenAI, 2018). By one estimate, this trend can be sustained for at most three-and-a-half to ten years, even assuming public R&D commitments on a scale similar to the Apollo or Manhattan projects (Carey, 2018). Much, therefore, depends on achieving superior computing performance (including in terms of energy requirements). Many hope that significant advances in computing will stem from research breakthroughs in optical computing (using photons instead of electrons), biological computing (using DNA to store data and calculate) and/or quantum computing (Box 2.5).
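The scale of this compute trend is easy to understate. A minimal sketch of the implied growth arithmetic (the 3.5-month doubling time is from OpenAI, 2018; the figures derived from it here are purely illustrative):

```python
# Implied growth of AI training compute, assuming a 3.5-month doubling
# time (OpenAI, 2018). All derived figures are illustrative.

doubling_months = 3.5
doublings_per_year = 12 / doubling_months   # ~3.4 doublings per year
annual_factor = 2 ** doublings_per_year     # ~10.8x growth per year

print(f"Doublings per year: {doublings_per_year:.2f}")
print(f"Annual compute growth factor: {annual_factor:.1f}x")

# If the trend were sustained for another 3.5 to 10 years
# (the bounds suggested by Carey, 2018), total compute would
# grow by roughly these factors:
for years in (3.5, 10):
    total = annual_factor ** years
    print(f"After {years} years: {total:.2e}x")
```

A tenfold increase in required compute every year, compounding to roughly ten orders of magnitude over a decade, illustrates why such a trend cannot be financed indefinitely and why qualitatively different computing paradigms attract so much attention.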

Public research funding has been key to progress in AI since the origin of the field. The National Research Council (1999) shows that while the concept of AI originated in the private sector – in close collaboration with academia – its growth largely results from many decades of public investments. Global centres of AI research excellence (e.g. at Stanford, Carnegie Mellon and MIT) arose because of public support, often linked to US Department of Defense funding. However, recent successes in AI have propelled growth in private-sector R&D for AI. For example, earnings reports indicate that Google, Amazon, Apple, Facebook and Microsoft spent a combined USD 60 billion on R&D in 2017, including an important share on AI. By comparison, total US Federal Government R&D for non-defence industrial production and technology amounted to around USD 760 million in 2017.11

While many in business, government and among the public believe AI stands at an inflection point, some experts emphasise the scale and difficulties of the outstanding research challenges. Some AI research breakthroughs could be particularly important for society, the economy and public policy. However, corporate and public research goals might not fully align: Jordan (2018) notes that much AI research is not directly relevant to the major challenges of building safe intelligent infrastructures, such as medical or transport systems. He observes that unlike human-imitative AI, such critical systems must have the ability to deal with “distributed repositories of knowledge that are rapidly changing and are likely to be globally incoherent. Such systems must cope with cloud-edge interactions in making timely, distributed decisions and they must deal with long-tail phenomena whereby there is lots of data on some individuals and little data on most individuals. They must address the difficulties of sharing data across administrative and competitive boundaries” (Jordan, 2018).

Other outstanding research challenges relevant to public policy relate to making AI explainable; making AI systems robust (image-recognition systems can easily be misled, for instance); determining how much prior knowledge will be needed for AI to perform difficult tasks (Marcus, 2018); bringing abstract and higher-order reasoning, and “common sense”, into AI systems; inferring and representing causality; and developing computationally tractable representations of uncertainty (Jordan, 2018). No reliable basis exists for judging when – or whether – research breakthroughs will occur. Indeed, past predictions of timelines in the development of AI have been extremely inaccurate.

Interdisciplinary research is essential to advancing production. Materials research involves disciplines such as traditional materials science and engineering, as well as physics, chemistry, chemical engineering, bio-engineering, applied mathematics, computer science and mechanical engineering. Environments supporting interdisciplinary research include institutes (e.g. Interdisciplinary Research Collaborations in the United Kingdom);12 networks (e.g. the eNNab Excellence Network NanoBio Technology in Germany, which supports biomedical nanotechnology);13 and individual institutions (e.g. Harvard’s Wyss Institute for Biologically Inspired Engineering).14

Government-funded research institutions and programmes should have the freedom to assemble the right combinations of partners and facilities to solve scale-up and interdisciplinarity challenges. Investments are often essential in applied research centres and pilot production facilities, to take innovations from the laboratory to production. Demonstration facilities – such as test beds, pilot lines and factory demonstrators – which provide dedicated research environments, with the right mix of enabling technologies and operating technicians, are also necessary. Some manufacturing R&D challenges may need the expertise not only of manufacturing engineers and industrial researchers, but also of designers, equipment suppliers, shop-floor technicians and users (Chapter 10 in OECD, 2017).

Beyond traditional metrics – such as numbers of publications and patents – more effective research institutions and programmes in advanced production may also need new evaluation indicators. These new indicators could assess such criteria as successful pilot-line and test-bed demonstrations; technician and engineer training; membership in consortia; incorporation of SMEs in supply chains; and the role of research in attracting foreign direct investment.

Financing business scale-up is a widespread concern. This is largely because many venture-capital firms prefer to invest in software, biotech and media start-ups rather than advanced manufacturing firms, which often work with costlier and riskier technologies (in the United States, only around 5% of venture funding in 2015 targeted the industrial/energy sector) (Singer and Bonvillian, 2017). Partnerships between universities, industry and government can help provide start-ups with the know-how, equipment and initial funding to test and scale new technologies, making them more likely to attract venture funding. Singer and Bonvillian (2017) describe several such collaborations. For example, Cyclotron Road, supported by the US Department of Energy’s Lawrence Berkeley Lab, provides energy start-ups with equipment, technology and know-how for advanced prototyping, demonstration, testing and production design. Cooperative Research and Development Agreements – which are struck between a government agency and a private company or university – have also been valuable in providing frameworks for intellectual property rights in such collaborations.

Mastering the technologies of the Next Production Revolution requires effective policy in wide-ranging fields, including digital infrastructure, skills and intellectual property rights. Typically, these diverse policy fields are not closely connected in government structures and processes. Governments must also adopt long-term time horizons, for instance, in pursuing research agendas with possible long-term payoffs. As this chapter has illustrated, public institutions must possess specific understanding of many fast-evolving technologies. One leading authority argues that converging developments in several technologies are about to yield a “Cambrian explosion” in robot diversity and use (Pratt, 2015). Adopting Industry 4.0 poses challenges for firms, particularly small ones. It also challenges governments’ ability to act with foresight and technical knowledge across multiple policy domains.

References

AI Intelligent Automation Network (2018), AI 2020: The Global State of Intelligent Enterprise, https://intelligentautomation.iqpc.com/downloads/ai-2020-the-global-state-of-intelligent-enterprise.

Almudever, C.G. et al. (2017), "The engineering challenges in quantum computing", conference paper presented at Design, Automation & Test in Europe Conference & Exhibition (DATE), 27-31 March 2017, Lausanne, pp. 836-845, https://doi.org/10.23919/DATE.2017.7927104.

Azhar, A. (2018), “Exponential View: Dept. of Quantum Computing”, The Exponential View, 15 July, http://www.exponentialview.co/evarchive/#174.

Bonnin-Roca, J. et al. (2016), “Policy Needed for Additive Manufacturing”, Nature Materials, Vol. 15, pp. 815-818, Nature Publishing Group, United Kingdom, https://doi.org/10.1038/nmat4658.

Boston Consulting Group (2018), “AI in the Factory of the Future: The Ghost in the Machine”, The Boston Consulting Group, Boston, MA, https://www.bcg.com/publications/2018/artificial-intelligence-factory-future.aspx.

Carey, R. (2018), “Interpreting AI Compute Trends”, 10 July, blog post, AI Impacts, https://aiimpacts.org/interpreting-ai-compute-trends/.

Castellanos, S. (2017), “Volkswagen Pilots Quantum Computing Experiments”, The Wall Street Journal, New York, 8 May 2017, https://blogs.wsj.com/cio/2017/05/08/volkswagen-pilots-quantum-computing-experiments/.

Champain, V. (2018), “Comment l’intelligence artificielle augmentée va changer l’industrie”, La Tribune, Paris, 27 March, https://www.latribune.fr/opinions/tribunes/comment-l-intelligence-artificielle-augmentee-va-changer-l-industrie-772791.html.

Chen, S. (2018), “Scientists Are Using AI to Painstakingly Assemble Single Atoms”, Wired, Condé Nast, San Francisco, 23 May, https://www.wired.com/story/scientists-are-using-ai-to-painstakingly-assemble-single-atoms/.

Chen, S. (2017), “The AI Company That Helps Boeing Cook New Metals for Jets”, Wired, Condé Nast, San Francisco, 12 June, https://www.wired.com/story/the-ai-company-that-helps-boeing-cook-new-metals-for-jets.

Cockburn, I., R. Henderson and S. Stern (2018), “The Impact of Artificial Intelligence on Innovation”, NBER Working Paper No. 24449, issued March 2018, http://www.nber.org/papers/w24449.

Digital Catapult (2018), “Machines for Machine Intelligence: Providing the tools and expertise to turn potential into reality”, Machine Intelligence Garage, Research Report 2018, London, https://www.migarage.ai.

Dorfman, P. (2018), “3 Advances Changing the Future of Artificial Intelligence in Manufacturing”, Autodesk Newsletter, 3 January 2018, https://www.autodesk.com/redshift/future-of-artificial-intelligence/.

European Commission (2018), “#Blockchain4EU: Blockchain for Industrial Transformations”, Publications Office of the European Union, Luxembourg, https://ec.europa.eu/jrc/en/publication/eur-scientific-and-technical-research-reports/blockchain4eu-blockchain-industrial-transformations.

European Commission (2016), “Implementation of the Action Plan for the European High-Performance Computing Strategy”, Commission Staff Working Document SWD(2016)106, European Commission, Brussels, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52016SC0106.

Ezell, S.J. and R.D. Atkinson (2016), “The Vital Importance of High-Performance Computing to US Competitiveness”, Information Technology and Innovation Foundation, Washington DC, http://www2.itif.org/2016-high-performance-computing.pdf.

Faggella, D. (2018), “Industrial AI Applications – How Time Series and Sensor Data Improve Processes”, Techemergence, San Francisco, 31 May, https://www.techemergence.com/industrial-ai-applications-time-series-sensor-data-improve-processes.

Financial Conduct Authority (2017), “Regulatory sandbox lessons learned report”, London, https://www.fca.org.uk/publication/research-and-data/regulatory-sandbox-lessons-learned-report.pdf.

Gambetta, J.M., J.M. Chow and M. Steffen (2017), “Building logical qubits in a superconducting quantum computing system”, npj Quantum Information, Vol. 3, article No. 2, Nature Publishing Group and University of New South Wales, London and Sydney, https://arxiv.org/pdf/1510.04375.pdf.

Giles, M. (2018a), “Google wants to make programming quantum computers easier”, MIT Technology Review, Massachusetts Institute of Technology, Cambridge, MA, 18 July, https://www.technologyreview.com/s/611673/google-wants-to-make-programming-quantum-computers-easier/.

Giles, M. (2018b), “The world’s first quantum software superstore – or so it hopes – is here”, MIT Technology Review, Massachusetts Institute of Technology, Cambridge, MA, 17 May, https://www.technologyreview.com/s/611139/the-worlds-first-quantum-software-superstore-or-so-it-hopes-is-here/.

Goodfellow, I., Y. Bengio and A. Courville (2016), Deep Learning, MIT Press, Massachusetts Institute of Technology, Cambridge, MA.

Harbert, T. (2013), “Supercharging Patent Lawyers with AI: How Silicon Valley's Lex Machina is blending AI and data analytics to radically alter patent litigation”, IEEE Spectrum, IEEE, New York, 30 October, https://spectrum.ieee.org/geek-life/profiles/supercharging-patent-lawyers-with-ai.

Hardjono, T., A. Lipton and A.S. Pentland (2018), “Towards a Design Philosophy for Interoperable Blockchain Systems”, Massachusetts Institute of Technology, Cambridge, MA, 7 July, https://hardjono.mit.edu/sites/default/files/documents/hardjono-lipton-pentland-p2pfisy-2018.pdf.

House of Lords (2018), “AI in the UK: ready, willing and able?”, Select Committee on Artificial Intelligence – Report of Session 2017-19, Authority of the House of Lords, London, https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf.

Jordan, M. (2018), “Artificial Intelligence — The Revolution Hasn’t Happened Yet”, Medium, A Medium Corporation, San Francisco, https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7.

Knight, W. (2018), “Microsoft is Creating an Oracle for Catching Biased AI Algorithms”, MIT Technology Review, Massachusetts Institute of Technology, Cambridge, MA, 25 May, https://www.technologyreview.com/s/611138/microsoft-is-creating-an-oracle-for-catching-biased-ai-algorithms/.

Letzer, R. (2018), “Chinese Researchers Achieve Stunning Quantum Entanglement Record”, Scientific American, Springer Nature, 17 July, https://www.scientificamerican.com/article/chinese-researchers-achieve-stunning-quantum-entanglement-record/.

Marcus, G. (2018), “Innateness, AlphaZero, and Artificial Intelligence”, arxiv.org, Cornell University, Ithaca, NY, https://arxiv.org/ftp/arxiv/papers/1801/1801.05667.pdf.

McKinsey Global Institute (2018), “Notes from the AI frontier: Insights from hundreds of use cases”, discussion paper, McKinsey & Company, New York, April, https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-applications-and-value-of-deep-learning.

Mearian, L. (2017), “Blockchain integration turns ERP into a collaboration platform”, Computerworld, IDG, Framingham, MA, 9 June, https://www.computerworld.com/article/3199977/enterprise-applications/blockchain-integration-turns-erp-into-a-collaboration-platform.html.

National Research Council (1999), Funding a Revolution: Government Support for Computing Research, The National Academies Press, Washington, DC, https://doi.org/10.17226/6323.

New, J. and D. Castro (2018), “How Policymakers can Foster Algorithmic Accountability”, Information Technology and Innovation Foundation, Washington DC, https://itif.org/publications/2018/05/21/how-policymakers-can-foster-algorithmic-accountability.

OECD (2018a), “Going Digital in a Multilateral World, Interim Report of the OECD Going Digital Project, Meeting of the OECD Council at Ministerial Level”, Paris, 30-31 May 2018, OECD, Paris, http://www.oecd.org/going-digital/C-MIN-2018-6-EN.pdf.

OECD (2018b), “AI: Intelligent machines, smart policies: Conference summary", OECD Digital Economy Papers, No. 270, OECD Publishing, Paris, https://doi.org/10.1787/f1a650d9-en.

OECD (2018c), Meeting Policy Challenges for a Sustainable Bioeconomy, OECD Publishing, Paris, https://doi.org/10.1787/9789264292345-en.

OECD (2017), The Next Production Revolution: Implications for Governments and Business, OECD Publishing, Paris, https://doi.org/10.1787/9789264271036-en.

OpenAI (2018), “AI and Compute”, OpenAI blog, San Francisco, 16 May, https://blog.openai.com/ai-and-compute/.

Pratt, G.A. (2015), “Is a Cambrian Explosion Coming for Robotics?”, Journal of Economic Perspectives, Vol. 29/3, AEA Publications, Pittsburgh, https://doi.org/10.1257/jep.29.3.51.

Ransbotham, S. et al. (2017), “Reshaping Business with Artificial Intelligence: Closing the Gap Between Ambition and Action”, MIT Sloan Management Review, Massachusetts Institute of Technology, Cambridge, MA, https://sloanreview.mit.edu/projects/reshaping-business-with-artificial-intelligence/.

Robinson, L. and K. McMahon (2016), “TMS launches materials data infrastructure study,” JOM, Vol. 68/8, Springer US, New York, https://doi.org/10.1007/s11837-016-2011-1.

Simonite, T. (2016), “Algorithms that learn with less data could expand AI’s power”, MIT Technology Review, Massachusetts Institute of Technology, Cambridge, MA, 24 May, https://www.technologyreview.com/s/601551/algorithms-that-learn-with-less-data-could-expand-ais-power/.

Singer, P.L. and W.B. Bonvillian (2017), “’Innovation Orchards’: Helping tech startups scale”, Information Technology and Innovation Foundation, Washington DC, http://www2.itif.org/2017-innovation-orchards.pdf?_ga=2.11272691.618351442.1529315338-1396354467.1529315338.

Sverdlik, Y. (2018), “Google is Switching to a Self-Driving Data Center Management System”, Data Center Knowledge, 2 August, https://www.datacenterknowledge.com/google-alphabet/google-switching-self-driving-data-center-management-system.

Teresko, J. (2008), “Designing the next materials revolution”, IndustryWeek, Informa, Cleveland, 8 October, www.industryweek.com/none/designing-next-materials-revolution.

The Economist (2017), “Oil struggles to enter the digital age”, The Economist, London, 6 April, https://www.economist.com/business/2017/04/06/oil-struggles-to-enter-the-digital-age.

The Economist (2015), “Material difference”, Technology Quarterly, The Economist, London, 5 December, www.economist.com/technology-quarterly/2015-12-05/new-materials-for-manufacturing.

U.S. Council on Competitiveness (2014), “The Exascale Effect: the Benefits of Supercomputing for US Industry”, U.S. Council on Competitiveness, Washington, DC, https://www.compete.org/storage/images/uploads/File/PDF%20Files/Solve_Report_Final.pdf.

Vujinovic, M. (2018), “Manufacturing and Blockchain: Prime Time Has Yet to Come”, CoinDesk, New York, 24 May, https://www.coindesk.com/manufacturing-blockchain-prime-time-yet-come/.

Walker, J. (2017), “AI in Mining: Mineral Exploration, Autonomous Drills, and More”, Techemergence, San Francisco, 3 December, https://www.techemergence.com/ai-in-mining-mineral-exploration-autonomous-drills/.

Williams, M. (2013), “Counterfeit parts are costing the industry billions”, Automotive Logistics, 1 January, Ultima Media, London, https://automotivelogistics.media/intelligence/16979.

Wissner-Gross, A. (2016), “Datasets Over Algorithms”, Edge.org, Edge Foundation, Seattle, https://www.edge.org/response-detail/26587.

Zeng, W. (2018), “Forest 1.3: Upgraded developer tools, improved stability, and faster execution”, Rigetti Computing blog, Berkeley, https://medium.com/rigetti/forest-1-3-upgraded-developer-tools-improved-stability-and-faster-execution-561b8b44c875.

Zweben, M. and M.S. Fox (1994), Intelligent Scheduling, Morgan Kaufmann Publishers, San Francisco.

Notes

← 1. See Aaswath Raman’s 2018 TED talk, “How we can turn the cold of outer space into a renewable resource”, https://www.ted.com/talks/aaswath_raman_how_we_can_turn_the_cold_of_outer_space_into_a_renewable_resource.

← 2. Deep learning with artificial neural networks is a technique in the broader field of machine learning that seeks to emulate how human beings acquire certain types of knowledge. The word “deep” refers to the numerous layers of data processing. The term “artificial neural network” refers to hardware and/or software modelled on the functioning of neurons in a human brain.

← 3. AI will of course have many economic and social impacts. In relation to labour markets alone, intense debates exist on AI’s possible effects on labour displacement, income distribution, skills demand and occupational change. However, these and other considerations are not a focus of this chapter.

← 4. At its peak, ImageNet reportedly employed close to 50 000 people in 167 countries, who sorted around 14 million images (House of Lords, 2018).

← 5. e.g. datacollaboratives.org.

← 6. www.oceanprotocol.com.

← 7. Microsoft, for instance, is developing a dashboard capable of scrutinising an AI system and automatically identifying signs of potential bias (Knight, 2018).

← 8. This assessment, however, covers only the first year of a scheme in the United Kingdom.

← 9. http://montblanc-project.eu/.

← 10. www.research.ibm.com/quantum.

← 11. OECD Main Science and Technology Indicators Database, http://oe.cd/msti.

← 12. https://epsrc.ukri.org/funding/applicationprocess/routes/capacity/ircs/.

← 13. www.ennab.de.

← 14. https://wyss.harvard.edu.

Legal and rights

This document, as well as any data and map included herein, are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area. Extracts from publications may be subject to additional disclaimers, which are set out in the complete version of the publication, available at the link provided.

© OECD 2018

The use of this work, whether digital or print, is governed by the Terms and Conditions to be found at https://www.oecd.org/termsandconditions.