2. Artificial intelligence and the labour market: Introduction

Stijn Broecke

In 2019, the OECD’s Employment Outlook focused on the future of work and explored how various megatrends such as digitalisation, globalisation, and population ageing were reshaping the world of work. The overall message was one of cautious optimism: many of the megatrends appeared to bring new opportunities for improving labour market outcomes and mass technological unemployment seemed unlikely. Indeed, at the end of 2019, prior to the COVID-19 crisis, employment rates in most OECD countries were at record highs, despite the adoption of automating technologies. While some risks regarding job quality and inclusiveness were identified, most notably for low- and medium-skilled workers, the OECD argued that, with the right policies and institutions in place, the risks could be mitigated and the opportunities seized (OECD, 2019[1]).

Just four years later, the OECD is dedicating yet another volume of the Employment Outlook to the future of work and, more specifically, to the impact of artificial intelligence (AI) on the labour market. The OECD defines AI as:1

“A machine-based system that is capable of influencing the environment by producing an output (predictions, recommendations or decisions) for a given set of objectives. It uses machine and/or human-based data and inputs to: (i) perceive real and/or virtual environments; (ii) abstract these perceptions into models through analysis in an automated manner (e.g. with machine learning), or manually; and (iii) use model inference to formulate options for outcomes. AI systems are designed to operate with varying levels of autonomy.”

The justification for another Employment Outlook on the future of work lies in the astonishing progress AI has made, to the point that, in some areas, it has become difficult, if not impossible, to distinguish its output from that of humans. This raises urgent and important questions about the future of work. As an example, consider the following two paragraphs about the impact of AI on workers, one of which was written by AI, the other by a human:

  i. AI has the potential to transform our society and the way we work, just as innovations such as the computer and the internet have in recent decades. With its ability to analyse large amounts of data, perceive the world around it and generate text, AI will be a useful tool for some workers while displacing others. Although economists generally do not believe that AI will lead to the end of work, they do raise concerns about the impact of AI on wages and employment, the potential for AI to deepen inequalities, and whether the right kinds of AI are being developed and deployed.

  ii. The impact of AI on workers is likely to be both positive and negative. On the positive side, AI can automate repetitive tasks, which can increase efficiency and productivity, and free up workers to focus on more creative and higher-level tasks. This can lead to new job opportunities and higher wages for workers with the necessary skills to take advantage of the new technologies. On the negative side, AI can also lead to job displacement as machines and algorithms take over tasks that were previously performed by humans. This could lead to a decline in wages and employment opportunities for certain types of workers.

The second of these paragraphs was written by AI – more specifically, by Chat Generative Pre-Trained Transformer (more commonly known by its acronym: ChatGPT), a large language model (LLM) trained on vast amounts of data from the internet to produce text that sounds human-like. While it may have been possible to spot the AI-generated output (e.g. because it is more formulaic or impersonal than the human text), there is no question about its capabilities, and the implications for a range of real-life situations, including at work, are immediately apparent. In fact, since ChatGPT was launched in November 2022, various potential applications of large language models to the work environment have emerged – from co-authoring scientific papers (Stokel-Walker, 2023[2]) to passing graduate-level exams in law and business (Murphy Kelly, 2023[3]), assisting in clinical decision-making (Science Media Centre, 2023[4]) and helping to make court decisions (Jamal, 2023[5]).

Even before the advent of large language models like ChatGPT, high-profile examples of rapid AI progress had received significant attention. While recent advances in generative AI have been mind-boggling (including in image,2 voice and even video generation), AI has made equally impressive progress in many other domains, including computer vision (e.g. image classification and labelling), reasoning, problem solving, playing games, as well as reading comprehension and learning. AI can already answer 80% of the literacy questions and two-thirds of the numeracy questions in the OECD Survey of Adult Skills (Programme for the International Assessment of Adult Competencies, PIAAC) (OECD, 2023[6]). Even more astonishing is the short time span in which this progress has been made. Just six years ago, the biggest AI news story was that it had defeated the world champion at Go, a relatively simple game that follows a clear set of rules (BBC, 2017[7]). Today, AI is capable of beating humans at Diplomacy, a strategy board game that requires persuasion, co-operation and negotiation (Hsu, 2022[8]). Experts believe that AI will be able to solve the entire PIAAC literacy and numeracy tests by 2026 (OECD, 2023[6]).

However, there are still limitations to what AI can do. While the progress in AI has been impressive, there are still many things it cannot do – so-called “bottleneck skills” like complex problem-solving, high-level management and social interaction (Lassébie and Quintini, 2022[9]). Also, despite the hype, AI still frequently makes headlines for the wrong reasons, such as driverless car crashes (Laing, 2023[10]), racist image recognition software (Kayser-Bril, 2020[11]), biased recruitment tools (Dastin, 2018[12]) and chatbot blunders (Laing, 2023[10]). ChatGPT has been accused of bias (Jain, 2023[13]), hallucinations (Smith, 2023[14]) and copyright infringements (McKendrick, 2022[15]), amongst other things. These limitations point to some of the risks of using AI tools, particularly without human oversight.

Despite the limitations and risks, AI is beginning to make its way into the workplace. In 2022, through new surveys and case studies, the OECD gathered data on the impact of AI on workers and the workplace in the manufacturing and finance sectors of eight OECD countries.3 The research, which pre-dates some of the latest developments in generative AI, collected many interesting examples of AI use in the workplace, including: an image recognition technology that identifies spare auto parts from photos uploaded by customers; a production tracking and monitoring system that uses computer vision to locate tools and bring them to the correct place in the factory at the right time; and a natural language processing tool that assists maintenance workers in troubleshooting the root causes of machine breakdowns by querying a database of past service issues and their resolutions.

While the adoption of AI remains relatively low, rapid progress, falling costs and the increasing availability of workers with AI skills indicate that OECD economies might be on the brink of an AI revolution. The available data suggest that the share of firms that have adopted AI remains in the single digits, although large firms are more likely to have done so (approximately one in three) (Lane, Williams and Broecke, 2023[16]). Cost was the greatest barrier to AI adoption: it was cited by more than half of the finance and manufacturing firms the OECD surveyed in 2022 about AI use in the workplace (Lane, Williams and Broecke, 2023[16]).4 The second biggest barrier identified was a lack of skills to adopt AI (see also Chapter 5). These findings align closely with those of other surveys – e.g. IBM (2022[17]). Yet the cost of AI technologies is rapidly declining. For example, since 2018, the cost to train an image classification system has decreased by 63.6% (Zhang et al., 2022[18]) and, as AI enters the public domain, the rate at which these costs fall may be expected to accelerate. Generative AI applications such as ChatGPT are becoming increasingly available for a low monthly fee or even for free. At the same time, the availability of workers with suitable skills is growing: OECD research suggests that the AI workforce more than tripled between 2012 and 2019 (Green and Lamby, 2023[19]). Combined with the fact that AI is a general-purpose technology – i.e. one that can affect an entire economy – these trends suggest that AI may soon permeate workplaces, affecting all sectors and occupations.

A key motivation for employers to adopt AI is to boost productivity, and workers may gain as well. As an automating technology, AI carries the promise of cost savings and productivity gains, helping employers gain a competitive advantage. Indeed, one works council member at a manufacturer of automotive parts described to the OECD the importance of AI adoption in his industry as follows: “If a company does not adopt new technologies, then sooner or later it will no longer be able to continue to exist” (Milanez, 2023[20]). AI can also help companies improve product or service quality. At the same time, workers may benefit through improvements in job quality, worker well-being and job satisfaction. AI has the potential to eliminate dangerous or tedious tasks and create more complex and interesting ones instead. It can boost worker engagement, give workers greater autonomy, and even improve their mental health. Some workers may also benefit from higher wages (Chapter 4).

While there are potential benefits, there are also significant risks, including for employment. Firms do not hide the fact that one of their main motivations to invest in AI is to improve worker performance (i.e. productivity) and reduce staff costs (Lane, Williams and Broecke, 2023[16]). It is not surprising, therefore, that about 20% of workers in finance and manufacturing (across seven OECD countries) said that they were very or extremely worried about job loss in the next ten years (Lane, Williams and Broecke, 2023[16]). A key distinction between AI and previous technologies is that AI is capable of automating non-routine tasks. As such, AI has made the most progress in areas like information ordering, memorisation, perceptual speed and deductive reasoning – all of which are related to non-routine, cognitive tasks (see Chapter 3). As a result, high-skilled occupations have been most exposed to recent advances in AI, including: business professionals; managers; science and engineering professionals; and legal, social and cultural professionals. This extends the potential scope of automation considerably beyond what had previously been possible.5 While to date there is little evidence of negative employment effects due to AI (Chapter 3), this may be because AI adoption is still relatively low and/or because firms prefer to rely on voluntary quits and retirement to make workforce adjustments. Any negative employment effects of AI may therefore take time to materialise. Moreover, the risks of automation are not equally spread across socio-demographic groups, which could harm inclusiveness. While the impact of the latest wave of generative AI is not yet entirely clear, early estimates of occupational AI exposure that take into account the capabilities of large language models like ChatGPT reach conclusions similar to those of previous estimates: it is primarily high-pay occupations requiring higher-than-average education or training that are most exposed to AI.

There are also risks to job quality, and AI raises a number of ethical questions. Although AI has the potential to improve certain aspects of job quality, there are also reports that AI can heighten work intensity and increase stress (Chapter 4). In addition, the use of AI in the workplace opens up, or amplifies, a whole set of ethical issues (Chapter 6), some of which can also negatively affect job quality. For example, AI can change the way work is monitored or managed, which can increase perceived fairness but also poses risks to workers’ privacy and their autonomy in executing tasks. AI can also introduce or perpetuate bias. In addition, there are concerns around transparency and explainability, as well as around accountability. While many of these issues are not new, AI has the potential to amplify them. For example, even though human beings can be biased when making hiring decisions, the adverse impact of AI could be far greater by virtue of the volume and velocity of the decisions it takes, which could systematise and multiply bias. Once again, these risks tend to be greater for some socio-demographic groups that are often already disadvantaged in the labour market.

While there is much uncertainty about the impact AI will have on labour markets, there is a need to avoid technological determinism. A key message of the OECD Employment Outlook 2019 was that “The future of work will largely depend on the policy decisions countries make.” This message is echoed by prominent labour economists such as David Autor, who argues that “As we ponder our uncertain AI future, our goal should not merely be to predict that future, but to create it.” (Autor, 2022[21]).

There is an urgent need for policy action to ensure that AI is developed and used in a trustworthy way. This means that AI must be safe and must respect fundamental rights such as privacy, fairness and the right of labour to organise, and that it must be transparent and explainable. It also means that it must be clear who is accountable in case something goes wrong. Proactive and decisive action is not only important to protect workers, but also to promote AI innovation and diffusion because it reduces uncertainty. Principles like those developed by the OECD can help promote the use of AI that is innovative and trustworthy and that respects human rights and democratic values (OECD, 2019[22]). Adopted in May 2019, the OECD AI Principles are an OECD legal instrument that represents a common aspiration for adhering countries. Since then, other countries, including Argentina, Brazil, Egypt, Malta, Peru, Romania, Singapore and Ukraine, have adhered to the principles and, in June 2019, the G20 adopted human-centred AI Principles that draw from the OECD AI Principles. In addition, many firms, sectors and industries have adopted their own AI principles.

Some countries are adapting, strengthening and/or enforcing legislation. While guidelines can be more timely and adaptable in response to a changing landscape, legislation is more enforceable. In most countries, existing non-AI-specific legislation already provides a foundation for addressing several concerns about the use of AI in the workplace, for example legislation on data protection, discrimination, and consumer protection (see Chapter 6). Making sure that such legislation is up to date and reflects the new realities and challenges brought by AI will be important. In addition, many countries are considering AI-specific legislation, such as the AI Act in the European Union and the Algorithmic Accountability Act in the United States. The latest generative AI developments seem to have rekindled action in this area. The success of these measures will depend as much on their formulation as on their implementation. Measures that could facilitate implementation include technical standards and oversight mechanisms such as regulatory bodies or independent auditing. Additionally, guidance to help AI developers and employers understand and comply with the legislation, as well as engagement with stakeholders, can foster a shared understanding of its goals and requirements. A multifaceted approach that combines these measures may be necessary to ensure effective implementation.

Collective bargaining and social dialogue have an important role to play in supporting workers and businesses in the AI transition. They can facilitate AI adoption and use in the workplace, as well as shape and implement rights to address AI-related issues in a flexible and pragmatic manner while promoting fairness. Collective bargaining can also complement public policies in enhancing workers’ security and adaptability. In the insurance and telecommunication sectors, for instance, European social partners have signed two framework agreements on AI that address transparency in data use and protection against bias and discrimination. More recently, social partners have started engaging in “algorithm negotiations”, but only a few AI-related agreements have been signed to date. Yet social dialogue and collective bargaining face a number of challenges: the number of workers who are members of unions and are covered by collective agreements has declined in many OECD countries. In addition, the specific characteristics of AI and the way it is implemented – such as its rapid speed of diffusion, its ability to learn and the greater power imbalance it can create – add further pressure on labour relations. While AI technologies have the potential to assist social partners in pursuing their goals and strategies, the lack of AI-related expertise among social partners is a major challenge (Chapter 7).

Training will be important for workers to successfully navigate the transition. The impact of AI on tasks and jobs will change skills needs. On the one hand, AI will replicate some skills, like manual and fine psychomotor abilities, and cognitive skills such as comprehension, planning and advising. On the other hand, the skills needed to develop and maintain AI systems, and those needed to adopt, use and interact with AI applications, will become more important. The demand for basic digital skills, data science and other cognitive and transversal skills will also increase. While companies using AI say they provide training for it, a lack of skills remains a major barrier to adoption, suggesting more could be done. Public policies will therefore have an important role to play, not only to incentivise employer training, but also because a significant proportion of the training for the development and adoption of AI takes place in formal education. AI itself may present opportunities to improve the design, targeting and delivery of training, but it also brings risks and challenges that must be addressed (Chapter 5).

Policy should be evidence-based, yet little is currently known about the impact of AI on workers, the workplace, and the labour market more generally. The current edition of the OECD Employment Outlook seeks to address this gap, and the chapters that follow provide policy makers with the current state of knowledge about the impact of AI on job quantity (Chapter 3) and quality (Chapter 4), as well as the implications for three key policy areas: skills policy (Chapter 5), the ethical challenges posed by AI (Chapter 6), and the role of social dialogue and collective bargaining in supporting the AI transition (Chapter 7). These chapters draw on the OECD’s own work in these areas over the past few years, as well as on other available evidence.6

While this Employment Outlook is a step towards more evidence-based policy making on AI, there are still many unknowns. The challenge for research is similar to that facing policy makers: the exponential speed of AI development and its growing pervasiveness mean that one is constantly running to catch up with the facts. In addition to keeping tabs on the labour market impact of some of the latest AI technologies (e.g. generative AI), key areas for future research include: the impact of AI on inclusiveness and labour market concentration; its role in the delivery of public services; how it will change management practices; and the governance processes and structures required for the trustworthy adoption of AI in the workplace. The OECD will continue to work on these and other related topics in the years to come. In doing so, it will be important to continue to gather new and better data on AI adoption and use.

References

[21] Autor, D. (2022), “The labor market impacts of technological change: From unbridled enthusiasm to qualified optimism to vast uncertainty”, in Qureshi, Z. (ed.), An Inclusive Future? Technology, New Dynamics, and Policy Challenges, Brookings Institution, https://www.brookings.edu/wp-content/uploads/2022/05/Inclusive-future_Technology-new-dynamics-policy-challenges.pdf.

[7] BBC (2017), Google AI defeats human Go champion, BBC, https://www.bbc.com/news/technology-40042581 (accessed on 3 February 2023).

[25] Brodsky, S. (2022), Some Human Authors Worry AI Will Take Their Jobs—Here’s Why, Lifewire, https://www.lifewire.com/some-human-authors-worry-ai-will-take-their-jobs-heres-why-6951001 (accessed on 7 February 2023).

[12] Dastin, J. (2018), “Amazon scraps secret AI recruiting tool that showed bias against women”, Reuters, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G (accessed on 21 February 2022).

[23] Gault, M. (2022), “An AI-Generated Artwork Won First Place at a State Fair Fine Arts Competition, and Artists Are Pissed”, Vice, https://www.vice.com/en/article/bvmvqm/an-ai-generated-artwork-won-first-place-at-a-state-fair-fine-arts-competition-and-artists-are-pissed (accessed on 31 January 2023).

[19] Green, A. and L. Lamby (2023), “The supply, demand and characteristics of the AI workforce across OECD countries”, OECD Social, Employment and Migration Working Papers, No. 287, OECD Publishing, Paris, https://doi.org/10.1787/bb17314a-en.

[27] Harris, L. (2023), The Latest Casualty of Generative AI? Animators, dot.LA, https://dot.la/the-dog-and-the-boy-2659365902.html (accessed on 7 February 2023).

[26] Hirani, L. (2023), Can AI replace us all?, Bar and Bench, https://www.barandbench.com/law-firms/view-point/can-ai-replace-us-all (accessed on 7 February 2023).

[8] Hsu, J. (2022), Artificial intelligence: AIs built by Meta beat human experts at Diplomacy, New Scientist, https://www.newscientist.com/article/2343027-ais-built-by-meta-beat-human-experts-at-diplomacy/ (accessed on 3 February 2023).

[17] IBM (2022), IBM Global AI Adoption Index 2022, IBM, Armonk, NY, https://www.ibm.com/downloads/cas/GVAGA3JP (accessed on 31 January 2023).

[13] Jain, A. (2023), ChatGPT won’t crack jokes on women & Indians, netizens left guessing why, mint, https://www.livemint.com/news/india/chatgpt-won-t-crack-jokes-on-women-indians-netizens-left-guessing-why-11676171036353.html (accessed on 4 May 2023).

[5] Jamal, S. (2023), Pakistani judge uses ChatGPT to make court decision, Gulf News, https://gulfnews.com/amp/world/asia/pakistan/pakistani-judge-uses-chatgpt-to-make-court-decision-1.95104528 (accessed on 4 May 2023).

[11] Kayser-Bril, N. (2020), Google apologizes after its Vision AI produced racist results, AlgorithmWatch, https://algorithmwatch.org/en/google-vision-racism/ (accessed on 4 May 2023).

[10] Laing, K. (2023), Tesla (TSLA) Reports New Fatal Crash for Self-Driving Car Model S, Bloomberg, https://www.bloomberg.com/news/articles/2023-04-17/tesla-reports-new-fatal-crash-for-self-driving-car-model-s (accessed on 4 May 2023).

[16] Lane, M., M. Williams and S. Broecke (2023), “The impact of AI on the workplace: Main findings from the OECD AI surveys of employers and workers”, OECD Social, Employment and Migration Working Papers, No. 288, OECD Publishing, Paris, https://doi.org/10.1787/ea0a0fe1-en.

[9] Lassébie, J. and G. Quintini (2022), “What skills and abilities can automation technologies replicate and what does it mean for workers?: New evidence”, OECD Social, Employment and Migration Working Papers, No. 282, OECD Publishing, Paris, https://doi.org/10.1787/646aad77-en.

[15] McKendrick, J. (2022), Who Ultimately Owns Content Generated By ChatGPT And Other AI Platforms?, Forbes, https://www.forbes.com/sites/joemckendrick/2022/12/21/who-ultimately-owns-content-generated-by-chatgpt-and-other-ai-platforms/ (accessed on 4 May 2023).

[20] Milanez, A. (2023), “The impact of AI on the workplace: Evidence from OECD case studies of AI implementation”, OECD Social, Employment and Migration Working Papers, No. 289, OECD Publishing, Paris, https://doi.org/10.1787/2247ce58-en.

[3] Murphy Kelly, S. (2023), ChatGPT passes exams from law and business schools, CNN Business, https://edition.cnn.com/2023/01/26/tech/chatgpt-passes-exams/index.html (accessed on 4 May 2023).

[24] O’Connor, S. (2022), Actors worry that AI is taking centre stage, Financial Times, https://www.ft.com/content/7c26a93f-88ec-4a50-8529-7df81af86208 (accessed on 7 February 2023).

[6] OECD (2023), Is Education Losing the Race with Technology? AI’s Progress in Maths and Reading, https://www.oecd.org/education/is-education-losing-the-race-with-technology-73105f99-en.htm (accessed on 30 May 2023).

[22] OECD (2019), Artificial Intelligence in Society, OECD Publishing, Paris, https://doi.org/10.1787/eedfee77-en.

[1] OECD (2019), OECD Employment Outlook 2019: The Future of Work, OECD Publishing, Paris, https://doi.org/10.1787/9ee00155-en.

[4] Science Media Centre (2023), Expert reaction to study on ChatGPT almost passing the US Medical Licensing Exam, Science Media Centre, https://www.sciencemediacentre.org/expert-reaction-to-study-on-chatgpt-almost-passing-the-us-medical-licensing-exam/ (accessed on 4 May 2023).

[14] Smith, G. (2023), Hallucinations Could Blunt ChatGPT’s Success, IEEE Spectrum, https://spectrum.ieee.org/ai-hallucination (accessed on 4 May 2023).

[2] Stokel-Walker, C. (2023), “ChatGPT listed as author on research papers: many scientists disapprove”, Nature, Vol. 613/7945, pp. 620-621, https://doi.org/10.1038/D41586-023-00107-Z.

[18] Zhang, D. et al. (2022), The AI Index 2022 Annual Report, AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University, https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf (accessed on 3 February 2023).

Notes

1. https://oecd.ai/en/ai-principles.

2. In 2022, AI astonished the world with its image creation capabilities (e.g. DALL-E 2 and Stable Diffusion), which are now so good that they can fool humans and win art competitions, such as the Colorado State Fair’s digital art competition (Gault, 2022[23]).

3. The United States, Canada, Germany, Austria, the United Kingdom, Ireland, France and Japan (Lane, Williams and Broecke, 2023[16]; Milanez, 2023[20]). Note, however, that the results of these studies do not necessarily generalise to other sectors. Also, as in all cross-sectional studies, caution should be exercised in interpreting the results insofar as they may be partially affected by selectivity bias, since only workers who remain in the firm after AI adoption are surveyed.

4. Although the survey did not go into further detail about the various types of costs, these might include the cost of acquiring the technology as well as that of the data processing capabilities required to run the tools.

5. Several occupation groups have raised concern that the latest wave of generative AI may expand the range of occupations at risk of automation even further. In February 2023, animators were up in arms when an animation studio used generative AI software to create background images for a new film, threatening many jobs in the industry (Harris, 2023[27]). Voice actors (O’Connor, 2022[24]) and writers (Brodsky, 2022[25]) are equally worried about what AI might mean for their jobs, and the legal profession is another where AI is expected to replace a considerable share of human work (Hirani, 2023[26]).

6. The OECD’s work on Artificial Intelligence in Work, Innovation, Productivity and Skills (AI-WIPS) has been generously supported by the German Federal Ministry of Labour and Social Affairs (BMAS), with support also from Austria’s Federal Ministry of Labour, Social Affairs and Consumer Protection; Employment and Social Development Canada; Ireland’s Department of Enterprise, Trade and Employment; the U.S. Department of Labor; the UK Economic and Social Research Council; ESSEC Business School; and the Japan Institute for Labour Policy and Training.
