2. Expert commentary on the future of the public service

Prof. Peter Cappelli
Director of Wharton’s Center for Human Resources
University of Pennsylvania

Few topics generate more interest, and are perhaps more important, than anticipating the future. How to go about it, and in particular how to assess predictions, is less well understood. The arguments below consider what we know about that topic in the context of predictions about the future of work. The conclusions offer suggestions about how to proceed in the face of considerable uncertainty about what the future will bring.

How do we make sense of the arguments being discussed about the future of work?

There are at least two quite different types of predictions about the future that are made in the sciences. The first relates to questions where we are predicting events that have occurred before and have historical data to help us. We calculate these predictions by building forecasts. These include predictions as to the state of the economy or of politics, where we use outcomes and experiences in the past to extrapolate to the future.

A great advantage of this approach in terms of epistemology is that we have some ability to assess how accurate our forecast of the future is based on how well our model has predicted outcomes in the past. The downside of the approach is that it only works if the underlying structure of the situation or context remains the same in the future as it has been in our historical data. Typically, we cannot tell that until we build the model and discover that it does not work.

The second type of claim is one where the past is not likely to be a good predictor of the future. This is the idea that something new has happened or will happen that will cause current arrangements to be, in the remarkably overused phrase of the late Harvard Professor Clayton Christensen, “disrupted.” Claims about the influence of artificial intelligence can fall into this category.

We might describe the effort to make such predictions as “expert judgment.” Tetlock (2017) studied the phenomenon of expert predictions extensively, especially with respect to political events. He found that the accuracy of experts in making these predictions barely beat “monkeys tossing darts at a dart board” or, less colourfully, was no better than chance. Tetlock and Gardner (2018) engaged in a large exercise to see what makes some individuals better than others at actual predictions of events that could later be confirmed. Their conclusions are important to bear in mind when looking at forecasts of disruptive events.

For example, those who question assumptions, who look for comparable situations and events elsewhere, and who consider the counterarguments to their positions do better at predicting. Those who advocate for a position rather than examining it soberly do poorly in their predictions. This should not be too surprising. In many cases, the goal of the advocates is much more about generating attention (and subsequent business) than being right in their forecasts.

Much of the popular interest in the future of work has been driven by claims about how work might change. An important development in the workplace that has helped drive media interest in the future of work has been the outsourcing of human resource tasks to vendors. The US vendor industry alone does over half a trillion US dollars of business every year, almost equivalent in scale to the entire construction industry. Their public relations efforts include issuing reports and studies about work. These often include reports about problems – mainly about the labor force or related attributes such as education – and not surprisingly they also offer solutions to those problems that their business provides.

To illustrate, in the mid-1990s, some of these vendors began to argue that the United States was facing a real labor shortage because of the smaller “baby bust” cohort of young people in the labor force, at least compared to the baby boomers. The United States had never had anything like a labor shortage in modern times, and the projections of the US Bureau of Labor Statistics showed no evidence of any decline in the projected size of the labor force. Nevertheless, reports continued about dealing with the coming labor shortage, and human resource departments developed contingency plans for dealing with it.

That was followed in the early 2000s by the creation of the “millennial” notion, the idea that younger people were somehow wired differently than those who came before them and had to be treated differently. The National Academy of Sciences in the United States examined the idea and thoroughly debunked the notion that there even is a definable millennial generation with specific preferences at the workplace. It pointed out that these claims mainly reflected the fact that this age group was simply younger than the older individuals to whom they were compared, and that their preferences would change as they got older.1 Yet consulting projects to understand millennials and training programmes to somehow accommodate them persist undaunted.

It is difficult to overstate the influence of efforts like these on employer practices. The consequence of chasing problems that are not real is that they distract human resource executives and their limited resources from tasks that truly matter to employees and the organisation. Even when the reports and the marketing efforts are targeted at private sector employers, they eventually reach the public sector as well, in part because of the conceit that the private sector is better managed, more advanced in its ideas, and so forth. Especially in Anglo-American countries, where business expertise is seen as especially applicable to government operations, public sector leaders have to respond to these ideas simply because their private sector counterparts take them up.2

In the public sector context, many of the arguments about the future of work, and perhaps the most important ones, concern practices that are already underway in the private and public sectors. Here the assumption is that thoughtful public sector leaders may look into these arguments as a way to anticipate changes. There are also pressures on some leaders to adopt ideas from the private sector as a means of legitimacy in countries where business enjoys a high level of legitimacy.

One set of ideas concerns the management of workers. These are the easiest ideas to assess because they already exist, and we know something about how these practices are actually being used and what they do.

A short list of recent developments in this domain includes the following:

  • Greater use of contracting, which includes independent contractors but especially “leased” employees provided under contract from vendors.

  • Greater outsourcing of employee management tasks.

  • Agile project management systems.

  • Reforms of performance appraisals and moves toward more continuous feedback.

  • Greater use of performance-related pay.

  • Greater use of data science to provide answers to predictions, such as which candidates will be good performers.

A second set has to do with claims about new developments affecting work per se that have yet to happen. Here there appear to be two major claims:

  • Permanent remote work, at least for some jobs.

  • The role of artificial intelligence.

These will all be considered in the sections that follow.

Bearing in mind the above criteria for making accurate predictions, we turn to the first set of projections about how the future of work might change in the public sector. These are practices and arrangements for managing employees that already exist in some leading organisations. They are unlikely to be revolutionary notions, but they have the advantage of not being speculative: they exist now, and we have a reasonably clear idea how they work. The relevant question is whether we think they would bring benefits if expanded in the public sector.

There is enormous diversity within the public sectors of any individual country as well as diversity within any function or programme across countries in how work is executed and employees managed. It is difficult in that context to do more than make broad generalisations. With that in mind, we turn to these management practices, which have become popular in workplace management.

Outsourcing and contracting are two related concepts. They refer to the boundary of organisations: the extent to which the work of an organisation is pushed outside its boundary to vendors in the form of outsourcing, and the extent to which non-employees are brought inside the organisation to perform tasks that had been done by employees. The advantage of outsourcing and contracting with respect to managing uncertain futures is the ability to secure quickly organisational capabilities that would otherwise require considerable time to develop, especially in the public sector with its extra oversight requirements.

Outsourcing depends on the availability of a competitive market of suppliers and vendors, which is now robust in virtually every aspect of managing work and the workplace. It is quite possible that vendors can also offer greater expertise and lower costs from economies of scale than we could achieve even if we developed the capabilities ourselves. The most common examples have been managing payroll, where employees are spread across locations that each have their own tax and reporting requirements with respect to pay, and employee benefits such as retirement plans, where actuarial matters and, again, tax considerations require specialised expertise.

There are disadvantages as well, of course, some of them fundamental. How well outsourcing works depends on the contract that is negotiated for the arrangement, as vendors have no obligation to adjust what they deliver to the changing needs of the client or if the client makes an initial mistake about what it needs. Vendors also fail and go out of business, in which case virtually all of their obligations are nil. Even though contracts may bind them, vendors may decide that an agreement has become too costly and pay the damages necessary to get out of it. Disagreements about what is being delivered are common and can be difficult to adjudicate.

Contracting takes two forms. The first is with individual contractors, who come in to perform specific tasks that can be described in contracts. The second is to engage vendors to provide workers under contract to the organisation, sometimes known as staffing agencies. In some cases, the distinction between the latter and outsourcing is modest. For example, “master service providers” in the United States are vendors who take over entire functions, typically IT work, at the client’s location. They may be deeply embedded in the organisation, but they are not employees of the organisation.

The advantages of contracting are similar to those of outsourcing: the ability to access expertise, or even just additional hands, that the organisation needs. A second advantage is that contracting arrangements tend to be much easier to adjust than outsourcing. The contracts for individual contractors tend to be short term, and deviations from them, e.g. asking a contractor to change what they are doing, are often accommodated informally without turning to legal remedies. Depending on a country’s legal framework, the workers from staffing firms can be redirected to different tasks by the client (common law countries refer to this as the “borrowed servant doctrine”), and the contracts with the vendor typically allow the client to adjust the number of workers provided up or down. Another way to describe this is that these vendors make labour more of a variable than a fixed cost. The downsides are, first, that finding the precise skills needed to operate in the public sector may well be quite difficult in the outside market. Second, start-up issues such as security clearances can be considerable, and little prevents contractors from walking away from a job if something better pops up elsewhere.

In summary, contracting gives clients more control than outsourcing does, and it is easier for clients to get problems resolved when they occur.

Arguably the biggest innovation in people management in the 2000s has been the rise of “agile” as an approach to managing projects. The term took off in 2001, when a group of software developers drew up a short list of what they saw as the key factors in developing good software. As Agile Manifesto framers Alistair Cockburn and Jim Highsmith noted in 2001, the essence of the agile approach is that it puts people and their interactions above process and planning (Cockburn and Highsmith, 2001[1]). Defining precisely what constitutes an agile approach has become something of a fetish – more than 1500 academic articles were written about it in just the first decade after the Agile Manifesto was published3 – but there is agreement on the key themes.

  • Small teams working collaboratively using an approach called “Scrum,” where decisions are made in a transparent fashion.

  • Priority to face-to-face interactions, as opposed to top-down decisions, and to iterations over plans. Autonomy for the team.

  • Customers/Users are involved all along including in design.

  • Resources are allocated based on need as it emerges – including “sprints” where they are used intensively to crack hard tasks – as opposed to based on plans.

  • Stand up prototypes quickly, get feedback to improve them.

  • Feedback from users and testing of progress with them happen throughout.

The most notable aspect of agile may be what it takes away: top-down planning and control systems. When we started projects in the past, most of us had to develop a plan that specified what we were going to achieve – what the end result would look like, what it would be capable of doing, and so forth – and then what it would cost, how long it would take, and what the intermediate goals would be (e.g. how much would be accomplished this quarter, how much next quarter). That plan had to be approved by the leadership and especially by the CFO.

Most everyone who has led a project using this approach knows that it is largely guesswork. For a project doing something new, it is impossible to predict all that with any accuracy. Good project managers have to learn how to build buffers into the timelines and budgets to deal with inevitable unforeseen problems, how to package interim results to make them look like progress, and even how to hold back evidence of real progress to make it look like we are hitting our marks on the plan.

Agile gets rid of all of that. The project team is tasked with a problem to solve, and while there may well be discussions as to what this might cost to do, they are then on their own to do it, asking for additional resources when they need it, and finishing it as soon as they can. The evidence we have now, including companies like GE that are not simply in tech, is that agile projects are cheaper, faster, and have better outcomes than the previous planning-based approach. Simply cutting out the planning time and energy and the buffers that are built in to ensure that the plan is met saves a lot of time and resources.

Will agile expand through the public sector?4 Nascent examples of agile-based projects are in the works in many governments around the globe now. A year or so before the pandemic, agile was about to be institutionalised in many companies, and it might well have become one of those “best practices” that was pushed faster onto the public sector. An important constraint for agile in government operations is the concern about accountability, which in the context of agile projects manifests itself as knowing in advance how much money will go to whom and for what. The idea of trusting that employees will only spend what is needed is perhaps especially difficult when they are spending government money.

The human resource implications of agile systems are worth considering. Agile teams need resources just in time, when problems occur. That means securing additional staff, additional training and skills, contractors, and so forth when needed. It is extremely difficult for agile to work without that support. Whether public sector agencies could deliver that resource flexibility is an open question (Cappelli et al., 2018[2]).

At the heart of performance management is the idea that it is important to see how employees are performing in their jobs, in part to fix performance problems and in some countries in part to recognise and reinforce better performance. This interest is greatest in the Anglo-American countries perhaps because it is rooted in the more open-ended nature of common law employment and its set of mutual obligations that need to be managed continuously.

The most common manifestation of performance management is the performance appraisal process.

At various times, performance appraisals have been designed to do three different tasks. The first was to measure the performance of employees to determine who should be promoted. The second was to help employees with career development, particularly important for white collar, corporate employees in the 1950s. The third was to help improve their performance. By the 1980s, the emphasis had returned to measuring worker performance, this time to hold employees accountable for it, rewarding the good performers with merit-based pay and disciplining the poor ones.

Recognition that contemporary practice was not doing any of these tasks well has been widespread for decades. Specifically, career development more or less disappeared as internal advancement faded. Efforts to improve performance were essentially impossible with a simple end-of-the-year accounting. Merit pay budgets were too modest to expect much differentiation in motivation from them.

The most important development in performance management in 50 years or more started in the US private sector in the 2010s, out of frustration with the appraisal process. The idea was simply to get rid of the end-of-the-year appraisal exercise and substitute for it a process of continuous discussion between supervisors and subordinates about how things are going. If it could be executed, it was almost certain to be better at the task of improving performance, and it could not be much worse at “holding workers accountable” than a single, end-of-the-year meeting.

Before the 2020 pandemic, some estimates suggest that as many as 30% of US corporate employers had either gotten rid of their annual performance appraisal system altogether or moved to reform it in the direction of more continuous discussions. That included companies like GE, which had been famous under Jack Welch for advocating the mandatory dismissal of the poorest performers in its annual review, as well as all the major accounting and consulting companies and many of the Silicon Valley software employers. The initial reviews of these new approaches were almost uniformly positive, as we might imagine: there was more learning, problems were identified and solved faster, relationships with supervisors improved, and so forth.5

As with all innovations, however, this one involved change, which also leads to resistance. The initial push back on the reform of performance appraisals came from top executives who were firmly behind the idea that it was important for everyone to have a score and that without it, there would be no accountability. Reform efforts died in many organisations for that reason alone.

Further, dropping performance appraisals was the easy part. Forcing supervisors to talk to their subordinates proved more difficult, and many organisations that dropped appraisals did nothing to make that happen. Whether these efforts will continue after the pandemic is not clear. They are not yet seen as the type of uniform best practice that the public sector would be expected to adopt. But instituting the notion of continuous conversations between supervisors and subordinates, beyond the annual performance appraisal process, would be very useful.

The fact that public sector employment arrangements differ from those in the private sector in having fewer carrots and sticks – dismissing employees is more difficult, pay raises are more limited, and so forth – increases the need for better performance management precisely because rewards and punishments are not sufficient. The assumption is often that performance issues are simply due to motivation, but that is rarely the case. Often there are misunderstandings about performance goals, about how tasks should be performed, and about the issues facing subordinates. Supervisors also have much more influence than they may think, such as shaping the tasks that subordinates perform and crafting their work to make it more meaningful. The first step in doing so is to have more conversations with subordinates, which is the goal of performance appraisal reform.

The idea behind performance-related pay is that incentives are an important source of motivation that can be harnessed by tying goals for employees to payments. Central to the incentive idea is the notion that employees should know in advance what it is that you want them to do and what the reward will be for doing it. It began as an Anglo-American notion, especially important in the United States, and is rooted in the belief that employees are rational economic actors interested in maximising their own pay: give them more rewards for something you would like them to do, and they will do more of it.

A great advantage of incentives as a management practice is that they are simple to introduce, at least initially. Unlike the hard work of changing an organisational culture or improving employee commitment, dropping incentives into place is typically a quicker intervention.

A fundamental choice in using incentive pay is deciding what the measure of performance should be. Do we tie pay to overall organisation or company performance, on the grounds that it is ultimately what we want? The drawback is that individual employees may well see no ability to influence those high-level outcomes. Do we pay for task-level outcomes that individual workers can easily control, at the risk that they suboptimise to achieve that goal, such as ignoring other tasks to run up their performance on the one that is being measured? There is no obvious solution to that tradeoff.

In the United States, incentive pay has been advocated as a solution for many public sector settings, most notably in education, where the belief was that schools run by leaders who had a financial incentive for students to achieve more academically would perform better.

There is considerable evidence that incentives do get individuals to do more of what is rewarded, provided that they know how to do it. For example, students who are paid to do homework sets will do more of them. There is also evidence that incentives do not increase the ability of individuals to succeed at tasks that they do not know how to do: schools whose leaders are paid more when they produce higher levels of student outcomes do not necessarily produce those outcomes, in part because getting students to learn more is a complex and difficult task. Lack of motivation is not necessarily the problem. There is also extensive evidence that incentives encourage recipients to find suboptimal ways to achieve their goals. For example, schools with the above incentives make considerable efforts to get rid of poorly performing students and attract higher performing ones to improve average student achievement scores. Finally, when people have a social purpose or charitable motivation for doing something, paying them to do more of it actually reduces those other motivations.6

In short, incentives work well in very focused contexts where the outcome is within the control of the individuals, where it is easy to specify and measure, and where the possibility of suboptimisation (overusing resources to achieve the measured goal and cutting back on other outcomes) is minimal.

In other contexts, it is not so useful and may even lead to perverse outcomes.

The pressure for the public sector to use more incentives comes from those who believe as a matter of principle that they are basic to motivation and, with few exceptions, are unaware of the evidence to the contrary. They also may appear to be a fairer way to distribute limited resources: rather than a very minor across-the-board increase, it may seem more reasonable to allocate meaningful increases where performance has improved.

It is also the case, however, that some of the mechanisms used in incentive contexts may create motivation even without the payment of the incentive. Measuring and reporting on individual performance has a strong motivational effect. Making comparisons of performance across individuals has a separate, motivating effect through social comparisons and peer pressure. Some part of the motivation that incentive programmes produce may actually come through these mechanisms rather than through the payments themselves.

Separate from the arguments about what artificial intelligence might do to work going forward (see below) are predictions about what the more focused interventions from data science might do to workforce management. Because data science in aid of workforce management is already in use in much of the private sector, at least in modest ways, it is not a stretch to look at the lessons there and what they might suggest for the public sector.

Data science comes from the field of engineering, and it is to statistics what engineering is to science: an approach more focused on getting accurate answers than on how those answers were generated. Machine learning is the technique most commonly used in data science. In contrast to standard statistical models that focus on one or two factors already known to be associated with an outcome like job performance, machine learning algorithms are agnostic about which variables have worked before or why they work. The more the merrier: they are all thrown together to produce one model that predicts some outcome, such as who will be a good hire, giving each applicant a single, easy-to-interpret score as to how likely it is that they will perform well in a job. The algorithm builds a complex, nonlinear model that maps the independent variables, say the attributes of individuals, onto the outcome, say their job performance. Then, as with forecasts, we plug the attributes of an individual job candidate into that complex model, and it tells us how closely aligned that candidate is with our best performing employees.
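To make the mechanics concrete, the following is a minimal, self-contained sketch of this kind of scoring exercise, written in Python with the open-source scikit-learn library. The candidate attributes, labels and figures are illustrative assumptions rather than a description of any real hiring system.

```python
# Minimal sketch of building a candidate-scoring model on (synthetic) historical data.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Hypothetical attributes of past hires (column names are illustrative only).
history = pd.DataFrame({
    "years_experience": rng.integers(0, 20, n),
    "test_score": rng.normal(70, 10, n),
    "degree_level": rng.integers(0, 4, n),
    "referrals": rng.integers(0, 3, n),
})
# A made-up "high performer" label loosely related to the attributes.
signal = 0.05 * history["test_score"] + 0.1 * history["years_experience"]
history["high_performer"] = (signal + rng.normal(0, 1, n) > signal.median()).astype(int)

X = history.drop(columns=["high_performer"])
y = history["high_performer"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A nonlinear model that throws all the attributes together and maps them to the outcome.
model = GradientBoostingClassifier().fit(X_train, y_train)
print("Holdout AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))

# Plug one new applicant into the model to get a single, easy-to-interpret score:
# the predicted probability that they will be a high performer.
new_candidate = pd.DataFrame([{"years_experience": 4, "test_score": 78.0,
                               "degree_level": 2, "referrals": 1}])
print("Candidate score:", round(model.predict_proba(new_candidate)[0, 1], 3))
```

The point of the sketch is not the particular algorithm but the workflow: historical attributes and outcomes go in, and a single score per candidate comes out.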

In contrast to traditional statistical models, which might tell us how well candidates score separately on each of several attributes that have been shown elsewhere to predict job performance, such as IQ scores or experience, machine learning models give us one score that summarises all the attributes for an individual candidate. What such a model cannot tell us is why one candidate scored better than another: was it their personality score, their college grades, or something else? It takes considerable additional effort and programming to identify the effect of any one factor in a machine learning algorithm, let alone to compare across attributes.
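That additional effort can be illustrated by continuing the hypothetical sketch above with permutation importance, one common post-hoc technique. It indicates which attributes the model leans on overall, but still does not explain why a particular candidate scored as they did.

```python
# Continuing the hypothetical sketch above: shuffle one attribute at a time and
# measure how much predictive accuracy drops. This shows which attributes the
# model relies on in aggregate, not why any single candidate scored as they did.
from sklearn.inspection import permutation_importance

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in sorted(zip(X_test.columns, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name:>18}: {importance:.3f}")
```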

Algorithms derived in this manner can be used to predict anything in the workplace, not only which candidates are likely to do best but which ones should be promoted, what career paths make the most sense in terms of future success, and so forth.

The advantages of the data science approach begin with the fact that the way we make decisions now is not very good. We typically rely on immediate supervisors to make these decisions, sometimes with support from test scores and other measures of attributes, sometimes without, and those decisions are likely to be full of bias and inconsistencies across decision makers. Machine learning algorithms have the great advantage of being less biased in that regard. They look at the same attributes for all candidates, and they treat them all in the same manner, based on their relationship in the “learning” data with what actually tracks the outcomes in question.

Partly because they standardise both data and practices, and because they have only one goal, namely to produce an accurate estimate, they will do a better job than we are doing now at predicting the desired outcome, whatever that is. But because they standardise and treat everyone the same way, when they make mistakes, those mistakes tend to be at scale, which also makes them easier to spot. The best-known examples have been cases where the data on which the algorithms were based were biased, not surprising given that virtually all historical data reflect the biases of the period when they were created. For example, past discrimination against women meant that fewer of them made it to the positions most associated with success, so an algorithm built on that data would suggest that women candidates are less likely to succeed.7

The same prejudice is likely to be in the heads of individual decision makers, but it is much more difficult to pin down and identify there than in an algorithm, where it is easy to check whether, say, scores for women candidates are lower than for men, other things equal (Hoffman, Kahn and Li, 2018[3]). In this regard, current practice in the public sector, and civil service systems in particular, is likely to be far better than what we see in the private sector because the former are more standardised and give less discretion to individual decision makers.
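As a hedged illustration of that kind of audit, the sketch below continues the hypothetical example above and additionally assumes the statsmodels library. It attaches an illustrative group label to the scored candidates and checks whether scores differ by group once the other attributes are controlled for; with the synthetic data used here, the gap should be close to zero.

```python
# Sketch of a bias audit on the hypothetical model above. The "gender" column is an
# invented label added only for the audit; it is NOT an input to the scoring model.
import statsmodels.formula.api as smf

audit = X_test.copy()
audit["score"] = model.predict_proba(X_test)[:, 1]
audit["gender"] = rng.choice(["F", "M"], size=len(audit))  # illustrative label only

# Raw gap in average scores by group.
print(audit.groupby("gender")["score"].mean())

# "Other things equal": regress the score on the group label while controlling
# for the attributes the model actually saw.
fit = smf.ols("score ~ C(gender) + years_experience + test_score"
              " + degree_level + referrals", data=audit).fit()
print(fit.params["C(gender)[T.M]"], fit.pvalues["C(gender)[T.M]"])
```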

The question about the use of data science algorithms in something like hiring, where they seem to be used the most, turns on how important it is to get the best likely performer into a role versus how important other factors are, such as the fairness of the process (procedural justice) and how the outcomes are distributed across different stakeholders (distributive justice). In the private sector, the former seems to be more important; in the public sector, procedural justice issues are important and codified. For example, an algorithm that predicted well whom to hire would nevertheless run into serious difficulties if it turned out that it did not track well with civil service test scores, a process issue, or if it was positively associated with certain nationalities and negatively with others for international agency jobs, a distributive issue.

On topics that are less consequential than hiring, algorithmic guidance might well be welcomed. Even on material outcomes such as hiring, it is possible to generate algorithms and use them as only one input in the hiring process. In many contexts, that might appear to defeat the purpose for having them, but where issues other than the best predictor matter, it may well be a reasonable approach.

Data science is pushing the use of these tools into areas that create concerns even in the private sector. For example, algorithms that predict turnover and are built on data from social media sites raise privacy issues, flagged earliest in the European Union. A more general concern is the inability to explain to those affected by algorithms what their scores mean. When we use algorithms to drive decisions on something like promotions, we lose the ability to explain to employees who did not advance what they need to do for a chance next time.8

We turn now to the more challenging predictions, those claiming that some development will cause the future of work to be unlike anything we have seen in the past. That makes it impossible to use forecasts to assess the likelihood that such predictions will come true, or to use experience to assess either what is required to meet them or the costs and benefits of responding to them.

The easiest of these predictions to address has to do with the temporary practices associated with the social distancing policies of the pandemic: the idea of continuous or permanent working from home. More generally, the idea here is to separate where we actually do our work from the location of our organisation.

Depending on the country, more than half of employees do work that could be done remotely (mainly white collar jobs, excluding jobs requiring interaction with people, such as services, and physical integration as in manufacturing). Public sector employees in the United States appear to have higher rates of remote work than others in the economy as the chart below indicates.

In the European Union, roughly 42% of employees in public administration were working full-time from home during the pandemic (Eurofound, 2020[4]). The prediction concerns how that number changes when pandemic requirements ease and employees are allowed to return to the office.

Why would we think that employees will not return to the office? It is difficult to know how well remote work functioned from the perspective of the organisation, but at a minimum, it was not the disaster that many employers expected, and in many organisations, work appeared to function more or less as it had with employees in the office. That may suggest some ability to keep workers at home.

In countries where we have data addressing the employee experience, most employees appreciate some aspects of remote work and want to continue them going forward. Of course, employees want many things that they never get from employers. If employees want it, there is some chance that their staff unions might advocate for it, perhaps as an alternative to higher wages, which appear difficult to secure.

On the organisation’s side, why might employers be willing to accommodate working from home? In the US private sector, chief financial officers like the idea because it means eliminating offices and real estate costs. In other words, they are thinking about permanent working from home, not the occasional use as most employees prefer. Public sector agencies may have a similar interest in cutting office space, especially those that operate in expensive cities.

The benefits to employers of remote work appear to turn mainly on permanent remote work where it is possible to eliminate office space. It is not clear that many employees want that arrangement, and what we know from research on remote work before the pandemic is that employees who work remotely are disadvantaged in many ways as opposed to their in-office counterparts. If an employer moves in the direction of having employees who have no office and are located elsewhere, it is only a small move toward having them be independent contractors.

During the dot-com boom in the late 1990s, companies with expensive real estate encouraged employees to come to their office location only when necessary and by appointment. This became known as “hoteling” in the United States and “hot-desking” elsewhere. The idea was that, on balance, fewer offices would be needed, and companies could shrink their office costs. That model essentially failed in the United States, in part because employees wanted their own offices and in part because an office where the people present change every day offers few social benefits – no consistent interactions and networks – which are among the main benefits of having offices.

Public sector and private sector offices operated remotely as of spring 2021 with some success, which raises the question of what, if anything, from that experience will carry over after the restrictions are lifted. Many employers are talking about hybrid workplaces, where employees spend some time continuing to work from home. The private sector interest, frankly, is driven by the prospect of saving office space if employees are no longer there. As a result, there has been considerable interest in, and there have been early announcements of, moving some jobs to permanent remote work. The other alternative, allowing employees to keep their offices and work from home as well, has less financial appeal for employers and potentially greater complexity. The reason for doing it appears to be that employees say they like it. It is not clear that there are equivalent benefits for employers, except that hybrid schedules may be a perk that attracts candidates in hiring, as some tech employers have argued.

We know a fair bit about what happens to individuals who work remotely when their colleagues do not, and the results are not good. They tend to be overlooked and cut off from social relationships.9 My read of the anecdotal experience with arrangements where some are in the office and some are remote, connecting electronically, has not been positive. If employees choose their own work schedules, then these half-Zoom/half-office interactions are inevitable because of the difficulty of having the right people in the office at the right time.

If public sector employers decide to maintain higher levels of work-from-home arrangements permanently, it is not that difficult an exercise to undertake, given that they did it during the pandemic and were, to various extents, doing it already beforehand. We have had more than a year of experience to learn whether it is worth continuing. We know roughly what the implications are, and the costs of being wrong are ones we should be able to measure. What is not so clear are the benefits, although experimenting would answer those questions.

Far and away the best-known and loudest arguments about the future of work have been those associated with artificial intelligence and what it might do to the world of work in the future. These are disruptive claims asserting that the future will not be like the past, that the new developments in AI will change the structure of employment relationships such that extrapolations from prior experience are unlikely to be accurate predictors of the future. We might think of this as a double uncertainty: we cannot say with any certainty what AI innovations will look like in the future, which makes it impossible to assess what is required to introduce them, what the benefits will be from them, and what the costs are if the predictions turn out to be wrong and we follow them.

Arguably the most influential of these prediction arguments comes from Brynjolfsson and McAfee, who argued that the technology emerging now is different in fundamental ways from what we have seen before and will affect the workplace in different ways than we have seen before (Brynjolfsson and McAfee, 2012[5]; Brynjolfsson and McAfee, 2014[6]). The most attention-getting claim in their book, which appeared at a time of substantial unemployment in the United States, is that this new technology will lead to considerable job loss. So far, there is little if any evidence of that happening.

An effort to quantify such expert judgment, which has had considerable influence, was Frey and Osborne’s survey asking computer experts to assess whether it was possible for computers, under the best circumstances, to take over the central task of a set of jobs, or whether it would be possible to do so shortly (Frey and Osborne, 2017[7]). Their conclusion was that this was the case for almost half of jobs.

But the popular conclusion drawn from this study was typically that those jobs will be taken over by computers, and soon. The obvious problems with that conclusion begin with the fact that jobs are made up of many tasks, and the one that is “central” may not take up a majority of time, may not add the most value, and so forth. Even if computers did take over that central task, the other tasks still have to be done. Applying a task-based methodology, the OECD concluded that only 14% of jobs are likely to be fully automated, and 32% partially automated.

The fact that it is technically possible for computer systems to take over a task does not mean that they could do it well or that it would be cost-effective for them to do so. Perhaps more important, new IT systems tend to add functionality that was not there before, creating new tasks.

More directly, it is possible to compare what happens to employment levels when IT investments go up. The assumption is often that IT is introduced to eliminate jobs, but there is no real evidence for that. Bessen looks at US data and finds that increased IT use is actually associated with expanded employment. He also finds no evidence of job polarisation associated with greater IT use (Bessen, 2016[8]). Aum, Lee and Shin found that IT investments were actually smaller for lower-level jobs doing routine work than for higher-level jobs, inconsistent with the earlier view that IT eliminates lower-level jobs but also inconsistent with the notion that it disproportionately targets middle-level jobs (Aum, Lee and Shin, 2017[9]). Gregory, Salomons and Zierahn (2016) also conclude that IT investment in Europe is associated with increases in employment (Gregory, Salomons and Zierahn, 2016[10]).

In summary, to the extent that there is evidence about the introduction of IT in the past, there is no consistent evidence that it reduced employment. Of course, the claim is that this time it will be different. By 2018, belief that this technology would soon be in place was so widespread that one US presidential candidate made dealing with the inevitable job loss among truck drivers a centrepiece of his campaign.10 Two years later, the prospects of it happening in the foreseeable future looked so remote that most of the original players had pulled out of the effort to develop driverless vehicles, including Uber, which appeared to have the strongest interest in replacing drivers (Marshall, 2020[11]).

Because we have no clear idea what the impact of AI will be on public service jobs in a general sense, it is difficult to know how to respond. For example, we know that AI could eliminate many procedural tasks, but this depends on which AI applications are introduced, and many jobs in the public sector require competencies that are, so far, not easily replaced. Returning to the discussion above, replacing some tasks does not necessarily mean replacing entire jobs. For these reasons, public employers should be wary of sweeping claims about the impact of AI in the abstract, particularly those about entire job families disappearing. Rather, they should recognise that digital technologies will transform jobs rather than replace them. Governments can then employ change management strategies on a case-by-case basis, ensuring that they take the time to assess the workforce impacts of any technological project and design appropriate training and transition strategies for the employees whose jobs will be changed or replaced.

Governments should not underestimate the investments in people that go alongside those in technology. The idea that we should retrain large groups of government workers now for other jobs implies that we know what those other jobs should be, and we do not. AI may also increase the demand for some key jobs, but we do not know what those will be either, aside from the relatively small number of people engaged in developing it. Even if we believed the AI projections, the fact that it is not clear what to do about them creates a strong argument for investigating further, learning by doing, and focusing on specific cases.

The arguments above begin with understanding the notion of uncertainty and especially predictions about the future. The second issue is to consider how to respond to uncertainty in ways other than simply going with our best guess.

Beyond assessing the likelihood that different predictions and forecasts will be true, it is important to go further and consider the range of possible outcomes within which particular predictions might be embedded: on what assumptions does a given prediction rely, and what happens if those assumptions are wrong? That takes us to an understanding that we face a menu of possible outcomes rather than simply an up-or-down choice on one. As a result, planning has to move away from the traditional and mechanical process of extrapolating from the past.

From there, we need to consider what the predictions imply for responses. What are the costs of waiting, the costs of acting, the costs of being wrong if the underlying predictions turn out to be wrong or our interventions do not work, and so forth? What tools can we use to help reduce the costs of uncertainty, which centre on our predictions turning out to be wrong? In short, “planning” needs to move from the mechanical exercise of extrapolating from the present to coming to grips in a serious way with the inevitable uncertainty associated with a changing world.

There are many approaches that have been used to get a sense of an uncertain future. The most sensible of these move away from a single prediction to something that recognises the diversity of potential futures. Scenario planning is one such tool: it allows participants to examine competing predictions of the future and compare their implications. Scenario plans are a kind of expert judgment, ideally done by people who are knowledgeable but not advocates for positions. The approach begins with judgments about the factors we believe are relatively certain in the future for the prediction at hand and then judgments about the most important uncertainties. The patterns of certainties and uncertainties coalesce into sets, which represent the scenarios (Schoemaker, 1995[12]). Other approaches are also useful, such as assigning players to debate the pros and cons of the different predictions being put forward.
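As a purely illustrative sketch of those mechanics, the short example below (with invented factors) shows how a shared set of certainties and the possible resolutions of two key uncertainties combine into a small set of candidate scenarios; in practice this step is a structured expert discussion rather than a computation.

```python
# Illustrative-only sketch: combining certainties and key uncertainties into scenarios.
from itertools import product

# Factors judged relatively certain for the horizon in question (invented examples).
certainties = ["ageing workforce", "continued budget pressure"]

# The most important uncertainties and the plausible ways each could resolve (invented examples).
uncertainties = {
    "remote work": ["largely returns to the office", "hybrid becomes the norm"],
    "AI adoption": ["incremental, task-level", "rapid, job-level"],
}

# Each combination of uncertainty outcomes, layered on the shared certainties, is one scenario.
for i, combination in enumerate(product(*uncertainties.values()), start=1):
    scenario = dict(zip(uncertainties.keys(), combination))
    print(f"Scenario {i}: certainties={certainties}; uncertainties={scenario}")
```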

When we review possible candidates that may affect the future of work in the public sector, it is not surprising that the ones describing developments that already exist in some form appear to be the most promising to take seriously because they have the best likelihood of expanding into something meaningful. Those centre on changes in demography within each country and management practices already underway, sometimes in parts of the public sector where we believe they may well grow and sometimes just emerging in the private sector. The projections that have the least merit to be taken seriously are also those that have gotten the most attention, in part because they are so extreme. Those concern AI. The fact that we have little contemporary evidence that the projections are turning out to be true and the fact that the implications as to how to respond are not obvious make them a difficult bet for large scale action. In these areas, public employers would do well to take a well thought-through project management approach as AI is designed and implemented, complete with change management strategies that identify and mitigate its impacts on the workforce each step of the way.

References

[9] Aum, S., S. Lee and Y. Shin (2017), “Industrial and Occupational Employment Changes During the Great Recession”, Federal Reserve Bank of St. Louis Review, Vol. 99/4, pp. 307-317, https://doi.org/10.20955/r.2017.307-317.

[8] Bessen, J. (2016), “How Computer Automation Affects Occupations: Technology, Jobs, and Skills”, SSRN Electronic Journal, No. 15-49, https://doi.org/10.2139/ssrn.2690435.

[6] Brynjolfsson, E. and A. McAfee (2014), The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, WW Norton & Company, New York.

[5] Brynjolfsson, E. and A. McAfee (2012), Race against the machine: How the digital revolution is accelerating innovation, driving productivity, and irreversibly transforming employment and the economy, Digital Frontier Press, Lexington, Massachusetts.

[13] Cappelli, P. and R. Bonet (2021), “After Covid, Should You Keep Working from Home? Here’s How to Decide”, The Wall Street Journal, 22 March 2021, https://www.wsj.com/articles/after-covid-should-you-keep-working-from-home-heres-how-to-decide-11616176802.

[16] Cappelli, P. and A. Tavis (2016), “The Performance Management Revolution”, Harvard Business Review.

[2] Cappelli, P. et al. (2018), “The New Rules of Talent Management”, Harvard Business Review, https://hbsp.harvard.edu/product/R1802B-PDF-ENG.

[1] Cockburn, A. and J. Highsmith (2001), “Agile software development, the people factor”, Computer, Vol. 34/11, pp. 131-133, https://doi.org/10.1109/2.963450.

[15] Dingsøyr, T. et al. (2012), “A decade of agile methodologies: Towards explaining agile software development”, Journal of Systems and Software, Vol. 85/6, pp. 1213-1221, https://doi.org/10.1016/j.jss.2012.02.033.

[4] Eurofound (2020), “Living, working and COVID-19”, COVID-19 series, Publications Office of the European Union, Luxembourg.

[7] Frey, C. and M. Osborne (2017), “The future of employment: How susceptible are jobs to computerisation?”, Technological Forecasting and Social Change, Vol. 114, pp. 254-280, https://doi.org/10.1016/j.techfore.2016.08.019.

[10] Gregory, T., A. Salomons and U. Zierahn (2016), “Racing With or Against the Machine? Evidence from Europe”, ZEW - Centre for European Economic Research Discussion Paper, No. 16-053, https://doi.org/10.2139/ssrn.2815469.

[14] Hilton, M. (2008), “Skills for Work in the 21st Century: What Does the Research Tell Us?”, Academy of Management Perspectives, Vol. 22/4, pp. 63-78, https://doi.org/10.5465/amp.2008.35590354.

[3] Hoffman, M., L. Kahn and D. Li (2018), “Discretion in Hiring”, The Quarterly Journal of Economics, Vol. 133/2, pp. 765-800, https://doi.org/10.1093/qje/qjx042.

[11] Marshall, A. (2020), “Uber Gives Up on the Self-Driving Dream”, Wired, https://www.wired.com/story/uber-gives-up-self-driving-dream/ (accessed on 13 September 2021).

[17] Rey-Biel, P., U. Gneezy and S. Meier (2011), “When and Why Incentives (Don’t) Work to Modify Behavior”, Journal of Economic Perspectives, Vol. 25, pp. 191-210, https://doi.org/10.2307/41337236.

[12] Schoemaker, P. (1995), “Scenario Planning: A Tool for Strategic Thinking”, Sloan Management Review, Vol. 36/2, pp. 25-40.

Notes

← 1. National Academies. 2020. Are Generational Categories Meaningful Distinctions for Workforce Management? Washington, DC: National Academies Press.

← 2. To illustrate, the National Academy of Sciences in the United States took up the question of whether there was a skills gap because of the presumed rise in the skill requirements of jobs. That task was given to them by the Director of the National Institutes of Health, and it came to him from business CEOs. I was a member of that Committee. It concluded that there was no skills gap. See Margaret Hilton (2008[14]), “Skills for Work in the 21st Century: What Does the Research Tell Us?”, Academy of Management Perspectives, Vol. 22/4, pp. 63-78.

← 3. For an account of the academic research on agile, see Dingsøyr et al. (2012[15]), “A decade of agile methodologies: Towards explaining agile software development”, Journal of Systems and Software, Vol. 85/6, pp. 1213-1221, https://doi.org/10.1016/j.jss.2012.02.033.

← 4. Examples of agile projects around the world are detailed in Agile Government: Building Greater Flexibility and Adaptability in the Public Sector. Deloitte Insights March 2021.

← 5. For a review of the evidence, see Peter Cappelli and Anna Tavis (2016[16]), “The Performance Management Revolution”, Harvard Business Review.

← 6. For an overview, see Rey-Biel, Gneezy and Meier (2011[17]), “When and Why Incentives (Don’t) Work to Modify Behavior”, Journal of Economic Perspectives, Vol. 25, pp. 191-210.

← 7. NB from the OECD: As these models are based on statistical averages, they can also have difficulty properly evaluating the potential of candidates whose skills and leadership are expressed in non-conventional ways, outside of the “norm”. Care is also required to ensure the tools work equally well for all people, including those with different accents, those with special conditions or disabilities, etc. Certain uses, such as facial recognition technology for emotion detection from facial expression, or to derive other traits such as political affiliation, intelligence or fitness for employment, are being criticized as discriminatory, unreliable and invasive, and should be avoided entirely.

← 8. NB from the OECD: Spotting algorithm bias takes proactive monitoring and analysis efforts from those deploying the models. Institutions should ensure that meaningful explanation be provided to those subject to automated assessments, for transparency in the model and the data used, and that they monitor predictions and assessment for unexpected outcomes.

← 9. For a review of this research, see Peter Cappelli and Rocio Bonet (2021[13]), “After Covid, Should You Keep Working from Home? Here’s How to Decide”, Wall Street Journal, 22 March 2021.

← 10. https://2020.yang2020.com/policies/trucking-czar/.
