Skills for Future Workers

In “Artificial Intelligence, Robots, and Work: Is This Time Different?” (Issues, Fall 2018), Stuart W. Elliott provides a thoughtful analysis of whether artificial intelligence (AI) will have a disruptive impact on the types of jobs available to people during the next few decades. When considering literacy in international assessments, approximately half of people do not have much difficulty handling tasks up through Level 2, whereas three out of eight can handle Level 3, and one out of eight can handle Levels 4 and 5. Computers with AI facilities are similarly projected by experts to handle Level 2 tasks and possibly Level 3 tasks in the next decade or so, but not many tasks at Levels 4 and 5. Elliott claims that the race between computers and humans in handling tasks at the more difficult levels will require input from experts in psychology, education, and testing in addition to computer science and economics. The analyses he presents are quite compelling to me as a professor who has conducted interdisciplinary research in all these areas (except for economics) and who has developed computerized learning environments with AI components.

There are reasons to be optimistic that humans will maintain the lead on tasks at Levels 3-5. First, AI can perform quite impressively on very specialized tasks at these levels, but the performance does not generalize beyond each specialized task. A program that interprets problems in biology will not transfer to medicine because the corpora of knowledge and the reasoning strategies are quite different. Humans are also not particularly impressive at generalization and transfer, but they hold the lead, and such capacities are essential for nonroutine problem solving at Levels 4 and 5. Second, humans hold the lead in social interaction and collaborative problem solving, which are critical twenty-first century skills. Computer products such as Siri, Alexa, and Echo can handle short exchanges on generic topics but not lengthier coherent conversations, nonliteral language, humor, deictic expressions (here, there, this, that), and other dimensions of discourse that are contextually specific—and that will always have high demand in the workforce.

There are also reasons to believe that computers will take the lead on performing both routine and nonroutine tasks within specific domains of application, but without much need for generalization and transfer. This has already been demonstrated in law for information retrieval tasks and in finance for report generation tasks. Intelligent computer tutors are approximately on par with human tutors in helping students learn about specific topics, such as math, physics, or biology. Computer tutors are likely to exceed human tutors as they acquire new content and strategies through larger corpora, data mining, and machine learning. Human memories cannot handle such rich volumes of information with precise detail and discriminations.

Elliott persuasively argues that it is difficult to predict how disruptive AI will be on the workforce. Markets and enterprises can evolve swiftly and require a large workforce even when they have dubious value. The intrinsic value of agriculture, manufacturing, and education is indisputable, whereas sports, games, and social media have value on very different dimensions.

Professor
Department of Psychology and Institute for Intelligent Systems
University of Memphis
Former chair of the Collaborative Problem Solving Expert Group for the Programme for International Student Assessment 2015

“After the invention of the steam engine, even if horses had worked for free, nobody would have hired them.” This saying, often attributed to Nobel Prize winner Robert Solow, describes in drastic terms what happened to the major source of work power during and after the Industrial Revolution. Just as steam engines were able to replace dozens or more horses, computers and robots are now able to replace the cognitive work of hundreds of humans in routinized tasks. In the past, automation and the use of machines were seen as signs of progress because most of the work in which machines substituted for humans was unpleasant, dangerous, and low-paid.

Today, it’s different. Computers and artificial intelligence (AI) are replacing humans in tasks that demand higher cognitive skills and longer education—and those jobs are therefore better paid and take place in nicer working environments. This gives rise to fear, and this is precisely where Stuart W. Elliott’s article comes in, showing that the majority of the adult US workforce reaches only levels of cognitive competencies that could already be replaced relatively easily by computers today.

Facing the threat of being replaced by machines and AI, there are three possible escape routes, two of which Elliott describes. Of the three, one is unlikely, one undesirable, and one difficult and uncertain but nevertheless the only way forward. The first and unlikely way to get around the threat of being replaced by machines is a massive upskilling of both the current adult population and coming generations. I fully agree with Elliott’s assessment that although the differences in the levels of cognitive competencies between adults of different countries show some potential for the United States, it is very unlikely that education alone could protect a high share of people from the menace of being replaced by AI.

The second strategy, sometimes put forward by the developers of AI themselves, is reducing the cognitive demand of activities with the help of computers, such that humans with lower competencies are able to execute more demanding tasks. Although this sounds appealing at first, what share of the profit of producing goods and services would companies want to give to humans who are up to their job only because of the use of expensive AI? This leaves us with the third option, which is doing things that computers are not (yet) able to do. But as Elliott correctly points out, many of the skills that computers are not (yet) able to perform are those that also demand high levels of cognitive skill, such as creativity or critical thinking.

But the question to ask is not which skill or competence computers are least likely to perform in the near future. Rather, the question—which the nation’s educational authorities have to think about when designing curricula—is: what skills are necessary to produce goods and services that humans in the future would rather pay another human for, and not a machine? Horses, by the way, have found their new role. In many countries, there are more horses than at any time since World War II—no longer to plough land, but as leisure companions.

Department of Economics
Centre for Research in Economics of Education
University of Bern
Switzerland

The future of education is of course closely connected with the future of jobs and the labor market. Both the article by Stuart W. Elliott and another in the same volume by Phillip Brown and Ewart Keep, “Rethinking the Race Between Education & Technology,” underline this, albeit in different ways.

Brown and Keep show that substantial differences exist in readings of today’s labor market and in predictions about its future. The latter may be very confusing; as a well-known Dutch joke holds, “predicting is difficult, especially when it is about the future.” What they do show, however, is that each of the three contrasting views on the future of the labor market and workers that they describe would have a severe impact on education’s main objectives and organization. In Elliott’s article, the search is for what skills and what level of skills future education systems should prioritize in order to secure a match between demand and supply of skills and to foster labor market success—which also relates to the future of education.

Brown and Keep more than capably present their projected contrasting futures and summarize the different claims, but they only briefly touch on the underlying differences in perspective and the concomitant emphasis on different forces that lead to these different views. Unfortunately, merely presenting the three different views next to one another without offering a critical evaluation of them is, we think, a missed opportunity.

Elliott tries to overcome the unsettled dispute between those who believe that the new technology (artificial intelligence, robots) differs strongly from previous spurts of technological improvement and those who believe that this time is no different from previous times. He does so in a very interesting, practical, and innovative way by looking into the ability of “machines” to perform human tasks. In his approach, he applies standardized test items developed in the context of PIAAC, the Organization for Economic Cooperation and Development’s (OECD) survey of adult skills, in which the literacy, numeracy, and problem-solving skills of people between the ages of 16 and 64 are assessed. He convincingly concludes that new technologies will probably pose a serious challenge for education systems to equip all people with the right type and the right level of skills that they need to be competitive with the “machine.”

Another stimulating feature of Elliott’s contribution is the critical assessment of the somewhat lazy allegation that humans have a great advantage over machines in other skill domains, such as social skills. His call to look more closely at these skills to arrive at a more balanced assessment of the position of humans is much needed.

Finally, we’d like to underline an important aspect that is touched on in both articles, though only briefly. Our reading of the impact of today’s and possibly future technology is that it will probably lead to a very different organization of work. In the words of Andreas Schleicher, the OECD’s most distinguished spokesperson on education and education policy, “in the future everybody is creating his own job.” This of course refers to the world of the platform and gig economy. We find that both articles fall short of fully grasping the implications of that aspect of the future of work for the future of education.

Dutch Ministry of Education, Culture, and Science

Policy articles about “the future of work” are often based on little more than speculation. So Stuart W. Elliott is to be commended for consulting with artificial intelligence (AI) experts on how computers are likely to perform over the next few years on reading proficiency tests. His experts predict that machines will soon score at least as well as most US adults. Barring educational advances that he finds implausible, Elliott concludes that “employers will automate many literacy-related tasks over the next few decades” and so, with such jobs gone, people will either have to develop other skills or face unemployment.

Elliott’s argument rests on two assertions and a deduction. He asserts first that people who can do only tasks that machines can soon perform will not have jobs, and second that people with skill X can do tasks that machines will not be able to perform any time soon. He then deduces that people with skill X will therefore have jobs.

The first assertion reflects what economists call the “lump of labor fallacy,” a zero-sum view of how new jobs displace old ones that doesn’t necessarily hold up in practice. For example, the economist James Bessen found in a study reported in 2015 that demand for bank tellers actually increased after ATMs were introduced. Rather than assuming a fixed amount of work that people or machines will compete to do (and that machines will always get the job if they can perform it at all), it is better to concentrate on new ways that humans and AI can complement one another, as Ajay Agrawal, Joshua Gans, and Avi Goldfarb suggest in their 2018 book, Prediction Machines: The Simple Economics of Artificial Intelligence. Perhaps, for example, AI could help provide the education people need to read better than machines?

The second assertion is backed up by judgments that Elliott has compiled from experts in the case where X = “can perform at Level 3 or higher on a PIAAC literacy test.” He also calls for similar research on X = “can perform high levels of social interaction.”

Elliott’s deduction, however, does not follow from his two assertions even if we accept them as true. The deduction would follow if, for example, the first assertion said instead that “people who can do tasks that machines cannot perform will have jobs.” But this is different from his assertion, and the two versions are not logically equivalent. Saying they are represents what is called the inverse fallacy. In fact, the alternative statement itself seems hard to believe. Machines are not likely to have human empathy any time soon. Does that mean we can all count on having jobs?
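The non-equivalence of the two statements can be checked mechanically. As a minimal sketch (the propositional encoding with labels M and J is ours, introduced only for illustration), a truth-table enumeration shows that the original assertion and its inverse disagree on some assignments:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q."""
    return (not p) or q

# Illustrative propositions (our labels, not Elliott's):
#   M: "this person can do only tasks that machines can perform"
#   J: "this person will have a job"
# Elliott's first assertion:         M -> not J
# The alternative statement:     not M -> J   (its inverse)
counterexamples = [
    (M, J)
    for M, J in product([True, False], repeat=2)
    if implies(M, not J) != implies(not M, J)
]

# The two formulas disagree on some truth assignments,
# so they are not logically equivalent.
print(counterexamples)  # [(True, True), (False, False)]
```

The assignment M = False, J = False—a person who can do tasks machines cannot perform, yet has no job—satisfies Elliott’s assertion while falsifying the alternative, which is exactly why the deduction does not go through.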

Even harder to believe is the related claim that if more people received education enabling them to score higher than Level 3 on PIAAC, then their jobs would be safe. This is a matter of causal inference about counterfactuals. That kind of reasoning is truly an example of a task that no machine learning system is on a path to perform, since all such algorithms are ultimately based on statistical correlations only. By Elliott’s argument, perhaps this means there will eventually be sustained demand for humans who can carry out such reasoning.

New York, NY

Cite this Article

“Skills for Future Workers.” Issues in Science and Technology 35, no. 2 (Winter 2019).
