
AI and Jobs

A DISCUSSION OF

Can AI Make Your Job More Interesting?

During my tenure as program manager at the Defense Advanced Research Projects Agency, I watched with admiration the efforts of John Paschkewitz and Dan Patt to explore human-AI teaming, and I applaud the estimable vision they set forth in “Can AI Make Your Job More Interesting?” (Issues, Fall 2020). My intent here is not to challenge the vision or potential of AI, but to question whether the tools at hand are up to the task, and whether the current AI trajectory will get us there without substantial reimagining.

The promise of AI lies in its ability to learn mappings from high-dimensional data and transform them into a more compact representation or abstraction space. It does surprisingly well in well-conditioned domains, as long as the questions are simple and the input data don't stray far from the training data. Early successes in several AI showpieces have brought, if not complacency, a lowering of the guard: a sense that deep learning has solved most of the hard problems in AI and that all that's left is domain adaptation and some robustness engineering.
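
To make that mapping concrete, here is a minimal sketch of the idea in PyTorch: an autoencoder that compresses high-dimensional inputs into a small abstraction space and back. The class name, layer widths, and dimensions are arbitrary placeholders for illustration, not anything specified in the letters or the original article.

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Map high-dimensional inputs into a compact latent space and back."""
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        # Encoder: high-dimensional pixels -> compact abstraction space.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: compact representation -> reconstructed pixels.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training to minimize reconstruction error pressures the 16 latent
# dimensions to capture the degrees of freedom that matter most.
model = TinyAutoencoder()
reconstruction = model(torch.randn(1, 784))
```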

But a fundamental question remains—whether AI can learn compact, semantically grounded representations that capture the degrees of freedom we care about. Ask a slightly different question than the one AI was trained on, and one quickly observes how brittle its internal representations are. If perturbing a handful of pixels can cause a deep network to misclassify a stop sign as a yield sign, it’s clear that the AI has failed to learn the semantically relevant letters “STOP” or the shape “octagon.” AI both overfits and underfits its training data, but despite exhaustive training on massive datasets, few deep image networks learn topology, perspective, rotations, projections, or any of the compact operators that give rise to the apparent degrees of freedom in pixel space.
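
The brittleness described here is easy to reproduce; the fast gradient sign method (FGSM) of Goodfellow and colleagues is the canonical recipe for such pixel perturbations. Below is a minimal PyTorch sketch, where `model`, `image`, `label`, and `epsilon` are all hypothetical placeholders rather than anything from the article.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` nudged to increase the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move every pixel a tiny step in the direction that most increases
    # the loss; the image looks unchanged to a human, yet the predicted
    # class can flip (e.g., stop sign -> yield sign).
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```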

To its credit, the AI community is beginning to address problems of data efficiency, robustness, reliability, verifiability, interpretability, and trust. But the community has not fully internalized that these are not simply matters of better engineering. Is this because AI is fundamentally limited? No; biology offers an existence proof. Rather, we have failed our AI offspring by not being the responsible parents they need to learn how to navigate the real world.

Paschkewitz and Patt's article poses a fundamental question: how does one scale intelligence? Except for easily composable problems, this is a persistent challenge for humans, and this despite millions of years of evolution under the harsh reward function of survivability, in which teaming was essential. Could an AI teammate help us to do better?

Astonishingly, despite the stated and unstated challenges of AI, I believe that the answer could be yes! But we are still a few groundbreaking ideas short of a phase transition. This article can be taken as a call to action to the AI community to address the still-to-be-invented AI fundamentals necessary for AI to become a truly symbiotic partner, for AI to accept the outreached human hand and together step into the vision painted by the authors.

Former Program Manager

Defense Sciences Office

Defense Advanced Research Projects Agency

Technologists—as John Paschkewitz and Dan Patt describe themselves—are to be applauded for their ever-hopeful vision of a “human-machine symbiosis” that will “create more dynamic and rewarding places for both people and robots to work,” and even become the “future machinery of democracy.” Their single-minded focus on technological possibilities is inspiring for those working in the field and arguably necessary to garner the support of policymakers and funders. Yet their vision of a bright, harmonious future that solves the historically intractable problems of the industrial workplace fails to consider the reality seen from the office cubicle or the warehouse floor, and the authors’ wanderings into history, politics, and policy warrant some caution.

While it is heartening to read about a future where humans have the opportunity to use their "unique talents" alongside robots that also benefit from this "true symbiosis," contemplating that vision through the lens of the past technology-driven decades is a head-scratcher. This was an era of endless wars facilitated by the one-sided safety of remote-control battlefields. Democratic participation did increase, but in reaction to flagrant, technology-facilitated abuses: outrage over political corruption (real and imagined) drove citizens to a ballot box thought to be secure only when unplugged from the latest technology. We should also recall Facebook's promise to unite the global community in harmony, and the Obama administration's e-government initiative to expand access and participation and thereby "restore public faith in political institutions and reinvigorate democracy."

As to the advances in the workplace, they did produce the marvel of near-instant home delivery of everything imaginable. But those employing the technology also chose to expand a workforce that drew low pay and few benefits and relied on longer hours and multiple jobs to pay the rent, all while transferring ever-greater wealth to the captains of industry, enabling them to go beyond merely acquiring yachts to purchasing rockets for space travel.

Of course, it might be different this time. But it will take more than the efforts of well-meaning technologists to transform the current trajectory of AI-mediated workplaces into a harmonious community. Instead, the future now emerging tilts to the dystopian robotic symbiosis that the Czech author Karel Čapek envisioned a century ago. Evidence tempering our hopeful technologists’ vision is in the analyses of the two articles between which theirs is sandwiched—one about robotic trucks intensifying the sweatshops of long-haul drivers, and the other about how political and corporate corruption flourished under the cover of the “abstract and unrealizable notions” of Vannevar Bush’s Endless Frontier for science and innovation.

For technologists in the labs, symbiotic robots may be a hopeful and inspirational vision, but before we abandon development of effective policy in favor of AI optimization, let us consider the reality of Facebook democracy, Amazonian sweatshops, and Uber wages that barely rise above the minimum. We would be on the wrong road if we pursued a technologist's solution to the problems of power and conflict in the workplace and the subversion of democracy.

Professor of Planning and Public Policy, Edward J. Bloustein School

Senior Faculty Fellow, John J. Heldrich Center for Workforce Development

Rutgers University

John Paschkewitz and Dan Patt provide a counterpoint to those who warn of the coming AIpocalypse, which happens, as we all know, when Skynet becomes self-aware. The authors make two points.

First, attention has focused on the ways that artificial intelligence will substitute for human activities; overlooked is that it may complement them as well. If AI is a substitute for humans, the challenge becomes one of identifying what AI can do better and vice versa. While this may lead to increases in efficiency and productivity, the greater gains are perhaps to be had when AI complements human activity as an intermediary in coordinating groups to tackle large-scale problems.

The degree to which AI will be a substitute or a complement will depend on the activity, as well as on the new kinds of activities that AI may make possible. Whether the authors are correct, time will judge. Nevertheless, the role of AI as intermediary is worth thinking about, particularly in the context of the economist Ronald Coase's classic question: what is a firm? One answer is that it is a coordinating device. Might AI supplant this role? It would mean the transformation of the firm from employer to intermediary, as ride-sharing platforms have already demonstrated.

The second point is more provocative. AI-assisted governance anyone? Paschkewitz and Patt are not suggesting that Plato’s philosopher king be transformed into an AI-assisted monarch. Rather, they posit that AI has a role in improving the quality of regulation and government interventions. They provide the following as illustration: “An alternative would be to write desired outcomes into law (an acceptable unemployment threshold) accompanied by a supporting mechanism (such as flowing federal dollars to state unemployment agencies and tax-incentivization of business hiring) that could be automatically regulated according to an algorithm until an acceptable level of unemployment is again reached.”

This proposal is in the vein of economics' Taylor rule, whose goal was to remove discretion over how interest rates should be set. Following the rule, the policy rate is pegged to the gap between actual and desired inflation (and, in Taylor's original formulation, to the output gap as well). AI would allow one to implement rules more complex than this and contingent on far more factors.
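
To make the analogy concrete, here is a minimal Python sketch contrasting Taylor's 1993 rule with a feedback rule in the spirit of the unemployment mechanism quoted above. The Taylor rule coefficients follow the 1993 formulation; the unemployment rule, including its threshold, gain, and funding units, is entirely hypothetical.

```python
def taylor_rule(inflation, output_gap, real_rate=2.0, inflation_target=2.0):
    """Nominal policy rate (percent) implied by the 1993 Taylor rule."""
    return (real_rate + inflation
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

def unemployment_stabilizer(unemployment, threshold=5.0,
                            base_funding=1.0, gain=0.25):
    """Hypothetical rule: scale federal dollars to state unemployment
    agencies in proportion to the gap above the acceptable
    unemployment threshold written into law."""
    gap = max(0.0, unemployment - threshold)
    return base_funding * (1.0 + gain * gap)

# 4% inflation and a -1% output gap imply a 6.5% policy rate.
print(taylor_rule(inflation=4.0, output_gap=-1.0))   # 6.5
# 9% unemployment against a 5% threshold doubles the funding flow.
print(unemployment_stabilizer(unemployment=9.0))     # 2.0
```

The point of the sketch is that the second rule is structurally identical to the first; an AI-assisted version would simply condition on many more measured factors.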

We have examples of such things "in the small": for example, whose income tax returns should be audited and how public housing should be allocated. Although these applications have had problems, say with bias, the problems are not fundamental, in that one knows how to correct for them. However, for things "in the large," I see three fundamental barriers.

First, as the authors acknowledge, AI-assisted policy does not eliminate political debate but shifts it from the ex post (what should we do now?) to the ex ante (what should we do if?). It is unclear that we are any better at resolving the second kind of debate than the first. Second, who is accountable for outcomes under AI-assisted policy? Even for the "small" things, this issue is unresolved. Third, the greater the sensitivity of regulation to the environment, the greater the need for accurate measurements of the environment, and the greater the incentive to corrupt those measurements.

George A. Weiss and Lydia Bravo Weiss University Professor

Department of Economics & Department of Electrical and Systems Engineering

University of Pennsylvania

Cite this Article

“AI and Jobs.” Issues in Science and Technology 37, no. 2 (Winter 2021).
