AI and Jobs
During my tenure as program manager at the Defense Advanced Research Projects Agency, I watched with admiration the efforts of John Paschkewitz and Dan Patt to explore human-AI teaming, and I applaud the estimable vision they set forth in “Can AI Make Your Job More Interesting?” (Issues, Fall 2020). My intent here is not to challenge the vision or potential of AI, but to question whether the tools at hand are up to the task, and whether the current AI trajectory will get us there without substantial reimagining.
The promise of AI lies in its ability to learn mappings from high-dimensional data and transform them into a more compact representation or abstraction space. It does surprisingly well in well-conditioned domains, as long as the questions are simple and the input data don’t stray far from the training data. Early successes in several AI showpieces have brought, if not complacency, a lowering of the guard, a sense that deep learning has solved most of the hard problems in AI and that all that’s left is domain adaptation and some robustness engineering.
But a fundamental question remains: whether AI can learn compact, semantically grounded representations that capture the degrees of freedom we care about. Ask a slightly different question than the one AI was trained on, and one quickly observes how brittle its internal representations are. If perturbing a handful of pixels can cause a deep network to misclassify a stop sign as a yield sign, it’s clear that the AI has failed to learn the semantically relevant letters “STOP” or the shape “octagon.” AI both overfits and underfits its training data; despite exhaustive training on massive datasets, few deep image networks learn topology, perspective, rotations, projections, or any of the compact operators that give rise to the apparent degrees of freedom in pixel space.
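The brittleness described above can be seen even in a toy model. The sketch below is not a real vision network; the weights, inputs, and step size are all invented for illustration. It shows, in the spirit of gradient-sign adversarial attacks, how a small, targeted nudge to each input feature can flip a classifier's decision while barely changing the input:

```python
# Toy illustration of representational brittleness: a hand-built linear
# classifier (NOT a real vision model; all numbers here are invented)
# whose decision flips under a small, targeted input perturbation.

def classify(weights, x):
    """Return 'stop' if the weighted score is positive, else 'yield'."""
    score = sum(w * xi for w, xi in zip(weights, x))
    return "stop" if score > 0 else "yield"

weights = [0.9, -0.4, 0.7, 0.2]   # the model's learned parameters
x = [0.5, 0.3, 0.1, 0.4]          # an input the model classifies correctly

# Nudge every feature by eps in the direction that lowers the score --
# the per-feature step an attacker would derive from the gradient's sign.
eps = 0.25
x_adv = [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

print(classify(weights, x))       # stop
print(classify(weights, x_adv))   # yield
```

A real deep network has millions of such parameters rather than four, but the mechanism is the same: a perturbation that is small in pixel space can be large in the model's brittle internal representation.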
To its credit, the AI community is beginning to address problems of data efficiency, robustness, reliability, verifiability, interpretability, and trust. But the community has not fully internalized that these are not simply matters of better engineering. Is this because AI is fundamentally limited? No; biology offers an existence proof. Rather, we have failed to be the responsible parents our AI offspring needs to learn how to navigate the real world.
Paschkewitz and Patt’s article poses a fundamental question: how does one scale intelligence? Except for easily composable problems, this is a persistent challenge for humans. And this despite millions of years of evolution under the harsh reward function of survivability, in which teaming was essential. Could an AI teammate help us to do better?
Astonishingly, despite the stated and unstated challenges of AI, I believe that the answer could be yes! But we are still a few groundbreaking ideas short of a phase transition. This article can be taken as a call to action to the AI community to address the still-to-be-invented AI fundamentals necessary for AI to become a truly symbiotic partner, for AI to accept the outreached human hand and together step into the vision painted by the authors.
James Gimlett
Former Program Manager
Defense Sciences Office
Defense Advanced Research Projects Agency
Technologists, as John Paschkewitz and Dan Patt describe themselves, are to be applauded for their ever-hopeful vision of a “human-machine symbiosis” that will “create more dynamic and rewarding places for both people and robots to work,” and even become the “future machinery of democracy.” Their single-minded focus on technological possibilities is inspiring for those working in the field and arguably necessary to garner the support of policymakers and funders. Yet their vision of a bright, harmonious future that solves the historically intractable problems of the industrial workplace fails to consider the reality seen from the office cubicle or the warehouse floor, and the authors’ wanderings into history, politics, and policy warrant some caution.
While it is heartening to read about a future where humans have the opportunity to use their “unique talents” alongside robots that also benefit from this “true symbiosis,” contemplating that vision through the lens of the past technology-driven decades is a head-scratcher. This was an era that brought endless wars facilitated by the one-sided safety of remote-control battlefields. And though there was an increase in democratic participation, it came in reaction to flagrant, technology-facilitated abuses: outrage about political corruption, real and imagined, motivated citizens to go to the ballot box, which was thought to be secure only when unplugged from the latest technology. We should also consider the technology promises of Facebook to unite the global community in harmony, or the Obama administration’s e-government technology initiative, which expanded access and participation to “restore public faith in political institutions and reinvigorate democracy.”
As to the advances in the workplace, they did produce the marvel of near-instant home delivery of everything imaginable. But those employing the technology also chose to expand a workforce that drew low pay and few benefits, and that relied on longer hours and multiple jobs to pay the rent, all while transferring ever-greater wealth to the captains of industry, enabling them to go beyond merely acquiring yachts to purchasing rockets for space travel.
Of course, it might be different this time. But it will take more than the efforts of well-meaning technologists to transform the current trajectory of AI-mediated workplaces into a harmonious community. Instead, the future now emerging tilts toward the dystopian robotic symbiosis that the Czech author Karel Čapek envisioned a century ago. Evidence tempering our hopeful technologists’ vision is in the analyses of the two articles between which theirs is sandwiched: one about robotic trucks intensifying the sweatshops of long-haul drivers, and the other about how political and corporate corruption flourished under the cover of the “abstract and unrealizable notions” of Vannevar Bush’s Endless Frontier for science and innovation.
For technologists in the labs, symbiotic robots may be a hopeful and inspirational vision, but before we abandon development of effective policy in favor of AI optimization, let us consider the reality of Facebook democracy, Amazonian sweatshops, and Uber wages that barely rise above the minimum. We’d be on the wrong road if we pursued a technologist’s solution to the problems of power and conflict in the workplace and the subversion of democracy.
Hal Salzman
Professor of Planning and Public Policy, Edward J. Bloustein School
Senior Faculty Fellow, John J. Heldrich Center for Workforce Development
Rutgers University
John Paschkewitz and Dan Patt provide a counterpoint to those who warn of the coming AIpocalypse, which happens, as we all know, when Skynet becomes self-aware. The authors make two points.
First, attention has focused on the ways that artificial intelligence will substitute for human activities; overlooked is that it may complement them as well. If AI is a substitute for humans, the challenge becomes one of identifying what AI can do better and vice versa. While this may lead to increases in efficiency and productivity, perhaps the greater gains are to be had when AI complements human activity as an intermediary in coordinating groups to tackle large-scale problems.
The degree to which AI will be a substitute or a complement will depend upon the activity, as well as the new kinds of activities that AI may make possible. Whether the authors are correct, time will judge. Nevertheless, the role of AI as intermediary is worth thinking about, particularly in the context of the economist Ronald Coase’s classic question: what is a firm? One answer is that it is a coordinating device. Might AI supplant this role? It would mean the transformation of the firm from employer to intermediary, as with ride-sharing platforms.
The second point is more provocative. AI-assisted governance, anyone? Paschkewitz and Patt are not suggesting that Plato’s philosopher king be transformed into an AI-assisted monarch. Rather, they posit that AI has a role in improving the quality of regulation and government interventions. They provide the following as illustration: “An alternative would be to write desired outcomes into law (an acceptable unemployment threshold) accompanied by a supporting mechanism (such as flowing federal dollars to state unemployment agencies and tax-incentivization of business hiring) that could be automatically regulated according to an algorithm until an acceptable level of unemployment is again reached.”
This proposal is in the vein of economics’ Taylor rule, whose goal was to remove discretion over how interest rates should be set. Following the rule, rates should be pegged to the gap between the desired inflation rate and the actual rate. AI would allow one to implement rules more complex than this and contingent on far more factors.
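The feedback logic of such a rule can be sketched in a few lines. The coefficients, targets, and neutral rate below are the illustrative values from Taylor’s original 1993 formulation (which also responds to the output gap), not anything proposed in the letter:

```python
# Minimal sketch of a Taylor-style interest-rate rule: the nominal policy
# rate responds mechanically to the inflation gap (and, in Taylor's 1993
# form, the output gap), removing period-by-period discretion.
# All numeric values are Taylor's illustrative ones, used here as a sketch.

def taylor_rate(inflation, output_gap=0.0,
                target_inflation=2.0, neutral_real_rate=2.0,
                a_pi=0.5, a_y=0.5):
    """Nominal policy rate (percent) implied by the rule."""
    return (neutral_real_rate + inflation
            + a_pi * (inflation - target_inflation)
            + a_y * output_gap)

print(taylor_rate(2.0))   # 4.0  (inflation on target: rate = neutral + inflation)
print(taylor_rate(3.0))   # 5.5  (rate rises more than one-for-one with inflation)
```

An AI-mediated version of the unemployment mechanism quoted above would follow this same pattern, with far more inputs and a far more complex response function in place of two hand-fixed coefficients.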
We have examples of such things “in the small”: for example, whose income tax returns should be audited, and how public housing should be allocated. Although these applications have had problems (with bias, say), the problems are not fundamental, in that one knows how to correct for them. However, for things “in the large,” I see three fundamental barriers.
First, as the authors acknowledge, it does not eliminate political debate, but shifts it from the ex post (what should we do now) to the ex ante (what should we do if). It is unclear that we are any better at resolving the second kind of debate than the first. Second, who is accountable for outcomes with AI-assisted policy? Even for the “small” things, this issue is unresolved. Third, the greater the sensitivity of regulation to the environment, the greater the need for accurate measurements of the environment, and the greater the incentive to corrupt it.
Rakesh V. Vohra
George A. Weiss and Lydia Bravo Weiss University Professor
Department of Economics & Department of Electrical and Systems Engineering
University of Pennsylvania