Artificial Intelligence and Galileo’s Telescope
Review of
The Age of AI: And Our Human Future
by Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher
New York, NY: Little, Brown and Company, 2021, 272 pp.
In 2018 Henry Kissinger published a remarkable essay in The Atlantic on artificial intelligence. At a time when most foreign policy experts interested in AI were laser-focused on the rise of China, Kissinger pointed to a different challenge. In “How the Enlightenment Ends,” Kissinger warned that the Age of Reason may come crashing down as machines displace human judgment, producing decisions we cannot comprehend and outcomes we cannot control. “We must expect AI to make mistakes faster—and of greater magnitude—than humans do,” he wrote.
This sentiment is nowhere to be found in The Age of AI: And Our Human Future, coauthored by Kissinger, Eric Schmidt, and Daniel Huttenlocher. If Kissinger’s entry into the AI world appeared surprising, Schmidt and Huttenlocher’s should not be. Schmidt, the former head of Google, has just wrapped up a two-year stint as chair of the National Security Commission on Artificial Intelligence. Huttenlocher is the inaugural dean of the Schwarzman College of Computing at the Massachusetts Institute of Technology.
The stories they tell in The Age of AI are familiar. AlphaZero defeated the reigning chess program in 2017 by teaching itself the game rather than incorporating the knowledge of grandmasters. AI-driven protein-structure prediction tackled the enormously complex problem of determining the 3D shapes of proteins, uncovering molecular qualities that humans had not previously recognized. GPT-3, a natural language model, produces text that is surprisingly humanlike. We are somewhere beyond the Turing test, the challenge to mimic human behavior, and into a realm where machines produce results we do not fully understand and cannot replicate or prove. But the results are impressive.
Once past the recent successes of AI, a deep current of technological determinism underlies the authors’ views of the AI future and our place in that world. They state that the advance of AI is inevitable and warn that those who might oppose its development “merely cede the future to the element of humanity courageous enough to face the implications of its own inventiveness.” Given the choice, most readers will opt for Team Courage. And if there are any doubters, the authors warn there could be consequences. If the AI is better than a human at a given task, “failing to apply that AI … may appear increasingly as perverse or even negligent.” Early in the book, the authors suggest that military commanders might defer to the AI to sacrifice some number of citizens if a larger number can be saved, although later on they propose a more reasoned approach to strategic defense. Elsewhere, readers are instructed that “as AI can predict what is relevant to our lives,” the role of human reason will change—a dangerous invitation to disarm the human intellect.
The authors’ technological determinism and their unquestioned assertion of inevitability operate on several levels. The AI that will dominate our world, they assert, is of a particular form. “Since machine learning will drive AI for the foreseeable future, humans will remain unaware of what it is learning and how it knows what it has learned.” In an earlier AI world, systems could be tested and tweaked based on outcomes and human insight. If a chess program sacrificed pieces too freely, a few coefficients were adjusted, and the results could then be assessed. That process, by the way, is the essence of the scientific method: a constant testing of hypotheses based on the careful examination of data.
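The contrast with that earlier world is easy to make concrete. A hand-built chess engine kept its judgment in coefficients a person could read and revise; the Python sketch below, with piece values invented for illustration, shows the hypothesize, adjust, and retest loop that, on the authors’ account, machine learning leaves behind.

```python
# A minimal sketch of a classical, hand-tuned chess evaluation function.
# The piece values are hypothetical; the point is that they are visible
# coefficients a person can inspect, adjust, and retest.

PIECE_VALUES = {"P": 1.0, "N": 3.0, "B": 3.0, "R": 5.0, "Q": 9.0, "K": 0.0}

def evaluate(position):
    """Score a position as material balance; positive favors White.

    `position` is a list of piece letters, uppercase for White,
    lowercase for Black, e.g. ["P", "N", "q", "r"].
    """
    score = 0.0
    for piece in position:
        value = PIECE_VALUES[piece.upper()]
        score += value if piece.isupper() else -value
    return score

# If test games show the engine trading knights away too cheaply, an
# engineer raises the coefficient and reruns the test suite: a
# hypothesis, tested against data, exactly as the review describes.
PIECE_VALUES["N"] = 3.25
```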
As AI systems grow increasingly opaque, a debate rages over transparency and accountability: how to validate AI outputs when they cannot be replicated. The authors sidestep this important debate and propose licensing to validate proficiency, but a smart AI can evade compliance. Consider the well-known instances of systems designed to skirt regulation: Volkswagen hacked emissions testing by ensuring compliance while in testing mode and otherwise ignoring regulatory obligations, and Uber employed a similar tactic with its Greyball tool, which used data collected from its app to identify and circumvent authorities. Imagine what a sophisticated AI system could do with access to extensive training data on enforcement actions concerning health, consumer safety, or environmental protection.
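The evasion pattern at issue is simple enough to sketch. The Python fragment below is deliberately stylized, with all names and behavior invented for illustration: one code path when the system believes it is being tested, another when it believes no one is watching.

```python
# A stylized sketch of the "defeat device" pattern (hypothetical names):
# the controller complies only when it detects test conditions.

def looks_like_a_test(speed_profile):
    """Guess whether the fixed cadence of a regulatory test is running.

    Real defeat devices inferred test mode from sensor patterns; this
    stand-in just checks for an unnaturally steady speed profile.
    """
    return max(speed_profile) - min(speed_profile) < 2.0

def emissions_mode(speed_profile):
    if looks_like_a_test(speed_profile):
        return "full_treatment"     # comply while regulators are watching
    return "reduced_treatment"      # otherwise favor power and economy

# A system trained on records of past enforcement actions would not need
# a hand-coded detector; it could learn to predict when it is being
# audited, which is the worry about a "smart AI" evading compliance.
```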
Determinism is also a handy technique to assume an outcome that could otherwise be contested. The authors write that with “the rise of AI, the definition of the human role, human aspiration, and human fulfillment will change.” In The Age of AI, the authors argue that people should simply accept, without explanation, an AI’s decision to deny credit, to reject a job applicant, or to conclude that a line of research is not worth pursuing. Parents who “want to push their children to succeed” are admonished not to limit access to AI. Elsewhere, those who reject AI are likened to the Amish and the Mennonites. But even they will be caught in The Matrix as AI’s reach, according to the authors, “may prove all but inescapable.” You will be assimilated.
The pro-AI bias is also reflected in the authors’ tour d’horizon of Western philosophy. Making much of the German Enlightenment thinker Immanuel Kant’s description of the imprecision of human knowledge (from the Critique of Pure Reason), the authors suggest that the philosopher’s insight can prepare us for an era when AI has knowledge of a reality beyond our perception.
Kant certainly recognized the limitations of human knowledge, but in his “What is Enlightenment?” essay he also argued for the centrality of human reason. “Dare to know! (Sapere aude.) ‘Have the courage to use your own understanding’ is therefore the motto of the enlightenment,” he explained. Kant was particularly concerned about deferring to “guardians who imposed their judgment on others.” Reason, in all matters, is the basis of human freedom. It is difficult to imagine, as the authors of The Age of AI contend, that one of the most influential figures from the Age of Enlightenment would welcome a world dominated by opaque and unaccountable machines.
On this philosophical journey, we also confront a central teleological question: Should we adapt to AI or should AI adapt to us? On this point, the authors appear to side with the machines: “it is incumbent on societies across the globe to understand these changes so they reconcile them with their values, structures, and social contracts.” In fact, many governments have chosen a very different course, seeking to ensure that AI is aligned with human values, described in many national strategic plans as “trustworthy” and “human-centric” AI. As more countries around the world have engaged on this question, the expectation that AI aligns with human values has only increased.
A related question is whether the Age of AI, as presented by the authors, is a step forward beyond the Age of Reason or a step backward to an Age of Faith. Increasingly, we are asked by the AI priesthood to accept without questioning the Delphic predictions that their devices produce. Those who challenge these outcomes, a form of skepticism traditionally associated with innovation and progress, could now be considered heretics. This alignment of technology with the power of a reigning elite stands in sharp contrast to previous innovations, such as Galileo’s telescope, that challenged an existing order and carried forward human knowledge.
There is also an apologia that runs through much of the book, a purposeful decision to elide the hard problems that AI poses. Among the most widely discussed AI problems today is the replication of bias, the encoding of past discrimination in hiring, housing, medical care, and criminal sentencing. To the credit of many AI ethicists and the White House Office of Science and Technology Policy, considerable work is now underway to understand and correct this problem. Maybe the solution requires better data sets. Maybe it requires a closer examination of decision-making and the decision-makers. Maybe it requires limiting the use of AI. Maybe it cannot be solved until larger social problems are addressed.
But for the authors, this central problem is not such a big deal. “Of course,” they write, “the problem of bias in technology is not limited to AI,” before going on to explain that the pulse oximeter, a (non-AI) medical device that estimates blood oxygen levels, has been found to overestimate oxygen saturation in dark-skinned individuals. If that example is too narrow, the authors encourage us to recognize that “bias besets all aspects of society.”
The authors also ignore a growing problem with internet search when they write that search is optimized to benefit the interests of the end user. That description doesn’t fit the current business model, which prioritizes advertising revenue, a company’s related products and services, and keeping the user on the website (or affiliated websites) for as long as possible. Traditional methods for organizing access to information, such as the Library of Congress Classification system, are transparent. The organizing system is known both to the person providing information and to the person seeking it. Knowledge is symmetric. AI-enabled search does not replicate that experience.
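The asymmetry is easy to state in code. The sketch below, with weights and signals invented for illustration, shows how a ranking objective that blends user-facing relevance with platform-facing goals stops answering only to the person searching, in contrast to a classification scheme everyone can read.

```python
# An illustrative ranking objective (all weights hypothetical): the final
# score blends relevance to the searcher with signals that serve the
# platform, such as expected ad revenue and time kept on site.

def ranking_score(relevance, ad_revenue, engagement,
                  w_rel=0.5, w_ads=0.3, w_eng=0.2):
    """Blend user-facing relevance with platform-facing objectives."""
    return w_rel * relevance + w_ads * ad_revenue + w_eng * engagement

# With w_ads and w_eng above zero, two results of equal relevance no
# longer rank equally, and the person searching has no way to see why.
```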
The book is not without warnings. On the issue of democratic deliberation, the authors warn that artificial intelligence will amplify disinformation and wisely admonish that AI speech should not be protected as part of democratic discourse. On this point, though, a more useful legal rule would impose transparency obligations to enable independent assessment, allowing us to distinguish bots from human speakers.
Toward the end of their journey through the Age of AI, the authors allow that some restrictions on AI may be necessary. They acknowledge the effort of the European Union to develop comprehensive legislation for AI, although Schmidt had previously criticized the EU initiative, most notably for the effort to make AI transparent.
Much has happened in the AI policy world in the three years since Kissinger warned that human society is unprepared for the rise of artificial intelligence. International organizations have moved to establish new legal norms for the governance of AI. The Organisation for Economic Co-operation and Development, made up of leading democratic nations, set out the OECD Principles on Artificial Intelligence in 2019. The G20 countries, which include Russia and China, backed similar guidelines that same year. Earlier in 2021, the top human rights official at the United Nations, Michelle Bachelet, called for a prohibition on AI techniques that fail to comply with international human rights law. In November 2021, UNESCO adopted a comprehensive Recommendation on the Ethics of Artificial Intelligence that may actually limit the ability of China to go forward with its AI-enabled social credit system for evaluating—and disciplining—citizens based on their behavior and trustworthiness.
The more governments have studied the benefits as well as the risks of AI, the more they have supported these policy initiatives. That shouldn’t be surprising. One can be impressed by a world-class chess program and acknowledge advances in medical science, and still see that autonomous vehicles, opaque evaluations of employees and students, and the enormous energy required to train models on datasets with trillions of elements will pose new challenges for society.
The United States has stood mostly on the sidelines as other nations define rules for the Age of AI. But “democratic values” has appeared repeatedly in US formulations of AI policy as the Biden administration attempts to connect with European allies and to sharpen the contrast between AI policies that promote pluralism and open societies and those that concentrate the power of authoritarian governments. That is an important contribution for a leading democratic nation.
In his 2018 “How the Enlightenment Ends” essay, Kissinger seemed well aware of the threat AI posed to democratic institutions. Information overwhelms wisdom. Political leaders are deprived of the opportunity to think or to reflect on context. AI itself is unstable, he wrote, as “uncertainty and ambiguity are inherent in its results.” He outlined three areas of particular concern: AI may achieve unintended results; AI may alter human reasoning (“Do we want children to learn values through untethered algorithms?”); and AI may achieve results that cannot be explained (“Will AI’s decision making surpass the explanatory powers of human language and reason?”). Throughout human history, civilizations have created ways to explain the world around them, if not through reason, then through religion, ideology, or history. How do we exist in a world we are told we can never comprehend?
Kissinger observed in 2018 that other countries have made it a priority to assess the human implications of AI and urged the establishment of a national commission in the United States to investigate these topics. His essay ended with another warning: “If we do not start this effort soon, before long we shall discover we started too late.” That work is still to be done.