Algorithm of the Enlightenment

In 2009, two researchers at Cornell University hooked up a double pendulum to a machine learning algorithm, instructing it to seek patterns in the seemingly random data emerging from the machine's movements. Within minutes, the algorithm constructed its own version of Newton's second law of motion. After hearing of their work, a biologist asked the researchers to use their software to analyze his data on a complex system of single-celled organisms. Again the system spat out a law that not only matched the data but effectively predicted what the cells would do next. Except this time, while everyone agreed the equation worked, nobody understood what it meant. As early as 1976, mathematicians turned computers loose on the famous four-color map theorem, using brute force to check a huge number of possible scenarios and prove the theorem correct. Increasingly, we are using computational systems to assist us not just in the retention or management of information but in identifying patterns, comparing disparate information sources, and integrating diverse observations to create a new holistic understanding of the problem in question.
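To make the flavor of this concrete: the Cornell system evolved free-form equations against the data using symbolic regression. The sketch below is my own toy illustration, swapping in a simpler technique (sparse regression over a hand-picked library of candidate terms) and synthetic data that secretly obeys F = ma; the mass, noise level, candidate terms, and threshold are all assumptions for illustration.

```python
# A toy sketch of "law discovery" from data. This is NOT the Cornell system
# (which evolved free-form equations via symbolic regression); it uses sparse
# regression over a small library of candidate terms, on made-up measurements.
import numpy as np

rng = np.random.default_rng(0)
m, n = 2.0, 500                                  # assumed mass and sample count
a = rng.uniform(-5, 5, n)                        # observed accelerations
v = rng.uniform(-3, 3, n)                        # observed velocities (a red herring)
F = m * a + rng.normal(0, 0.05, n)               # noisy force measurements

# Candidate terms the algorithm is allowed to combine.
library = np.column_stack([np.ones(n), a, v, a**2, v**2, a * v])
names = ["1", "a", "v", "a^2", "v^2", "a*v"]

# Fit all candidates at once, then zero out the negligible ones.
coeffs, *_ = np.linalg.lstsq(library, F, rcond=None)
coeffs[np.abs(coeffs) < 0.1] = 0.0

law = " + ".join(f"{c:.2f}*{t}" for c, t in zip(coeffs, names) if c)
print("Recovered law: F =", law)                 # prints roughly: F = 2.00*a
```

The point survives the simplification: the program recovers Newton's second law from the numbers alone, with nothing resembling comprehension anywhere in the loop.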

The ambition reflected in these scientific projects pales in comparison with the dreams of Silicon Valley, which hopes to render just about everything tractable by algorithm, from investing in stocks to driving a car, from eliminating genetic disorders to finding the perfect date. Thanks in large part to these enterprises, the world is now covered with a thickening layer of computation built on our smartphones, ubiquitous wireless networking, constellations of satellites, and the proliferation of cloud-based computing and storage solutions that allow anyone to tap sophisticated high-end processing power anywhere, for any problem. When we combine this technical platform for computation with the growing popularity of machine learning, it is possible to spin up a cutting-edge neural network based on open source tools from field leaders such as Google, link it to large repositories of public and private data stored on the cloud, and develop scarily effective pattern-seeking systems in a matter of hours. For example, over the course of 24 hours at a hackathon in 2016, a programmer wired together a web camera, Amazon's cloud computing platform, and Google's TensorFlow machine learning tool kit into a system that learned to recognize San Francisco's traffic enforcement vehicles and send him an instant message in time to move his car before getting a ticket.
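To give a sense of scale, here is a minimal sketch of that recipe using TensorFlow's Keras API. It is not the hackathon project itself: the directory layout, labels, image size, and architecture are all illustrative assumptions, and a real system would still need the alerting plumbing.

```python
# A minimal sketch, assuming TensorFlow 2.x and a hypothetical folder of
# labeled webcam frames (frames/enforcement/*.jpg, frames/other/*.jpg).
# It illustrates the general recipe, not the hackathon project itself.
import tensorflow as tf

train = tf.keras.utils.image_dataset_from_directory(
    "frames", label_mode="binary", image_size=(128, 128), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),            # scale pixels to [0, 1]
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(enforcement vehicle)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train, epochs=5)   # an afternoon of tinkering, not a research career
```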

In short, we are entering a new age of computational insight, one that extends beyond the self-driving Uber vehicles that are currently circling my office in Tempe, Arizona. Tools such as IBM's Watson artificial intelligence system are good not just at Jeopardy! but also at medical diagnosis, legal analysis, actuarial work, and a range of other tasks for which they can be trained. The rising tide of automation is already affecting a tremendous range of jobs that demand intellectual judgment as well as specific physical capacities, and we're only getting started.

Algorithms are also beginning to surprise us. Consider the trajectory of computational systems playing complex, effectively unbounded games such as chess and Go. Commentators studying the performance of the artificial intelligence system AlphaGo in its historic matches with world champion Lee Sedol often struggled to understand the system's moves, sometimes describing them as programming failures, other times as opaque but potentially brilliant decisions. When IBM's Deep Blue defeated world chess champion Garry Kasparov in 1997 in a six-game series, it did so in part by playing a strange, unpredictable move near the end of the first game. As Nate Silver tells it in The Signal and the Noise, that move was the product of a bug, but Kasparov took it as a sign of unsettling depth and intelligence; it distressed him enough to throw off his play in the second game and opened the door to his ultimate defeat. We assume that the algorithm is perfectly rational and all-knowing, and that assumption gives it an advantage over us.

This uncertainty in interpreting the performance of algorithms extends to far more nebulous cultural problems, such as the project Netflix undertook a few years ago to organize its vast catalog of films and television shows into a set of 76,897 distinct micro-genres. The system, behaving more like something out of Futurama than a computer algorithm, demonstrated an inexplicable fondness for Perry Mason. Its chief (human) architect waxed philosophical about the implications of this output that he could not explain: “In a human world, life is made interesting by serendipity. The more complexity you add to a machine world, you’re adding serendipity that you couldn’t imagine. Perry Mason is going to happen. These ghosts in the machine are always going to be a by-product of the complexity. And sometimes we call it a bug and sometimes we call it a feature.”

Like so many other algorithmic systems, the Netflix algorithm was creating knowledge without understanding. Steven Strogatz, an applied mathematician at Cornell, has provocatively called this the “end of insight”: the era when our machines will deliver answers that we can validate but not fully comprehend. This may not seem so different from the typical doctor’s visit or lawyer’s advice today, where an ordained expert gives you a solution to a problem without giving you new insight or context. But of course you can pester that person to explain his or her thinking, and you can often learn a great deal from the emotional affect with which he or she delivers the judgment. It’s not so easy to do that with a black box machine specifically designed to keep its decision-making apparatus secret. Nevertheless, this kind of intellectual outsourcing is a central function of civilization, a deferral to our tools and technologies (and experts) that happens for each of us on a daily basis. We already trust computational systems to perform a range of tasks using methods most of us do not understand, such as negotiating a secure connection to a wireless network or filtering spam e-mail. But the gap between result and analysis becomes stark, perhaps existentially so, when we are talking about scientific research itself.

The replication crises currently plaguing a number of scientific disciplines stem in part from this lack of insight. Scientists use complex computational tool kits without fully understanding them, and then draw inferences from the results that introduce various kinds of bias or false equivalencies. In 2016, for example, a team of researchers from Sweden and the United Kingdom uncovered software bugs and inflated false-positive rates in three leading statistical packages used in functional magnetic resonance imaging, a technique for measuring brain activity, flaws that may cast the findings of 40,000 research papers into doubt. A more pernicious form of this problem emerges from the fact that so much scientific work depends entirely on computational models of the universe, rather than on direct observation. In climate science, for example, we depend on models of incredibly complex air and water systems, but it is sometimes extremely difficult to disentangle the emergent complexity of the models themselves from the emergent complexity of the systems they are intended to represent.

The science fiction author Douglas Adams envisioned a magnificent reductio ad absurdum of this argument in The Hitchhiker’s Guide to the Galaxy, in which an advanced civilization constructs a supercomputer called Deep Thought to find the “Answer to the Ultimate Question of Life, The Universe, and Everything.” After 7.5 million years of calculation, Deep Thought returns its answer: 42. Precise but meaningless answers such as 42 or Perry Mason can easily be generated from incoherent questions.

This is an old problem. Philosophers have dreamed for centuries of what Gottfried Wilhelm von Leibniz called the mathesis universalis, a universal language built on mathematics for describing the scientific universe. He and other philosophers of the seventeenth century were laying foundation stones for the Enlightenment, imagining a future of consilient knowledge that was defined not just by a wide-ranging set of rational observations but by a rational structure for reality itself. A grammatically correct statement in the language of Leibniz would also be logically sound, mathematically true, and scientifically accurate. Fluency in that imagined tongue would make the speakers godlike masters of the space-time continuum, capable of revealing truths or perhaps even writing the world into being by formulating new phrases according to its grammatical rules.

Leibniz was among the thousands of scientists and philosophers who contributed to our modern understanding of scientific method. He imagined a set of instructions for building and extending a structure of human knowledge based on systematic observation and experimentation. According to his biographer, Maria Rosa Antognazza, Leibniz sought an “all-encompassing, systematic plan of development of the whole encyclopaedia of the sciences, to be pursued as a collaborative enterprise publicly supported by an enlightened ruler.”

I call Leibniz’s project, this recipe for the gradual accumulation and cross-validation of knowledge, the algorithm of the Enlightenment. Structured scientific inquiry is, after all, a set of high-level instructions for interpreting the universe. The ideal version of this idea is that the progress of civilization depends on a method, a reproducible procedure that delivers reliable results. Leibniz was echoed in the eighteenth century by Denis Diderot and Jean d’Alembert, the French creators of the first modern encyclopedia, who pursued their work not just to codify existing knowledge but to cross-validate it and accelerate future research. As these ambitious projects were realized, they articulated a vision of the world that unseated the prevailing orders of religiosity and hereditary feudalism. There is a reason many people credit Diderot and d’Alembert’s Encyclopédie with helping to ignite the French Revolution. The Enlightenment has always contained within it a desire, a kind of utopian aspiration, that one day we will understand everything.

Leibniz was one of the chief advocates of that vision, not as a means of overthrowing a deity but of celebrating the Creation through human understanding. He was also a strong advocate of binary numerals, seeking to represent existence and nonexistence, truth and falsehood, through ones and zeros, and he made an early attempt at building a calculating machine. Little wonder that so many of Leibniz’s ideas resurfaced in computation, where mathesis universalis is possible in the controlled universes of operating systems and declarative programming languages. The modern notion of the algorithm, as a method for solving a problem or, more technically, a set of instructions for arriving at a result within a finite amount of time, emerges from an extension of the Enlightenment quest for universal knowledge on the part of mathematicians and logicians.

The emergence of modern computer science in the 1940s and 1950s began with the proofs of computability advanced by Alan Turing, Alonzo Church, and others. It was, in many ways, a conversation about the language of mathematics, its validity and symbolic limits. This work created the intellectual space for computation as researchers such as Stephen Wolfram articulate it today, arguing not only that we can model any complex system given enough CPU cycles but that all complex systems are themselves computational. They are best understood, as Wolfram, a wildly inventive British-American scientist, mathematician, and entrepreneur, suggests in his book A New Kind of Science, as giant computers made up of neurons or molecules or matter and antimatter, and are therefore mathematically equivalent. This is the “hard claim” for effective computability implicit in the most Promethean work at the forefront of artificial intelligence and computational modeling: the brain, the universe, and everything in between are computational systems, and will ultimately be tractable with enough ingenuity and grant funding.

So the algorithm of the Enlightenment found its most elaborate expression in the language of code, and it has become increasingly commonplace, often necessary, to encode our understanding of the universe into computational systems. But as these systems grow more complex, and our capacity to represent data expands exponentially, we are starting to say things, computationally speaking, that we don’t fully understand—and that may be fundamentally wrong without us knowing it.

Consider the advanced machine learning systems Google has used to master Go, natural language translation, and other challenges: these algorithms depend on large quantities of data and training sets that reveal a correct answer or desired output. Rather than figuring out how to directly analyze input and deliver the required output, we are now starting with the input and a sampling of the desired output and black-boxing the middle. Simulated neural nets iterate millions of times to create a set of connections that will reliably match the training data, but they offer little insight to researchers attempting to understand how the resulting system works. Just as with the algorithm analyzing the single-cell complex system, we can see that a law has been derived, but we don’t know how; we are on the outside of the laboratory, looking in.
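The black-boxed middle is easy to demonstrate at toy scale. The sketch below is my own illustration, using the classic XOR problem rather than Go or translation: it trains a tiny network by backpropagation until its outputs match the training data, then prints the learned weights, which are exactly as unilluminating as the paragraph above suggests.

```python
# A toy black box: a tiny network learns XOR from input/output pairs by
# iterating thousands of times. The result matches the training data, but
# the learned weights explain almost nothing about "how."
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # hidden layer (4 units)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                     # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)          # backpropagate the error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())   # typically [0. 1. 1. 0.]: the data, reproduced
print(W1)                     # but what do these numbers *mean*?
```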

Like computational algorithms, critical scientific inquiry depends on some important assumptions: that the universe is inherently causal, and relatively uniform in the application of that causality. But as we encode the method into the space of computation, our assumptions must change. The causality of computational systems is more specific and delimited than the causality of the universe at large: rather than a system of quantum mechanical forces, we have a grammatical system of symbolic representations. It is the difference between an equation that describes the curve of a line through infinite space, and a model that calculates the line’s position for a discrete number of points. (Want more points? Buy a better graphics card.) Or to take another example, it is the difference between a model of the weather that divides the sky into one-kilometer cubes and the actual weather, in all of its weird, still unpredictable mutability. When we encode our scientific work into computational representations, we introduce a new layer of abstraction that may warp our future understanding of the results.
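The difference is easy to see in miniature. In the sketch below (assumed values throughout, not drawn from any particular climate or graphics model), the equation y = sin(x) is defined at every point, while the model knows the curve only at N samples and must interpolate everywhere else.

```python
# A minimal sketch of equation versus model: the equation is exact everywhere;
# the model carries only a discrete budget of points.
import numpy as np

def continuous_law(x):
    return np.sin(x)             # the equation: defined at every point

N = 8                            # the model's budget of points
grid = np.linspace(0, 2 * np.pi, N)
samples = continuous_law(grid)   # all the model will ever "know"

query = 1.0                      # ask both for a value between grid points
model_answer = np.interp(query, grid, samples)
truth = continuous_law(query)
print(f"model: {model_answer:.4f}  truth: {truth:.4f}  "
      f"error: {abs(model_answer - truth):.4f}")
# Want a smaller error? Buy more points: raise N and rerun.
```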

The stakes are high because we are growing increasingly dependent on computational spectacles to see the world. The basic acts of reading, writing, and critical thinking all flow through algorithms that shape the horizons of our knowledge, from the Google search bar to the models assessing our creditworthiness and job prospects. For scientists, the exponentially increasing flood of information means that computational filters and abstractions are vital infrastructure for the mind, surfacing the right ideas at the right time. Experiments have shown how trivial it is to manipulate someone’s emotional state, or political position, using targeted computational messages, and the stakes only grow when we confront the desire inherent in computational thinking to build a black box around every problem. Right now we are accepting these intellectual bargains like so many “click-wrap” agreements, acting with very little understanding of what kind of potential knowledge or insights we might trade away for the sake of convenience or a tantalizingly simple automated solution.

In fact, we are far more to blame than our computational systems are, because we are so enamored of the ideal, the romance of perfect computation. I have seen many parents repeat sentences to their smartphones with more patience than they have for their children. We delight in the magic show of algorithmic omniscience even when we are bending over backwards to hide the flaws from view.

But of course there is no un-ringing the bell of computation, just as there is no reversing the Enlightenment. So what can we do to become more insightful users and architects of algorithms? The first thing is to develop a more nuanced sense of context about computation. The question of how the stakes of causality and observation change in computational models is fundamental to every branch of research that depends on these systems. More than once during the Cold War, US and Soviet officers had to see through a computer model showing enemy nukes on the wing and perceive what was really behind them: a flock of birds, perhaps, or a software glitch. As humans, we consistently depend on metaphor to interpret computational results, and we need to understand the stakes and boundary conditions of the metaphors we rely on to interpret models and algorithmic systems. That’s doubly true when the metaphors we are being sold are alluring, flattering, and simplifying. In short, we need a kind of algorithmic literacy, one that builds from a basic understanding of computational systems, their potential and their limitations, to offer us intellectual tools for interpreting the algorithms shaping and producing knowledge.

The second thing we need is to create better mechanisms for insight into computational systems. In a recent conversation on the topic, someone half-jokingly proposed the example of the fistulated cow from veterinary science: a living animal with a surgically created hole in its flank that allows students to directly examine the workings of its stomach. There is a blunt analogy here to the “black box” metaphor so beloved of technologists and intellectual property lawyers: society needs new rules and procedures for prying open these black boxes when their operation threatens to perpetuate injustice or falsehoods. Elements of this are common sense: having a right to see what data a company has collected about us and how that data is used, and having easy access to a robust set of privacy and “opt-out” features that delineate what kinds of bargains we are making with these services. Perhaps more ambitiously, we need more open data standards so that models with real human significance (models of health care used to determine insurance benefits, for example) use definitions and calculations that are open to public review.

But on a deeper level, we need to think about fistulated algorithms as a means of regaining our capacity for insight and understanding. The collective computational platforms we have created, and are rapidly linking together, represent the most complicated and powerful achievement of our species. As such, this achievement is a worthy object of study in its own right, and one we must attend to if we want to enjoy the thrill of discovery as well as the comfort of knowledge. These systems are not really black boxes floating in computerland; they are culture machines deeply caught up in the messy business of human life. We need to understand that causality works differently with algorithms, and that what you’re getting with a Google search is not just a representation of human knowledge about your query but a set of results tailored specifically for you. Our tools are now continuously reading us and adapting to us, and that has consequences for every step on the quest for knowledge.

Ed Finn is the author of What Algorithms Want: Imagination in the Age of Computing and a coeditor of Frankenstein: Annotated for Scientists, Engineers and Creators of All Kinds (forthcoming May 2017) and Hieroglyph: Stories and Visions for a Better Future. He is the founding director of the Center for Science and the Imagination at Arizona State University, where he is an assistant professor in the School of Arts, Media & Engineering and the Department of English.
