Why Data Isn’t Divine

Review of

God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning, by Meghan O’Gieblyn

New York, NY: Doubleday, 2021, 304 pp.

Who, or what, can we put our faith in now? Many would say we should trust in technology, which daily delivers feats once deemed miraculous. But as Meghan O’Gieblyn spiritedly argues, our ungodliest techno-triumphalists often unwittingly resurrect old god tropes. This is just one of the big-picture themes animating her inventively invigorating book God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning.

The book has seven sections: Image, Pattern, Network, Paradox, Metonymy, Algorithm, and Virality. These weighty words form the conceptual skeleton on which our future is being fleshed out. By the third page, O’Gieblyn has readers pondering “ontologically thorny” issues involving $3,000 robot dogs and René Descartes’s modernity-founding musings on what makes humans more than animals. She tackles tricky thinking territory with élan, putting, in her words, “technological concepts in conversation with philosophy and religion” in ways that seem particularly relevant to our times: “All the eternal questions,” she writes, “have become engineering problems.”

O’Gieblyn’s erudition includes training in theology and deep learning in philosophy and technology. After dropping out of Bible college and losing her religion, she fell into tech thralldom through “transhumanist” Ray Kurzweil’s books The Age of Spiritual Machines and The Singularity Is Near: When Humans Transcend Biology. In transhumanism, metaphysical mysteries such as resurrection and immortality morph into engineering challenges: tech to be built, code to be cracked. Yet O’Gieblyn shows how such techno-immortal longings resemble Christian eschatology gussied up in Silicon Valley vernacular. Tech’s relationship to resurrection isn’t new: eleventh-century alchemists worked on elixirs of immortality, and the term “transhuman” entered English via translations of Dante’s fourteenth-century Divine Comedy.

O’Gieblyn maps a millennia-long habit of literalizing metaphors. Heaven and hell, immortality and resurrection all started as story devices, literary metaphors that were only taken literally much later. O’Gieblyn observes that “to discover truth, it is necessary to work within the metaphors of our own time, which are for the most part technological.”

In Descartes’s time, the 1600s, the hot technology was mechanical. For deeply soul-searching reasons, he wanted to locate what made humans special in an immaterial essence, thereby firmly putting God’s business beyond materialist science. In his worldview, animals became cranked-up clockwork, and God-sourced souls elevated mankind above the beastly business of fleshy life. But the so-called Cartesian view discounts the essential social aspects of humanness. Artificial intelligence (AI) researcher Abeba Birhane explained this beautifully in an essay taking its title from a Zulu proverb: “A person is a person through other persons.”

Today’s computers, in contrast to the machinery of Descartes’s time, can harbor many animating logics through different software programs, but we’ve taken to imbuing information and data themselves with divining powers. O’Gieblyn traces how such tech-metaphorizing has birthed “cosmic computationalism,” which casts the entire universe as information processing—exemplified by the simulation hypothesis, in which the cosmos is an immersive video game overseen by programmer gods. O’Gieblyn debugs these lax computer metaphors by showing that those who believe that the “fabric of reality is informational” execute “a sleight of hand that discards the original [Cartesian] dichotomy by positing a third substance—information.”

But this narrow, machine-friendly, quantification-ready form of information is inherently lacking. As O’Gieblyn astutely notes, Claude Shannon, the father of information theory, redefined information to exclude semantic meaning “not amenable to quantification.” In so doing, she writes, Shannon “removed the thinking mind from the concept” so that his version of information “became purely mathematical.” Unlike human minds, which run on semantics and meaning, no machine yet really understands semantic meaning. Even the best natural language AI systems are just “stochastic parrots” (a great coinage of AI realist Emily Bender).
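
To see just how meaning-blind this mathematical notion is, consider a minimal Python sketch (the example sentences are mine, for illustration, not O’Gieblyn’s): Shannon’s formula, H = −Σ p log₂ p, sees only symbol frequencies, so two sentences with opposite meanings but the same letters carry identical “information.”

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Bits per symbol: H = -sum(p * log2(p)) over symbol frequencies."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Opposite meanings, identical letters, identical Shannon "information":
print(shannon_entropy("dog bites man"))  # ~3.55 bits per symbol
print(shannon_entropy("man bites dog"))  # exactly the same value
```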

Throughout the book, O’Gieblyn puts expert pressure on our language tools, interrogating words and metaphors with a jeweler’s loupe, using centuries of rigorous reasoning from the time when logic and precision were not confused with quantification. A key task of language (and cognition) is “carving nature at its joints,” to borrow Plato’s phrase, into qualitatively coherent categories that can be usefully reasoned with. Aristotle called well-constructed nouns “natural kinds,” without which numerical reasoning can’t be counted on to reveal clarity.

Consider “pattern,” a word that does a lot of heavy (metaphysical) lifting despite its vagueness. A painting by Jackson Pollock, the 8 billion regimented transistors in your smartphone, and the roughly 5 billion proteins buffeted by the “Brownian storm” inside each of your trillions of cells are all patterns. But they have radically, categorically different animating logics.

Kurzweil believes your essential soul/spirit/mind is literally an information pattern. Or, as O’Gieblyn puts it, he thinks your “soul is little more than a data set.” Like many bigwig geeks, he reveals a bad “philosophy of data” with a lax sense of the logical boundaries of what data can contain or realistically represent. Any given data pattern may arise from multiple data-genic (data-generating) processes. What makes data-to-reality mappings tractable is knowing enough about the structure of the causal processes involved.
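
A toy simulation can make the point concrete (the numbers are invented for illustration): two entirely different generating processes, one containing a genuine trend mechanism and one containing none at all, can hand a regression equally plausible-looking slopes.

```python
import random
from statistics import linear_regression  # Python 3.10+

random.seed(42)
n = 200

# Process A: a genuine trend mechanism (0.05 per step) plus noise
trend = [0.05 * t + random.gauss(0, 1) for t in range(n)]

# Process B: a driftless random walk, with no trend mechanism at all
walk, x = [], 0.0
for _ in range(n):
    x += random.gauss(0, 1)
    walk.append(x)

xs = list(range(n))
# Both fits dutifully report a slope; the numbers alone cannot say
# which pattern came from a real trend and which from wandering chance.
print(linear_regression(xs, trend).slope)
print(linear_regression(xs, walk).slope)
```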

As Judea Pearl, a Turing Award-winning computer scientist and coauthor of The Book of Why, notes, there’s a battle for “the soul of data-science” raging over the right data-versus-reality relationship. “The data” typically don’t capture much of the reality they come from. Sometimes that’s okay, especially in simple domains where what isn’t “in the data” isn’t causally significant (color, for instance, isn’t germane to the physics of billiard-ball collisions). But often it is a problem, because those uncaptured elements are important to the question at hand.
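
Here is a minimal sketch of the kind of trap Pearl warns about, with made-up variables: a common cause that never appears in “the data” manufactures a strong correlation between two things that have no causal link to each other.

```python
import random
from statistics import correlation  # Python 3.10+

random.seed(0)

# A hidden common cause (say, summer heat) that goes unrecorded
heat = [random.gauss(0, 1) for _ in range(10_000)]

# Ice cream sales and drowning counts are each driven by heat,
# not by each other
ice_cream = [h + random.gauss(0, 0.5) for h in heat]
drownings = [h + random.gauss(0, 0.5) for h in heat]

# Strong correlation (about 0.8) despite zero causal link between the two
print(correlation(ice_cream, drownings))
```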

For instance, O’Gieblyn mentions the “unreasonable effectiveness of mathematics in physics”—that is, the remarkable ability of equations to accurately describe, say, the movement of planets. But a little-known extension of the phrase could prevent much grief: “there’s an unreasonable ineffectiveness of mathematics in biology.” That ineffectiveness extends to many fields outside physics. Nothing in physics chooses; planets don’t select their orbits. Yet every living thing must choose—and constantly choose wisely, or it’s kaput—and that uncaptured element of choice resists quantification.
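
The physics side really is this compact; a few lines settle a planet’s “behavior” because nothing chooses anything. Here is a sketch using Kepler’s third law (for bodies orbiting the Sun, the square of the orbital period in Earth years equals the cube of the semi-major axis in astronomical units):

```python
# Kepler's third law for the Sun's planets: T^2 = a^3
# (T in Earth years, a in astronomical units)
def orbital_period_years(semi_major_axis_au: float) -> float:
    return semi_major_axis_au ** 1.5

print(orbital_period_years(1.0))    # Earth: 1.0 year
print(orbital_period_years(1.524))  # Mars: ~1.88 years
print(orbital_period_years(5.204))  # Jupiter: ~11.9 years
```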

Like natural selection itself, life’s logic is more algorithmic than algebraic. The historian Yuval Harari calls algorithms “arguably the single most important concept in our world.” And yet, amazingly, Aristotle wrote about algorithmic logic millennia ago, declaring that rigid rules weren’t a reliable way to achieve justice. Typically, “algorithm” now means software logic running on a computing device. But the logic needed to model choices, and hence much of life, requires grammars richer than physics-fitting algebra, such as the if-then-else structures of programming languages. A little-known feature of patterns from choosey domains, such as living things, is that they can exhibit what the computer scientist and physicist Stephen Wolfram calls “computational irreducibility,” whereby the only way to determine the future state of a system is to play it out: each element making its choices, with results that can’t be predicted.
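
Wolfram’s favorite illustration, the Rule 30 cellular automaton, takes only a few lines to sketch, yet there is no known shortcut formula for its future states: to learn what row 100 looks like, you must actually compute rows 1 through 99, cell by cell, step by step.

```python
def rule30_step(cells: list[int]) -> list[int]:
    """One step of Wolfram's Rule 30 cellular automaton (edges wrap)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        # Rule 30: new cell = left XOR (center OR right)
        out.append(left ^ (center | right))
    return out

# Start from a single "on" cell and play the system out
row = [0] * 61
row[30] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```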

Computational irreducibility creates epic epistemic barriers, with hard conceptual limits on what is in the data or what one can “predict” from it. While O’Gieblyn doesn’t quite make her position on the logical limits of data explicit, contrast all her qualitatively skilled reasoning with an infamous data-dazzled put-your-faith-in-tech article that she references: Chris Anderson’s 2008 “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete.” The former Wired editor-in-chief preaches that just as Google tamed the web without knowing why pages mattered (just count links and rank them), the whole business of “why” is obsolete. “No semantic or causal analysis is required,” Anderson writes, because “with enough data, the numbers speak for themselves.” He declares: “correlation is enough.” Tellingly, Anderson’s flagship example is genomics. As I’ve noted previously in Issues, much genomic thinking is prone to what I called “pop science malpractice” and errors enabled by metaphors, such as genetic “code” implying faux similarities to software.

This inept metaphorizing embodies the original sin of the Kurzweilian zeitgeist and is well captured by the phrase “tacit creationism,” coined by evolutionary scientist Randolph Nesse. This is the mistaken faith that biology embodies more complex forms of our tech—that brains are simply more advanced computers.

O’Gieblyn occasionally gives undue deference to tech titan prophecies. She writes too respectfully about IBM’s Jeopardy!-winning AI, called Watson, and about the prospects for using AI to diagnose cancer. Yet Watson’s healthcare-focused twin was recently euthanized, and much-ballyhooed proposals to “stop training radiologists” (in the words of AI evangelist Geoffrey Hinton) contrast starkly with the finding that 34 out of 36 AI programs developed to screen for breast cancer “were less accurate than a single radiologist,” according to a study published in The BMJ. (All 36 programs were less accurate than two radiologists.) Other breathlessly covered AI medical work has also flopped. For example, a survey of machine learning studies focused on detecting COVID-19 reveals that “none of the models identified are of potential clinical use due to methodological flaws and/or underlying biases.” In the realm of self-driving cars, an Uber test vehicle crashed into the limits of AI’s statistical reasoning in the technology’s first autonomous-mode pedestrian death. The victim had walked a bike into the road, a scenario not in the training data. However big the training data set, it risks being foiled by novelty.
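
A hypothetical toy classifier (the categories, features, and numbers here are invented for illustration) shows the structural problem: a model trained only on known categories must force every novel input into one of them, with no way to answer “none of the above.”

```python
# Invented training data: two categories of "road objects," each
# described by two made-up features
training = {
    "pedestrian": [(0.9, 0.1), (0.8, 0.2), (0.85, 0.15)],
    "bicycle":    [(0.1, 0.9), (0.2, 0.8), (0.15, 0.85)],
}

def centroid(points):
    """Average point of a list of feature tuples."""
    return tuple(sum(vals) / len(vals) for vals in zip(*points))

centroids = {label: centroid(pts) for label, pts in training.items()}

def classify(x):
    """Always returns the nearest known label, even for alien inputs."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# A person walking a bike falls between the known categories; the model
# cannot say "none of the above," so it forces a familiar label
print(classify((0.55, 0.5)))  # -> "pedestrian"
```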

Beyond this, the common techno-optimistic reflex of claiming neutrality on ethical issues (a reflex O’Gieblyn avoids) is untenable. It arises partly because squeezing ethics into “the numbers” is often hard. The fetish for data creates a reality distortion field where spreadsheet data seem more real than reality, and whatever isn’t in the numbers is pushed offstage. Yet, as a typically muscular quip by Nobel Prize-winning author Toni Morrison reminds us, an appeal to neutrality is the “most obviously political stance imaginable.”

On ethics, O’Gieblyn benefits from centuries of deep theodicy—the long struggle to square God’s goodness with evil and suffering. But she resists the influential doctrine that mere mortals can’t grasp God’s must-be-for-the-best plan. Throughout the “bloody slog of the Old Testament,” she writes, only Job asks the question that “seems obvious to modern readers”: Why would a benevolent God permit such vast suffering? O’Gieblyn contrasts the murky moral of the Book of Job with “one of the most convincing articulations in Western literature of the problem of evil”: Fyodor Dostoevsky’s scene in The Brothers Karamazov in which Ivan Karamazov declares that all God’s glorious choruses and cathedrals aren’t worth “the tears of … one tortured child.” Surely the right thing to do is to always prevent avoidable suffering? Sadly, we don’t run the world that way. Nor does our technology.

O’Gieblyn’s precision wavers in referring to the “curse of knowledge” without noting that the original concern was ethical, not epistemological; the forbidden fruit in the biblical Eden was from the “tree of the knowledge of good and evil.” This garden-variety slip misjudges the real serpentine risk: it isn’t knowledge per se, it’s the outsourcing of moral rules. As Silicon Valley insider Tim O’Reilly has noted, we’ve outsourced many moral choices to a form of artificial agency, an AI algorithm that’s operated for centuries: the profit-maximizing algorithm, also known as the invisible hand. Trusting the wisdom of markets is a secular reincarnation of exhortations to trust the inscrutable grand scheme, and it is popular dogma among techno-optimists. But it fails Dostoevsky’s “tears of one child” test millions of times over. Can market algorithms that allocate only 5% of global gross domestic product to the poorest 60% of people, and that today leave 150 million kids stunted by malnutrition and 2 billion people food insecure, be for the best? As science fiction author Ted Chiang told journalist Ezra Klein, “most fears about AI are best understood as fears about capitalism.” People don’t fear tech itself; they recoil from what rapacious capitalists might do to us with greed-driven tech.

Rather than blindly put our faith in technology, we all—especially policymakers—must become more tech- and data-savvy. We must be crystal clear on the logical limits of data. Data from different domains have radically different properties and animating logics. More and bigger data may worsen the needle-to-hay ratio. In many patterns pertaining to human behavior, the data capture only a limited piece of reality.
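
A quick simulation shows the needle-to-hay problem in action (the data are pure synthetic noise): hold the number of observations fixed, keep adding variables, and watch the spurious “discoveries” pile up.

```python
import random
from statistics import correlation  # Python 3.10+

random.seed(1)
n_samples = 30
outcome = [random.gauss(0, 1) for _ in range(n_samples)]

# More pure-noise variables means more strong-looking (but entirely
# spurious) "correlates" of the outcome clear any fixed threshold
for n_vars in (10, 100, 1000):
    noise = [[random.gauss(0, 1) for _ in range(n_samples)]
             for _ in range(n_vars)]
    hits = sum(1 for v in noise if abs(correlation(outcome, v)) > 0.35)
    print(n_vars, "variables ->", hits, "spurious 'discoveries'")
```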

Beyond data, let’s learn from O’Gieblyn’s thinkers of earlier eras, who reasoned well without math-filtered patterns, using rigorous non-numeric logic that certain parts of the humanities can still provide training in. Let’s learn to discern and avoid thinking styles that reduce rigor by too quickly jumping to whatever numbers we have (“premature enumeration,” in economist and journalist Tim Harford’s memorable phrasing). Such naivete is at the root of many data-driven distortions. Reality’s richness resists being boiled down to computationally tractable datasets.

Although O’Gieblyn repeats the old canard that precision requires quantification, her entire book serves as an impressive counterproof. The capacity for rigorous qualitative reasoning isn’t inferior to narrower number-filtered thinking. We need both. In fact, you can’t do the latter well unless you’ve first done the former. A great deal of the richness of life and language resists quantification—as does O’Gieblyn’s marvelous word feast.

Cite this Article

Bhalla, Jag. “Why Data Isn’t Divine.” Issues in Science and Technology (October 28, 2021).