GILBERTO ESPARZA, Plantas autofotosintéticas, 2013–2014 (detail). Courtesy the artist. Photo by Dario Lasagni.

Finding the Human in the Node

In 1949, in a garage near London, an economics student with a background in electrical engineering named Bill Phillips fashioned a collection of hydraulic pumps and pipes into a model of the United Kingdom’s economy. Seven feet high, five feet wide, and three feet deep, the Phillips machine represented money with colored water. National income flowed from a clear tank at the top through a series of valves that extracted government taxes, diverting some of the stream to government spending and allowing some to trickle toward household expenditures and saving. Before the widespread use of electronic computers, the Phillips machine offered a dynamic model of an economy, where tightening the screws of a single variable such as interest rates could change the behavior of the whole.
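
The machine’s core logic, a circular flow in which each period’s income is taxed, partly consumed, and recombined with government spending, can be sketched in a few lines of code. The parameters below are illustrative assumptions, not the machine’s actual calibration.

```python
# A minimal sketch of the circular flow the Phillips machine modeled hydraulically.
# All parameters (tax rate, propensity to consume, government spending) are
# illustrative assumptions, not values from the actual machine.

def circular_flow(income=100.0, tax_rate=0.2, consume_rate=0.8,
                  gov_spending=25.0, steps=50):
    """Iterate the flow: income is taxed, households spend part of the rest,
    and consumption plus government spending become next period's income."""
    for _ in range(steps):
        taxes = tax_rate * income                 # valve diverting flow to government
        disposable = income - taxes
        consumption = consume_rate * disposable   # trickle toward household spending
        # the remainder (saving) leaks out of the loop each period
        income = consumption + gov_spending       # streams recombine as new income
    return income

# Tightening a single "screw" shifts the whole system's equilibrium:
print(round(circular_flow(), 1))               # ~69.4
print(round(circular_flow(tax_rate=0.3), 1))   # ~56.8
```

Changing one parameter shifts the steady state of the entire loop, the same whole-system sensitivity the machine made tangible with its screws and valves.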

Prominent faculty at the London School of Economics quickly adopted the machine—not only for its ingeniousness, but also because it made the concepts of Keynesian macroeconomics intuitive. This made it particularly valuable for policymaking: the machine presented a vision of the economy that could be quite literally fine-tuned, via valves and screws, bolstering the idea of “macroeconomist as engineer,” as economic historians Mary Morgan and Marcel Boumans wrote in a 1998 paper.

The Phillips machine never showed, for example, mothers deciding whether they could afford bread for their children—the human aspect of the economy is missing from the model—but Morgan and Boumans point out another oversight: it was later discovered that the engineering on the back of the machine was far more complex, and far harder to restore, than the transparent tanks and tubes on the front. Out of sight, an elaborate network of pumps and kludges constructed from Phillips’ tacit knowledge maintained the appearance of the rational, circular flow on the front. Of the dozen or so Phillips machines that were built, only a few are still in working order—but the idea of macroeconomic policymaking as a type of engineering has stuck.

I thought of the Phillips machine and its circulating flows often as we prepared a series of articles on problems in the global information ecosystem for this issue. The spread of false and harmful information is not new: within decades of the invention of the printing press, anti-Semitic propaganda and misogynist guides to witch hunting appeared in Germany. And, for the last 35 years, the Discovery Channel’s Shark Week has served up an entertaining mixture of science, fable, and conspiracy theory about our cartilaginous companions. But concern about the circulation of false information on social media has intensified with political polarization and the pandemic.

After losing the 2016 presidential election, in which online conspiracy theories and malicious rumors were a significant political force for the first time in modern American memory, Hillary Clinton made a speech decrying “the epidemic of malicious fake news and false propaganda that flooded social media over the past year.” A month later, Donald Trump adopted the phrase, telling a CNN reporter, “You’re fake news.” By early 2017, social media expert Claire Wardle was fed up with the term. “I made a rallying cry to stop using fake news and instead use misinformation, disinformation, and malinformation under the umbrella term information disorder.” The terms stuck, although Wardle now regrets the way they focused academic attention on “labeling the dots” rather than seeing the larger pattern they made.

As misinformation and disinformation became an area of academic study, it was often described with a set of familiar metaphors: floods, circulating rumors, rising tides, treacherous seas, deluges, drowning. A complex sociotechnical phenomenon of information sharing came to be described as a hydraulic one: misleading information is seen to flow, like errant colored water, through the world’s information networks, memorably characterized by Senator Ted Stevens in 2006 as “a series of tubes.” The Phillips machine was long forgotten, but misinformation was presented as a problem that could be managed through top-down engineering of the flows—regulating the pipes and valves of social media, fact-checking to identify bad information, and weeding out bad actors. Five years later, it has become clear that these actions have amounted to, well, a drop in a vast and expanding ocean.

Wardle proposes that what is missing from academic models of the information ecosystem is a woman she calls Lynda, a composite of people she has studied who fervently share information online. Lynda does not intend to be malicious; she sees herself as helpful, and so she searches earnestly for authoritative information (sometimes from scientific journals), which she shares in, say, anti-vaccine contexts and forums. “She is strategically choosing information to connect with people and promote a worldview. Her criteria for choosing what to post depends less on whether it makes sense rationally and more about her social identities and affinities.” Both researchers and communicators, Wardle argues, need to see beyond the facts and the flows and instead look more comprehensively at how people’s need for connection, community, and affirmation motivates them to spread narratives. Engineers may have built the internet’s great global series of tubes, but it’s now operated partly by the Lyndas, the humans in the nodes of the network.

In their study of how the scientific term mRNA became the subject of converging global conspiracy theories between 2020 and 2022, Marc Tuters, Tom Willaert, and Trisha Meyer generated maps of the connections between ideas on Twitter. Their focus is on the nodes where individuals use hashtags to glom the term #mRNA to “Anthony Fauci,” to #plandemic, to a superconspiracy theory called “The Great Reset.” Although trust in science may be eroded by these conspiratorial conglomerations, the authors caution that Twitter users do not necessarily take the narratives at face value; some may just be “trying on” different ideas and personas, and the platform may even make some people less likely to believe conspiracies.
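
The mapping technique itself is straightforward: hashtags that appear together in the same post become linked nodes, and repeated pairings become heavier edges. A minimal sketch of the general approach follows, using invented example posts rather than the authors’ actual data or pipeline.

```python
# A minimal sketch of hashtag co-occurrence mapping, the general technique behind
# network maps like these. The posts are invented examples, not the study's data.
from collections import Counter
from itertools import combinations

posts = [
    ["#mRNA", "#plandemic"],
    ["#mRNA", "#GreatReset", "#plandemic"],
    ["#mRNA", "#vaccine"],
]

# Every pair of hashtags sharing a post becomes a weighted edge in the graph.
edges = Counter()
for tags in posts:
    for a, b in combinations(sorted(set(tags)), 2):
        edges[(a, b)] += 1

for (a, b), weight in edges.most_common():
    print(f"{a} -- {b}  (co-occurrences: {weight})")
```

Nodes where many heavy edges converge are the conglomerations the authors describe, the places where otherwise separate narratives glom together.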

Once you shift your attention from the global hydraulic model to the motivations of individuals in the network, new potential countermeasures appear. Emma Spiro and Kate Starbird write that research on rumors from the previous century is newly relevant because human beings share rumors as ways to deal with anxiety and uncertainty. For decisionmakers and officials hoping to communicate during a crisis, they say, “recognizing these informational and emotional drivers of rumoring can support more empathetic—and perhaps more effective—interventions.”

Human beings are also the focus of educator Kari Kivinen, who describes how Finland responded to Russian disinformation campaigns during the country’s 2014 elections by emphasizing fact-checking, before educators realized they needed to focus on how individuals decide what information is trustworthy. This led to programs that teach kindergarteners how to think about the motivations behind shared information. Beginning with conversations about how wily foxes in folklore trick people, Finnish kids finish high school with a sophisticated understanding of propaganda, as well as of the culture of scientific research. The Finnish education system also invests in teachers—who have master’s degrees and are familiar with the conduct of research—as well as in the formation of children’s identities. “Society must pay at least as much attention to children’s minds as to social media algorithms,” Kivinen writes.

In that vein, National Academy of Sciences president Marcia McNutt and Arizona State University president Michael Crow argue that political leaders and the scientific enterprise have a common interest in building trust in information and science among citizens. They note that Thomas Jefferson worried that citizens would be helpless against abuses of constitutional power if they were, in his words, “not enlightened enough to exercise their control,” concluding that “the remedy is not to take it from them, but to inform their discretion by education.” McNutt and Crow go on to discuss how institutions of science and education might take up that task.

The Phillips machine and US postwar science policy date from the same era, and both emphasize engineering outcomes from the top. In the case of science policy, this means directing flows of money, training streams of scientific and technical professionals, and generating rising tides of published papers, leading to economic spillovers. However, over the past few years, much more attention has been paid to the back of this science policy model—where pumps and kludges have played a significant role in creating a system marked by inequality and geographic concentration, one that is not as rational, fair, or productive as it could be.

Focusing on the human experience could help address these challenges. Right after World War II, American nurse educators built a science of nursing that distinguished itself from the reductionist model of biomedicine, writes nursing historian Dominique Tobbell. Expanding into a research-driven discipline that emphasized health and prioritized patients as actors shaped the kind of knowledge that nursing produced. At the same time, nursing preserved multiple pathways into the profession, which resulted in a more diverse workforce than other science, technology, engineering, and mathematics (STEM) disciplines. Tobbell sees lessons for the whole enterprise: “The way nurses defined their discipline—toward the agency of the patient—created an important model for focusing STEM disciplines on solving societal problems by understanding society itself.”

Similarly, focusing attention on individuals could help foster more productive interdisciplinary research. Annie Patrick, a science, technology, and society scholar, describes her experience as a social scientist brought into a federally funded project to revolutionize engineering education. Patrick conducted wide-ranging interviews with faculty and students, coming to understand them as diverse individuals within a community where multiple supports were required to graduate each student.

But when the study team moved on, Patrick found herself haunted by her interviews; her training as a nurse had emphasized creating interventions to solve patients’ problems. “When I saw something going wrong,” she writes, “my every professional instinct was to intervene.” Ultimately, trusting her trained instincts, Patrick designed three interventions (a podcast, a seminar, and a white paper) to help the engineering community see itself—and the needs and motivations of its members—more clearly. Interdisciplinarity, in her experience, is not only about bringing together specialties, but also about encouraging individuals to identify and unleash their own inner interdisciplinarity.

Exploring how individuals use their agency in complex systems could even lead to better practice—and more practitioners—of biosafety. Biosafety officer David Gillum explains how the accumulated tacit knowledge of a few thousand biosafety professionals forms a web of precaution that picks up where the written rules leave off. Understanding that tacit knowledge, and how the community generates it, could lead to better ways to reduce risks in biological research, better training of safety professionals, and even better rules. Building these systems, he argues, requires the active participation of individuals, who develop norms and knowledge that can be lost if not recognized. Or, to quote epistemologist Michael Polanyi: “Into every act of knowing there enters a passionate contribution of the person knowing what is being known, and that this coefficient is no mere imperfection but a vital component of his knowledge.”

The “vital component” is always the human aspect: the passionate, weird, creative, and unpredictable dynamics that complicate but also make possible our ongoing efforts to improve the world.

Cite this Article

Margonelli, Lisa. “Finding the Human in the Node.” Issues in Science and Technology 39, no. 3 (Spring 2023): 15–17. https://doi.org/10.58875/BQDH5214
