Do-It-Yourself Pandemic: It’s Time for Accountability in Models

Lessons from real-world engineering can improve the design and standards of the models being used in the COVID-19 response.

The responsibility of building scientific models has much in common with the responsibility of sitting in the exit row on an airplane. One can enjoy the extra leg room of creating imaginative models, but it comes with a price—being “willing and able” to fulfill lifesaving duties. Modelers know GIGO well: garbage in, garbage out. Now we need AIAO: accountability in, accountability out.

The COVID-19 pandemic has resulted in a buffet of epidemiological models, all you can consume. Some models predicting the spread of the disease are feats of statistical tuning and curve-fitting: data from one context redeployed for extrapolation elsewhere. More mechanistic models use a century-old compartment approach—categorizing people as susceptible, infected, or recovered—to track the moods and modulations of the marauding microbe. Other models focus on producing visually polished outputs; but in a pandemic, duty must come first and beauty second. Whatever their form, current pandemic models possess various worrisome features—chiefly, how they are promoted and proliferated without proper reflection about their quality, efficacy, and reliability, and with low or no accountability. As we use models to guide policies for COVID-19, we should also use COVID-19 to stimulate thinking about setting better standards for models. Lessons from real-world engineering can help.
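To make the compartment approach concrete, here is a minimal sketch, not any group’s actual COVID-19 model: textbook SIR dynamics stepped forward in time, with purely illustrative parameter values.

```python
# A minimal, illustrative SIR (susceptible-infected-recovered) compartment model.
# All parameters are placeholders for exposition, not fitted to any real outbreak.

def sir_step(s, i, r, beta, gamma, dt):
    """Advance the susceptible, infected, and recovered fractions by one time step."""
    new_infections = beta * s * i * dt   # contacts between susceptible and infected people
    new_recoveries = gamma * i * dt      # infected people moving to the recovered compartment
    return s - new_infections, i + new_infections - new_recoveries, r + new_recoveries

def simulate(days=160, beta=0.3, gamma=0.1, i0=0.001, dt=1.0):
    """Run the model from a small seed of infections; returns the full trajectory."""
    s, i, r = 1.0 - i0, i0, 0.0
    trajectory = [(s, i, r)]
    for _ in range(int(days / dt)):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
        trajectory.append((s, i, r))
    return trajectory

if __name__ == "__main__":
    run = simulate()
    peak_day, (_, peak_infected, _) = max(enumerate(run), key=lambda t: t[1][1])
    print(f"Illustrative peak: day {peak_day}, with {peak_infected:.1%} of the population infected")
```

Everything consequential in such a model hides in its assumptions: a well-mixed population, constant transmission and recovery rates, no interventions. Change any of them and the curve, and the policy advice drawn from it, changes too.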

Current pandemic models possess various worrisome features—chiefly, how they are promoted and proliferated without proper reflection about their quality, efficacy, and reliability, and with low or no accountability.

COVID-19 has many political faces: knowledge about the virus origin seems political, the interventions are political, and the consequences are certainly political and deadly. Practical engineering also has political dimensions. Nonetheless, the code of ethics from the National Society of Professional Engineers prescribes that engineers shall “acknowledge their errors” and “advise their clients or employers when they believe a project will not be successful.” Quality, certification, licensure, training, retraining, and failure analyses are routine protocols in the pursuit of skilled accountability. Could some of these engineering standards of practice be brought to bear for disease modeling and policy advice?

Portraits of accountability

Let’s consider three forms of accountability, from an engineering perspective, that could help professionalize modeling standards.

Call the first one effects accountability. Models often start and end with their intents. What goals should they achieve, under what circumstances, and with what costs and sacrifices? The wider consequences of those models are all too often ignored. In a military context, consider when an air strike against an enemy headquarters hits a school or hospital or place of prayer instead. Unexpected civilian casualties, and the broader effects of those damages, need to be accounted for. “If the model only supports the evaluation of how often the air strike misses the headquarters, it is not sufficient in support of planning and training procedures,” the systems engineer Andreas Tolk and colleagues wrote in an analysis of such a scenario. “Unintended outcomes, side effects, and follow-on effects are normally not modeled. This is not sufficient.” That’s the prime shortcoming of intent-based models, and that’s why we need more models that are effects-based.

Imagine a clinician prescribing a medication without considering possible side effects such as interactions with other drugs. The notion of iatrogenesis—healing turning into unintentional harm—becomes relevant here. Nearly 5% of hospitalized patients in the United States, and even more outpatients, experience an adverse event from prescribed medications, at huge costs to society—approaching a trillion dollars as long ago as 2006. Medical decision models often don’t take the social costs of adverse drug effects into account, but they ought to.
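To see how much the omission can matter, consider a deliberately simplified sketch, with entirely hypothetical probabilities and costs, of the same prescribing decision evaluated with and without adverse-event costs.

```python
# A hypothetical, back-of-the-envelope comparison of an intent-only evaluation
# versus an effects-aware one. The numbers below are made-up placeholders,
# not clinical or economic estimates.

def expected_cost(treatment_cost, p_adverse_event, adverse_event_cost, include_effects):
    """Expected cost per patient, optionally counting the cost of adverse events."""
    cost = treatment_cost
    if include_effects:
        cost += p_adverse_event * adverse_event_cost  # chance of harm times its cost
    return cost

intent_only = expected_cost(100.0, 0.05, 2000.0, include_effects=False)
effects_aware = expected_cost(100.0, 0.05, 2000.0, include_effects=True)
print(f"Intent-only estimate: ${intent_only:.0f} per patient")
print(f"Effects-aware estimate: ${effects_aware:.0f} per patient")
```

The two estimates differ by the expected cost of harm; an evaluation that ignores that term will systematically flatter the intervention.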

The point here is that effects accountability doesn’t mean legal liability, just as iatrogenesis doesn’t equate to gross negligence or malpractice. Effects accountability is above finger-pointing. It’s about collective safety and cumulative learning. If medical care that focuses only on intents and not on effects can have substantial social costs, then so can infectious disease models. Models frequently serve as policy medications, but we take them without the necessary testing and warning labels.

Quality, certification, licensure, training, retraining, and failure analyses are routine protocols in the pursuit of skilled accountability. Could some of these engineering standards of practice be brought to bear for disease modeling and policy advice?

Second is explanation accountability, which places a premium on the cogency and usefulness of insights. In complex systems such as pandemics, an individual model is inevitably weaker, whereas ensembles bolster robustness. Consider a “model-of-models” built by teams at the National Institute of Standards and Technology to investigate why the World Trade Center collapsed during the 9/11 terrorist attacks. Between 2002 and 2005, engineers and analysts blended information from physical model testing, burn experiments, lab studies, statistical processing, and a plethora of visual evidence. Published in eight volumes over 10,000 pages, the analyses reported on aircraft impact, building and fire codes, structural steel failure, fire protection systems, heat release patterns, emergency responses, and people’s behavior.

The circumstances of the investigation were daunting: the original conditions of failure could not be reproduced in simulated test conditions. Instead, the investigators built a daisy chain of approximate models, which linked to one another yet could be separated for testing. The 9/11 disaster was one of the most photographed in human history, but therein lay the challenge. The investigators analyzed 7,000 photos (sifted from 10,000) and 75 hours of video clips (from a raw total of 300 hours). How to piece together all these images? How to verify a precise time sequence for the collapses? How to reconstruct, in retrospect, the fire behavior and the course of damage?

Thousands of images had no time stamps. Some were mistimed because of camera settings, and some lagged because of delays in live television broadcasts. Like making a painstaking pointillistic portrait, the investigators meticulously constructed a window-by-window profile for the four faces of each of the twin towers. From airplane strike to building collapse, fragments of information were knitted together by the models. A consistent and cohesive timeline emerged. Though the modeling process led to more questions, it also yielded the capacity to answer them. The results fed into the other modules of the overall investigation, contributing to a global simulation that unraveled the mystery of why the towers collapsed as they did. And over the longer term, the scrupulous analyses of the 9/11 failure modes led to design improvements in skyscrapers and structural steel.

Similarly, it might never be possible to predict earthquakes exactly, but a conscious and direct engagement with reality can improve how we engineer buildings to better resist earthquakes. To design against such diverse events as terrorist attacks, earthquakes, and epidemics, accountable models are essential components of disaster preparedness. The conditions, contingencies, and caveats of these models will change, but explanation accountability should be held constant.

Third is enterprise accountability. This stems from the old idea that if a pet bites someone or if a restaurant serves food that sickens people, the owners are responsible. In such cases the social good derives from personal responsibility, the philosopher Helen Nissenbaum notes. Applying that logic to the software industry, where model development is prevalent, Nissenbaum wrote in 1996 that there is a vast “vacuum in accountability” compared with settings where owners are held responsible.

It gets worse, because there’s a “denial of accountability” seen in “written license agreements that accompany almost all mass-produced consumer software which usually includes one section detailing the producers’ rights, and another negating accountability.” Even in my own experience, one of the software tools I codeveloped came with a standard all-caps disclaimer absolving the corporation of any legal responsibility for damages its use might cause. And users regularly do not read these sections at all.

There are big differences between this approach and what’s practiced in the construction industry. Using a case narrative from David McCullough’s 1972 classic The Great Bridge, Nissenbaum has discussed how engineering firms use extra precautions for safety. When the caissons for the Brooklyn Bridge were being built in the 1870s, a mysterious malady affected the workers. For the suspension to work properly, the caissons had to be sunk to the deepest bedrock, about 80 feet below the ground, and be filled with brick and concrete to provide a firm foundation for the neo-Gothic towers. Returning from the caissons, workers would report severe pain, mostly in the knees, that proved to be a medical mystery. The pain endured for hours, even days, and triggered complications such as convulsions, vomiting, dizziness, and double vision.

To design against such diverse events as terrorist attacks, earthquakes, and epidemics, accountable models are essential components of disaster preparedness.

Washington Augustus Roebling, the bridge’s chief engineer, stayed longer in the caissons to boost morale among workers, only to suffer problems himself. Over four months he trained his wife, Emily Warren Roebling, on the bridge’s details. (She became an exceptional engineer, and for the next 11 years directed construction of the bridge, which opened in 1883, while her husband was confined to a sick room, still retaining the “chief engineer” title.) Elevators were installed to replace the spiral staircases that brought workers up from the depths, but the faster ascents only exacerbated the calamity. It was later understood that “caisson disease” was decompression sickness—the “bends”—resulting from rapid changes in air pressure as workers returned from the pressurized depths to the surface. As the industrial practice improved, so did the enterprise accountability, hand in hand. Today, it is impossible even to imagine a modern bridge project that did not include a decompression chamber supplied by the builders.

Holding models accountable

Failures in engineering systems are judged unforgivingly, and rightly so. Yet similarly consequential planning models are rarely held accountable. One can build a device as a do-it-yourself hobby project, but during a health crisis, if that device is claimed to be a “ventilator” for clinical use, then a very different set of expectations, responsibilities, and rules sets in. The same sensibility should apply to models: we need to separate the drive-through concessions of research exploration from the practical consequences for public health.

An old saw holds that the best parachute packers are those who jump. Expertise, no matter how rigorous and rational, can lead to false confidence when accountability is lacking. Modelers—myself included—feel comfortable talking about how models are incomplete and uncertain abstractions of the real world. But just as with exit row seating, that comfort should come at the price of a key responsibility: being accountable not just for applying rigor but for transparently communicating to the public the assumptions and limitations that undergird even the best of models and intentions.

Practical accountability drives practical standards, as can be seen with improved and reliable construction models. Each of the three forms of accountability—effects, explanation, and enterprise—can strengthen models and approaches to modeling. This is critical, as lives and livelihoods depend on them. During COVID-19, as more people prefer contactless delivery and payments, we must remember that we pay a hefty price if the models are contactless with reality. These are times when bats can turn our world upside down, models are clashing with slogans, and lives in multiples of 9/11 have been lost. Even small accountabilities will help—and may well foster better appreciation of, and support for, models that inform public policy in uncertain times.

