Protect Information Systems to Preserve Attention
Already, content generated by artificial intelligence populates the advertisements, news, and entertainment people see every day. According to OpenAI’s cofounder Greg Brockman, the technology could fundamentally transform mass culture, making it possible, for example, to customize TV shows for individual viewers: “Imagine if you could ask your AI to make a new ending … maybe even put yourself in there as a main character.”
Brockman meant this as a sort of paradise of customization, but it’s not hard to see how such tools could also spew misinformation and other content that would disrupt civic life and undermine democracy. Bad content would drive out good, enacting “Gresham’s Law”—the principle that “bad money drives out good”—on steroids. Even top AI executives are begging for regulation, albeit at the level of individual products and their potential dangers. I think a more productive way to frame regulation is as a means of protecting the shared information environment.
In decades past, the rationale for regulating the information space pivoted on the limited availability of broadcast channels, or “channel scarcity.” Public attention can also be considered a finite resource, rationed by what information theorist Tiziana Terranova describes as “the limits inherent to the neurophysiology of perception and the social limitations to time available for consumption.” For democracy to function, people need to pay attention to matters of public import. In an information environment swamped with automatically generated content, attention becomes the scarce resource.
A world in which attention is monopolized by an endless flow of personalized entertainment might be a consumers’ paradise—but it would be a citizens’ nightmare. The tech sector has already proposed a model for dispensing with public attention, one that is far from democratic. In 2016, a team at Google envisioned a “Selfish Ledger”—a data profile that would infer individuals’ goals and then prompt aligned behavior, such as buying healthier food or locally grown produce, and seek more data to refine the customized model. Similarly, physicist César Hidalgo has suggested providing every citizen with a software agent that could infer political preferences and act on their behalf. In such a world, the algorithm would pay attention for us: no need for people to learn about the issues or even directly express their opinions.
Such proposals show how important it is for citizens to actively regulate the information commons. Preserving scarce attention is essential to recapturing an increasingly elusive sense of shared, overlapping, and common interests. The world is moving toward a state in which the data we generate can be used to further capture and channel our attention according to priorities that are neither our own nor those of civic life. Software, and whoever it serves, cannot be allowed to substitute for citizenship, and the economic might of tech giants must be balanced by citizens’ ability to access the information they need to exercise their political power.