A Growing Social Media Problem


Socrates Untenured: The Social Media Problem Is Worse Than You Think

Noam Chomsky and Edward Herman dedicated their 1988 book, Manufacturing Consent, to the late Alex Carey, an Australian sheep farmer turned lecturer whose research at the University of New South Wales focused on what he called “industrial psychology.” They credit Carey with inspiring the core idea of their book, the “propaganda model of communication,” which they argue is used by the corporate media to manipulate public opinion and manufacture consent for policy. Chomsky has also called Carey a “pioneer” of research on industrial propaganda. So, if Carey were here today, what would he say went wrong in the United States, and when and why? His answers, I suggest, would be very different from those offered by Cailin O’Connor and James Owen Weatherall in “The Social Media Problem Is Worse Than You Think” (Issues, Fall 2019).

For starters, O’Connor and Weatherall proclaim that the theory of truth that grounds their analysis and their remedies is a variety of pragmatism. But Carey was deeply suspicious of pragmatism, perceiving it to be intimately related to propaganda. He wrote: “One general point should not escape notice. There is a remarkable correspondence in attitude to truth between pragmatists and propagandists. Both justify the promotion of false beliefs wherever it is supposed that false beliefs have socially useful consequences.”

Thus the first of two key issues I think Carey would have with O’Connor and Weatherall’s analysis and proposal is its confidence that we could make decisions about matters of truth and falsity without becoming propagandists ourselves, when our understanding of these very concepts is grounded in a variety of pragmatism.


His second complaint would take aim at the assumption that social media companies, legislators, and bureaucrats—“elites”—should hold the power in a democratic society to make decisions about the management of misinformation. In a retrospective on the propaganda model 10 years on, Herman argued that it provided a good explanation of how the media covered the North American Free Trade Agreement (NAFTA). In their coverage, he argued, the “selection of ‘experts’ and opinion columns were skewed accordingly; their judgement was that the benefits of NAFTA were obvious, were agreed to by all qualified authorities, and that only demagogues and ‘special interests’ were opposed. The effort of labor to influence the outcome of the NAFTA debates was harshly criticized in both the New York Times and the Washington Post, with no comparable criticism of corporate or governmental (US and Mexican) lobbying and propaganda.”

It’s interesting to ponder, in the causal chain of events that put President Trump in office in 2016, whether homegrown managerial propaganda in the 1990s or recent Russian social media trolling was the weightier cause. Such history should serve as a warning, at least, of the long-term consequences that can plausibly emerge when those who control the means of information production and distribution form a consensus about what’s “right,” and then go about systematically shutting out or deprioritizing the voices of “ordinary” people on that basis.

However, I do think that O’Connor and Weatherall are right that algorithmic decision rules are a worthy target for intervention. That they are presently constructed to maximize profits, often at the expense of other values, such as social media users’ autonomy and the functioning of a democracy, would undoubtedly disturb Carey. His solution, however, would put the many, not the few, in control of the “mass mind.”

School of Humanities and Languages
University of New South Wales (Sydney)

Cite this Article

“A Growing Social Media Problem.” Issues in Science and Technology 36, no. 2 (Winter 2020).
