Chesley Bonestell, “The Exploration of Mars” (1953), oil on board, 14 3/8 x 28 inches, gift of William Estler, Smithsonian National Air and Space Museum. Reproduced courtesy of Bonestell LLC.

Inviting Civil Society Into the AI Conversation

Karine Gentelet’s proposals for fostering citizen contributions to the development of artificial intelligence, outlined in her essay “Get Citizens’ Input on AI Deployments” (Issues, Winter 2024), are relevant to discussions on the legal framework for AI and deserve to be examined. For my part, I’d like to broaden the discussion to ways of encouraging civil society groups to contribute to the development of AI.

One fear driving calls for more effective oversight of AI is that it will amplify existing social inequalities or create new ones. How can we prevent AI from worsening inequalities, and why not encourage it to reduce them instead?

At least in Quebec, civil society groups, notably community organizations that work with impoverished, discriminated-against, or otherwise vulnerable populations, are only marginally involved in consultations or deliberations about AI and its governance. The same holds true for individuals within these populations. But civil society groups, just like individuals, can be affected by AI, and as drivers of social innovation they can also make positive contributions to its evolution.

Even more concretely, the expertise of civil society groups can be called upon at various stages in the development of AI systems: for example, in analyzing development targets and possible biases in algorithm training data, in testing technological applications against the realities of marginalized populations, and in identifying priorities to help ensure that AI systems benefit society. In short, civil society expertise can help surface issues that those currently guiding AI development fail to raise because they are far too remote from the realities of marginalized populations.


Legal or ethical frameworks can certainly make more room for civil society expertise. But for civil society groups to play their full role, they must have the financial resources to develop that expertise and dedicate time to studying particular applications. Yet very often, these groups are asked to offer in-kind contributions before being allowed to participate in a research project!

And beyond financial challenges, some civil society groups remain outside the AI conversation altogether. For example, the national charitable organization Imagine Canada found that 61% of respondents to a survey of charities indicated that they did not understand the potential applications of AI in their sector. Respondents also highlighted the importance of, and need for, training in AI.

Legislation and regulation are often necessary to provide a framework for working in or advancing an industry or sector. However, other mechanisms—including recourse to the courts, research, journalistic investigations, and collective action by social movements or whistleblowers—can also contribute significantly to the evolution of practices and to respect for the social consensus that emerges from deliberative exercises. In the case of AI, such efforts remain fragmentary.

Executive Director

Observatoire Québécois des Inégalités

Montréal, Québec, Canada

Existing approaches to governance of artificial intelligence in the United States and beyond often fail to offer practical ways for the public to seek justice for AI and algorithmic harms. Karine Gentelet correctly observes that policymakers have prioritized developing “guardrails for anticipated threats” over redressing existing harms, especially those emanating from public-sector abuse of AI and algorithmic systems.

This dynamic plays out every day in the United States, where law enforcement agencies use AI-powered surveillance technologies to perpetuate social inequality and structural disadvantage for Black, brown, and Indigenous communities.

Police departments routinely use historically marginalized communities as testing grounds to experiment with controversial AI and big data surveillance technologies such as facial recognition, drone surveillance, and predictive policing. For example, reporters at WIRED magazine found that nearly 12 million Americans live in neighborhoods where police have installed AI audio sensors to detect gunshots and collect data on public conversations. They estimate that 70% of the people living in those surveilled neighborhoods are either Black or Hispanic.

As Gentelet notes, existing AI policy frameworks in the United States have largely failed to create accountability mechanisms that address real-world harms such as mass surveillance. In fact, recent federal AI directives, including Executive Order 14110, have actually encouraged law enforcement agencies “to advance the presence of relevant technical experts and expertise [such] as machine learning engineers, software and infrastructure engineering, data privacy experts [and] data scientists.” Rather than redress existing harms, federal policymakers are laying the groundwork for future injustice.


Without AI accountability mechanisms, advocates have turned to courts and other traditional forums for redress. For example, community leaders in Baltimore brought a successful federal lawsuit to end a controversial police drone surveillance program that recorded the movements of nearly 90% of the city’s 585,000 residents—a majority of whom identify as Black. Similarly, a coalition of advocates working in Pasco County, Florida, successfully petitioned the US Department of Justice to terminate federal grant funding for a local predictive policing program while holding school leaders accountable for sharing sensitive student data with police.

While both efforts successfully disrupted harmful algorithmic practices, they failed to achieve what Gentelet describes as “rightful reparations.” Existing law fails to provide the structural redress necessary for AI-scaled harms. Scholars such as Rashida Richardson of the Northeastern University School of Law have outlined what more expansive approaches could look like, including transformative justice and holistic restitution that address social and historical conditions.

The United States’ approach to AI governance desperately needs a reset that prioritizes existing harms rather than chasing after speculative ones. Directly impacted communities have insights essential to crafting just AI legal and policy frameworks. The wisdom of the civil rights icon Ella Baker remains steadfast in the age of AI: “oppressed people, whatever their level of formal education, have the ability to understand and interpret the world around them, to see the world for what it is, and move to transform it.”

Senior Policy Counsel & Just Tech Fellow

Center for Law and Social Policy


“Inviting Civil Society Into the AI Conversation.” Issues in Science and Technology 40, no. 3 (Spring 2024).
