How to Procure AI Systems That Respect Rights

In 2002, my colleague Steve Schooner published a seminal paper that enumerated the numerous goals and constraints underpinning government procurement systems: competition, integrity, transparency, efficiency, customer satisfaction, best value, wealth distribution, risk avoidance, and uniformity. Despite evolving nomenclature, much of the list remains relevant and reflects foundational principles for understanding government procurement systems.

Procurement specialists periodically discuss revising this list in light of evolving procurement systems and a changing global landscape. For example, many of us might agree that sustainability should be deemed a fundamental goal of a procurement system to reflect the increasing role of global government purchasing decisions in mitigating the harms of climate change.

In reading “Don’t Let Governments Buy AI Systems That Ignore Human Rights” by Merve Hickok and Evanna Hu (Issues, Spring 2024), I sense that they are advocating for the same kind of inclusion—to make human rights a foundational principle in modern government procurement systems. Taxpayer dollars should promote human rights and be used to make purchases with an eye toward processes and vendors that are transparent, ethical, unbiased, and fair. In theory, this sounds wonderful. But in practice … it’s not so simple.

Hickok and Hu offer a framework, including a series of requirements, designed to ensure human rights are considered in the purchase of AI. Unsurprisingly, much of the responsibility for implementing these requirements falls to contracting officers—a dwindling group, long overworked and under-resourced yet subject to ever-increasing requirements and compliance obligations that complicate procurement decisionmaking. A framework that imposes additional burdens on these individuals is doomed to fail, despite the best intentions.

The authors’ suggestions also would inadvertently erect substantial barriers to entry, dissuading new, innovative, and small companies from engaging in the federal marketplace. The industrial base has been shrinking for decades, and burdensome requirements not only cause existing contractors to forgo opportunities, but also deter new entrants from seeking to do business with the federal government.
Hickok and Hu brush aside these concerns without citing data to bolster their assumptions. Experience cautions against this cavalier approach. These concerns are real and present significant challenges to the authors’ aspirations.

Still, I sympathize with the authors, who are clearly and understandably frustrated with the apparent ossification of practices and the glacial pace of innovation. Which leads me to a simple, effective, yet oft-ignored suggestion: rather than railing against the existing procurement regime, talk to the procurement community about your concerns. Publish articles in industry publications. Attend and speak at the leading government procurement conferences. Develop a community of practice. Meet with procurement professionals and policymakers to help them understand the downstream consequences of buying AI without fully understanding its potential to undermine human rights. Most importantly, explain how their extensive knowledge and experience can transform not only which AI systems they procure, but how they buy them.

This small, modest step may not immediately generate the same buzz as calls for sweeping regulatory reform. But engaging with the primary stakeholders is the most effective way to create sustainable, long-term gains.

Associate Dean for Government Procurement Law Studies

The George Washington University Law School

Merve Hickok and Evanna Hu stage several important interventions in artificial intelligence antidiscrimination law and policy. Chiefly, they pose the question of whether and how it might be possible to enforce AI human rights through government procurement protocols. Through their careful research and analysis, they recommend a human rights-centered process for procurement. They conclude that the Office of Management and Budget (OMB) guidance on the federal government’s procurement and use of AI can effectively reflect these types of oversight principles to help combat discrimination in AI systems.

The authors invite a critical conversation in AI and the law: the line between hard law, such as statutory frameworks with enforceable consequences, and soft law, such as policies, rules, procedures, and other executive and agency actions that can be structured within the administrative state. Federal agencies, as the authors note, are now investigating how best to comply with the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (E.O. 14110), released by the Biden administration on October 30, 2023. Following the order’s directives, the OMB Policy to Advance Governance, Innovation, and Risk Management in Federal Agencies’ Use of Artificial Intelligence, published on March 28, 2024, directs federal agencies to focus on balancing AI risk mitigation with AI innovation and economic growth goals.

Both E.O. 14110 and the OMB Policy reflect soft law approaches to AI governance. What counts as hard law and what counts as soft law in the field of AI are moving targets. First, there is a distinction between human rights law and human rights as reflected in fundamental fairness principles. Similarly, there is a distinction between civil rights law and what is broadly understood to be the government’s pursuit of antidiscrimination objectives. The thesis that Hickok and Hu advance involves the latter in both instances: the need for the government to commit to fairness principles and antidiscrimination objectives under a rights-based framework.
AI human rights can be viewed as encompassing or intersecting with AI civil rights. The call to address antidiscrimination goals with government procurement protocols is critical. Past lessons on how to approach this are instructive. The Office of Federal Contract Compliance Programs (OFCCP) offers a historical perspective on how a federal agency can shape civil rights outcomes through federal procurement and contracting policies. OFCCP enforces several authorities to ensure equal employment opportunities, one of the cornerstones of the Civil Rights Act of 1964. OFCCP’s enforcement jurisdiction includes Executive Order 11246; the Rehabilitation Act of 1973, Section 503; and the Vietnam Era Veterans’ Readjustment Assistance Act of 1974. OFCCP, in other words, enforces a combination of soft and hard laws to execute civil rights goals through procurement. OFCCP is now engaged in multiple efforts to shape procurement guidance to mitigate AI discriminatory harms.

Finally, Senators Gary Peters (D-MI) and Thom Tillis (R-NC) recently introduced a bipartisan proposal to provide greater oversight of potential AI harms through the procurement process. The proposed Promoting Responsible Evaluation and Procurement to Advance Readiness for Enterprise-wide Deployment (PREPARED) for AI Act mandates several evaluative protocols before the federal government procures and deploys AI systems, underscoring the need to test AI systems, one of the key recommendations advanced by Hickok and Hu. Preempting AI discrimination through federal government procurement protocols demands both soft law, such as E.O. 14110 and the OMB Policy, and hard law, such as the bipartisan bill proposed by Senators Peters and Tillis.

Professor of Law

Director, Digital Democracy Lab

William & Mary Law School

Merve Hickok and Evanna Hu propose a partial regulatory patch for some artificial intelligence applications via government procurement policies and procedures. The reforms may be effective in the short term in specific environments. But a broader perspective, which the AI regulatory wave generally lacks, raises some questions about widespread application.

This is not to be wondered at, for AI raises considerations that make it especially difficult for society to respond effectively. Eight problems in particular stand out:

  1. The definition problem. Critical concepts such as “intelligence,” “agency,” “free will,” “cognition,” “consciousness,” and even “artificial intelligence” are not well understood, span different technologies from neural networks to rule-based expert systems, and have no clear and accepted definitions.
  2. The cognitive technology problem. AI is part of a cognitive ecosystem that increasingly replicates, enhances, and integrates human cognition and psychology into metacognitive structures at scales from the relatively simple (e.g., Tesla and Google Maps) to the highly complex (e.g., weaponized narratives and China’s social credit system). It is thus uniquely challenging in its implications for everything from education to artistic creation to crime to warfare to geopolitical power.
  3. The cycle time problem. Today’s regulatory and legal frameworks lack any capability to match the rate at which AI is evolving. In this regard, Hickok and Hu’s suggestion to layer further process onto bureaucratic systems that are already sclerotic, such as public procurement, would only exacerbate the decoupling of regulatory and technological cycle times.
  4. The knowledge problem. No one today has any idea of the myriad ways in which AI technologies are currently being used across global societies. Major innovators, including private firms, military and security institutions, and criminal enterprises, are not visible to regulators. Moreover, widely available tool sets have democratized AI in ways that simply couldn’t happen with older technologies.
  5. The scope of effective regulation problem. Potent technologies such as AI are most rapidly adopted by fringe elements of the global economy, especially the pornography industry and criminal enterprises. Such entities pay no attention to regulation anyway.
  6. The inertia problem. Laws and regulations once in place are difficult to modify or sunset. They are thus particularly inappropriate when the subject of their action is in its very early stages of evolution, and changing rapidly and unpredictably.
  7. The cyberspace governance problem. International agreements are unlikely because major players manage AI differently. For example, the United States relies primarily on private firms, China on the People’s Liberation Army, and Russia on criminal networks.
  8. The existential competition problem. AI is truly a transformative technology. Both governments and industry know they are in a “build it before your competitors, or die” environment—and thus will not be limited by heavy-handed regulation.
This does not mean that society is powerless. What is required is not more regulation on an already failing base, but rather new mechanisms to gather and update information on AI use across all domains; enhance adaptability and agility of institutions rather than creating new procedural hurdles (for example, eschewing regulations in favor of “soft law” alternatives); and encourage creativity in responding to AI opportunities and challenges.

More specifically, two steps can be taken even in this chaotic environment. First, a broad informal network of AI observers should be tasked with monitoring the global AI landscape in near real time and reporting on a regular basis without any responsibility to recommend policies or actions. Second, even if broad regulatory initiatives are dysfunctional, there will undoubtedly be specific issues and abuses that can be addressed. Even here, however, care should be taken to remember the unique challenges posed by AI technologies, and to try to develop and retain agility, flexibility, and adaptability whenever possible.

President’s Professor of Engineering

Lincoln Professor of Engineering and Ethics

Arizona State University

Cite this Article

“How to Procure AI Systems That Respect Rights.” Issues in Science and Technology 40, no. 4 (Summer 2024).