To Ensure More Trustworthy AI, Use an Old Government Tool: Public Procurement
When the government uses AI tools for decision-making, it needs to ensure that they are trustworthy and fair. The Biden administration and Congress should work to update procurement standards for technology.
When the powerful new technologies known collectively as artificial intelligence are combined with the reach of the US government, the results can be both unexpected and alarmingly bad. Consider recent controversies, including discriminatory algorithms that determined which defendants posed higher crime risks, law enforcement’s use of inaccurate facial recognition technology, and health care algorithms that contain racial biases. AI tools such as these pose serious risks when used for decision-making purposes. The federal government needs to change its procurement process to avoid buying AI tools from contractors and deploying them without key safeguards, regulations, or oversight regimes in place. The Biden administration and Congress should work to update procurement standards for technology acquisitions as a mechanism to ensure that, to the extent the government uses AI tools, those tools are trustworthy.
Employing the federal procurement process as a lever could have widespread impact on AI tools. The tech community, civil society, and government largely agree on the need for fair, accountable, and transparent AI tools, but their opinions often differ on how to operationalize these principles. Most proposals from academics and civil society include ensuring some combination of transparency requirements, bias assessments and impact statements, and regular independent audits. Some advocates maintain that the Federal Trade Commission already has adequate authority to conduct robust enforcement and pursue rulemaking related to algorithms under the Federal Trade Commission Act’s unfair and deceptive practices provisions, and they call for ramping up these efforts to regulate AI. Federal agencies could and should also enforce existing discrimination laws even where that discrimination is caused by algorithms, and use their rulemaking authority to clarify and apply those laws to algorithmic systems if need be. Another approach has focused on antitrust enforcement, with some policymakers and advocates arguing that harmful data practices are rooted in tech companies’ outsized market power.
To reliably ensure that the technologies people interact with are trustworthy and fair, the nation needs a robust AI regulation strategy that involves a combination of these checks on AI, with an emphasis on impact assessments and rigorous algorithmic auditing. The federal procurement process provides the perfect opportunity to implement such controls. That process is highly formalized, and agencies are expected to carry it out efficiently and with rigorous standards of conduct, ensuring that government purchases meet high standards and serve the public’s interest; after all, these are taxpayer dollars being spent.
It is only appropriate that, as the government increases its reliance on AI tools, it take the lead in ensuring that it deploys only trustworthy AI that is subject to strong transparency and accountability measures. Where the federal government has its own capable personnel, it can and does develop AI tools internally. However, in many cases the government lacks the adequately trained personnel needed to do so, and agencies often turn to external vendors; stronger procurement practices are needed to govern such externally acquired tools.
In addition, such reforms would ensure that the hundreds of billions of dollars spent each year on contracts for goods and services are allotted wisely and reinforce socially appropriate AI practices. In fiscal year 2019, the federal government spent over $20.7 billion on information technologies, computer software, and engineering-related services, including AI-powered technologies. Some of the biggest actors in the tech industry fulfill these contracts, including Microsoft, Amazon, and even Palantir, the last of which has become infamous for encoding discrimination against vulnerable groups via its algorithms. For example, police in many jurisdictions use Amazon’s facial recognition technology, Rekognition, which has been shown to contain bias against communities of color, and Immigration and Customs Enforcement deploys Palantir’s data mining systems to amplify the agency’s ability to identify undocumented immigrants for deportation. In this way, the federal government partially shapes the market. Thus, changes to procurement practices will not only prevent government acquisition of inappropriate AI; they may also influence the largest players in the market to adopt better practices.
Current procurement strategies are not up to the task of vetting sophisticated technology. The procurement process begins when a government agency identifies the need for a good or service. The agency then issues a request for proposal (RFP) and seeks responses from companies until a closing date, at which time it may enter into a contract with the winning bidder. Companies that seek to win an agency’s contract must meet the basic quality standards required by law, in addition to the context-specific safety and performance requirements indicated in the RFP. Not only do these requirements fail to specify adequate standards for AI; until very recently, they generally required that contracts be given to the lowest qualifying bidders. Until December 2018, this was true even in the case of information technologies and knowledge-based contracts (a category that would include AI tools). Obviously, with emerging technologies such as AI, favoring the cheapest technologies over those that are more accurate, rigorous, and transparent may not be in taxpayers’ best interests, and may even cause dangerous and unjust outcomes.
Although the public procurement process could be used as a critical regulatory lever, researchers and policymakers have overlooked the transformative potential of procurement practices, especially when it comes to new technologies like AI.
A recent report commissioned by the Administrative Conference of the United States, Government by Algorithm, brought these issues to light by looking at the use of AI across the federal government. The report found that 53% of AI in use was developed in-house by agency technologists, and the remainder came from contractors. It concluded that AI, including AI that the government currently uses, poses deep accountability challenges. For example, a Customs and Border Protection biometric recognition system acquired from an external vendor demonstrated failure rates that the agency could not explain—because the company held that the underlying technology was proprietary. To mitigate such harms, the report recommended that federal agencies seek and embed more in-house personnel trained in AI to vet systems from contractors, while also calling on the agencies to do more on their own to develop AI that is better aligned with government policies, more tailored to meet agency needs, and more accountable. But the government struggles to compete for skilled AI professionals with the private and nonprofit sectors, where salaries for specialists are very high.
The procurement process is a crucial opportunity to set clear standards for trustworthy AI. Public authorities have the power to incentivize firms that sell AI-powered services to reform their products to meet strong fairness, accountability, and transparency requirements needed to protect the fundamental rights of citizens. To do this, they can either add standards in the RFPs or embed them in the procurement codes. Requirements specifying adequate auditing of AI models, disclosures of the data used to train the algorithms, and statements of impact on individual data privacy are just a few examples of general standards that could be highly effective in ensuring the government is using trustworthy AI.
In the global competition to develop smarter and stronger AI, the United States must recognize what Europe already seems to: promoting ethical AI is just as important as innovation, and procurement is a good way to achieve this goal. In 2019, a report by the European Commission’s AI High Level Expert Group recommended the strategic use of public procurement to fund innovation and develop trustworthy AI. The group called for the introduction of clear eligibility and selection criteria in the procurement rules and processes of European Union institutions; these criteria should reflect the need for AI systems to be lawful, ethical, robust, and protective of people’s personal data, privacy, and autonomy. In addition, the City of Amsterdam has drafted “standard clauses for municipalities for fair use of algorithmic systems” that seek to embed AI ethics principles into standard contract clauses. These standard clauses cover critical tenets of trustworthy AI, including data quality, transparency of the AI system, and risk management strategies.
The federal government can also create synergies by working in tandem with cities and states within the United States. In New York City, the AI Now Institute has issued a series of recommendations to provide meaningful oversight opportunities to the city’s acquisition and use of AI. Specifically, AI Now called for procurement contracts to require that vendors provide documentation on their automated decision systems’ training and input data, descriptions of the performance and other high-level characteristics of their models, and easy-to-understand, nontechnical explanations of how their models make determinations.
The federal government has already taken some steps in the direction of better AI. In December 2020, former President Trump signed an executive order setting out nine principles for the design, development, acquisition, and use of AI in government in an attempt to “foster public trust and confidence in the use of AI, and ensure that the use of AI protects privacy, civil rights, and civil liberties.” The order directs agencies to prepare inventories of AI use cases (a task that the Administrative Conference of the United States already largely completed in its Government by Algorithm report) and directs the White House to develop a plan for policy guidance for agency use of AI tools. The order lays out that AI use must be: (1) lawful; (2) purposeful and performance driven; (3) accurate, reliable, and effective; (4) safe, secure, and resilient; (5) understandable; (6) responsible and traceable; (7) regularly monitored; (8) transparent; and (9) accountable. Again, while these are laudable goals, without clear and specific benchmarks it will be difficult to meaningfully achieve them. One way to set such benchmarks is through procurement practices.
The Biden administration should work toward promulgating objective, concrete standards for government use of AI, paying careful attention to procurement practices. In a promising sign, the Biden transition team embedded technologists on each agency review team, indicating that the new administration understands the growing and cross-cutting role that technology plays in governance and plans to place personnel with tech expertise throughout the government. As government entities continue to adopt AI-powered technologies to improve the efficiency of their services, procurement standards are a powerful demand-side policy instrument that should be harnessed to encourage responsible AI innovation for coming generations.