The US government has introduced new guidelines for artificial intelligence (AI) providers as tensions rise with the AI start-up Anthropic. According to draft guidance from the US General Services Administration (GSA), companies vying for federal contracts must permit the government to use their AI models for “any lawful purpose.” The guidance is intended to clarify usage rights amid ongoing disputes over how federal agencies may deploy AI.
The proposed rules, reported by the Financial Times, specifically target perceived biases in AI outputs. Under the new framework, contractors are required to provide “a neutral, non-partisan tool that does not manipulate responses in favour of ideological dogmas such as diversity, equity, and inclusion.” This directive reportedly follows an executive order issued by former President Donald Trump, which criticized what he termed “woke” AI models.
In addition, vendors will have to disclose whether their models have been altered to adhere to foreign regulations, such as the European Union’s Digital Services Act. The GSA has indicated plans to consult with industry stakeholders before finalizing these guidelines.
Conflict with Anthropic
The new measures come amid a dispute between the US Department of War (DoW) and Anthropic. Tensions escalated last week when the start-up declined to grant the Pentagon unrestricted rights to deploy its AI models, citing concerns over domestic surveillance and lethal autonomous weapons. In response, the Pentagon cancelled a $200 million contract with the company.
DoW Secretary Pete Hegseth publicly criticized Anthropic, stating on social media platform X that the company had demonstrated “a master class in arrogance and betrayal.” He further alleged that Anthropic’s “true objective” was to gain veto power over military operational decisions.
Following this clash, Trump directed the cancellation of all existing contracts with Anthropic and initiated a six-month phase-out across defense and other federal agencies, intensifying the scrutiny surrounding AI technologies used in national defense.
OpenAI’s Rapid Response
Rival developer OpenAI moved quickly, securing a deal to deploy its AI models on the Pentagon’s classified networks shortly after the fallout with Anthropic. CEO Sam Altman emphasized that the agreement would include provisions ensuring the technology is not used for “intentionally” surveilling US citizens. He stated that the deal incorporates “red lines” against mass surveillance and the use of autonomous weapons.
The developments have prompted significant reactions within the industry. Caitlin Kalinowski, OpenAI’s head of hardware, announced her resignation on March 7, 2024. In her statement, she acknowledged AI’s critical role in national security but expressed concerns about surveillance practices lacking judicial oversight and the implications of lethal autonomy. She stressed that decisions of this magnitude should not be rushed, pointing to governance questions that require thorough deliberation.
As the US government moves forward with these rules, the stakes for AI companies are considerable. The evolving landscape of AI governance is set to shape not only federal contracts but also the broader debate over the ethical use of AI in society.
