U.S. Military and Anthropic Clash Over AI Use and Oversight

A significant confrontation has emerged between the United States Department of Defense (DOD) and AI company Anthropic concerning the military use of artificial intelligence. The dispute raises critical questions about who should establish the parameters for employing AI in military contexts: the executive branch, private companies, or Congress through democratic processes.

The conflict escalated when Defense Secretary Pete Hegseth reportedly gave Dario Amodei, the CEO of Anthropic, a deadline to grant the DOD unrestricted access to the company’s AI systems. When Anthropic refused, the DOD classified the company as a supply chain risk and instructed federal agencies to phase out its technology, a move that sharply heightened tensions between the two parties.

Anthropic has drawn a firm line against two specific uses of its technology: domestic surveillance of U.S. citizens and fully autonomous military targeting. Hegseth has criticized what he calls “ideological constraints” in commercial AI systems, arguing that it is the government’s role to determine lawful military applications, not that of private vendors. He emphasized this stance in a recent speech at SpaceX, stating, “We will not employ AI models that won’t allow you to fight wars.”

Procurement Disagreement and National Security

At its core, this dispute resembles a routine procurement disagreement in a market economy. The U.S. military identifies products and services it wishes to acquire, while companies decide what they are willing to sell and under what conditions. This dynamic is not inherently flawed: if a product fails to meet operational needs, alternative vendors are available; conversely, if a company deems certain uses of its technology unsafe or incompatible with its values, it can decline to sell it.

A coalition of companies recently signed an open letter pledging not to weaponize general-purpose robots, reflecting the kind of balance a free market allows. The situation becomes more complicated, however, when the DOD designates Anthropic a “supply chain risk.” That classification is intended to address genuine national security threats, not to penalize an American company for declining to meet government demands. Hegseth’s declaration that “effective immediately, no contractor, supplier, or partner that does business with the U.S. military may conduct any commercial activity with Anthropic” marks a significant escalation: what began as a procurement disagreement has become a coercive tactic.

AI Governance and Civil Liberties

Two substantive issues underpin Anthropic’s resistance. The first concerns the company’s opposition to domestic surveillance of U.S. citizens, a matter deeply rooted in civil liberties. The U.S. government operates under constitutional and statutory restrictions regarding the monitoring of its citizens. By refusing to facilitate domestic surveillance, Anthropic aligns itself with established democratic principles, not new ones.

While the DOD has not claimed an intention to use the technology for unlawful surveillance, it argues against procuring models whose built-in restrictions might preempt lawful government actions. In essence, the DOD believes compliance with the law should be enforced by the government, not embedded in a vendor’s code. Anthropic, for its part, has invested in training its systems to decline tasks associated with harm or high risk, including surveillance. The disagreement thus turns on institutional control over constraints: whether they should come from government oversight or be built in by the developer.

The second issue, the rejection of fully autonomous military targeting, introduces additional complexity. The DOD’s own policy on autonomy in weapon systems, Directive 3000.09, requires that such systems allow appropriate levels of human judgment over the use of force, and discussions about autonomy in weapons continue within military and international forums. Private companies may reasonably determine that their technologies are not yet reliable enough for certain battlefield applications. Meanwhile, the military may find such capabilities essential for deterrence and operational effectiveness.

This disagreement highlights a crucial point: the boundaries for military AI use should not be determined through ad hoc negotiations between a Cabinet secretary and a corporate CEO, nor should they be driven by which side can exert more contractual influence. If the U.S. government considers specific AI capabilities vital for national defense, it should articulate that position transparently, engage in debates within Congress, and reflect it in doctrine and statutory frameworks. Clear rules should be established, benefiting both companies and the public.

The U.S. distinguishes itself from authoritarian regimes by emphasizing governance within transparent democratic institutions. This distinction diminishes if AI governance is primarily decided through executive ultimatums behind closed doors. Additionally, if companies perceive that engaging in federal markets necessitates relinquishing all deployment conditions, some may opt to exit these markets. Others might weaken or remove model safeguards to qualify for government contracts, neither of which enhances U.S. technological leadership.

While the DOD’s concerns about “ideological constraints” are legitimate, rejecting arbitrary restrictions is not the same as dismissing a vendor’s risk management in its deployment conditions. In high-risk sectors such as aerospace and cybersecurity, contractors routinely build safety standards and operational limitations into their commercial terms as part of responsible practice. AI should not be treated as an exception to this norm.

Moreover, built-in safeguards need not be viewed as barriers to military effectiveness. In many high-risk industries, layered oversight through internal controls, technical fail-safes, auditing mechanisms, and legal reviews work synergistically to ensure responsible use. Technical constraints can serve as an additional safeguard, mitigating the risks of misuse, error, or unintended escalation.

While the DOD must retain ultimate authority over lawful use, it can also accept that certain design-level guardrails enhance its oversight rather than undermine it. In some cases, redundant safety systems bolster operational integrity.

Congress, meanwhile, has been largely absent from this crucial discussion. A company’s unilateral ethical commitments can never replace public policy, particularly when technologies carry national security implications. Ultimately, decisions regarding surveillance powers, autonomous weapons, and rules of engagement must reside within democratic institutions.

This scenario presents a pivotal moment in AI governance. AI systems are now powerful enough to impact intelligence analysis, logistics, cyber operations, and potentially battlefield decision-making. As such, they are too significant to be governed solely by corporate policies or executive discretion.

The solution requires strengthening the institutions mediating these discussions. Congress should clarify statutory boundaries for military AI use and assess existing oversight mechanisms. The DOD should provide detailed doctrine for human control, auditing, and accountability. Civil society and industry should engage in structured consultation processes instead of episodic confrontations.

If AI guardrails can be discarded under contractual pressure, they are not guardrails but bargaining chips. Grounded in law, however, they establish stable expectations for companies, the military, and the public alike. Democratic constraints on military AI should be enshrined in statute and doctrine, not relegated to private contracts.

This article is adapted from Tech Policy Press and is published with permission.