President Donald Trump ordered all federal agencies to cease using the AI products of Anthropic, designating the company a supply-chain risk and giving agencies a six-month phase-out window.
The move followed a public dispute between Anthropic and the Department of Defense over vendor-imposed restrictions. Anthropic says it will not allow its model, Claude, to be used for mass domestic surveillance or fully autonomous weapons without human oversight. Pentagon officials, led by Defense Secretary Pete Hegseth, argued the Department should not be constrained by vendor rules and warned partners to stop using Anthropic’s services. The administration said the classification will bar contractors and partners from commercial work with Anthropic, and Anthropic has said it will challenge that designation in court.
In the AI arms race, one company’s ethical stand can become another company’s strategic opportunity.
At the same time, OpenAI announced a separate agreement with the Department of Defense that permits the use of its models inside the DoD’s classified network. Sam Altman said the deal clears OpenAI’s models for classified work. Reports say the OpenAI agreement includes safeguards similar to those Anthropic sought, a point that has intensified competition and political scrutiny in the sector.
The government’s action and OpenAI’s deal provoked a public response. A movement dubbed “QuitGPT” gathered momentum after news of OpenAI’s Pentagon agreement, with many users canceling ChatGPT subscriptions and urging others to switch to alternatives. Despite the federal ban on Anthropic, consumer interest in Claude surged: the app climbed from just outside the top 100 in the U.S. App Store at the end of January to a top-three free app position by late February. ChatGPT remained the top app, and Google’s Gemini ranked third.
Legal and procurement consequences are unfolding
The dispute has immediate procurement and legal stakes. The Pentagon has said it does not currently use AI for mass surveillance or fully autonomous weapons and has no plans to do so. Still, officials have signaled they may invoke tools such as the Defense Production Act to enforce compliance. Companies that provide classified services, including partners named in reporting such as Palantir, could face disruption if the supply-chain designation stands. Anthropic has announced plans to contest the classification in court, framing the case as a test of whether vendors can set usage guardrails for government customers.
The combination of a federal ban, a new OpenAI-DoD agreement, a rising consumer backlash and an impending legal challenge will shape how U.S. government agencies and private AI vendors negotiate limits on military uses of advanced models. The outcome of Anthropic’s court challenge and the implementation of OpenAI’s access protocol will determine whether vendor-imposed safeguards survive in defense contracts and how commercial AI firms engage with national security customers.