A US court of appeals in Washington, DC, ruled on Wednesday that the Pentagon's supply-chain-risk designation against Anthropic remains in effect for now, finding the company “has not satisfied the stringent requirements” to temporarily block it. The decision contradicts a preliminary ruling issued last month by a lower court in San Francisco, and it was not immediately apparent how the conflicting rulings would be settled.
Anthropic has been sanctioned under two laws that govern supply chains. Both have the same effect, but each court, in San Francisco and in Washington, DC, is ruling on only one of them. Anthropic says it is the first US firm to be sanctioned under the two laws, which are typically used to punish foreign businesses that pose a national security risk.
“Granting a stay would force the United States military to prolong its dealings with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict,” the three-judge appellate panel wrote on Wednesday, calling it an unusual case. The panel acknowledged that Anthropic might suffer financial harm from the continuing designation, but said it was unwilling to risk “a substantial judicial imposition on military operations” or to “lightly override” military judgments about national security.
The San Francisco judge found that the Department of Defense likely acted in bad faith against Anthropic, driven by frustration with the restrictions the AI firm proposed on its technology and with the company's public criticism. Last week the judge lifted the supply-chain label, and the Trump administration complied, restoring access to Anthropic's AI within the Pentagon and across the entire federal government.
Danielle Cohen, Anthropic's spokesperson, says the company is grateful that the Washington, DC, court “recognized these issues need to be resolved quickly” and that it remains confident “the courts will ultimately agree that these supply chain designations were unlawful.”
The Department of Defense didn't immediately respond to a request for comment.
The cases test how much control the executive branch can exert over technology companies. Anthropic's battle with the Trump administration is also playing out against the backdrop of the Pentagon's AI-assisted war against Iran. The company claims it was illegally penalized for suggesting that its AI tool Claude lacked the accuracy necessary for some sensitive operations, such as carrying out drone strikes without human supervision.
A number of experts on government contracts, corporate rights, and related issues told WIRED that Anthropic may have a strong case, but that courts sometimes refuse to overrule the White House on matters of national security. AI researchers have said the Pentagon's action against Anthropic “chills professional debate” about the performance of AI systems.
In court, Anthropic claims it has lost business due to the designation, which government lawyers say prohibits the Pentagon and its contractors from using Anthropic's Claude AI in military projects. As long as Trump is in office, Anthropic could struggle to regain its significant position in the federal government.
Final decisions in the two cases could still be several months away. The Washington court will hear oral arguments on May 19.
Few details are available about the Department of Defense's use of Claude or about how it is transitioning staff to alternative AI tools from Google DeepMind, OpenAI, or others. The Department of War (the name the Trump administration uses for the Department of Defense) says it has taken measures to prevent Anthropic from deliberately undermining its AI during the transition.