Anthropic to challenge Pentagon’s ‘supply chain risk’ label in court
- In Reports
- 01:25 PM, Mar 06, 2026
- Myind Staff
Artificial intelligence company Anthropic has said it will go to court to challenge the United States Department of War's decision to label the firm a "supply chain risk" to America's national security. The company confirmed that it had received a formal letter from the Department notifying it of the designation and said it believes the action is not legally sound.
Anthropic CEO Dario Amodei said the company had been informed through an official communication that it had been categorised as a supply chain risk. He said the firm strongly disagrees with the decision and plans to contest it legally.
“Anthropic received a letter from the Department of War confirming that we have been designated as a supply chain risk to America’s national security,” Amodei said in a statement. “As we wrote on Friday, we do not believe this action is legally sound, and we see no choice but to challenge it in court.”
Amodei explained that the Department's letter has a narrow scope because the law under which the action was taken is itself limited in its application. According to him, the statute cited in the letter is meant to protect government supply chains rather than punish companies that provide services or technology.
“The Department’s letter has a narrow scope, and this is because the relevant statute (10 USC 3252) is narrow, too,” Amodei said. “It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain.”
He further clarified that even companies that work directly with the Department of War would not face restrictions on their broader business relationships with Anthropic. According to Amodei, the designation cannot limit the use of Anthropic’s AI model, Claude, in situations that are not directly connected to specific defence contracts.
“Even for Department of War contractors, the supply chain risk designation doesn't (and can't) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts,” he said.
Amodei also addressed concerns about the company's stance on certain military uses of artificial intelligence. He said Anthropic's objections focus on two areas: mass domestic surveillance and fully autonomous weapons. Despite the disagreement, he noted that the company has been in discussion with the Department and that the talks have been constructive in recent days.
“I would like to reiterate that we had been having productive conversations with the Department of War over the last several days, both about ways we could serve the Department that adhere to our two narrow exceptions, and ways for us to ensure a smooth transition if that is not possible,” he said.
Amodei highlighted that Anthropic has already worked closely with the Department of War on several projects aimed at supporting military operations. He said the company is proud of the collaboration and the technological contributions it has provided.
“We are very proud of the work we have done together with the Department, supporting frontline war fighters with applications such as intelligence analysis, modelling and simulation, operational planning, cyber operations, and more,” Amodei said.
At the same time, he emphasised that Anthropic does not believe private companies should be directly involved in operational decision-making during military actions.
“As we stated last Friday, we do not believe, and have never believed, that it is the role of Anthropic or any private company to be involved in operational decision-making—that is the role of the military,” Amodei said. “Our only concerns have been our exceptions on fully autonomous weapons and mass domestic surveillance, which relate to high-level usage areas, and not operational decision-making.”
Amodei also stressed that the company’s immediate priority is to ensure that national security operations are not affected by the ongoing dispute. He said Anthropic is prepared to continue supporting the Department with its AI technology while the situation is resolved.
“Our most important priority right now is making sure that our war fighters and national security experts are not deprived of important tools in the middle of major combat operations,” he said. “Anthropic will provide our models to the Department of War and national security community, at nominal cost and with continuing support from our engineers, for as long as is necessary to make that transition, and for as long as we are permitted to do so.”
He added that despite the disagreement, the company shares common goals with the Department of War when it comes to strengthening national security.
“Anthropic has much more in common with the Department of War than we have differences,” Amodei said. “We both are committed to advancing US national security and defending the American people, and agree on the urgency of applying AI across the government. All our future decisions will flow from that shared premise.”
Meanwhile, reports indicate that Anthropic’s AI tools are currently being used during ongoing United States operations in the Middle East. According to a Reuters report, the Pentagon used artificial intelligence services from Anthropic, including its Claude tools, during strikes against Iran as part of Operation Epic Fury.