Anthropic refuses Pentagon’s demand to remove AI safeguards on Claude
- In Reports
- 03:07 PM, Feb 27, 2026
- Myind Staff
Anthropic has declined a request from the US Department of Defence to remove safety restrictions from its artificial intelligence system, Claude, saying it will not allow the technology to be used for mass domestic surveillance or fully autonomous weapons. The company’s CEO, Dario Amodei, said Anthropic could not agree to the Pentagon’s demand that AI contractors permit “any lawful use” of their systems, as some potential uses go beyond the company’s ethical and technical limits.
In a detailed public statement, Amodei said, “Regardless, these threats do not change our position: we cannot in good conscience accede to their request,” making clear that the company would not back down despite pressure from defence officials.
The disagreement centres on the Defence Department’s requirement that companies supplying AI systems must agree to unrestricted lawful use, including uses that Anthropic believes cross serious boundaries. Amodei stressed that Anthropic does support the use of artificial intelligence to protect national security.
He stated, “I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.” He emphasised that the company is committed to helping the United States and its allies remain secure, but not at the cost of democratic principles or safety.
Anthropic has already played a role in US government operations. According to the company, it was the first frontier AI company to deploy advanced models inside classified US government networks and at National Laboratories. Claude is currently used by defence and intelligence agencies for a range of purposes, including intelligence analysis, operational planning, cyber operations, and modelling. Amodei pointed out that the company has also taken steps to protect US national interests in other ways. He said Anthropic cut off access to firms linked to the Chinese Communist Party, even though doing so cost the company several hundred million dollars in potential revenue.
The current standoff focuses on two specific uses that Anthropic refuses to allow. The first is mass domestic surveillance. Amodei warned that deploying AI systems to monitor Americans on a large scale would pose serious risks. “Using these systems for mass domestic surveillance is incompatible with democratic values,” he wrote.
He explained that advanced AI can gather and combine vast amounts of publicly available information, such as browsing history, movement data, and personal associations, to create a detailed picture of someone’s life. He noted that this could happen “automatically and at massive scale,” raising concerns about privacy and civil liberties.
The second red line involves fully autonomous weapons. While Amodei acknowledged that such systems might play a role in national defence in the future, he said current frontier AI models are not reliable enough to remove humans completely from targeting decisions. He made it clear that Anthropic would not provide technology that could endanger lives. “We will not knowingly provide a product that puts America’s warfighters and civilians at risk,” he said.
The company has offered to work with the Defence Department on research and development to improve the reliability of AI systems, but Amodei said that this offer has not been accepted.
According to Amodei, the Pentagon has responded with strong pressure. He claimed that the Defence Department has threatened to remove Anthropic from its systems if the company continues to maintain its safeguards. Officials have also allegedly warned that Anthropic could be labelled a supply chain risk. In addition, he said there have been threats to invoke the Defence Production Act to force the company to remove its restrictions.
Defence Secretary Pete Hegseth reportedly gave Anthropic a direct ultimatum on Tuesday, telling the company to open its artificial intelligence technology for unrestricted military use by Friday or risk losing its government contract.
Amodei described these actions as contradictory. “One labels us a security risk; the other labels Claude as essential to national security,” he wrote, pointing out the mixed signals coming from officials. Despite the tension, he said Anthropic still hopes to continue working with the Defence Department. If the company is removed from government systems, he said it would cooperate to ensure a smooth transition so that military operations are not disrupted. “We remain ready to continue our work to support the national security of the United States,” he added, reaffirming the company’s willingness to help within the limits it has set.
The dispute highlights the growing debate over how artificial intelligence should be used in military and security settings. Anthropic maintains that while AI can play an important role in defending democracies, there must be clear limits to prevent harm and protect core values. By refusing to remove safeguards on Claude, the company has drawn a firm line on mass domestic surveillance and fully autonomous weapons, even as it faces the possibility of losing a major government contract.