Defense Secretary Bans Military Contractors from Using Anthropic Over National Security Concerns
Defense Secretary Pete Hegseth has declared artificial intelligence company Anthropic a "supply chain risk to national security," banning U.S. military contractors, suppliers, and partners from doing business with the firm. The decision, announced on X, takes immediate effect and could significantly disrupt the Pentagon's network of defense vendors, many of which depend on federal contracts. Hegseth said America's warfighters must not be held hostage by the "ideological whims of Big Tech" and called the decision final.

The move follows escalating tensions between Anthropic and the Pentagon over the use of its AI model, Claude, in military operations. President Trump had earlier ordered all federal agencies to stop using Anthropic's services immediately, though the Defense Department and select agencies may continue using the technology for up to six months while transitioning to alternative systems.

The conflict stems from a fundamental disagreement over the boundaries of AI use in national defense. Anthropic, the only AI company whose model is deployed on the Pentagon's classified networks, has insisted on guardrails to prevent its technology from being used for mass surveillance of Americans or in fully autonomous weapons systems without human oversight. The company argues that such applications pose serious risks to privacy and democratic values. The Pentagon, by contrast, demanded that any agreement allow the use of Claude for "all lawful purposes," asserting that existing federal laws and internal military policies already prohibit mass surveillance and the deployment of autonomous weapons.

The Pentagon gave Anthropic a deadline of Friday at 5:01 p.m. to reach a deal or lose its military contracts. Pentagon officials accused Anthropic of attempting to impose its own ethical standards on the military, calling the company's stance "sanctimonious" and arrogant.
Hegseth claimed Anthropic sought "veto power" over military operations, a move he deemed unacceptable. Anthropic CEO Dario Amodei pushed back, emphasizing that the company has never sought to control military decisions. While the military retains full authority over operational choices, he said, certain AI applications, particularly autonomous weapons and mass surveillance, exceed what current technology can deliver safely. "Some uses are simply outside the bounds of what today's technology can safely and reliably do," Amodei said.

On Thursday, the Pentagon's chief technology officer, Emil Michael, said the department had made concessions, offering written acknowledgments of legal and policy restrictions on surveillance and autonomous weapons. "At some level, you have to trust your military to do the right thing," Michael said, adding that the military would never commit to not defending itself.

Anthropic rejected the offer, saying the proposed acknowledgments were "paired with legalese that would allow those safeguards to be disregarded at will." The company maintains that meaningful guardrails are essential to prevent misuse of powerful AI systems, even as it acknowledges the military's need for operational flexibility. The standoff underscores the growing tension between AI developers and government agencies over the ethical deployment of advanced technology in national security.
