A United States judge has issued a temporary injunction against the Pentagon’s blacklisting of Anthropic, marking a significant development in the company’s ongoing dispute with the military regarding AI safety in combat situations. The lawsuit filed by Anthropic in a California federal court contends that U.S. Secretary of War Pete Hegseth exceeded his authority by classifying Anthropic as a national security supply-chain risk. This designation is typically applied to companies that may expose military systems to potential infiltration or sabotage by hostile entities.
Anthropic argues that the government violated its First Amendment free-speech rights by retaliating against the company for its stance on AI safety, and its Fifth Amendment due-process rights by denying it any opportunity to challenge the classification. U.S. District Judge Rita Lin, appointed by former President Joe Biden, sided with Anthropic in a 43-page ruling, though the injunction will not take immediate effect, giving the administration time to appeal.
The dispute arose after Anthropic objected to the military using its AI chatbot, Claude, for domestic surveillance or in autonomous weaponry, prompting the Pentagon to block the company from certain military contracts. Anthropic anticipates substantial financial losses and reputational damage as a result. The company maintains that AI models are not yet reliable enough for use in autonomous weapons, and it rejects domestic surveillance as a violation of civil rights. The Pentagon counters that private entities should not dictate military operations, while clarifying that it intends to deploy such technology only within legal boundaries.
Judge Lin's ruling found that the government's actions appeared punitive toward Anthropic rather than driven by the national security interests it cited. Anthropic spokesperson Danielle Cohen welcomed the decision, emphasizing the company's commitment to working with the government to deliver safe and dependable AI for the benefit of all Americans.
Notably, Anthropic is the first U.S. company to be publicly designated a supply-chain risk under the government procurement statute in question, which is aimed at safeguarding military systems from foreign sabotage. The company's lawsuit challenges both the legality and the factual basis of that decision, pointing to the military's previous positive assessments of Claude. The Justice Department contends that Anthropic's refusal to comply with contractual terms could create uncertainty within the Pentagon over the use of Claude and pose risks to military operations.
Anthropic also faces a separate lawsuit in Washington over another Pentagon supply-chain risk classification, one that could exclude it from civilian government contracts as well. The ongoing legal battles underscore the complex interplay between private companies, government agencies, and emerging technologies in the realm of national security.