Federal judge blocks Pentagon's "supply chain risk" label on Anthropic, cites First Amendment retaliation
A federal judge on 26 March 2026 granted Anthropic a preliminary injunction in its lawsuit against DOD, temporarily blocking the government's designation of the AI company as a supply chain risk, The Verge reports. US District Judge Rita F. Lin of the Northern District of California wrote that the Department of War's own records showed the designation was motivated by Anthropic's "hostile manner through the press," calling it "classic illegal First Amendment retaliation."
The dispute traces to a 9 January 2026 memo from Defense Secretary Pete Hegseth directing that "any lawful use" language be inserted into every AI services procurement contract within 180 days, a change that would affect existing agreements with Anthropic, OpenAI, xAI, and Google. Anthropic, whose Claude model was the first frontier AI approved for use on classified government networks under a July 2025 contract, refused to drop two "red lines": prohibitions on domestic mass surveillance and lethal autonomous weapons.
When negotiations collapsed, the Department of War on 4 March 2026 formally designated Anthropic a supply chain risk — a label historically reserved for foreign adversaries suspected of potential sabotage, never before publicly applied to a US company. Hegseth amplified the action with an X post declaring that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
Anthropic's court filings allege the designation has already caused substantial commercial harm. Dozens of outside partners have expressed "confusion about what was required of them," and the company estimates revenue at risk ranging from hundreds of millions to multiple billions of dollars. The designation drew bipartisan concern, including a letter from Senator Elizabeth Warren questioning whether the action constituted retaliation against contractors that seek contractual guardrails.
At the heart of the case is a question Judge Lin framed plainly during Tuesday's hearing: whether the government "violated the law when it went beyond" simply choosing a different AI vendor. DOD argued in a court filing that Anthropic could theoretically "disable its technology or preemptively alter the behavior of its model … during ongoing warfighting operations," posing an "unacceptable risk to national security." Questions Judge Lin released ahead of the hearing challenged that assertion, asking what evidence showed Anthropic retained any access to or control over Claude after delivering it to the government. In her order, she found no "legitimate basis to infer from Anthropic's forthright insistence on usage restrictions that it might become a saboteur." Citing an amicus brief's description of the government's actions as "attempted corporate murder," Judge Lin observed, "I don't know if it's 'murder,' but it looks like an attempt to cripple Anthropic."
The injunction takes effect in seven days. A final ruling could be weeks or months away.