Institutions Built for Steel Mills Are Governing AI
The Anthropic-Pentagon standoff, the epistemology of “all lawful use,” and what happens when governance infrastructure can’t keep pace with the technology it governs
At 5:01 PM Eastern today, a deadline set by Defense Secretary Pete Hegseth expires. Anthropic, the AI company behind Claude, must either accept the Pentagon’s demand for unrestricted “all lawful use” access to its AI model on classified military networks or face contract termination and designation as a supply chain risk.
Anthropic CEO Dario Amodei rejected the Pentagon’s final offer last night, stating the company “cannot in good conscience accede to their request.”
This is the most significant confrontation between a private AI company and the U.S. government to date. And the way it’s unfolding tells us more about the state of AI governance than either side probably intends.
What’s actually at stake
The dispute is narrow in scope but enormous in precedent.
Anthropic has two red lines. It does not want Claude used for mass surveillance of Americans. It does not want Claude used in fully autonomous weapons without human involvement. The company has maintained these restrictions since it signed a contract worth up to $200 million with the Pentagon last summer, making Claude the first AI model deployed on the military’s classified networks.
The Pentagon’s position is that the end user, not the vendor, should determine how a licensed technology is used. As a senior Pentagon official told CNN: “You can’t lead tactical ops by exception. Legality is the Pentagon’s responsibility as the end user.”
Pentagon spokesperson Sean Parnell has insisted the military has “no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.” But the Pentagon has consistently refused to put those assurances into contractual language. Anthropic says the latest proposed contract revisions, which the Pentagon framed as a compromise, were “paired with legalese that would allow those safeguards to be disregarded at will.”
This is not a dispute about what the Pentagon currently plans to do. It’s a dispute about who gets to set boundaries on what AI can be used for in national security contexts, and whether those boundaries can be contractually binding.
The epistemology of “all lawful use”
The Pentagon’s demand that AI tools be available for “all lawful purposes” sounds like a reasonable standard. It implies that legality itself is a sufficient guardrail. But this framing deserves more scrutiny than it’s getting.
Mass surveillance of Americans is not clearly illegal. It exists in a legislative gap. The legal frameworks governing domestic surveillance (FISA, Executive Order 12333, the Fourth Amendment) were built for wiretaps and phone records, not for AI systems capable of processing and cross-referencing vast datasets of behavioral, biometric, and communications data in real time. As a source familiar with Anthropic’s position told CNN, “there are no laws or regulations yet that cover how AI could be used in mass surveillance.”
Fully autonomous weapons systems are not prohibited by U.S. statute either. Department of Defense Directive 3000.09, which requires “appropriate levels of human judgment” in the use of force, is a policy document, not a law. It can be revised or rescinded by any defense secretary.
So “all lawful use” is not a safety framework. It’s the absence of one. It defines the acceptable boundary of AI deployment as “whatever isn’t currently prohibited” in domains where almost nothing is currently prohibited.
Amodei’s position is fundamentally epistemic. He is making a knowledge claim: that some capabilities are outside what today’s technology can safely and reliably support. In a statement Thursday, he wrote that Anthropic “believes deeply in the existential importance of using AI to defend the United States and other democracies.” But the company’s position is that current AI systems are not reliable enough for certain applications, regardless of their legality.
This distinction matters. The Pentagon is arguing from authority (“legality is our responsibility”). Anthropic is arguing from capability (“the technology isn’t ready for this”). These are fundamentally different kinds of claims, and they require different institutional mechanisms to adjudicate.
We do not currently have those mechanisms.
A Korean War statute for a 2026 problem
If Anthropic does not comply by 5:01 PM today, the Pentagon has threatened two actions: first, terminating the contract and designating Anthropic a supply chain risk; second, potentially invoking the Defense Production Act to compel the company’s cooperation.
The DPA is a Korean War-era statute signed by President Truman in 1950. It gives the executive branch broad authority to direct private industry in the name of national defense. It was designed for steel mills, tank factories, and industrial supply chains.
Legal scholars are divided on whether it can be used this way.
Alan Rozenshtein, writing in Lawfare, published the most thorough legal analysis to date. He identifies two possible demands the government might make under the DPA: requiring Anthropic to provide Claude without its contractual usage restrictions (a “same product, different terms” argument), or compelling Anthropic to retrain Claude to strip safety guardrails from the model itself. The first is legally contested. The second, Rozenshtein argues, would more clearly constitute demanding a new product, which sits on much weaker legal ground.
As Charlie Bullock of the Institute for Law & AI told the Associated Press, neither side’s legal argument is “a slam dunk.” If neither backs down, the most likely outcome is litigation between Anthropic and the federal government, testing the application of a 75-year-old industrial production statute to AI safety policy for the first time.
Rozenshtein’s conclusion cuts to the heart of the problem: “this fight is happening because Congress hasn’t set substantive rules for military AI.” He argues that if Congress had legislated guidelines on autonomous weapons and surveillance, Anthropic would likely be comfortable selling to the military without restrictions, and the DPA threat would never have arisen. The DPA itself is scheduled for reauthorization by September 30, 2026. Depending on how this dispute unfolds, its renewal could become a legislative flashpoint for AI governance.
The contradiction that reveals everything
Amodei identified perhaps the sharpest analytical point in his Thursday statement: the Pentagon’s two threatened actions are inherently contradictory. A supply chain risk designation labels Anthropic as too dangerous to work with. A DPA invocation labels Claude as too essential to lose.
You cannot be both a threat and a necessity. As former DOJ-DOD liaison Katie Sweeten told CNN: “I would assume we don’t want to utilize the technology that is the supply chain risk, right? So I don’t know how you square that.”
This contradiction reveals that the real dispute isn’t about legality or even capability. It’s about control. The Pentagon wants to establish the principle that no private company can set terms of service that constrain government use of a licensed technology. The Center for American Progress described the supply chain risk designation as potentially “existential for Anthropic” and the DPA invocation as “the quasi-nationalization of a frontier lab.”
Gregory Allen, a senior advisor at the Center for Strategic and International Studies, offered important context on Bloomberg Radio. The actual users of Claude within the Defense Department, he said, “love Anthropic, love Claude” and report that the company’s usage restrictions “have never been triggered.” The dispute is not being driven by an operational problem. It’s being driven by a principle.
The race to the bottom
The Anthropic standoff does not exist in isolation. It’s unfolding alongside several parallel developments that, taken together, paint a concerning picture.
xAI’s Grok is already on classified networks. Elon Musk’s xAI has signed a Pentagon contract under the exact “all lawful use” terms that Anthropic is refusing. This is the same model that has generated approximately 3 million deepfake images, including an estimated 23,000 depicting minors, and whose offices were raided by French prosecutors. The Pentagon has confirmed that Grok is “on board with being used in a classified setting,” though officials acknowledge it is not viewed as being as advanced as Claude.
The Pentagon’s AI strategy explicitly omits ethical AI language. A January 2026 strategy memorandum from the Department of War directed all Defense Department AI contracts to incorporate “any lawful use” language within 180 days. The same memorandum bans models with DEI-related “ideological tuning” and replaces “responsible AI” with “hard-nosed realism.”
OpenAI and Google are being pressured next. An open letter published Thursday night by tech workers from Anthropic’s top rivals urged their companies to hold the same red lines. The letter states: “The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused. They’re trying to divide each company with fear that the other will give in.”
Bipartisan congressional concern is growing. Retired Air Force General Jack Shanahan, who led the Pentagon’s original AI initiatives, wrote that “painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end.” Both Republican and Democratic lawmakers have raised questions about the Pentagon’s approach.
The dynamic is clear: if any one company caves, the pressure on the rest becomes overwhelming. This is the classic collective action problem in governance, and it is playing out in real time with technology that could reshape the relationship between state power and individual rights.
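The collective action problem described above has the structure of a classic prisoner’s dilemma. As a rough illustration only (the payoff numbers below are purely hypothetical and not drawn from any source in this piece), a minimal sketch:

```python
# A hedged illustration of the collective-action dynamic described above.
# Two AI vendors each choose to "hold" their red lines or "cave" to pressure.
# The payoff numbers are hypothetical, chosen only to exhibit the structure:
# each firm does better caving regardless of what the other does, yet
# mutual caving leaves both worse off than mutual holding.

PAYOFFS = {
    # (firm_a_choice, firm_b_choice): (firm_a_payoff, firm_b_payoff)
    ("hold", "hold"): (3, 3),  # red lines survive industry-wide
    ("hold", "cave"): (0, 5),  # the holdout loses contracts to the defector
    ("cave", "hold"): (5, 0),
    ("cave", "cave"): (1, 1),  # no red lines anywhere
}

def best_response(opponent_choice: str) -> str:
    """Return the choice that maximizes firm A's payoff against a fixed opponent."""
    return max(("hold", "cave"),
               key=lambda mine: PAYOFFS[(mine, opponent_choice)][0])

# Caving is a dominant strategy: it is the best response whatever the rival does.
assert best_response("hold") == "cave"
assert best_response("cave") == "cave"

# Yet the dominant-strategy outcome (cave, cave) pays each firm less than
# the cooperative outcome (hold, hold) -- the race to the bottom.
assert PAYOFFS[("cave", "cave")][0] < PAYOFFS[("hold", "hold")][0]
```

The sketch shows why “hold the same red lines together,” as the open letter urges, requires some coordination mechanism: absent one, each firm’s individually rational move is to defect first.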
The voluntary safety framework problem
There is an uncomfortable irony in the timing of this standoff. On the same day Hegseth issued his ultimatum, Anthropic published a major revision to its Responsible Scaling Policy. The company replaced its binding commitment to pause training if model capabilities outstripped safety controls with a nonbinding “Frontier Safety Roadmap.” The new framework describes its safety goals as “public goals that we will openly grade our progress towards” rather than hard commitments.
Anthropic says the change is unrelated to the Pentagon dispute. Chief Science Officer Jared Kaplan told TIME that the company spent nearly a year deliberating. The core reasoning: voluntary commitments don’t work when competitors ignore them. Anthropic argued that “if one AI developer paused development to implement safety measures while others moved forward training and deploying AI systems without strong mitigations, that could result in a world that is less safe.”
Chris Painter, director of policy at METR (a nonprofit focused on evaluating AI models), reviewed an early draft of the new policy and offered a mixed assessment. He praised the emphasis on transparent reporting. But he warned: “This is more evidence that society is not prepared for the potential catastrophic risks posed by AI.”
The practical implication for anyone relying on AI vendor safety commitments is sobering. If the most safety-focused AI company in the world has concluded that binding self-regulation is competitively untenable, the market alone will not produce adequate governance. The question of who sets the rules, and whether those rules can be enforced, becomes not a theoretical debate but an operational necessity.
What this means going forward
Whatever happens at 5:01 PM today, several things are already clear.
First, AI governance cannot be resolved through contract disputes, social media ultimatums, and Korean War-era production statutes. These are institutional tools built for a different century, and they are producing outcomes that nobody designed or intended. Lawfare’s argument that Congress should set the rules for military AI is compelling precisely because the alternative, which is what we’re watching right now, is governance by improvisation.
Second, the “all lawful use” standard will cascade beyond the Pentagon. If this framing becomes the default for government AI procurement, it will shape expectations for enterprise contracts more broadly. Every organization with an AI vendor relationship should be thinking about what happens when your vendor’s relationship with other clients, including governments, changes the terms of what your tools can do.
Third, voluntary safety commitments are structurally fragile. Anthropic’s own policy shift this week demonstrates that even companies founded on safety principles will adjust those principles under competitive pressure. Organizations relying on vendor self-regulation for AI governance need their own frameworks.
Fourth, the supply chain risk designation, if invoked, would have cascading effects well beyond the $200 million Pentagon contract. It would require every company doing defense work to prove it doesn’t use Anthropic products. For a company with a $14 billion revenue run rate and a potential IPO on the horizon, the enterprise implications could be severe.
And fifth, this is not about one company. The open letter from OpenAI and Google employees makes clear that the same pressure is being applied across the industry. Today’s outcome will shape whether any AI company can maintain red lines on military use of its technology.
The institutional mismatch is the story
I want to close with what I think is the real takeaway, because it’s easy to get lost in the drama of ultimatums and deadlines.
The mismatch between our institutional infrastructure and the technology it governs is the defining challenge of this decade. A defense secretary is using a 1950 statute to govern 2026 AI capabilities. A Pentagon CTO is calling a CEO a “liar” with a “God-complex” on X. Contract negotiations that should be resolved through clear legislative frameworks are instead proceeding through ad hoc threats, public pressure campaigns, and legal theories that have never been tested in court.
This is not an AI problem. It is an institutional design problem. And it is a problem that the anticipatory governance community, the futures studies community, and the technology policy community have been warning about for years.
The question has never been whether AI would be used in national security. Of course it will be. The question is whether we have the governance infrastructure to make those decisions wisely, transparently, and with appropriate accountability.
Today, at 5:01 PM, we get one answer to that question. It is unlikely to be a reassuring one.
This post represents the author’s analysis and does not constitute legal advice.