A steel mill the president might not be able to seize
DOD, Anthropic, and the history of presidential attempts to control industry
Multiple American presidents have at some point discovered the same temptation: the strategic industry too important to national security to leave in private hands. The modern version of this story has a three-act structure that begins with Harry Truman, reaches a surprising climax with Donald Trump, and may be approaching its denouement with artificial intelligence.
The arc traces a question long disputed: What happens when a president decides a private industry is too vital to national security to operate beyond his control? Truman tried brute force with the steel industry and was told he couldn’t. Trump found cleverer mechanisms and, so far, has succeeded. But AI may represent something new — an industry where the legal tools available to the executive branch are real and formidable, and yet where exercising them might not produce the desired outcome — and may reveal something profound about where future power lies.
In short, AI is not like a steel or munitions industry. It’s not just capital, infrastructure, and processes that can be captured and taken over by the government. It’s not even just companies like Anthropic or others. Instead, the power at the center of the AI industry today is a small number of leading AI scientists.
If the Trump administration tries to take over their employer by force and coerce them into maintaining a product for uses they fundamentally oppose, those researchers are likely to simply walk away or find other ways to frustrate the government’s goals. Unlike a steel mill, they cannot be forcibly commandeered into the U.S. national security apparatus. This means the present contest between the Defense Department and Anthropic has contours unlike prior showdowns between the president and industry.
When Harry Truman seized the steel industry
In April 1952, with a steelworkers’ strike threatening Korean War production, Truman issued an executive order seizing the nation’s steel mills in the interest of national defense. Two months later, the Supreme Court struck the order down. In Youngstown Sheet & Tube Co. v. Sawyer, Justice Jackson’s concurrence established the enduring framework: Presidential power is at its “lowest ebb” when the president acts against the will of Congress, which had previously considered and rejected seizure authority. The case seemed to establish a firm principle: The president cannot commandeer private industry without congressional authorization, even when national security is genuinely at stake.
But notice what Youngstown quietly assumed. The constraint on Truman was juridical, not operational. No one argued the government lacked the practical capacity to run steel mills — in fact, it had done so during World War II. If the law had been on Truman’s side, the mills would have been seized and run. For 70 years, that assumption held: The only meaningful check on presidential power over industry in the context of national security was the legal one.
U.S. Steel under Biden and Trump
When Japan’s Nippon Steel bid $14.9 billion for U.S. Steel in late 2023, both Republicans and Democrats opposed the deal. In the final days of his administration, President Biden blocked it, citing national security concerns. During the 2024 presidential campaign, Trump had vowed to do the same, but after being sworn in he reversed course — with a twist. He ultimately approved Nippon’s investment in U.S. Steel as part of a deal that gave the U.S. government a more extraordinary level of control over a private corporation than almost any president had sought or achieved since Truman tried and failed.
The restructured deal for U.S. Steel’s purchase, finalized in June 2025, required Nippon to make $11 billion in new investment, install an American CEO and a majority-American board, and — most remarkably — create a “golden share” granting the U.S. government (the filed paperwork actually names Donald Trump specifically) a board seat and veto power over corporate decisions the president deems relevant to national security. By September, Trump had already exercised it to block a plant closure in Illinois.
The legal architecture was creative. Through the Committee on Foreign Investment in the United States (CFIUS), Trump used authority that Congress had actually granted, setting conditions on a voluntary transaction rather than seizing property. Under Justice Jackson’s framework, he was arguably operating within congressionally granted presidential power.
But even the “golden share” model rested on the same assumption as Truman’s intervention: that with the right legal authority, the state could keep the furnaces burning. Truman’s 1952 order was sparked by a looming strike, but he wasn’t seizing the mills to suppress the workforce. In fact, the steelworkers largely supported the move, seeing the federal government as a more favorable arbiter than the steel executives.
In that era, the struggle was ultimately over the governance of the machinery, not the coercion of the men. Because the workers were willing to keep the mills running under federal stewardship, Truman’s gamble was a matter of law. At the end of the day, when it came to steel, the fundamental thing being fought over was control of the plants, not the workforce.
The Anthropic-Defense Department showdown will end very differently
In the summer of 2025, the Pentagon signed a contract worth up to $200 million with Anthropic, maker of Claude — the first frontier AI model authorized on classified U.S. networks. By early 2026, the partnership was in crisis. The Pentagon demanded that four of its AI providers — Anthropic, OpenAI, Google, and xAI — allow their tools to be used for “all lawful purposes,” including weapons development, intelligence collection, and battlefield operations. Three showed flexibility. Anthropic drew two bright lines: no mass surveillance of Americans, no fully autonomous weapons.
The confrontation came to a head this week. Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon for what officials described, without diplomatic softening, as a “shit-or-get-off-the-pot meeting.”
Hegseth reportedly issued his ultimatum. Amodei arrived and, by all accounts, was cordial and unmoved in roughly equal measure. He thanked Hegseth for his service. He reiterated Anthropic’s red lines. He left.
Within hours, Hegseth’s ultimatum leaked. Anthropic has until Friday evening to grant the military “unfettered access” to Claude. If it refuses, the Pentagon will pursue one of two paths: declare Anthropic a “supply chain risk” — effectively blacklisting it from the entire Pentagon contracting ecosystem — or invoke the Defense Production Act (DPA) to compel the company to give the government unfettered access to its products on the government’s terms.
“The only reason we’re still talking to these people is we need them and we need them now,” a DOD official told reporters. “The problem for these guys is they are that good.”
That last sentence is the most important one in this story.
But let’s start with the Pentagon’s threat, which is internally inconsistent. Designating Anthropic a “supply chain risk” suggests that the company or its product is too unsafe to be within the federal government’s procurement chain, whereas invoking the DPA suggests Anthropic’s product is so essential that the government must be able to use it. Needless to say, that will be a difficult contradiction to sustain in court.
But that doesn’t mean the Pentagon can’t use these tools to cause Anthropic serious harm.
The national security apparatus can designate companies as posing unacceptable risks to U.S. information and communications technology supply chains. Such a designation would not merely void Anthropic’s Pentagon contract; it would require any company with Pentagon contracts to certify that Claude is not used in their government contract work. Amazon, Google, and Palantir, among hundreds of other Pentagon contractors, would face the choice of dropping Claude or losing their own government business.
The specific legal challenges would be novel, and Anthropic would have strong arguments that the designation is arbitrary and capricious under the Administrative Procedure Act: The law was designed to address security threats, not to punish domestic companies for safety policies the government finds inconvenient. But national security deference is powerful, and courts might well allow the designation to stand pending full litigation, meaning the practical damage could be severe and durable regardless of the ultimate legal outcome. The government doesn’t need a clean legal win to make this hurt.
At the same time, the Pentagon could well end up losing this game of chicken. Not only could it lose access to Claude, but other tech companies could decide maintaining Claude in their own workflows is more important than their access to government business.
Either way, expect a prolonged, high-stakes standoff.
The DPA, on the other hand, authorizes the president to require companies to accept government contracts, prioritize government orders, and allocate materials. It has been used to ensure that manufacturers produce ventilators, semiconductors, and military equipment. What it has never been used to do is compel a company’s workers to build and maintain products against their own judgment or even their conscience.1
The statute was written for mines, farms, mills, and factories, not laboratories. “Accepting a contract” means something intelligible when applied to a widget manufacturer; it is considerably more ambiguous when applied to a model that requires thousands of researchers making millions of incremental decisions to build and maintain. Even if the government could compel a license to the current version of Claude, a model frozen in time may be worth little given how rapidly the technology evolves. What the Pentagon needs is ongoing cooperation, which the law does not obviously authorize compelling.
There is also a hard statutory wall that goes to the heart of what the government actually needs: The DPA expressly exempts employment contracts from its compulsory authority. Congress, when it wrote the law, drew an explicit line between commandeering production and conscripting labor. The government can perhaps compel a license to Claude’s current weights; it cannot, under the DPA’s own terms, compel the engineers to keep building the next version. That is not an oversight. It reflects a constitutional judgment, embedded in the statute, about the limits of compelled labor — a judgment the Thirteenth Amendment places beyond Congress’s power to override in the first place. Any legal theory that tries to reach the engineers directly through some other authority runs into that amendment head-on.
Ordering Anthropic to maintain and update a product for uses it objects to also raises First Amendment concerns; it looks less like commandeering a factory and more like compelling a publisher to rewrite its editorial standards under threat of nationalization. The government can seize a building. It cannot conscript a mind. An effort to essentially do so will likely face serious questions from courts.
Artificial intelligence is about people, not companies
This underscores the fundamental difference between AI companies and steel companies: The obstacles to government control are more than just legal.
For the government to succeed, it doesn’t just need the courts to agree; it will ultimately need Anthropic’s leading scientists and engineers — or, if it tries to substitute Google or OpenAI or xAI, their scientists and engineers — to design and maintain a product they may ethically object to supporting.
And here’s what Silicon Valley already knows, even if Washington hasn’t caught up: Especially in AI, where the leading scientists and engineers are themselves among the most concerned about the technology’s ultimate capabilities, those people will walk — or find other ways to frustrate the government’s goals.
We’ve seen it happen in vivid, documented detail — and the story starts with Anthropic itself. Dario Amodei and several of his colleagues founded Anthropic in 2021 after leaving OpenAI over concerns about the company’s approach to safety and the pace of AI development. That departure wasn’t an anomaly. It was the first chapter of a pattern that has only accelerated since. When OpenAI’s direction drifted further from its founding safety principles, the departures didn’t trickle — they cascaded. Ilya Sutskever, OpenAI’s co-founder and chief scientist, left in May 2024. Jan Leike, who co-led the Superalignment team, followed days later, writing publicly that safety had “taken a backseat to shiny products.” By August 2024, Fortune reported that nearly half of OpenAI’s AGI safety researchers had quietly resigned. Researcher Daniel Kokotajlo walked away from $1.7 million in equity — the vast majority of his family’s net worth — because he’d lost confidence the company would behave responsibly. Most recently, just weeks ago, OpenAI researcher Zoë Hitzig quit in a New York Times op-ed over the company’s move toward advertising, warning that ChatGPT’s archive of intimate user conversations created manipulation risks that “we don’t have the tools to understand, let alone prevent.”
And it isn’t only OpenAI. Mrinank Sharma, the head of Anthropic’s own Safeguards Research team, resigned the same week with a public letter warning that “the world is in peril” — a stark signal that even inside Anthropic, the researchers most responsible for its safety culture are watching closely and willing to leave when they feel values are being compromised.
These aren’t isolated incidents. They are a live, recurring pattern showing that the researchers who build these systems have values, and that they act on them. If Anthropic caves to the Pentagon’s pressure, it risks triggering the same dynamic from the inside out. A $200 million contract for a company valued in the hundreds of billions is recoverable. Losing the people who made Claude the current leader in the field is not.
There’s also a concern that goes beyond engineers simply refusing: engineers who might decide they have an ethical duty to prevent the very misuses Anthropic is trying to hold the line against. You might call this the Galen Erso problem, named after the fictional scientist in the Star Wars movie Rogue One who, forced by the Empire to build a weapon, embeds in it a fatal flaw unknown to his employers. It’s a safe bet many Anthropic engineers are familiar with Galen Erso, a cult hero in the tech and sci-fi fan circles from which many of them come. If the government essentially bullies frontier labs into enabling things their engineers consider unethical, the risk that those engineers respond — as Erso did — by sabotaging or quietly frustrating the product, in service of what they see as a higher ethical mandate, is nonzero.
The national security establishment has all manner of protocols to prevent situations in which someone might feel pressured or incentivized to act in a manner inconsistent with the government’s interests. By doing what the Pentagon is presently doing, they are arguably creating just that situation. And that says something profound about where the real power presently lies.
The Anthropic-Defense Department fight could reorient our understanding of where power lives
For most of the twentieth century, the hierarchy between government and strategic industry was not in doubt. Governments could tax, regulate, nationalize, and seize. The assets that mattered — steel, oil, uranium — existed in physical space, subject to physical control. AI sits somewhere between those old categories and something genuinely new. The government has real and formidable tools: the DPA, supply chain designation authority, export controls, and the ability to shape the market through the sheer scale of federal procurement. None of those tools are illusory, and it would be a mistake to conclude that because AI is cognitively complex, the government is necessarily powerless.
But the government’s tools are better suited to controlling access to AI than to controlling what AI does — and the distinction matters enormously.
Cutting Anthropic off from federal contracts would hurt the company and might, over time, accelerate the adoption of more compliant alternatives. It would not give the government full control over Claude. Invoking the DPA might compel cooperation but would do so through legal mechanisms whose scope is untested, against a company that would litigate vigorously and could draw on the most sophisticated legal talent in the country. And whatever mechanism the government eventually chose, it would ultimately find itself not in a standoff with a corporation but with a specific group of highly skilled scientists who have already demonstrated, repeatedly and at personal cost, that they will choose their values over their paychecks.
Hegseth walked into the Pentagon meeting with a deadline and two threats. Amodei walked out having conceded nothing. The deadline is tomorrow. The real deadline — the moment at which AI becomes so central to military operations that the government’s dependence is total and its leverage is gone — may already be in the past.
The blast furnaces never got a vote. The scientists do. Anthropic and its peers understand that. It’s not clear the Pentagon does, but it’s soon likely to find out.
Perhaps the better analogy for the AI industry today is to the Manhattan Project, which was similarly defined by a small number of scientists and engineers whose cooperation with the military was far from guaranteed. Famously, Oppenheimer, Fermi, Bohr, Lawrence, Feynman, and others all had to be persuaded to join the U.S. national security effort — ultimately convinced that doing so was in service of their country, of democracy, and of humanity as a whole. Even with the stark moral contrast of World War II, though, this was a fraught moment, with several scientists expressing deep hesitations and concerns.