OpenAI Wins Defense Deal Hours After Gov Drops Anthropic

What to Know
- OpenAI struck a Department of Defense deal to deploy AI models on classified Pentagon networks, announced by CEO Sam Altman on Friday
- Anthropic's prior $200 million Pentagon contract collapsed after the company demanded restrictions on autonomous weapons and mass surveillance
- Defense Secretary Pete Hegseth labeled Anthropic a Supply-Chain Risk to National Security, a tag normally reserved for foreign adversaries
- President Trump ordered all federal agencies to halt Anthropic technology use, granting a six-month transition period
News of an OpenAI defense contract rattled the artificial intelligence sector on Friday after the company reached a deal with the US Department of Defense to deploy its AI models on classified military networks, just hours after the White House directed federal agencies to stop using rival Anthropic's technology. CEO Sam Altman revealed the agreement in a late Friday post on X, saying OpenAI would supply its models inside the Pentagon's classified network and praising the department's respect for safety.
Why Did the US Government Ban Anthropic?
The US government banned Anthropic after Defense Secretary Pete Hegseth classified the firm as a Supply-Chain Risk to National Security on Friday, a designation typically reserved for foreign adversaries. The designation forces all defense contractors to certify that they are not using Anthropic's models. President Donald Trump simultaneously directed every federal agency to halt use of Anthropic technology, with a six-month transition window for agencies already relying on its systems.
The clash traces back to a $200 million contract Anthropic signed in July that made it the first AI lab to operate across the Pentagon's classified environment. Negotiations fell apart after Anthropic sought guarantees its software would never power autonomous weapons or domestic mass surveillance. The Defense Department insisted the technology be available for all lawful military purposes.
OpenAI Defense Contract Provisions
OpenAI's Pentagon deal fills the gap left by Anthropic's ouster from classified networks. Altman wrote that the Defense Department showed deep respect for safety and agreed to work within OpenAI's operating limits. He added that the company prohibits domestic mass surveillance and requires human responsibility in decisions involving the use of force, including automated weapons systems; according to his post, those provisions are written directly into the new agreement.
Anthropic Vows Legal Challenge
Anthropic said in a statement that it was deeply saddened by the designation and intends to challenge the decision in court. The company warned the move could set a precedent affecting how American technology firms negotiate with government agencies as political scrutiny of AI partnerships continues to grow.
Public Backlash on Social Media
Reaction on X was sharply divided. Christopher Hale, an American Democratic politician, wrote that he canceled his ChatGPT subscription and purchased Claude Pro Max instead, declaring that one company stands up for Americans' rights while the other folds to tyrants.
One crypto user noted that in 2019 OpenAI pledged never to help build weapons or surveillance tools, yet in 2026 the company was handing a classified cloud instance to the Department of War, calling it an integrity arc gone wrong. The remark captured widespread frustration among users who had long viewed OpenAI as a safety-focused organization.
About the Author
Senior Analyst
Kevin Giorgin is an award-winning crypto journalist with over five years of experience covering Bitcoin, DeFi, and blockchain technology at Bitcoinomist.