AI goes to war
The new contractual architecture between the AI big tech firms and the heart of the United States military-industrial complex is a way of keeping suppliers aligned with the Trump administration’s ‘AI-first fighting force’ doctrine.
When the United States government decided, in February, that Anthropic was a supply-chain risk, it deployed for the first time against a domestic firm a designation hitherto reserved for foreign adversaries. When, two months later, it tentatively moved to bring the company back – only to block the expansion of its most advanced model days later and, simultaneously, cite that same model as justification for reversing its policy of non-regulation of AI – it showed that the original verdict had never been about risk. It was about who decides what may be done with a frontier AI model. After a court fight with the maker of the Claude chatbot, the Department of Defence now seeks to exclude the company once and for all, while officially bringing eight competitors to the front. More than that, three days later it announced the creation of a working group to consider the possibility of government vetting before the release of frontier technology models.
With a state in permanent culture-war mode, having all of the leading generative-AI model-makers working for the government makes it easier to swap one piece on the board for another without further standoffs, using the redirection of contracts to competitors as a tool of persuasion. In Trumpian warcraft, where frontier AI is treated as a national-security asset, no effort is spared to keep one’s friends close and one’s enemies closer still.
The doctrine behind this new architecture has an administrative name. It is called ‘all lawful purposes’, and it was the very point that broke Washington’s relationship with Anthropic in February. The company demanded explicit safeguards against the use of Claude in fully autonomous weapons and in mass domestic surveillance. Faced with CEO Dario Amodei’s refusal to yield, the Pentagon designated Anthropic a ‘supply-chain risk’, a category historically reserved for foreign adversaries. On 1 May, eight tech giants (Google, OpenAI, Microsoft, Amazon Web Services, Nvidia, SpaceX, Reflection AI and Oracle) signed agreements to deploy AI on classified Impact Level 6 and 7 networks under the formula ‘lawful operational use’, confirming the clause as a contracting standard. Anthropic is explicitly off the list. The Pentagon has become the rule-setter for AI across the federal apparatus.
The financial scale of the operation explains the Department of Defence’s pull on these firms. While the AI big tech companies are expected to pour roughly US$600 billion into AI infrastructure in 2026, the same year’s defence budget set aside US$54 billion for AI and autonomy alone, with funding for the newly created Defence Autonomous Warfare Group set to grow more than a hundredfold in 2027. The convergence is not a metaphor. It is the financial heart of the relationship. The Pentagon is simultaneously a customer, a regulator, a source of security clearances and a dispenser of geopolitical prestige to a sector that, given its capex requirements, must turn government contracts into structural revenue. Ask Palantir.
Friendly fire
Amodei’s company has been simultaneously punished, treated as indispensable and, once again, left out in the cold. It lost investors such as the 1789 Capital fund, linked to Donald Trump Jr., and watched competitors race to occupy its space, but until last week the National Security Agency (NSA) carried on incorporating Mythos, a frontier model launched in April, and the Pentagon never stopped running Claude in the war with Iran. A legal battle of injunctions and appeals was fought over the ‘supply-chain risk’ designation, which was ultimately upheld on national-security grounds. The rapprochement being stitched together via a White House executive action intended to ‘save face and bring them back’ coexists with signs that the friction persists. At a Senate hearing on 30 April, Hegseth called Amodei an ‘ideological lunatic’ and likened Anthropic’s stance to ‘Boeing giving us airplanes and telling us who we can shoot at’. The day before, the White House had rejected the expansion of Mythos access from around fifty to one hundred and twenty institutions. Trump says the company is ‘sorting itself out’. Despite the contradictory signals, every indication suggests the bargain remains open.
The rhetorical packaging of this arrangement also merits attention. Deputy Secretary Emil Michael, a former Uber executive turned Pentagon CTO, has christened military AI America’s next ‘Manifest Destiny’. Defence Secretary Pete Hegseth, in the GenAI.mil launch video, declared that the future of American warfare is spelt ‘A-I’. In January, Hegseth brought in Grok, Elon Musk’s xAI chatbot, coining the term ‘woke AI’ as its antonym, in a veiled reference to Anthropic. The point is not merely to arm the troops; it is to frame the AI frontier as contested cultural territory, where ethical caution is read as grounds for ideological suspicion. Anthropic was punished, in part, for taking its own ‘safety-first’ rhetoric seriously, the same rhetoric that justified its founding in 2021 by dissidents from OpenAI. The message to the sector is clear. Defending restrictions on use has become a political gesture, not a technical one.
About face
Google’s return to its relationship with the Pentagon illuminates the message from another angle: the failure of internal dissent as a brake. In December, the Department of Defence launched GenAI.mil, a platform meant for three million service members, civilians and contractors, with Gemini for Government as its inaugural engine. In April, Google signed an expansion of the contract for classified networks, hours after more than six hundred employees, including directors and senior researchers from DeepMind, sent an open letter to CEO Sundar Pichai urging him to reject the deal. The mobilisation was simply outpaced by the speed of the signing. Its organisers used a phrase that may yet become a refrain: ‘Maven isn’t over’.
The historical reference is unavoidable. In 2018, around 4,000 Google employees signed a petition against Project Maven, and the company let its Pentagon contract expire, adopting ‘AI Principles’ that ruled out weapons and surveillance. Palantir took up the work. In February of last year, Google quietly deleted those commitments, with Demis Hassabis citing ‘the global competition for AI leadership’. At the time, AI was a contestable military experiment; now it is a declared geopolitical asset. The mobilising capacity that killed Maven has evaporated. The cycle of capture of what one might call the AI-military-industrial complex has closed.
All quiet on the Western Front
History offers clear precedents for this realignment. In moments of direct rescue, the state guaranteed private loans to prevent the collapse of strategic suppliers, as with Lockheed in 1971 and Boeing in 2020. In moments of redesign, it forced the consolidation of an entire sector, as in the ‘Last Supper’ of July 1993, when the Defence Secretary summoned CEOs to the Pentagon and triggered the merger wave that reduced the number of prime contractors from fifty-one to five. In moments of social or internal resistance, it rarely backed down. Dow Chemical kept producing napalm until it lost the contract bid in 1969, Honeywell divested from cluster munitions only after more than two decades of pressure and regulatory change, and Microsoft preserved its IVAS contract, worth nearly half a billion dollars, even after its employees’ open letter in 2019. The exception was Project Maven in 2018, neutralised in the years that followed. The pattern is structural. When a technology is deemed strategic, the state keeps the firm on its budget even against market logic, against the company’s wishes, or against the protest of its workers.
The Anthropic case is a Lockheed in reverse, set in an era of frontier technology. In 1971, the state found itself obliged to keep alive a company that wanted to leave. In 2026, the state may find itself obliged to tolerate a company that does not want to leave but wishes to dictate the terms of use. Even with President Donald Trump posting an order that the use of the technology cease immediately, the Pentagon was running Claude in the war with Iran, and the NSA was using Mythos. The bargain has changed in nature. In the era of consolidated primes, it was about money and industrial capacity; now it is about contractual rules. The Pentagon is trying to replicate with the chosen eight the arrangement it built over the past three decades with Lockheed Martin and Boeing: large dependent suppliers without clauses limiting what the client may do with the product. The public defence of this logic was made by Hegseth himself before the Senate, with a revealing comparison. In his words: ‘It would be like Boeing giving us airplanes and telling us who we can shoot at’. The image is striking, but Boeing never needed such a clause because its military platforms were finite products with bounded functions, walled off from civilian life. Anthropic does not want to define targets. It is refusing to sell a platform unless the buyer agrees to contractually binding restrictions, including a ban on fully autonomous weapons and mass domestic surveillance.
Oppenheimer
The rhetoric of a ‘new Manhattan Project’ for AI, intoned by the bipartisan congressional commission in November 2024 and endorsed by Trump’s Energy Secretary in early 2025, is less a plan of action than a compensatory fantasy. Former Google CEO Eric Schmidt, who helped to construct the framing of AI as a national-security issue while chairing the National Security Commission on Artificial Intelligence between 2019 and 2021, today warns in the paper Superintelligence Strategy that attempting to replicate Oppenheimer’s atomic bomb programme with frontier AI models would provoke pre-emptive Chinese retaliation and destabilise the very equilibrium the programme claims to safeguard.
In the 1940s, the Manhattan Project required a single objective, centralised control and enforceable secrecy, conditions that AI simply does not reproduce, with talent that migrates between private firms, papers that cross the Pacific in hours and annual capex exceeding any conceivable budget line for a state-run programme. The multivendor arrangement the Pentagon is building is what is left when Manhattan turns out to be structurally impossible to replicate in AI. It is a Los Alamos without Los Alamos, executed by a contractual clause rather than an electric fence.
Apocalypse Now
There is an additional twist that confirms the argument. According to the New York Times, the Trump administration’s non-interventionist stance on AI began to shift in April, after Anthropic announced Mythos. The model is so effective at identifying software vulnerabilities that the company decided not to release it to the public. The White House is now discussing an executive order to create a working group to vet AI models before their release, possibly along the lines of the United Kingdom’s approach.
The administration that was elected promising not to regulate AI is now studying how to regulate AI precisely because the company it tried to expel produced the very model that has made regulation unavoidable. Trump’s chief of staff Susie Wiles and Treasury Secretary Scott Bessent, the same operators negotiating Anthropic’s return to the Pentagon, now lead the formulation of this new regulatory policy following the departure of AI czar David Sacks in March. Military capture and civilian regulation of AI share the same political architects, which suggests that both obey a single strategy for the governance of the technological frontier.
The thin red line
There is, however, a qualitative difference between the Pentagon’s historical suppliers and the AI big tech firms that changes the nature of what is being contracted. Lockheed, Boeing, Raytheon, Dow Chemical and Honeywell delivered to the Department of Defence finite products, with bounded military application and a production chain separated from civilian life. A Poseidon missile does not touch the everyday life of the ordinary citizen, a gallon of napalm does not route private conversations, a cluster munition does not decide which news reaches the voter. The damage these technologies inflicted was tragic but geographically confined to the theatre of war. And the supplying firm could be replaced without the country’s informational fabric suffering any blow. The ethical barrier between civilian and military use was sharp, and it was precisely that sharpness which allowed mobilisations such as the campaign against Dow’s napalm or the two decades of the Honeywell Project to have a clear target.
Generative-AI companies operate in another register. With small adjustments, the same models that run on the Pentagon’s classified networks answer questions from students, draft commercial contracts, mediate customer service, suggest medical diagnoses and, increasingly, mediate access to public information. When Google delivers Gemini to GenAI.mil under an ‘any lawful governmental purpose’ clause, it is not selling a specific military product; it is opening the same cognitive infrastructure that serves billions of civilian users to deployment on air-gapped networks, disconnected from the internet, where public oversight is non-existent by definition. The Claude that operates in the war with Iran is the same Claude that writes code for programmers in Buenos Aires or São Paulo. The boundary between civilian and military, between domestic and foreign, between sovereign and transnational, ceases to be geographical and becomes contractual, defined in clauses to which the public has no access.
This affects the informational sovereignty not only of the United States but of any country whose citizens, governments, journalists and businesses depend on these same tools. A cluster munition manufactured in the suburbs of Minneapolis had no way of modulating the public discourse of an entire country; a frontier model trained to serve both a Brazilian teenager and an NSA analyst does. It is this indistinction between global civilian infrastructure and national military arsenal that makes the current capture graver than any previous episode of the military-industrial complex, and that makes Anthropic’s refusal to yield on the contractual clause less an ethical tantrum than the last trace of a boundary being erased.
To the last man
The piece that holds the present arrangement together is the multivendor concept. On 1 May, when formalising the agreements with the eight companies, the Pentagon announced its intention to build ‘an architecture that prevents AI vendor lock’, in the words of Cameron Stanley, the department’s chief digital and AI officer. The phrase sounds like bureaucratic banality, but it is a political doctrine. Distributing contracts among eight companies under the same contractual clause allows the state to discipline each one through the credible threat of redirecting the budget. When Anthropic refused, OpenAI signed. When the encirclement closed in May, Microsoft, Amazon, Nvidia, SpaceX, Reflection AI and Oracle joined. When it becomes expedient to bring Anthropic back, a presidential nod is all it takes. No company has individual bargaining power. All operate within the same legal framework. The Pentagon has fragmented the sector in order to capture it whole.
The result is a new architecture of the relationship between the American state and US tech capital. It is not the classical militarisation of Silicon Valley of the Eisenhower years, when ARPA contracts financed pure research. Nor is it the distant civic-mindedness of the early 2010s, when Google could afford ‘don’t be evil’ and still be worth hundreds of billions. It is something in between, and more opaque. Contracts large enough to condition corporate strategy, but dispersed enough never to look like dependency. Ethical principles were erased from corporate documents the moment they became expensive or began to compromise new revenue streams. Employees mobilise, but the decisions are out of their hands. Courts limit executive actions but uphold designations in the name of ‘national security’.
There is a financial layer to this architecture worth noting. AI big tech firms are burning cash at a scale unmatched in the technology sector, with operational losses in the billions and none of the leaders projecting profitability before the end of the decade. In October last year, the International Monetary Fund and the Bank of England issued public warnings about the risk of an ‘AI bubble’ comparable to the dotcom crash of 2000. In this context, signing a contract with the Pentagon ceases to be merely structural revenue and becomes something more valuable: an insurance policy. Lockheed in 1971, rescued by the Emergency Loan Guarantee Act because it was a strategic Department of Defence contractor, was the concrete invention of ‘too big to fail’ years before the term became financial jargon in the 2008 crisis.
When an AI company signs a ‘lawful operational use’ clause for an Impact Level 7 classified network, it is not merely selling a model; it is becoming critical national-security infrastructure. If the bubble bursts, and the sector’s history suggests it will, the eight firms that signed on 1 May will have legal and political grounds to claim a federal bailout that Anthropic, expelled from the club, may lack. Capture, viewed from this angle, is also a reverse insurance scheme. The Pentagon bought ideological alignment, and the companies bought an implicit guarantee of rescue written into the contract.
The definition of ‘lawful use’ has become an object of dispute between corporate lawyers and Pentagon counsel. ‘When you’re regulating by contract’, said Jessica Tillipman, associate dean for government procurement law studies at George Washington Law School, ‘it’s basically creating a huge amount of power in the agency that’s negotiated that contract and then becomes effectively the de facto policy of the administration’. Tools that three years ago were sold as research assistants today operate on classified networks, inside active wars, under clauses the public cannot read. Anyone who still wishes to speak of AI ethics from this point forward will have to start by reading contractual provisions.
About the author
James Görgen has been a Public Policy and Government Management Specialist since 2008. He holds a Master’s degree in Communication and Information from UFRGS. He is currently an advisor at the Ministry of Development, Industry, Trade, and Services of Brazil and a member of the Brazilian Internet Steering Committee.