Defence AI: Anthropic awarded DoD agreement for frontier AI
In another development for defence AI, Anthropic has been awarded a two-year prototype other transaction agreement by the US Department of Defense (DoD), through its Chief Digital and Artificial Intelligence Office (CDAO). This agreement, with a value ceiling of $200 million (£148 million/€171 million), was announced on 14 July 2025, and focuses on developing frontier artificial intelligence (AI) capabilities to support US national security.
A prototype other transaction agreement is a flexible, legally binding instrument used by the DoD to fund research and development for new technologies and prototypes, without being subject to all the regulations of traditional contracts. This approach aims to attract innovative companies, including those not typically defence contractors, by streamlining processes and allowing more agile development.
Under the agreement, Anthropic will work directly with the DoD to identify areas where frontier artificial intelligence can provide significant impact for defence AI. This includes developing working prototypes, which will be fine-tuned using DoD data. The collaboration will also involve defence experts to anticipate and mitigate potential adversarial uses of AI, leveraging Anthropic’s risk forecasting capabilities. Furthermore, there will be an exchange of technical insights, performance data, and operational feedback to accelerate the responsible adoption of artificial intelligence across the defence enterprise, the press release explains.
Thiyagu Ramasamy, Anthropic’s Head of Public Sector, stated that this award signifies a new phase in the company’s commitment to US national security, building on earlier federal deployments. He noted the company’s intent to deepen collaboration across the Department to address mission challenges through its technical expertise and products such as its Claude Gov models and accredited Claude for Enterprise offerings, as well as its leadership in responsible AI.
Anthropic highlights its commitment to responsible AI deployment, including safety testing, collaborative governance development, and strict usage policies, which it states make its Claude models suitable for sensitive national security applications. This perspective aligns with the company’s belief that powerful technologies carry responsibility, particularly in government contexts where decisions have significant implications. The company asserts that democracies must collaborate to ensure AI development strengthens democratic values globally, maintaining technological leadership against authoritarian misuse, a sentiment echoed by many others in the defence AI field.

This shows the functionality of Anthropic’s Claude AI. Enterprise versions – which are different to those shown – are used in the defence and security sector. Credit: Anthropic
This agreement with the CDAO builds on Anthropic’s expanding partnerships within the public sector. Last week, it was announced that Lawrence Livermore National Laboratory would provide Anthropic’s advanced AI capabilities to over 10,000 scientists and researchers to bolster research across nuclear deterrence, energy security, and materials science, the press release states.
Anthropic has also integrated its Claude models into mission workflows on classified networks with partners such as Palantir, enabling defence and intelligence organisations to process and analyse large amounts of complex data. Claude Gov models, specifically designed for national security customers, are already deployed by agencies within the national security community, running on Amazon Web Services (AWS) infrastructure, the press release adds.
Defence AI partnerships, a growing trend
Anthropic’s latest agreement is part of a wider trend within the defence AI field, whereby major artificial intelligence companies are partnering with defence companies, or defence tech companies are partnering with those in the commercial field. In mid-June 2025, for example, Anduril and OpenAI announced a collaboration focused on strengthening counter-drone capabilities and developing AI tools for security missions. Furthermore, Anduril and Meta announced a partnership in late May 2025 to develop and field Extended Reality (XR) products for military use, aiming to enhance warfighter perception and control of autonomous platforms by leveraging Meta’s AI and AR investments. Separately, OpenAI itself received a $200 million contract from the DoD for generative AI prototypes. Also in mid-June 2025, Applied Intuition, a developer of autonomous vehicle software, raised $600 million in funding and is increasingly providing its AI services to the defence industry, collaborating with the US Army on autonomous technology, for instance.
Earlier this year, in late April 2025, Helsing established a maritime partnership with Blue Ocean Marine Tech Systems, Ocean Infinity, and QinetiQ to accelerate the deployment of AI and autonomy in naval operations, specifically addressing subsurface threats and the digitisation of oceans.
Company profile: Anthropic

Anthropic founders Dario and Daniela Amodei both worked at OpenAI before establishing Anthropic in 2021. Credit: Anthropic.
Anthropic is a private Public Benefit Corporation (PBC) headquartered in San Francisco, CA, founded in 2021 by a group of former OpenAI researchers, including Dario Amodei and Daniela Amodei. The company’s primary focus is on artificial intelligence research and development, with a stated core mission to advance AI safety and alignment. Anthropic aims to build AI systems, notably its Claude family of large language models, that are reliable, interpretable, and steerable, particularly for sensitive applications. It is known for pioneering “Constitutional AI,” a training method that guides AI behaviour using ethical principles, thereby promoting the responsible deployment of AI technologies. Despite producing products that compete with Google’s, the company has received $2 billion in funding from Google and $4 billion from Amazon.
Tech explainer: Frontier AI
Frontier AI is defined by Google Gemini as the most advanced and capable artificial intelligence systems currently in existence or under development, pushing the boundaries of what AI can achieve. Frontier AI will generally:
- Outperform existing models across a range of tasks and even interact independently with other pieces of software using application programming interfaces (APIs).
- Be general purpose rather than narrow. For instance, a set of algorithms might be trained specifically for terrain navigation, a narrow application, whereas a general purpose model might be able to accomplish a range of tasks including terrain navigation.
- Produce new behaviours that were not programmed or expected by the developers, such as advanced reasoning or deception, according to some research.
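The first characteristic above, a model calling other software through APIs on its own initiative, can be sketched in miniature. The loop below is purely illustrative: the `MockModel` class, the `get_weather` tool, and the message format are hypothetical stand-ins invented for this sketch, not Anthropic’s actual API or any real deployment.

```python
# Illustrative sketch of an agentic tool-use loop: a model requests a call
# to an external function (an API), the harness executes it and feeds the
# result back, and the model then produces a final answer. MockModel is a
# hypothetical stand-in for a real frontier model.

def get_weather(location: str) -> str:
    """A toy external API the model can invoke."""
    return f"Clear skies over {location}"

TOOLS = {"get_weather": get_weather}

class MockModel:
    """Stands in for a real model; requests one tool call, then answers."""
    def __init__(self):
        self.called = False

    def respond(self, messages):
        if not self.called:
            self.called = True
            # Model output asking the harness to run a tool on its behalf
            return {"tool": "get_weather", "args": {"location": "London"}}
        # Final answer once the tool result is in the conversation
        return {"answer": messages[-1]["content"]}

def run_agent(model, user_prompt: str) -> str:
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        reply = model.respond(messages)
        if "answer" in reply:
            return reply["answer"]
        # Execute the requested tool and append the result for the model
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})

print(run_agent(MockModel(), "What's the weather in London?"))
# → Clear skies over London
```

The point of the sketch is the control flow, not the model: the software surrounding the model decides nothing beyond executing the calls the model requests, which is what distinguishes this agentic pattern from a fixed pipeline.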
In a nutshell, frontier AI pushes the boundaries of what is possible with AI and leads the development of new models and algorithms. It is, however, resource intensive and raises ethical concerns about what its use might make possible. This of course carries a lot of potential for defence AI, which has made rapid progress in capability development over the past five years.
Calibre comment
The US armed forces are increasingly concerned that China has gained, or will gain, an edge in defence AI. Although the US has a number of projects underway and has tested some AI-enabled targeting systems during Project Convergence, it is not yet fielding AI at scale. Some systems, like the Tactical Intelligence Targeting Access Node (TITAN), which has been developed through a partnership including Palantir, Anduril, and Oshkosh, are entering service and use AI. But the US services have much greater ambitions for AI in terms of its scale of use and the range of tasks it is applied to. Contracts like this one with Anthropic will likely form a small stepping stone within that much larger overall picture.
By Sam Cranny-Evans, published on July 15, 2025. The lead image is a representation of frontier AI created by Google Gemini. Credit: Google Gemini.
