Can governments stop AI models?

Well, yes, as the US government has demonstrated with Anthropic's latest Mythos release.

Not because it is an AI product, but because of what it is capable of doing, which the government believes poses a national security risk.

Mythos can autonomously find and exploit thousands of high-severity software vulnerabilities across operating systems, browsers, and critical infrastructure software.

That capability can be used for good (hardening infrastructure) or for harm (exploiting those same flaws).

Despite its currently fraught relationship with the US military, having been removed from the approved AI supplier list, Anthropic showcased Mythos and provided it to the military for trial.

US law has provisions that allow the government to restrict who can receive technologies that materially affect national security, even if they are privately developed.

The government also has agreements with Anthropic that allow it to evaluate models pre-deployment for potential security risks.

Anthropic's request to expand its early user base from 50 to 120 customers was rejected, in part over the potential loss of compute capacity available to the US government.

Other governments, like India, are keen to get their hands on Mythos too.

Will governments broadly restrict AI availability? Unlikely: most frontier firms offer multiple models with different capabilities.

Those deemed critical or dangerous to national security may continue to be restricted or adjusted.

Meanwhile, it is worth remembering that many AI models are open source. If you have the technical understanding, you can set up your own AI server and language model; some can run on a single, standalone PC.
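As a rough illustration of why some open-source models fit on a single PC, here is a back-of-envelope memory estimate. The figures (a 7-billion-parameter model quantized to 4 bits per weight) are assumptions for illustration, not details from the article:

```python
def model_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate RAM/VRAM needed just to hold the model weights."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A hypothetical 7B-parameter model quantized to 4 bits needs roughly
# 3.5 GB for its weights, well within ordinary desktop hardware.
print(round(model_memory_gb(7, 4), 1))  # → 3.5
```

Actual requirements are higher once you add activation memory and context caches, but the point stands: quantized open-weight models are within reach of consumer machines.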

SOURCES

https://www.bloomberg.com/news/articles/2026-04-30/white-house-ai-memo-hits-issues-driving-anthropic-pentagon-feud

https://www.msn.com/en-us/news/other/white-house-blocks-anthropics-mythos-ai-expansion-over-risks/gm-GM005D3FD7

https://www.nytimes.com/2026/05/04/technology/trump-ai-models.html (paywall)

BESCI AI OPINION

It is worth remembering that governments are, and will remain, heavy AI users, and it is in Anthropic's commercial interest to repair its relationship with the US government.

Meanwhile, Anthropic has released Claude Security as a beta for Claude Enterprise customers: https://claude.com/product/claude-security
