The price of ethics

This has been a big week for the discussion of ethics, AI, and the military.

Anthropic refused a request from the US Pentagon to lift protections on Claude so that it could be used for “all legal purposes”. The Pentagon has been pushing for many of the existing safeguards to be removed.

Claude was the only model in use by the US military, in a $200m deal through its link to the data company Palantir Technologies, and was allegedly used to support the recent Venezuela raid.

Anthropic CEO Dario Amodei said the company “cannot in good conscience” agree to removing the safeguards.

Anthropic had wanted assurance that its models would not be used for fully autonomous weapons or mass surveillance of Americans, while the government pushed for the military to use the models across all "lawful use cases."

Things got messy when the Defense Secretary designated Anthropic a “Supply-Chain Risk to National Security”, a label typically reserved for foreign adversaries. On Feb 27, President Trump directed every federal agency in the U.S. to “immediately cease” all use of Anthropic’s technology.

In response, OpenAI quickly struck a deal to deploy its models on the government’s classified network.

Ironically, the deal includes the safeguards that the government would not agree to with Anthropic.

Sam Altman reassured the team at OpenAI, saying “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems" and that "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."

This move follows months of criticism of Anthropic from government officials for allegedly being overly concerned with AI safety.

Meanwhile xAI, the maker of Grok, has also been negotiating with the military.

Whilst officials do not consider xAI's model to be as cutting-edge or reliable as Anthropic's Claude, xAI agreed to a standard that would allow the military to employ its AI for any purpose it deems "lawful."

SOURCE
Articles
Use of Claude during the Venezuela raid: https://www.reuters.com/world/americas/us-used-anthropics-claude-during-the-venezuela-raid-wsj-reports-2026-02-13/
Use of xAI in the US military: https://www.msn.com/en-us/news/technology/the-us-military-will-reportedly-use-elon-musks-grok-ai-in-its-classified-systems/ar-AA1WXMcn
OpenAI strikes a deal: https://www.cnbc.com/2026/02/27/openai-strikes-deal-with-pentagon-hours-after-rival-anthropic-was-blacklisted-by-trump.html

BESCI AI OPINION

This is the Anthropic Blog post that lays out their position: https://www.anthropic.com/news/statement-department-of-war

The sheer movement of wealth to a small number of Silicon Valley tech bros, and those who invest in them, is huge, Matt. It is all about money.

Anthropic, who broke away from OpenAI because they saw their 'ethical beliefs' being compromised, seemed to take the moral high ground. But if you read Dario's essay, he is fearful of what AI could be used for, yet smug that he is 'protected' as one to whom the value will come. It sits uneasily.

He doesn't blink at the jobs Claude will displace, or at the governments who will have to fund food, housing and medical bills with no recourse, because the income their country used to have is now being paid to Silicon Valley for token usage.

Ironically, OpenAI stole the deal when it fell apart, on the same terms that Anthropic had wanted. The one to watch is xAI, who are 'flexible' in their usage terms, just not smart enough to be relied on, yet.

Can we do anything about it? Not really. GenAI is open source: you can build your own model, if you have the time, resources and skills.

If you have money to spend, you can choose to spend it with those who you believe have better ethics, squeezing out those that don't.

Essay: https://www.linkedin.com/feed/update/urn:li:activity:7424825784386781184
