It’s a standoff that pits tech ethics against state interests. The artificial intelligence company Anthropic is reportedly in a tug-of-war with the White House. The issue at hand: its refusal to allow law enforcement agencies such as the FBI to use its AI model, Claude, for surveillance purposes.
“Prohibition of Domestic Surveillance”
This controversy, first reported yesterday by the news site Semafor, stems from Anthropic’s stringent usage policies, which explicitly ban the use of its AI models for “domestic surveillance” activities.
As a result, the company has recently denied requests from contractors working for U.S. federal agencies, including the FBI and the Secret Service. These agencies wanted to use Claude for tasks involving the surveillance of American citizens but were met with an unequivocal refusal from Anthropic.
A “Moral” Stance That Irks the Authorities
According to sources within the Trump administration quoted by Semafor, this denial has led to a “growing hostility” towards the company. The White House views Anthropic’s decision as a “moral judgment” on law enforcement work and believes the company applies its rules selectively, based on its political views.
The conflict is intensified because, in some of the U.S. government’s highly secure cloud environments, Anthropic’s Claude AI is the only one available, making its non-cooperation problematic for the agencies involved.
Anthropic’s Dilemma
This situation places Anthropic in a precarious position. The company, founded by former OpenAI executives, has always prided itself on its commitment to “safe and ethical” AI; that commitment is its hallmark.
While already collaborating with the U.S. government on other fronts like defense, the company has clearly drawn a red line when it comes to mass surveillance. It’s a challenging balance to maintain between its founding principles and the pressure from a client as powerful as the U.S. government.
What’s the Verdict?
All this underscores the newfound power of AI companies. Anthropic is no longer just a technology provider; it positions itself as an ethical player capable of saying “no” to the state, even on matters of national security. This is a bold stance, but also a risky one, especially under a Trump administration that expects “loyalty” from its tech champions.
This conflict highlights that the question is no longer just about what AI can do, but what it should be allowed to do. And on that front, the debate is just beginning. What’s your take on this issue?
