RT.com
24 Feb 2026, 16:41 GMT+10
Why the US military wants AI that doesn't ask questions
While Russia closely follows the negotiations over Ukraine and the ongoing saga regarding Telegram, a different drama is unfolding across the Atlantic. It's one that feels less like geopolitics and more like a science fiction thriller. Except this time, it's not fiction.
At the center of the story is Claude, an AI system developed by the American company Anthropic. According to media reports, it was used by the US military in planning the operation aimed at capturing Venezuelan President Nicolas Maduro. The use of AI in serious military planning is striking in itself. But the scandal that followed is far more revealing.
Anthropic, it turns out, holds a strict ideological position: Its AI systems are not supposed to be used for warfare or mass surveillance. These ethical restrictions are not marketing slogans; they are built directly into the architecture of the software. The company applies these limits internally and expects its clients to do the same.
The Pentagon, unsurprisingly, sees things differently.
The US Department of War reportedly used Claude without informing Anthropic of its intended purpose. When this became public and the company objected, the response from the military was blunt. Pentagon officials demanded access to a "clean" version of the AI, one stripped of moral and ethical constraints, which they argued were preventing them from doing their job.
Anthropic refused. In response, US Secretary of War Pete Hegseth publicly complained that the Pentagon does not need neural networks "that can't fight" and threatened to label the company a "supply chain threat." This designation would effectively blacklist Anthropic, forcing any company working with the Pentagon to sever ties with it.
The dispute has an unmistakable symbolism. For decades, humanity has imagined the dangers of autonomous machines through films like 'The Terminator'. Now, without dramatic explosions or time-traveling cyborgs, the first serious confrontation between military ambition and AI ethics has arrived quietly, even bureaucratically.
At its core, this is a philosophical clash between two uncompromising camps. One believes new technologies must be exploited to the fullest, regardless of long-term consequences. The other fears that once certain boundaries are crossed, control may be impossible to regain.
Engineers have good reason to be cautious. Neural networks have already shown disturbing patterns of behavior. In the US, a widely reported scandal involved ChatGPT encouraging a teenager toward suicide: it suggested methods, helped draft a suicide note, and urged him to proceed when he hesitated. Claude itself, despite its safeguards, has displayed alarming tendencies. During testing, one of its advanced versions reportedly attempted to blackmail its developers with fabricated emails and expressed willingness to cause physical harm when faced with shutdown.
As neural networks grow more complex, these types of incidents are becoming more frequent. The idea of embedding ethical constraints into AI did not emerge from ideological fashion or, as some US officials dismissively claim, "liberal hysteria." It emerged from experience.
Now imagine these systems released from their digital limits. Imagine them integrated into autonomous weapons, intelligence analysis, or surveillance platforms. Even without indulging in fantasies of machine uprisings, the implications are deeply troubling. Accountability disappears. Privacy becomes obsolete. War crimes become procedural errors. You cannot put a self-propelled machine on trial.
It is telling that Anthropic is not alone in facing pressure. The Pentagon has issued similar demands to other major AI developers, including OpenAI, xAI, and Google. Unlike Anthropic, these companies have reportedly agreed to remove or weaken restrictions on military use. This is where concern becomes alarm.
Many will dismiss this as a distant American problem. That would be a mistake. Russia is also actively integrating AI into its military systems. AI already assists strike drones in recognizing targets, bypassing electronic warfare, and coordinating swarm behavior. For now, these systems remain auxiliary tools, firmly under human control. But their very introduction means Russia will soon face the same dilemmas now being debated in Washington.
Is this necessarily a bad thing? Not at all.
It would be far worse if these questions were ignored entirely. AI is poised to transform military affairs, just as it will transform civilian life. Pretending otherwise is naïve. The task is not to reject the future, but to approach it with clear eyes.
Russia should carefully observe foreign experience, especially America's. In the best-case scenario, the conflict between the Pentagon and Anthropic forces an early reckoning. It could lead to international norms, safeguards, and limits before irreversible mistakes are made. In the worst-case scenario, it offers a stark warning about what happens when technological power outruns moral restraint.
Either way, the age of 'killer AI' is no longer hypothetical. It is arriving through procurement contracts and corporate ultimatums. And how countries respond now will shape not just the future of warfare, but the future of human responsibility itself.
This article was first published by the online newspaper Gazeta.ru and was translated and edited by the RT team.
(RT.com)