Joshua Scheer
The war that Washington and Tel Aviv unleashed on Iran has already produced its first defining image: the shattered remains of a girls’ elementary school in the southern Iranian city of Minab.
On February 28, as U.S. and Israeli forces launched the opening wave of strikes across Iran, missiles slammed into the Shajareh Tayyebeh primary school during the school day. By the time the dust settled, as many as 170 people lay dead — most of them children.
At first, the Trump administration tried to distance itself from the carnage. President Donald Trump publicly suggested that Iran might somehow be responsible for the explosion, despite the growing mountain of evidence pointing in another direction. But investigations by journalists and analysts soon told a different story: video evidence, satellite imagery, and weapons experts all pointed to a U.S. Tomahawk cruise missile striking the compound near the school, a conclusion echoed by a preliminary Pentagon report that found the United States at fault.
Now a new report in the Washington Post reveals something even more disturbing. According to officials familiar with the investigation, the school may have appeared on a U.S. military targeting list generated in part by artificial intelligence systems designed to help identify and prioritize targets. In other words, somewhere inside a machine-assisted war planning process, a building filled with children became just another data point — a potential “target.”
One source even admitted there was “some confusion on why it was on the target list.” The “it” in question was the school that now lies in ruins. And according to exclusive reporting from the Post, Israel now claims it had nothing to do with the strike at all, suggesting that the information fed into the AI‑driven kill chain was never properly checked before the bombs fell. It’s a chilling glimpse into what happens when militaries outsource life‑and‑death decisions to systems they barely understand. There are many reasons armed forces are embracing AI, but as we’ve seen in Gaza, and again in the past few days, the collateral damage is already far too high. At some point, the world has to confront the truth that no algorithm will make war humane. The only real solution is the one leaders keep refusing to face: war itself has to end.
According to that Pentagon preliminary review, the United States had created the target coordinates for the strike using outdated intelligence supplied by the Defense Intelligence Agency, people briefed on the investigation said. Officials stressed that the findings are still early, but the central question remains unavoidable: why was such stale information never double‑checked before being fed into an AI‑driven kill chain?
The implication is chilling. The first major massacre of the Iran war may not simply be the result of bad intelligence or battlefield chaos. It may also be a glimpse of the future of warfare — where algorithms help decide who lives and who dies. And we just allow it to happen.
And in Minab, that future arrived in the middle of a school day.
AI Is Already Embedded in Military Targeting
Modern militaries collect immense amounts of surveillance data: satellite imagery, drone video, intercepted communications, radar, and signals intelligence. Human analysts cannot process it all fast enough during a war.
That’s where AI systems come in.
Programs developed by the U.S. Department of Defense and allied militaries use machine learning to:
- Scan satellite imagery for suspicious patterns
- Identify vehicles, buildings, and missile launchers
- Detect patterns of movement
- Rank potential targets by “priority”
One early example was Project Maven, launched by the Pentagon in 2017. Maven used computer vision algorithms to analyze drone footage and automatically identify objects like trucks, weapons systems, or human activity.
In theory, AI only assists analysts. A human still signs off on a strike.
But critics argue the reality is more complicated. When the machine produces thousands of possible targets and labels them with probabilities, human operators often defer to the algorithm’s judgment.
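To make that dynamic concrete, here is a deliberately toy sketch in Python. It is not any real military system; every label, score, and number is invented. It shows only how a ranking model ends up deciding what a human reviewer ever sees:

```python
import random

random.seed(0)

# Toy stand-in for a computer-vision model: each detected object gets a
# machine-generated "threat" score. In a real system this would come from
# a classifier trained on surveillance imagery; here it is just random.
detections = [
    {"id": i,
     "label": random.choice(["truck", "compound", "launcher", "unknown"]),
     "threat_score": random.random()}
    for i in range(5000)
]

# The system ranks thousands of candidates, but the analyst only has time
# to look at a thin slice off the top. Everything below the cut is never
# seen by a human at all: the algorithm's ordering decides what gets vetted.
ranked = sorted(detections, key=lambda d: d["threat_score"], reverse=True)
HUMAN_REVIEW_CAPACITY = 50
reviewed = ranked[:HUMAN_REVIEW_CAPACITY]

print(f"{len(detections)} machine-flagged candidates, "
      f"{len(reviewed)} seen by a human "
      f"({100 * len(reviewed) / len(detections):.0f}%)")
```

The numbers are made up, but the structure is the point: whatever the model buries below the cutoff is, in practice, never questioned.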
The Rise of “Algorithmic Target Banks”
Modern wars increasingly rely on what military planners call target banks.
These are massive databases of locations considered legitimate military objectives. AI systems help populate them by scanning intelligence sources and suggesting targets.
The process typically works like this:
- AI scans surveillance data
- It flags potential targets (vehicles, compounds, infrastructure)
- The system ranks them by threat level
- Human analysts review the list
- Targets move into the “bank” for potential strikes
The danger is scale.
A human analyst might carefully vet dozens of targets per day.
An AI-assisted system can generate thousands.
That speed can push militaries toward high-volume automated targeting, especially in fast-moving conflicts.
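A back-of-the-envelope simulation, again using entirely invented rates, shows how quickly that mismatch compounds:

```python
# Toy throughput model of an AI-populated target bank. Both rates are
# invented for illustration; the point is the arithmetic, not the values.
AI_CANDIDATES_PER_DAY = 1000   # what an AI-assisted system might flag
HUMAN_VETS_PER_DAY = 40        # what careful human review can cover

backlog = 0
for day in range(1, 8):
    backlog += AI_CANDIDATES_PER_DAY           # new machine-suggested targets
    vetted = min(backlog, HUMAN_VETS_PER_DAY)  # human review bottleneck
    backlog -= vetted
    print(f"day {day}: vetted {vetted}, unvetted backlog {backlog}")

# After one week, 6,720 suggestions are sitting unvetted. The institutional
# pressure is then to trust the machine's ranking or lower the bar for
# review: the drift toward high-volume automated targeting described above.
```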
We’ve already seen how this plays out in Gaza. Reporting from +972 Magazine, Local Call, and The Guardian—based on interviews with current and former Israeli intelligence officials—revealed the existence of an AI‑driven intelligence system known as Habsora, or “the Gospel,” which analyzes vast streams of data and recommends targets inside Gaza. Israel’s use of this platform has radically expanded its target‑banking capacity, turning what was once a slow, human‑driven process into what sources describe as a high‑speed “factory” for generating strike lists.

According to The Guardian, the IDF historically produced roughly 50 targets per year in Gaza; with systems like the Gospel, it can now generate around 100 targets per day, contributing to the 12,000‑plus targets the military said it had identified by early November. This surge is powered by automated intelligence extraction, algorithmic strike recommendations, and a database of 30,000–40,000 individuals flagged as suspected militants—alongside a shift toward bombing the private homes of lower‑rank Hamas members.

Multiple intelligence officials told The Guardian the system has effectively become a “mass assassination factory,” where volume is prioritized over verification, raising profound concerns about civilian harm and the ethics of AI‑accelerated warfare.
Autonomous Weapons: The Next Step
AI-assisted targeting is only one part of the picture. The next stage is autonomous weapons—machines that can select and attack targets without direct human control.
Examples include:
Loitering “Kamikaze” Drones
These drones circle an area searching for targets using onboard AI vision systems.
They can:
- Detect vehicles
- Recognize radar signatures
- Dive into the target automatically
They have already been used in conflicts involving Russia and Ukraine.
As The New York Times reported: “Most drones require a human pilot. But some new Ukrainian drones, once locked on a target, can use A.I. to chase and strike it — with no further human involvement.”
Kamikaze drones have become the frontline symbol of a new, destabilizing era in warfare, where cheap hardware fused with increasingly autonomous software is reshaping the kill chain. In Ukraine, these small quadcopters—once fully dependent on human pilots—are now being upgraded with “last‑mile” A.I. that allows them to lock onto a target and complete the strike even after losing contact with their operator. The Times’s reporting describes Ukrainian systems like the Bumblebee and NORDA Dynamics’ “Underdog” module, which let a pilot guide a drone toward a target and then hand off the final attack to onboard computer vision, enabling the drone to chase moving vehicles or strike fortified positions despite jamming or signal loss.

What emerges is a battlefield where semiautonomous weapons are no longer experimental but routine, a live‑fire laboratory in which militaries, private tech firms, and venture capitalists are accelerating the transition from human‑directed drones to machines capable of selecting and pursuing targets on their own. For all the talk of “humans in the loop,” the line between piloted weapon and autonomous killer is eroding fast—and with it, the already fragile norms meant to protect civilians from the next generation of mechanized warfare.
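To see why that last-mile handoff matters, consider a stripped-down control loop. This is purely hypothetical logic, not the code of any real drone; it shows how little has to change for the human to drop out of the decision:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DroneState:
    target_locked: bool = False
    link_ok: bool = True

def control_step(state: DroneState, operator_cmd: Optional[str]) -> str:
    """One tick of a toy 'last-mile' autonomy loop.

    While the link is up, the operator's command wins. Once the link
    drops, a locked target is pursued by the onboard tracker alone,
    and no human can call the strike off.
    """
    if state.link_ok and operator_cmd is not None:
        return operator_cmd                    # human in the loop
    if state.target_locked:
        return "pursue_with_onboard_vision"    # human out of the loop
    return "loiter"

# Operator locks a target, then jamming severs the link mid-attack:
state = DroneState(target_locked=True, link_ok=True)
print(control_step(state, "approach_target"))  # human still deciding
state.link_ok = False
print(control_step(state, None))               # the machine finishes alone
```

Everything after the link drops is, by design, beyond recall.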
AI Swarms
Military researchers are experimenting with swarms of hundreds or thousands of drones.
Instead of one large aircraft:
- Many small drones coordinate using AI
- They overwhelm defenses
- They distribute targeting tasks among themselves
The U.S. Department of Defense and the Chinese military are both investing heavily in swarm systems.
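The coordination itself is not exotic. Even a naive sketch, with invented coordinates and a greedy assignment rule far cruder than any real swarm protocol, shows how targeting tasks get divided among machines with no human assigning any individual strike:

```python
import math

# Toy swarm task allocation: each drone greedily claims the nearest
# unclaimed target. Real swarms use far more sophisticated distributed
# algorithms; the point is that no human allocates any single strike.
drones  = {"d1": (0, 0), "d2": (5, 5), "d3": (9, 1)}
targets = {"t1": (1, 1), "t2": (6, 4), "t3": (8, 0), "t4": (3, 7)}

assignments = {}
unclaimed = dict(targets)
for drone, pos in drones.items():
    if not unclaimed:
        break
    nearest = min(unclaimed, key=lambda t: math.dist(pos, unclaimed[t]))
    assignments[drone] = nearest
    del unclaimed[nearest]

print(assignments)  # {'d1': 't1', 'd2': 't2', 'd3': 't3'}
```

Scale the dictionaries up to hundreds of drones and the same loop runs unchanged, which is exactly what makes swarms attractive to planners and alarming to everyone else.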
There are countless reasons militaries are racing toward AI‑driven warfare, but the lesson from Gaza—and from the autonomous drone strikes unfolding in Ukraine—is brutally simple: the collateral damage is already far too high. When machines accelerate the kill chain, civilians pay the price. And while generals and tech billionaires promise “precision,” the reality on the ground shows that automation only widens the battlefield and lowers the threshold for violence. At some point, we have to stop pretending that better algorithms will make war humane. The only real safeguard against the next wave of AI‑powered destruction is the one humanity keeps refusing to take seriously: ending the wars themselves. Now we have the latest example: as many as 170 people, most of them children, killed because an algorithm decided their school was a military site.
