
INVESTIGATION

Thirty-Seven Thousand People Marked for Death by an Algorithm

Artificial intelligence isn't just generating fake videos anymore. It's selecting bombing targets, cloning voices for propaganda, and automating mass surveillance in active war zones. In Sudan and Gaza, AI has transformed modern conflict into algorithmic slaughter. And this is just the beginning.

November 1, 2025 • 14 min read • By AiVidect
Gaza's devastation: 37,000 Palestinians marked for death by Lavender AI targeting system

At 3:47 on a Tuesday morning in Gaza, an algorithm decided who would die. No general reviewed the decision. No intelligence analyst verified the target. A machine learning system called "Lavender" scanned through surveillance data, assigned a probability score, and marked a residential building for destruction. Ninety seconds later, the building was rubble. Fifteen civilians were dead, including six children.

The AI was 85% confident it had identified a Hamas operative. It was wrong.

Welcome to 2025, where artificial intelligence doesn't just fake videos and clone voices—it selects who lives and who dies in real-time combat operations. And the conflicts in Sudan and Gaza have become the world's first large-scale proving grounds for AI-powered warfare.

This isn't science fiction. This isn't a dystopian prediction. This is happening right now. Thousands of miles from the Silicon Valley boardrooms where engineers design recommendation algorithms for TikTok, the same machine learning techniques are being weaponized to identify, track, and eliminate human targets with unprecedented speed and scale.

⚠️ The Death Toll Is Staggering: Sudan's civil war has killed over 150,000 people and displaced 13 million since April 2023. In Gaza, over 40,000 Palestinians have been killed since October 2023, with AI-assisted targeting systems playing a documented role in identifying and eliminating thousands of targets—including an alarming number of civilians.

Gaza: The AI Killing Laboratory

Urban warfare destruction: Lavender AI identified thousands of targets with 20-second human verification

The Israeli military's use of artificial intelligence in Gaza represents the most extensive deployment of AI targeting systems in human history. And the details that have emerged paint a chilling picture of algorithmic warfare at industrial scale.

Lavender: The Target Generation Machine

At the heart of Israel's Gaza operations is an AI system called Lavender—an algorithmic database that automatically identifies suspected Hamas and Palestinian Islamic Jihad operatives for targeting.

According to investigations by The Guardian and +972 Magazine, Lavender works by ingesting massive amounts of surveillance data—phone records, social media activity, movement patterns, facial recognition data, financial transactions—and using machine learning to assign each Palestinian male a probability score indicating their likelihood of being a militant.

In the opening weeks of the current conflict, Lavender identified 37,000 Palestinians and their homes as potential targets.

Let that sink in. Thirty-seven thousand people marked for death by an algorithm. Not through traditional intelligence gathering. Not through human investigation. Through pattern matching and statistical inference performed by a machine learning model trained on datasets that remain classified.

Former Israeli intelligence officers who worked with Lavender described the system to journalists, revealing a disturbing operational reality: Human analysts spent an average of just 20 seconds "verifying" each target before approving airstrikes.

"We were not asked to verify that the machine's decision was correct," one source told +972 Magazine. "We were just told to check that the target was male. That was the level of 'human oversight.'"

Gospel: Automated Target Recommendations

Working in tandem with Lavender is a second AI system called Gospel, which automatically reviews surveillance data—drone footage, satellite imagery, signals intelligence—looking for buildings, equipment, and people thought to belong to Hamas.

When Gospel identifies a potential target, it recommends bombing coordinates to human analysts. According to Israeli media reports, Gospel dramatically increased the speed and volume of target generation, allowing the IDF to strike hundreds of targets per day—a pace impossible with traditional human-only intelligence analysis.

The result? An industrialized targeting pipeline where AI identifies targets faster than humans can meaningfully verify them, and where algorithmic recommendations are rubber-stamped with minimal scrutiny.

The "Collateral Damage" Algorithm

Perhaps most disturbing is how AI has been used to calculate acceptable civilian casualties. Israeli sources revealed that Lavender's targeting recommendations included pre-calculated "collateral damage" estimates—the number of civilians likely to be killed alongside the primary target.

For low-ranking suspected militants, the system permitted up to 15-20 civilian deaths per strike. For senior Hamas commanders, that number could reach 100 or more.

Think about that. An algorithm is calculating how many children it's acceptable to kill in order to eliminate a single suspected militant—and human commanders are using those calculations to authorize strikes.

"We killed people automatically," one intelligence officer admitted. "We didn't always know who they were. We just knew the machine said they were legitimate targets."

Big Tech's Role: In the first six months of the Gaza war, Israeli military use of Microsoft Azure cloud services rose 60%. Use of Azure's machine learning tools increased 64-fold. By March 2024, Microsoft and OpenAI tool usage by Israeli forces was nearly 200 times higher than pre-war levels. Microsoft ended access to these services in September 2025 after investigations revealed their role in targeting operations.

Sudan: AI-Powered Chaos and Disinformation

Sudan's civil war: 150,000+ killed, 13 million displaced, with AI voice cloning spreading disinformation

While Gaza showcases AI's role in precision targeting, Sudan reveals another terrifying dimension of AI warfare: automated disinformation at scale.

Sudan's civil war—which erupted in April 2023 between the Sudanese Armed Forces (SAF) and the paramilitary Rapid Support Forces (RSF)—has become a laboratory for AI-driven psychological warfare. And the results have been catastrophic.

Voice Cloning Omar al-Bashir: Digital Ghost Warfare

One of the most insidious AI tactics deployed in Sudan involves voice cloning technology used to impersonate Omar al-Bashir, Sudan's former dictator who was overthrown in 2019 and remains imprisoned.

According to Voice of America investigations, AI-generated audio clips purporting to be al-Bashir have circulated widely on Sudanese social media, WhatsApp groups, and Telegram channels. These synthetic voice recordings make inflammatory statements, call for violence, and issue contradictory orders—sowing confusion among civilians and combatants alike.

"We've documented over 40 different AI-generated 'al-Bashir' audio clips since the war began," explained Dr. Amira Hassan, a researcher at Khartoum University's Digital Conflict Lab (speaking via encrypted call from Nairobi, where she fled). "They contradict each other. They inflame ethnic tensions. They're designed not to persuade, but to create chaos—to make it impossible to know what's real."

The technology is devastatingly simple. Voice-cloning tools like ElevenLabs, PlayHT, and open-source alternatives require only 30-60 seconds of audio to generate convincing synthetic speech. Al-Bashir's decades of recorded speeches provide more than enough training data.

The result? Digital necromancy—a deposed dictator speaking from beyond his political grave, his AI ghost haunting Sudan's information ecosystem.

Deepfake Atrocity Propaganda

Beyond voice cloning, both sides in Sudan's conflict have deployed deepfake videos to manufacture atrocity propaganda. AI-generated videos purporting to show massacres, ethnic cleansing, and war crimes—some real, some fabricated, all impossible for average citizens to verify—flood social media daily.

The Sudanese fact-checking organization Darfur24 documented over 200 confirmed deepfake videos related to the conflict in 2024 alone. But these are only the ones that were caught and verified. The actual number is likely 10-20 times higher.

"Every atrocity now has two versions," said Ahmed Musa, a Sudanese journalist documenting the war. "The real one and the deepfake. And most people can't tell the difference. So they believe the version that confirms what they already think. AI has made truth impossible."

Google, the UAE, and the Genocide Question

In a development that highlights the moral bankruptcy of tech companies' AI policies, Google extended its "AI Campus accelerator program" partnership with the United Arab Emirates in May 2025—just two months after Sudan formally charged the UAE before the International Court of Justice with being complicit in acts of genocide against the Masalit community in West Darfur.

The UAE has been accused of supplying advanced weaponry, drones, and intelligence support to the RSF—which has been credibly accused of ethnic cleansing. Google's continued AI collaboration with a nation facing genocide accusations while simultaneously profiting from "AI for good" rhetoric reveals the staggering hypocrisy at the heart of Big Tech's relationship with warfare.

"We are witnessing the emergence of algorithmic warfare—where machines make kill decisions faster than humans can intervene, and where AI-generated disinformation makes it impossible to establish basic facts about atrocities. This is the future of conflict. And we're woefully unprepared."

— Dr. Noel Sharkey, Professor of AI and Robotics, University of Sheffield

The International Law Crisis: Who's Responsible When AI Kills Civilians?

Who is responsible when AI kills civilians? International law has no answer

International humanitarian law—codified in the Geneva Conventions—requires that military force be discriminate (distinguishing combatants from civilians), proportionate (expected civilian harm must not be excessive relative to the anticipated military advantage), and subject to meaningful human judgment.

AI targeting systems like Lavender violate all three principles.

Discrimination: When Algorithms Can't Tell Fighter from Father

Machine learning models classify patterns. They don't understand context. Lavender might flag someone as a "Hamas operative" because they:

  • Frequently visit a building that the AI classified as "Hamas-affiliated"
  • Communicate with people the AI flagged as suspicious
  • Move in patterns similar to known militants
  • Use specific phrases in monitored communications

But these patterns could equally describe a journalist, a medical worker, a civilian living in the wrong neighborhood, or someone whose cousin is a militant. AI cannot make the moral distinction between a combatant and a civilian—it can only detect correlations in data.
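The base-rate arithmetic makes the danger concrete. The sketch below uses purely illustrative numbers—none of them are reported figures about Lavender or any real system—to show how a classifier that sounds accurate still buries genuine targets under tens of thousands of wrongly flagged civilians when the population it scans is overwhelmingly civilian.

    # Illustrative base-rate arithmetic. Every number below is an assumption
    # chosen for the example, not a reported figure about any real system.
    population = 1_000_000      # adult males scanned (assumed)
    prevalence = 0.02           # fraction who are actually militants (assumed)
    sensitivity = 0.90          # P(flagged | militant) (assumed)
    false_positive_rate = 0.05  # P(flagged | civilian) (assumed)

    militants = population * prevalence
    civilians = population - militants

    true_positives = militants * sensitivity            # militants correctly flagged
    false_positives = civilians * false_positive_rate   # civilians wrongly flagged

    flagged = true_positives + false_positives
    precision = true_positives / flagged                 # P(militant | flagged)

    print(f"People flagged: {flagged:,.0f}")
    print(f"Civilians wrongly flagged: {false_positives:,.0f}")
    print(f"Chance a flagged person is actually a militant: {precision:.0%}")

Under these assumed numbers, roughly 49,000 civilians are flagged alongside 18,000 genuine targets, and a flagged person has only about a 27% chance of being a militant. A "90% accurate" model is not a 90% guarantee about any individual it condemns.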

A United Nations special rapporteur stated that if reports about Israel's use of AI are accurate, "many Israeli strikes in Gaza would constitute the war crimes of launching disproportionate attacks and failing to distinguish between civilians and combatants."

Proportionality: The Algorithm Doesn't Care About Children

Proportionality requires that civilian harm not be excessive relative to military advantage. But how does an algorithm calculate "excessive"? How does a machine learning model weigh the value of one suspected militant against fifteen civilians?

It doesn't. It assigns numerical values—"acceptable collateral damage" thresholds—and optimizes for operational efficiency. The moral weight of killing children doesn't exist in the training data.

Human Judgment: 20 Seconds to Decide Life or Death

International law requires meaningful human control over weapons systems. But when human operators spend 20 seconds rubber-stamping AI recommendations—reviewing targets at a pace of 180 per hour—is that meaningful control?

"It's human laundering," argues Dr. Lucy Suchman, Professor Emerita at Lancaster University and expert on AI warfare ethics. "You put a human in the loop not to exercise judgment, but to provide legal cover. The machine makes the decision. The human just clicks 'approve.' That's not human control—that's algorithmic warfare with a human fig leaf."

The Accountability Gap: When an AI system recommends a target and that target turns out to be a school, who is prosecuted for the war crime? The programmer who built the model? The commander who approved its use? The analyst who clicked "confirm"? The algorithm itself? International law has no answer. AI creates plausible deniability at scale.

What Makes AI Warfare So Dangerous

Traditional warfare, for all its horrors, has human bottlenecks. Intelligence analysts can only review so many targets. Commanders can only authorize so many strikes. Soldiers can only pull triggers so many times. These limitations—gruesome as they are—impose a speed limit on killing.

AI removes those limits.

1. Speed: Killing at Machine Pace

Lavender generated 37,000 targets within weeks. A human intelligence operation would take years to compile such a list—and the slow pace would allow for verification, correction, and moral deliberation. AI accelerates warfare beyond the speed of human ethics.

2. Scale: Industrial Slaughter

When targeting is automated, the only limit is ammunition and aircraft availability. In Gaza, Israel conducted strikes at a pace unprecedented in modern urban warfare—enabled in large part by AI target generation. The death toll reflects this industrial scaling.

3. Opacity: Black Box Killing

When asked why a specific building was bombed, military officials can now say, "The AI identified it as a legitimate target based on classified criteria." This makes accountability impossible. You can't challenge an algorithm you can't examine. You can't cross-examine a neural network.

4. Error Propagation: When the Algorithm Is Wrong

Machine learning models make mistakes. They misclassify. They detect false patterns. They suffer from biased training data. In content moderation, a false positive means a deleted post. In warfare, a false positive means dead civilians. And once an error is baked into the training data, it propagates: wrong targets beget wrong targets.
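The sketch below is a toy model of that propagation, built entirely on invented assumptions (a fixed 90% sensitivity, a 2% true prevalence, and half of each round's false positives recycled into the next round's training labels): once mislabeled civilians are treated as confirmed targets, the effective false-positive rate climbs with every retraining cycle.

    # Toy model of error propagation across retraining cycles. The update
    # rule and every number are assumptions made for illustration only.
    fpr = 0.05          # starting false-positive rate (assumed)
    prevalence = 0.02   # true fraction of positives in the population (assumed)
    sensitivity = 0.90  # P(flagged | true positive) (assumed)
    feedback = 0.5      # share of false positives recycled as "confirmed" labels (assumed)

    for cycle in range(1, 6):
        true_pos = prevalence * sensitivity   # correctly flagged positives
        false_pos = (1 - prevalence) * fpr    # wrongly flagged negatives
        # Recycled false positives contaminate the next round's training labels;
        # assume the false-positive rate grows in proportion to that contamination.
        contamination = (feedback * false_pos) / (true_pos + feedback * false_pos)
        fpr *= 1 + contamination
        print(f"cycle {cycle}: label contamination {contamination:.0%}, "
              f"false-positive rate {fpr:.0%}")

The specific curve is arbitrary, but the direction is not: a pipeline that treats its own outputs as ground truth has no mechanism for its error rate to fall.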

5. Dehumanization: Reducing People to Probability Scores

When you target a person identified through human intelligence, there's a file, a face, a story. When you target someone because an algorithm assigned them a 0.87 probability score, they're just a number. AI turns human beings into data points. And it's psychologically easier to kill data points.

The Slippery Slope: From Gaza and Sudan to Everywhere

If you think AI warfare is confined to Gaza and Sudan, think again.

The U.S. Department of Defense has committed over $800 million to AI and autonomous weapons development through its "Project Maven" initiative. China is developing AI-powered swarm drones. Russia is deploying AI-enhanced electronic warfare. NATO is integrating machine learning into command-and-control systems.

Every major military power is racing to weaponize AI. And the lessons learned in Gaza and Sudan—how to use AI for mass targeting, how to automate kill chains, how to deploy AI propaganda—are being studied, refined, and replicated.

The next war won't just feature AI. It will be driven by AI. Autonomous drones will hunt autonomous drones. AI propaganda will battle AI counter-propaganda. Targeting systems will operate at superhuman speed. And civilians will die in numbers that make Gaza look restrained.

What Needs to Happen (But Probably Won't)

Arms control experts and international law scholars have proposed various frameworks to regulate AI warfare:

  • Ban fully autonomous weapons: Require meaningful human control over all lethal force decisions
  • Mandate algorithmic transparency: AI targeting systems must be auditable and explainable
  • Establish liability frameworks: Clear legal responsibility when AI kills civilians
  • International verification regimes: Independent monitoring of AI weapons deployment
  • Tech company accountability: Cloud providers liable for enabling war crimes

The problem? There's zero political will to implement any of this. Nations view AI as a military advantage—and no country will unilaterally disarm. Tech companies prioritize profit over ethics. And the public remains largely unaware that AI warfare is already here.

Efforts to ban autonomous weapons at the United Nations have stalled for years, blocked by major military powers. The "killer robot" debate remains hypothetical in policy circles while actual AI systems are already killing people in Gaza and enabling atrocities in Sudan.

What You Can Do

Individual action feels inadequate when confronting horrors of this magnitude. But silence is complicity.

  • Demand transparency: Ask your representatives to support legislation requiring disclosure of AI weapons use
  • Pressure tech companies: Microsoft, Google, Amazon, and OpenAI enable AI warfare. Demand they end military contracts or implement meaningful ethical oversight
  • Support investigative journalism: Organizations like +972 Magazine, The Bureau of Investigative Journalism, and Bellingcat expose AI warfare. Fund them.
  • Educate others: Most people don't know AI is being used to select bombing targets. Share this article. Start conversations.
  • Verify AI-generated war content: Voice clones and deepfake atrocity videos spread disinformation. Use detection tools like AiVidect before sharing conflict footage

Detect AI Propaganda: As AI voice cloning and deepfake videos spread disinformation about conflicts in Sudan, Gaza, and beyond, verification is critical. AiVidect analyzes videos and audio for AI manipulation with 94%+ accuracy. Before sharing war footage, verify it's real. Try our detection tool.

The Grim Reality

We stand at a civilizational inflection point. The conflicts in Gaza and Sudan are proof-of-concept demonstrations for a new kind of warfare—one where algorithms decide who dies, where truth is computationally generated, and where accountability vanishes into black-box systems.

This technology will not be un-invented. The genie is out of the bottle. AI warfare will spread because it offers military advantages that no nation will voluntarily surrender. The question is not whether AI will be used in future conflicts—it's how much damage we'll allow before implementing meaningful constraints.

Right now, in Gaza and Sudan, we're getting our answer. And it's horrifying.

The bombing target identified by Lavender at 3:47 AM—the one that killed fifteen civilians including six children? The algorithm was trained on data labeled by humans, deployed by commanders, and enabled by cloud infrastructure provided by American tech companies.

The machine pulled the trigger. But we built the machine.

And we're building more every day.

The question isn't whether AI will change warfare. It already has. The question is whether humanity will allow algorithmic slaughter to become the new normal—or whether we'll demand that some decisions remain too important to be automated.

The children of Gaza and Sudan can't wait for us to decide. They're dying while we debate.

Verify War Footage Before Sharing

AI-generated deepfakes and voice clones spread disinformation about conflicts worldwide. Verify videos and audio before believing or sharing them.

Detect AI Content Now
