Why the 2026 Middle East Crisis Demands Critical AI Literacy

By Nolan Higdon / Substack

“Tel Aviv, stripped of illusion, as you have never witnessed it,” read the caption above a viral March 2026 video showing missiles hammering the Israeli city as explosions burst across the night sky. To the casual scroller, it appeared to be a harrowing document of modern conflict. The problem, however, was that the video was a deepfake.

Deepfakes are synthetic media edited or generated using Artificial Intelligence (AI). According to the New York Times, a “cascade of A.I. fakes about war with Iran” has proliferated across social media since the United States (U.S.) and Israel reignited military actions against Iran on February 28, 2026. The digital landscape is increasingly saturated with synthetic fabrications, as false videos of boisterous celebrations, frantic airport evacuations, devastating bombings, and graphic casualties flood users’ feeds in a relentless stream of misinformation.

As these digital fabrications blur the line between reality and simulation, the necessity for Critical Artificial Intelligence Literacy (CAIL) has moved from an educational luxury to a vital requirement. We are currently navigating a landscape where the “fog of war” is no longer just a metaphor for confusion on the battlefield, but a literal description of an information environment choked by “AI slop.” Indeed, one study found that more than 20% of the content on YouTube is AI-generated. Without a robust, systemic effort to instill CAIL, the public remains defenseless against sophisticated psychological operations. We must understand not just how to use these tools, but the socio-political structures that own them and the inherent biases they encode.

From Trojan Horses to Tonkin

The deployment of false information is not a modern phenomenon; it has been a foundational staple of conflict since the ancient world. From the Greeks’ legendary construction of a hollow wooden horse to infiltrate Troy, to Genghis Khan’s Mongol cavalry utilizing feigned retreats to lure enemies into fatal disarray, strategic deception has always defined the battlefield.

In modern democracies like the U.S., leaders have frequently refined these tactics into “false news” designed to manufacture public consent for intervention. This pattern of deception is evident in the “phantom” attack in the Gulf of Tonkin used to escalate the Vietnam War and the infamous claims of “phantom” Weapons of Mass Destruction (WMDs) that prefaced the 2003 invasion of Iraq. Beyond initiating conflict, misinformation serves to artificially sustain public morale and project an illusion of progress. This was notoriously exemplified by the White House during the Vietnam War, where official reports continuously claimed the U.S. was winning even as internal assessments acknowledged a deepening quagmire. Similarly, President George W. Bush’s “Mission Accomplished” declaration, delivered from the deck of an aircraft carrier just weeks into the 2003 invasion of Iraq, provided a false sense of finality to a war that would ultimately span decades.

The Architecture of Synthetic Media

While the intent to deceive is ancient, AI and social media have complicated these issues by allowing anyone to create slick, convincing content at scale. Even before the recent escalation, the Russia-Ukraine war and the geopolitical tensions between Israel and Bahrain were already inundated with AI-generated misinformation.

The proliferation of deepfakes does more than just spread lies; it erodes the very foundation of objective truth by fostering universal skepticism. This phenomenon allows genuine evidence of suffering to be dismissed as mere simulation. For instance, NBC News reported on a grueling investigation confirming that a video of starving Gazans awaiting food in May 2025 was entirely authentic; nonetheless, a barrage of social media users reflexively dismissed the footage as a deepfake. When the public can no longer distinguish between a sophisticated fabrication and a documented reality, the truth becomes a matter of partisan convenience rather than empirical fact.

In high-stakes environments, the fog of war breeds panic and visceral reactions; people feel their decisions are matters of life and death. If the information they consume is incorrect, it could be the difference between joining a peaceful protest and becoming radicalized toward violence.

For content creators and platform algorithms, the incentives are skewed toward chaos. Social media platforms are designed to amplify content that triggers intense emotional reactions. Because fake news is often more sensational than the nuanced truth, it spreads faster and wider.

While the ideal response is for the public to wait and investigate before passing judgment, this is a tall order when individuals believe they are witnessing an active massacre. Some deepfakes can be debunked quickly, such as the video of Israeli Prime Minister Benjamin Netanyahu that showed him with six fingers. In many cases, however, verifying information takes time: one must geolocate footage, check metadata, and often accept the uncomfortable conclusion that there is not yet enough evidence to say anything with certainty. AI has made this truth-finding mission exponentially harder for the average citizen who lacks the resources for deep digital forensics.

Ironically, many people now rely on AI to tell them if content is AI-generated. This reliance illustrates a profound lack of AI literacy. What we commonly call AI today is more accurately described as Large Language Models (LLMs). These are not “intelligent” in any human sense; they are pattern-recognition engines that memorize and predict sequences of data. They are only as good as the data fed into them, and as a result, they reflect human biases, often amplified to a dangerous degree.

Studies consistently show that AI responses can be factually inaccurate about half the time. These models frequently “hallucinate,” fabricating information and citations that do not exist. A study by The Intercept highlighted this absurdity, showing how Google Gemini gave conflicting responses about whether a specific text was AI-generated, even when the text in question was something Gemini itself had produced. When news outlets cite AI detectors as definitive proof, they are often building their conclusions on a foundation of sand.

The CAIL Framework: Interrogating Power

This AI illiteracy compounds decades of neglected media literacy. While many nations have made media literacy a compulsory part of their national curriculum, the U.S. has largely left it to the discretion of local communities. Media literacy is the ability to access, analyze, evaluate, create, and act using all forms of communication, from print to digital media. Without this foundation, the public is ill-equipped to handle the nuances of the algorithmic age.

Critical AI Literacy is an evolving framework that goes beyond simply knowing how to prompt a chatbot. It teaches students to interrogate ownership: who owns the AI, and how does that ownership shape its bias, ideology, and purpose? If a corporation owns the model, will it prioritize profit over democratic stability?

A critical approach also examines representation. We must ask how AI-generated images reflect the biases of their training data, such as the white supremacist or extremist content occasionally surfaced by unmoderated models like Grok AI. Furthermore, it reminds us that the Big Tech industry is often fundamentally anti-human in its philosophy, viewing human beings as buggy systems that need to be fixed or optimized by code.

Choosing Our Reality: A Mandate for the Common Good

As researcher Gary Smith suggests, AI will only surpass human intelligence if humans continue to use it in ways that degrade our own cognitive abilities. Studies show that prolonged, uncritical reliance on AI and screens contributes to declines in memory, focus, and other cognitive capacities. CAIL points out that humans are the smart ones; the platforms are merely tools.

In a time of war, the absence of this literacy has deadly consequences. If deepfakes and hallucinating bots are shaping our emotions and our interpretations of international conflict, we are living in a state of perpetual, manufactured crisis. We cannot afford to repeat the mistakes of previous decades, where we naively assumed that simply having access to technology would make the world more connected and smarter.

The goal of Critical AI Literacy is not to make us run from technology, but to understand it so it can be harnessed for the common good. We must decide if AI will be a partner in automating meaningless tasks to improve the human condition, or an exploitative force that dictates the citizenry’s reality. That is a decision for an informed public to make, not for Big Tech executives. If the public remains AI illiterate, they will remain dependent on the very narratives designed to exploit them.

Nolan Higdon is a founding member of the Critical Media Literacy Conference of the Americas, a Project Censored National Judge, an author, and a university lecturer at Merrill College and the Education Department at the University of California, Santa Cruz.
