
By Nolan Higdon / Substack

“Many professors cite the rising impact of AI and the speech of some prominent politicians as reasons to inoculate students against propaganda and falsehoods being mass produced and spread on social media,” according to a December 2025 report from Government Technology. This report highlights a nationwide push to integrate media literacy into educational curricula as a response to the pervasive use of AI.

Deborah Lee, Professor and Director of Research Impact and AI Strategy at Mississippi State University, noted in The Conversation that over the past three years, people have integrated ChatGPT into their daily lives as a quick way to look things up. In December 2025, for example, the New York Times reported that users are increasingly uploading their medical records to AI platforms to receive medical advice. So-called artificial intelligence (AI) has become so embedded in some users’ lives that a woman in Japan entered into a non-legally recognized marriage with an AI-generated character. AI is also being used to create alternative sources of information, such as Tesla CEO Elon Musk’s AI-driven Wikipedia alternative, which The Intercept describes as “the anti-woke Wikipedia alternative” aiming to produce a right-wing version of the truth. The platform reportedly refers to Adolf Hitler as “The Führer,” highlighting AI’s dangerous potential to distort historical narratives.

Media literacy, which is broadly defined as “the ability to access, analyze, evaluate, create, and act using all forms of communication,” is often cited as a necessary part of education to prepare students for engaging with dominant technologies such as AI. However, not all media literacy education is equally effective. Big Tech companies like Meta and Microsoft forge partnerships with faculty unions and schools that effectively turn media literacy curricula into corporate indoctrination.

As an alternative, educators must prioritize critical media literacy, which interrogates the power dynamics behind media, including who owns it and whose stories are told. To protect public understanding from a flood of corporate algorithmic misinformation and historical revisionism, society must go beyond basic digital skills and develop critical media literacy that exposes the power, ownership, and biases behind these technologies. Consequently, the integration of AI into the classroom must not be viewed as a neutral technological upgrade, but as a fundamental pedagogical crisis that requires educators to reclaim the curriculum from corporate indoctrination and prioritize the human intellectual labor necessary to sustain a functional democracy.

In July 2025, the American Association of University Professors (AAUP) warned that administrators are hastily introducing AI into classrooms. This rush is part of a broader corporate indoctrination effort, driving schools to adopt AI rapidly without a clear understanding of the technology. Silicon Valley has a long history of overpromising and underdelivering: Facebook promised to bring the world together, and Theranos claimed it would create a medical lab in a box; both ultimately failed to deliver. These companies market themselves as a new kind of industry, but in reality they are the same rapacious corporations that capitalist societies have known for centuries. Big Tech is neither benevolent nor altruistic; it is filled with people like Peter Thiel and Curtis Yarvin, who oppose democracy and privacy and prefer a techno-authoritarian surveillance state. The industry has been caught knowingly profiting from hurting and exploiting children. Indeed, this is so well known within the industry that many insiders do not allow their own children to use the tools and platforms they create.

The advent of so-called AI has changed none of this. Let us not forget that OpenAI, which brought us ChatGPT, was originally supposed to be a nonprofit that would share its work with the public through open software, until techno-capitalists saw the lucrative opportunity and turned it into a private enterprise. It is currently battling Google’s Gemini, which has surpassed it in a capitalist arms race aimed at cornering the market rather than improving users’ lives.

It is these tech-oligarchs who are convincing schools, teachers, parents, and students that Big Tech has succeeded in creating AI tools tantamount to human intelligence. However, study after study reveals that AI is not intelligent. At best, it can follow orders for menial tasks, but because it cannot adapt to new situations, it fails at complex ones. For example, The Wall Street Journal reported that an AI-run vending machine failed to recognize manipulation and gave away free items. Similarly, driverless Waymo cars cannot handle conditions such as flooding and simply stop as a result. Gary Smith consistently documents new cases illustrating AI’s limited intelligence.

Worse, AI systems are programmed to seem objective, but in reality they reflect the biases of their human creators and the data they are trained on. This was made abundantly clear when Musk’s Grok AI bot spread baseless white supremacist conspiracy theories, including the “white genocide” narrative about South Africa, and referred to itself as “Mecha Hitler.” Relatedly, AI lacks a moral compass, having aided teens in suicide and facilitated the spread of child sexual abuse material.

Furthermore, AI gets things wrong: one study found that AI bot summaries of news content were inaccurate 45% of the time. Similarly, The Washington Post has begun using AI to generate customized podcasts based on users’ interests and preferred voices. In addition to worsening confirmation bubbles, research revealed that these AI-generated podcasts were rife with fabrications and errors.

AI systems are also known to fabricate information, a phenomenon known as hallucination, which has gotten users in trouble: a lawyer who used AI to write a brief full of fake case law lost his license, and an education ethics committee that used it to draft a report later discovered the report cited fabricated studies. It is for these reasons that scholar Michael Clune describes the integration of AI into education as institutions “preparing to self-lobotomize.”

In response, Big Tech has engaged in an aggressive public-relations campaign to conceal these problems and convince the public that AI is the future, and that we must integrate it quickly and everywhere before the public realizes how unwise this move is. This is signaled with hip commercials and huge billboards that read “Stop Hiring Humans.” Indeed, it feels like the early 2000s, when corporate social media and smart devices were integrated everywhere. Only a decade or so later, when the deleterious effects became obvious (declining social skills, algorithmic censorship, loss of privacy, habitual screen use and screen addiction, mental health and developmental harms), did people begin to question why we had embraced them so fully. By then it was too late: the public could no longer imagine a world without these corporate devices and platforms, and the companies had grown too economically and politically powerful to be removed from public life.

Many AI apologists argue that the problems with AI are worth the hassle because it will make human lives easier and wealthier, but recent reporting casts doubt on that claim. Reuters has noted that companies investing heavily in AI are not yet seeing significant returns. What they are finding is that beyond menial tasks, it is wiser and more profitable for them to rely on human employees.

AI has facilitated new forms of market manipulation by enabling industries to circumvent anti-collusion laws. Reports suggest that meat producers have used AI-driven pricing models to raise prices through coordinated layoffs and plant closures. Similarly, landlords have deployed algorithmic tools to systematically increase rents.

Given these trends, economists like Ruchir Sharma warn that AI resembles a speculative bubble fueled by the hope of future profits rather than a sustainable long-term boom. Sharma compares current big tech AI spending to the dot-com overbuild before the 2000 crash. This was echoed by economist Gary Smith, who warned in late 2025 that the OpenAI bubble is set to burst just like Netscape did in the year 2000.

The Illusion of Knowledge: Cognitive Decline in the AI Era

Despite mounting research, educational institutions remain captivated by the corporate narrative that AI represents the inevitable future. They are rapidly establishing and promoting new AI programs, initiatives, and positions without sufficient scrutiny. Yet some of the negative impacts of AI on learning are only now being understood. A recent MIT study reveals that, compared to non-AI users, ChatGPT users show lower brain engagement and decreasing brain activity over time when completing cognitive tasks. Another study suggests that AI users often mistake linguistic plausibility for genuine evaluation, creating the illusion of knowledge without the intellectual labor necessary to determine whether they actually know anything. Furthermore, a recent Los Angeles Times report found that universities are struggling because “AI is rapidly eroding their monopoly on instruction, while young adults are experiencing historically high levels of loneliness.” Consequently, instructors are struggling to foster social connections alongside academics, both of which are critical to the learning process.

Students are not solely to blame, however; recent reporting reveals that academic journals are publishing studies containing fabricated citations generated by AI chatbots. This suggests that even scholars are taking shortcuts with AI tools, which are known not only to provide misinformation but to actively fabricate content and citations.

Amid this uncertainty, researchers and educators remain divided on how to respond to these disruptions. For example, Australia recently took the bold step of banning social media for people under the age of 16. Australia’s decision sparked a heated debate in the United States (U.S.), exemplified by a controversial headline from Taylor Lorenz: “Social Media Bans Aren’t About Protecting Kids. They’re About Censorship and Surveillance.” This perspective exudes media illiteracy: it seems largely unaware that Big Tech tools and platforms are derived from the military and intelligence communities and are fundamentally built upon a surveillance economy, often referred to as surveillance capitalism. Furthermore, there are decades’ worth of research on censorship by social media companies, ranging from WikiLeaks and the Twitter Files to hate speech and political dissent.

Setting aside the vapid reporting, the debate in the U.S. over the ban revealed that two decades later, scholars are still debating whether social media was good for the classroom. For example, a recent paper claiming to outline a “consensus” among researchers on the impact of social media on students was met with anger by scholars who rejected the idea that any such consensus, negative or otherwise, exists.

Scholarly division is nothing new. Knowledge production depends on debate, which inherently involves disagreement. However, when such division creates inertia that hinders academia from serving the public interest, a shift in authority becomes necessary. Practical expertise should take precedence over endless theoretical debates. Educators are on the front lines of the classroom; they are the true professionals when it comes to facilitating learning.

Educators have long expressed concern over the detrimental impact of corporate tools on education. This issue was starkly highlighted during the COVID-19 pandemic, when digital and remote learning frequently became a subject of criticism among parents, teachers, and students. Similarly, many educators view the introduction of AI in the classroom as diminishing the development of critical thinking skills. One teacher quit the profession in a viral rant about how AI was destroying education. Another admitted that, despite describing themselves as an “AI optimist,” they now feel the technology is burning them out in their profession. Even students know: one wrote in The Atlantic, “I’m a High Schooler. AI Is Demolishing My Education.” A survey found that students believe the use of AI undermines the student-teacher relationship. These are just a few examples.

Rather than engaging in deep debate, or in movements to limit AI in education to appropriate contexts (if any), the discourse has largely focused on combating cheating. While AI certainly creates new opportunities for cheating, that concern ranks low compared to the negative cognitive impacts researchers are finding. Nonetheless, in an almost embarrassingly shallow debate, the cheating discussion has centered on blue books, also known as exam books, in which students write answers in class instead of submitting digital assignments that can be manipulated with AI. Indeed, the Wall Street Journal captured this panic with the headline: “They Were Every Student’s Worst Nightmare. Now Blue Books Are Back.”

Supporters of exam books believe they will help improve students’ cognitive skills and memory, as studies show handwriting benefits learning more than digital methods. Opponents, however, worry that the era of rote memorization was abandoned for good reasons in the digital age, and that this is a poor reversal.

I do not intend to comment on individual teaching styles; I fully support academic freedom. As professionals, educators should employ the methods that work best for their classrooms. However, those who oppose a return to exam blue books should provide a reasoned explanation, one that includes a defense of the cultural and educational developments that have taken place since their decline.

Over the past decade or two, handwriting has been largely replaced by corporate, for-profit screens and digital media. It is unclear how opponents of blue books can demonstrate that today’s corporate-shaped society produces smarter, better-educated critical thinkers. While the decline of blue books is not solely responsible, it is part of a broader shift toward corporate, technology-driven education in recent decades. Therefore, to oppose a return to some of these traditional methods, one must offer a clear defense of the current intellectual culture that corporatism has wrought. That defense must engage with the research of writers such as Morris Berman, Chris Hedges, Susan Jacoby, Ross Douthat, Daniel Boorstin, Cornel West, and Kurt Andersen, among others, who have warned about America’s intellectual decline. Or, at the very least, it should address the cultural resonance of the dystopian comedy Idiocracy or the NOFX song “The Idiots Are Taking Over.”

Regardless of where one stands on the blue books debate, the digital age has taught us one thing: waiting decades to make a determination about something like AI in education is a mistake because it allows corporations to shape the process and integrate themselves so that their tools become indispensable by the time people realize the problem.

The Silicon Shield: Deregulation, Speculative Bubbles, and the Corporate State

These aren’t just the gripes of out-of-touch folks stuck in the past. Even researchers at institutions such as the Machine Intelligence Research Institute (MIRI) have written that, across all of their lab simulations thus far, the emergence of artificial superintelligence (ASI) leads to human extinction. ASI is a hypothetical level of artificial intelligence that would far surpass the brightest human minds in virtually every cognitive domain, including creativity, problem-solving, and scientific skill, allowing it to learn, reason, and improve itself at speeds and scales beyond human comprehension. The MIRI researchers sum it up bluntly: “if anyone builds it, everyone dies.” Nevertheless, Big Tech companies continue to compete with one another to develop superintelligent systems; Meta, for example, launched Meta Superintelligence Labs (MSL) for precisely that purpose.

In important education debates, the government is often called upon to make a determination, but not under the Trump administration. After all, the administration itself needs media literacy before it can effectively shape the industry. Federal officials have repeatedly demonstrated basic digital ignorance, from assuming Signal group chats were a secure channel for sensitive discussions to releasing “redacted” Epstein files whose redactions anyone could remove.

Compounding the issue, the administration is packed with AI boosters. David Sacks, the administration’s AI and crypto czar, has helped shape policies favoring Silicon Valley interests and Trump’s own tech investments. Despite these ties, Sacks claims there will be no federal bailout if the AI industry fails. However, Sam Altman of OpenAI appears to hold a different view, arguing that government should serve as a “last resort” once AI, in his view, inevitably disrupts the economy.

The administration appears focused on maintaining the AI façade as a means to uphold an economic bubble, which in turn obscures the deeper structural challenges facing the economy. It does so by allowing chip sales to China and directing education and science agencies to adopt AI. AI’s use in government is producing corruption, not efficiency. For example, the Federal Communications Commission (FCC), tasked with ensuring that news media serve the public good, is failing to address AI-generated misinformation. Instead, it is targeting legitimate news outlets, such as KCBS in the San Francisco Bay Area, for their critical reporting on ICE raids during the Trump administration.

States are hamstrung as well thanks to Trump’s executive order banning state-level AI regulation, viewing such rules as threats to the AI economy. As a result, Americans and their education system are left to depend on the courts and the press. Notably, The New York Times and Chicago Tribune are suing Perplexity AI for reproducing their stories without permission. The lawsuits regarding AI in education have mostly focused on whether schools can discipline students for using AI. There is a growing legal discourse surrounding the legality of AI surveillance in schools, though it remains in its early stages.

Activists have also picked up where government has failed. Environmental activists have highlighted the enormous energy consumption of data centers powering AI, emphasizing the burden placed on local households through higher energy costs. This has become a political issue, with politicians gaining support by opposing new data centers. Recently, Senator Bernie Sanders called for a moratorium on new data centers in order to prevent the continued development of superintelligence. In a related move, Senator Katie Britt has urged legislation to hold Big Tech criminally liable for the damage AI causes to children.

Alarmed by these rhetorical shifts, tech companies and lobbyists are investing millions to fund campaigns and ads supporting data centers. Relatedly, The Washington Post reports that Big Tech is funding the university research that shapes AI’s regulatory framework. This ongoing battle reflects growing public pressure on the industry, but sustained resistance will be necessary to counter the increasing power of Big Tech.

Conclusion: Dismantling the Digital Dystopia and the Future of Media Literacy

The rush to automate the American classroom is less an evolution of learning and more a “self-lobotomization” of our educational institutions at the behest of Big Tech oligarchs. The current AI trajectory is defined by a dangerous paradox: while tech-oligarchs market these tools as the pinnacle of intelligence, the reality is a landscape of “Mecha Hitlers,” fabricated citations, and decreasing brain engagement among users. By allowing corporate interests to dictate the terms of digital integration, we risk repeating the mistakes of the social media era, when we recognized the harmful impact of corporate exploitation only after the public had lost the ability to imagine a world without these tools. If we continue to prioritize the convenience of algorithmic summaries over the intellectual labor of critical thinking, we are not just failing a generation of students; we are actively fueling the speculative bubble of a techno-authoritarian state.

Educators and schools must heed the December 2025 call for media literacy, but also more democratic oversight and control of these tools and platforms. By moving beyond basic digital navigation and embracing critical media literacy, educators can ensure that the next generation is equipped to dismantle Big-tech oligarchy rather than being consumed by it. Only by prioritizing human connection and rigorous analysis over algorithmic shortcuts can we prevent the idiots from taking over, and preserve the cognitive foundations of our democracy.

Nolan Higdon is a founding member of the Critical Media Literacy Conference of the Americas, Project Censored National Judge, author, and university lecturer at Merrill College and the Education Department at University of California, Santa Cruz.
