AI Hype: How a PowerPoint Deck Became the Smartest Thing in the Room
Satirical journalism is supposed to do two things at once: explain reality and humiliate it. So let us be clear at the top. Artificial Intelligence did not arrive quietly. It burst into the room like a startup intern on espresso, shouting that it had “read everything,” “learned everything,” and would now be “redefining humanity” after a quick Series B.
This is the age of AI hype, where software demos are treated like religious visions and every executive speaks about “the future” the way medieval monks spoke about the afterlife: confidently, vaguely, and with no intention of going there themselves.
The Birth of the AI Prophet Class
The first thing AI created was not intelligence. It was experts.
In 2022, the same people who could not mute themselves on Zoom reintroduced themselves as “AI strategists.” LinkedIn filled with grayscale headshots and captions like “Thrilled to explore the ethical implications of large language models,” posted directly under a product announcement that had no ethics team and three lawyers on standby.
According to a completely real-sounding survey by the Institute for Advanced Buzzwords, 83 percent of AI thought leaders had not seen the code, 92 percent had not read the paper, and 100 percent were “excited to continue the conversation.”
One panelist at a San Francisco conference explained AI this way: “It’s like a brain, but faster.” When asked which part of the brain, he said, “The smart part,” and the audience nodded as if that clarified anything. This presumably explains why Gartner’s 2023 Hype Cycle placed generative AI at the “Peak of Inflated Expectations”—a technical term meaning “everyone is very excited about something they don’t understand.”
The Claim That AI Is Smarter Than Humans

This is technically true if the human in question is tired, distracted, and trying to remember a password they set in 2014.
AI can beat grandmasters at chess, diagnose rare diseases, and summarize 40-page documents in seconds. It can also invent academic citations, miscount fingers, and confidently tell you Abraham Lincoln had a podcast.
A leaked internal memo from a major tech company reassured investors that hallucinations were “mostly a branding issue.” An anonymous engineer described the system as “very impressive, provided you do not ask it follow-up questions.”
Which explains the strategy: never ask follow-up questions.
Researchers discovered that ChatGPT fabricated roughly one in five academic citations, with more than half containing errors. In response, one tech CEO explained this was actually a feature, demonstrating the AI’s “creative approach to research.” When pressed, he added that the fake citations were “aspirational” and represented papers that “should exist but sadly don’t yet.”
AI in the Workplace: The Productivity Miracle
Corporations promised AI would increase productivity. It did. Just not in the way advertised.
Office workers now complete tasks at unprecedented speed, mainly because AI does half the task incorrectly, forcing humans to redo it while explaining politely to management that “the model is still learning.”
One mid-level manager in London reported spending three hours correcting an AI-generated report that had replaced his original two-hour report. “But it feels innovative,” he said, adjusting his standing desk while his ergonomic chair judged him silently.
A productivity study found that employees using AI tools sent 47 percent more emails, attended 12 percent more meetings, and felt 68 percent more confused about what they actually did all day.
The future of work, it turns out, is editing. And attending Zoom calls to discuss why the AI-generated quarterly forecast predicted the company would sell seventeen million units of a product they discontinued in 2019.
Meanwhile, McKinsey’s 2024 survey revealed that while 65 percent of enterprises had piloted at least one AI initiative, fewer than 12 percent had actually scaled one into production. The other 88 percent presumably found that their “AI transformation” worked better as a PowerPoint slide than as actual software.
AI Ethics: Introduced After the Fact
Ethics entered the conversation roughly twelve seconds after the first lawsuit.
Tech companies assured the public that bias was being taken seriously. This was comforting, especially since the training data consisted of the entire internet, a place famously known for nuance and restraint.
An ethics board was announced. Its role was unclear, but it existed long enough to appear in a press release. A spokesperson confirmed the company was “committed to fairness,” then declined to explain how the system learned to associate certain jobs with certain people.
“We’re building responsibly,” said another executive, moments before shipping an update at midnight on a Friday. The update’s patch notes read: “Various improvements and bug fixes,” which is tech-speak for “we have no idea what we just deployed.”
According to Pragmatic Coders’ analysis of Gartner’s hype cycles, by 2026, Responsible AI will enter the “Trough of Disillusionment” as organizations discover that ethical principles published in blog posts don’t automatically prevent algorithmic disasters. One analyst noted that most AI ethics committees exist primarily to be listed in press releases and pointed to during congressional hearings.
AI Marketing Language and the Collapse of Meaning
The hype required a new dialect.
“Human-level” now means “better than a distracted intern.”
“Autonomous” means “someone is watching.”
“Learns over time” means “will do something alarming later.”
“Transformative” means “we spent the marketing budget.”
“Revolutionary” means “works 60 percent of the time.”
“Game-changing” means “please don’t ask for benchmarks.”
Every demo is “early,” every failure is “an edge case,” and every success is “proof of concept.” This allows AI to be simultaneously unfinished and inevitable, a Schrödinger’s product that is never wrong, just misunderstood.
As Euronews observed, by late 2023, practically everything at the Consumer Electronics Show had an AI label, from pillows and mirrors to vacuum cleaners and washing machines. Presumably, AI-enabled toothbrushes will soon remind you that your brushing technique is “suboptimal” and schedule a TED Talk to explain why.
The Fear Narrative: Will AI Kill Us All?

No hype cycle is complete without existential dread.
Some experts warn AI could destroy humanity. Others say it will save us. Both groups are selling books.
A leaked email from a risk assessment team admitted the real danger was not superintelligence but “middle managers delegating judgment to autocomplete.” Another email suggested the apocalypse would arrive formatted in 12-point Calibri with “Sent from my iPhone” at the bottom.
The apocalypse, if it comes, will not be dramatic. It will be a calendar invite.
Subject line: “Quick sync re: human extinction (30 min)”
Body: “Hi team! Excited to touch base about species-level threats. Could everyone bring their thoughts on survival strategies? Also, reminder to fill out your timesheets. Best, Skynet”
According to The Next Web’s analysis, AI’s biggest danger isn’t superintelligence achieving consciousness—it’s that people will outsource critical thinking to systems that are fundamentally just very confident autocomplete. One researcher noted that humanity survived nuclear weapons, climate change, and disco, but might not survive widespread deployment of tools that sound authoritative while being comprehensively wrong.
What AI Actually Is
Strip away the hype and AI is something far less mystical and far more revealing.
It is a mirror trained on human output.
A confidence machine fueled by probability.
A reminder that sounding right often matters more than being right.
It does not think. It predicts.
It does not understand. It assembles.
And it does so with a tone so calm that people stop checking.
Which may be the most human thing about it.
As Wikipedia’s comprehensive entry on AI hallucinations explains, these systems don’t actually “hallucinate” in any human sense—they’re just producing plausible-sounding responses based on statistical patterns in their training data. When ChatGPT invents a citation, it’s not lying. It’s not even confused. It’s just doing exactly what it was trained to do: predict the next word that seems most likely to belong in a sentence about academic references.
The term “hallucination” itself is marketing genius, suggesting the AI is slightly confused rather than fundamentally not designed for factual accuracy. It’s like calling a toaster’s inability to compose symphonies a “musical hallucination” rather than admitting you bought the wrong appliance.
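For readers who want to see the trick with the mysticism removed, here is a toy sketch of next-word prediction using bigram counts. This is purely illustrative (the corpus, function names, and the counting approach are our own invention, and real language models are vastly more sophisticated), but the underlying idea is the same: pick whatever statistically tends to come next, and say it with total confidence.

```python
import random
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny corpus,
# then always predict the statistically most likely next word.
corpus = (
    "the model predicts the next word the model sounds confident "
    "the citation does not exist"
).split()

# Build bigram counts: for each word, tally what tends to come after it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, confidently."""
    candidates = following.get(word)
    if not candidates:
        return "(silence)"  # a real model would never admit this
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # → model ("model" follows "the" most often)
```

Note what the toy model does not have: a concept of truth, a memory of facts, or any idea what a citation is. It has counts. Scale the counts up by a few hundred billion parameters and you get prose that sounds like a scholar, minus the part where the scholar checked.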
The Punchline Nobody Likes

AI is not the problem. The hype is.
The danger is not intelligence without morality. It is authority without comprehension. A world where people outsource thinking because the answer arrived formatted.
AI will not replace humans.
But it will replace the excuse to think slowly.
And that should worry us more than the robot.
Research from NeurIPS 2025—the world’s most prestigious AI conference—revealed that over 100 AI-hallucinated citations slipped through peer review, spanning 53 accepted papers. The ultimate irony? AI researchers, the very people building and studying these systems, were fooled by fake references generated by tools like ChatGPT and Claude.
If the people who literally design these systems can’t spot when they’re being confidently wrong, what chance do the rest of us have?
The answer is simple: we verify. We question. We remember that intelligence and confidence are not the same thing. And we resist the urge to let persuasive formatting replace actual thinking.
Because the real test of intelligence isn’t whether you can sound smart. It’s whether you know when you don’t know something. And on that measure, AI scores a confident zero while insisting it got 100%.
Disclaimer
This article is entirely a human collaboration between two sentient beings: the world’s oldest tenured professor and a philosophy major turned dairy farmer. No machines were harmed, credited, or blamed in the making of this satire.
Though we did ask ChatGPT to fact-check this piece. It assured us everything was accurate, cited seventeen sources that don’t exist, and somehow concluded the article was actually written in 1847. We took this as confirmation that we’re doing something right.
Auf Wiedersehen, amigo!