AI Advances

Democratizing access to artificial intelligence


The Complete Timeline of How AI Went From Miracle to Bubble in 3 Months

Tracking the key moments from that MIT study to ChatGPT-5 to Nvidia — and why even industry insiders admit the hype has gone too far



Made this using the latest Midjourney model. It still takes a lot of prompts to not get what I want.

My journey with AI started at Christmas 2022 when my brother-in-law showed me ChatGPT. I had it write a cold open script for a reality show about a MILF dating show.

Yes, my holidays are weird.

It made an incredible first pass. I was blown away, and as I sat with the possibilities, I grew nervous about what this could mean for producers, especially as media work started drying up in 2023 and headlines warned that AI would replace us all.

Being the curious fellow I am, I enrolled in an AI/Data Science bootcamp in 2023 to learn how Large Language Models (LLMs) actually work and started programming with Python.

Once I understood how these systems were made and their limitations, the magic disappeared.

While AI serves as a useful tool in my production and writing workflow, I’ve yet to see these systems create anything genuinely interesting without human guidance.

Since then, I’ve watched the gap between marketing promises and actual capabilities grow wider.

That MILF dating show script I created in 2022? ChatGPT still produces a nearly identical version three years later, despite billions in improvements. Sure, it generates it slightly faster, but is that worth the valuations we see now?

These systems still need constant human intervention and haven’t progressed much since Sam Altman opened Pandora’s jar.

That’s why I find myself writing more about AI than media lately. When I keep hearing that AI will replace media professionals, I genuinely wonder what others are seeing that I’m missing.

(That, and the media landscape these days is pretty depressing)

But recently, something has shifted in the conversation around AI. The hype train might finally be coming off the rails.

Reality Check #1: We Find Out 95% of Corporate AI Projects Fail

Something fundamental shifted in how mainstream media talks about AI over the last two months.

The frequency of AI-bubble discussions has skyrocketed: scattered academic mentions in 2019, six articles in 2024, and 19 major publications using bubble terminology in 2025.

August 2025 alone produced 13 articles specifically discussing an AI bubble, more than double the total for all of 2024. That works out to more than a threefold jump from 2024 to 2025, with the vast majority concentrated in a single month.

This dramatic shift in coverage was triggered by some data that confirmed what many suspected.

What that MIT Study Found

The inflection point may well have been the MIT study released in July, which found that 95% of corporate AI pilots are not helping the bottom line.

The study was based on 150 interviews with executives, surveys of 350 employees, and analysis of 300 public AI deployments.

To be fair and balanced, as they say: some critics have questioned the methodology and the validity of the 95% failure claim, noting that the report lacks clear sourcing for that specific statistic and may rely on a narrow definition of ‘success’.

But I could also question what ‘success’ means for these AI systems.

The Hidden Cost: AI Creates More Work, Not Less

The truth is this study validates what a lot of people using these tools are feeling. If you remove a human from a job and replace them with AI, all you are doing is shifting that work onto someone else, because these systems require constant checking.

According to enterprise studies, 88% of HR leaders believe AI tools need human intervention to function optimally. When companies remove one worker and expect AI to fill that role, they’re essentially redistributing that person’s workload to everyone else who now has to monitor, correct, and validate the AI’s output.

A 2024 Upwork study showed that 77% of employees report AI has actually increased their workload rather than reducing it.

If your employee failed and hallucinated as much as AI systems do, you’d fire them immediately. So if companies actually care about quality, they won’t let AI systems run unsupervised.

Why Companies Are Still Moving Forward With Investing In AI

This pattern of AI creating more work while promising less should give companies pause. Unfortunately, many continue doubling down on AI investments despite the evidence.

That disconnect between reality and corporate spending decisions is what makes this situation particularly concerning.

96% of C-suite executives expect AI to boost productivity, while 77% of workers say it’s made them busier. This gap exists because executives see the cost savings and have been sold on the idea of potential efficiency gains but don’t account for the supervision overhead and burnout of the remaining employees.

Workers report spending significant time on what researchers call “AI-driven performance monitoring.”

We keep getting promised 4-day work weeks, but what we’re actually seeing is employees babysitting AI systems while simultaneously training those same systems to potentially eliminate even more jobs.

Reality Check #2: ChatGPT-5 Fails to Deliver on AGI Promise

Sam Altman testifying to Congress with his usual two-part message: AI will either destroy civilization or China will beat us to it — either way, we need more funding.

Then came the release of ChatGPT-5. OpenAI’s latest release was supposed to change everything. Instead, it just further showed how the industry has been overselling what’s actually possible.

The AGI Dream Becomes an Illusion

Throughout 2024 and early 2025, Altman consistently positioned OpenAI as being close to achieving AGI.

In January 2025, he wrote on his blog: “We are now confident we know how to build AGI as we have traditionally understood it”. Earlier, he had told a Y Combinator podcast that AGI might be achieved in 2025 and tweeted that OpenAI had “AGI achieved internally”.

The messaging created expectations that GPT-5 would be nearly AGI-level.

Fortune noted that “OpenAI was so AGI-entranced that its head of sales dubbed her team ‘AGI Sherpas’ and its former chief scientist Ilya Sutskever led his fellow researchers in campfire chants of ‘Feel the AGI!’”. Microsoft, OpenAI’s major partner, published a paper in 2024 claiming GPT-4 already exhibited “sparks of AGI”.

Then ChatGPT-5 came out, and Altman’s tone changed.

Altman himself recently said AGI has become “not a super useful term.” OpenAI’s own charter defines AGI as “a highly autonomous system that surpasses humans in most economically valuable tasks.”

What does that even mean? It’s so vague it’s basically meaningless.

But even they admit their latest model falls short of even this standard.

The vagueness around AGI definitions has become so problematic that experts are calling for companies to “be mandated to disclose how they measure against globally agreed metrics” instead of making broad AGI claims.

The situation became so obvious that even OpenAI’s CEO had to acknowledge reality:

“Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes.” — Sam Altman

Yeah Sam, they are, because you and your competitors are all traveling salespeople overselling these tools.

So if we aren’t any closer to this mythical AGI, what are companies actually doing to make these current LLM systems seem more advanced?

What OpenAI Actually Built: Cost-Cutting, Not Innovation

Since they’ve already scraped most of the internet and hit legal walls around copyrighted content, there is little left for them to do besides make the models faster with all the processors they’re buying.

So the trick these companies are playing is building what they call sophisticated AI agents. In practice, that means one AI system prompting another, which then prompts another, creating chains of artificial reasoning that look impressive on demo day but struggle under real-world pressure.

This isn’t deep learning or anything new. It’s like daisy-chaining extension cords: you get more reach, but every added link increases the chance of a failure.

Research shows that multi-step reasoning models hallucinate 14.3% of the time — worse than simpler systems.

Each prompt hop away from your original question increases the odds that the AI will completely lose track of what you actually asked for and then confidently deliver complete nonsense while burning through server farms worth of electricity.
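The compounding-error point can be made concrete with a little arithmetic. Here is a minimal sketch: the 90% per-hop success rate is an assumption chosen for illustration, not a measured figure, but the exponential decay holds for any per-hop rate below 100%.

```python
# Illustrative only: if each hop in an agent chain stays on track
# independently with probability p, the chance the whole chain stays
# on track decays exponentially with chain length.
def chain_success_probability(p_per_hop: float, hops: int) -> float:
    return p_per_hop ** hops

for hops in (1, 2, 4, 8):
    print(hops, round(chain_success_probability(0.9, hops), 3))
# 1 0.9
# 2 0.81
# 4 0.656
# 8 0.43
```

Even with a generous 90% reliability per step, an eight-hop agent chain delivers a faithful answer less than half the time, which is why longer chains can look worse than the simpler systems they replace.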

How GPT-5 Actually Works: A Cost-Cutting Router System

GPT-5 isn’t actually one model — it’s ‘a collection of at least two models: a lightweight LLM that can quickly respond to most requests and a heavier duty one designed to tackle more complex topics’.

This router system allows OpenAI to send ‘high-volume, low-complexity queries to a cheaper, faster model’.
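Based on the public descriptions above, the routing idea can be sketched in a few lines. Everything here is an assumption for illustration: the complexity heuristic, the threshold, and the model labels are invented, not OpenAI’s actual implementation.

```python
# Hypothetical sketch of a query router: cheap, fast model for simple
# queries; heavier (more expensive) model for complex ones.
def estimate_complexity(query: str) -> float:
    # Toy heuristic: longer queries and reasoning keywords score higher.
    keywords = ("prove", "analyze", "step by step", "debug", "compare")
    score = min(len(query) / 500, 1.0)
    score += 0.5 * sum(kw in query.lower() for kw in keywords)
    return min(score, 1.0)

def route(query: str, threshold: float = 0.5) -> str:
    return "heavy-model" if estimate_complexity(query) >= threshold else "light-model"

print(route("What's the capital of France?"))                    # light-model
print(route("Analyze this stack trace step by step and debug it"))  # heavy-model
```

The economics follow directly: if most traffic is trivia-style queries that a cheap model handles, the average cost per request drops sharply, even though nothing about the underlying models got smarter.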

The Register’s analysis was blunt: ‘OpenAI’s new top model appears to be less of an advancement and more of a way to save compute costs’ — representing the ‘cost-cutting era’ for OpenAI.

Why the cost-cutting? OpenAI faced pressure to ‘increase its user base, raise prices, or cut costs’. $20 a month is already a lot for something most people don’t understand.

We’re told to give up coffee and avocado toast to afford a house, but somehow paying for chatbots that hallucinate is essential?

Since raising prices would lose customers to competitors, cost-cutting became the strategy.

So if the biggest AI company in the world is already worried about spending, what does this say about the whole AI market? Nvidia’s earnings gave us the answer — and it wasn’t pretty.

Reality Check #3: Nvidia’s Earnings Reveal the Cracks

That famous Wall Street guy

Nvidia remains the only company actually profiting from the AI boom, making their results critical for understanding the industry’s real health.

Nvidia has become the world’s most valuable company because tech giants like Microsoft, OpenAI, and Meta are buying their chips for data centers. While every other AI company burns through cash, Nvidia collects the profits from selling the tools.

Warning Signs in the Earnings Report

But their latest earnings revealed concerning trends that suggest even the AI gold rush might be slowing:

The Growth Slowdown: Data center revenue grew slightly less than Wall Street expected. This is unusual for Nvidia, which has been beating estimates for the last several quarters.

This slight slowdown gives a “faint hint” that the company’s exponential growth might be approaching a “curve top” instead of continuing its steep upward trajectory. While gaming and other segments beat estimates, the miss in the most critical AI segment caused nervousness in the market.

The Customer Concentration Risk

Dangerous Dependence on Few Buyers: Two “mystery customers,” identified as Meta and Microsoft, account for 39% of Nvidia’s revenue. When customers like Amazon, Google, and Tesla are included, these few tech giants make up the majority of Nvidia’s revenue.

This creates “key man risk.” If just one CEO like Mark Zuckerberg or Satya Nadella gets cold feet about AI spending, Nvidia’s stock price could collapse overnight. For Nvidia to continue growing, these few companies must not only maintain current spending but aggressively increase it.

The health of the entire AI economy (and arguably the whole US economy) now depends on spending decisions made by a handful of executives on tools that have never proven profitable for anyone except Nvidia.

But capitalism is still a growth story. If the line doesn’t go up, they bust out the Wall Street guy for some scared photos.

Hey, maybe I’m wrong and this ends up being like the iPhone or the car, but I still haven’t found reliable use cases that justify the valuations. So if this does crash out, the question is what happens when reality finally catches up.

Two Possible Futures

Scenario One: The Classic Tech Crash

The tech industry has a long history of privatizing profits and socializing losses. If the AI bubble finally pops, taxpayers could end up holding the bag while executives walk away with golden parachutes. We’ve seen this movie before.

Scenario Two: Death by a Thousand Cuts

Maybe we don’t get a dramatic crash. Instead, we get slow enshittification as companies keep buying into AI cost-cutting promises. Healthcare becomes more automated and less human. Customer service transforms into chatbot hell. Media gets churned out by algorithms that optimize for engagement over truth.

We don’t get a market collapse — just gradual degradation of every service we depend on so all these companies can replace workers with AI agents, all justified by “efficiency” that only appears on quarterly earnings reports.

Both scenarios end the same way: the people who created this mess profit, while everyone else deals with the consequences.

The AI revolution promised to make our lives better — less work and more freedom to create what you want for less. Instead, it might just make everything a little bit worse, one automated interaction at a time.

Wesley is a 20+ year media professional with Grammy and Emmy nominations who investigates AI’s impact on creative industries.

Follow him elsewhere: 📝 Substack | 📺 YouTube | 🦋 Bluesky

