So far we've talked about the economics. The unsustainable math. The circular money.
But why do it? What justifies burning billions of dollars with no path to profitability?
There's a bigger bet happening behind all of this. OpenAI and other AI companies are gambling everything on AGI.
What Is AGI?
AGI stands for Artificial General Intelligence.
In simple terms, it's a model that doesn't just predict text or search quickly, but actually thinks, reasons, and solves problems it's never seen before.
Not a really good autocomplete. Actual intelligence.
That's the end goal. That's what justifies the billions in losses. That's the moonshot.
And to be fair, if it ever becomes reality, "revolutionary" won't come close to describing what it means for society.
The Theory
Here's the idea behind the massive investment.
Keep making models bigger. Keep feeding them more data. Eventually something clicks. The model stops being a really good autocomplete and starts being actually intelligent.
It's not a crazy theory. We've seen real improvements with scale. GPT-4 is better than GPT-3. Each new version of Claude Sonnet is better than the last.
The bet is that if you keep scaling, you eventually hit a breakthrough. The model becomes genuinely intelligent instead of just really good at pattern matching.
That's what justifies losing money for five, ten years. Because if it works, you own the future.
The Problem
It's not working. Or rather, not fast enough.
GPT-5 came out. It was fine. Better than GPT-4 in some ways. But it wasn't a massive leap. Just a step.
We're starting to see diminishing returns.
Massive increases in compute, training data, and cost are producing smaller and smaller improvements.
The models are getting better. But not exponentially better. Incrementally better.
What The Researchers Say
The people who actually study how intelligence works are raising concerns.
Their concern: just adding more parameters and more data doesn't create intelligence. That approach has gotten us incredibly useful tools. But useful tools and general intelligence are not the same thing.
There might be fundamental limitations to this approach that no amount of scaling will overcome.
We might be approaching the ceiling of what transformer-based models can actually do.
The Timeline Doesn't Add Up
OpenAI is projecting losses until 2029. Outside researchers say it could be much longer than that.
Five years of continued losses. Billions more in spending. All betting on a breakthrough that the underlying technology might not be capable of producing.
At some point, the money runs out. At some point, investors stop believing. At some point, the bill comes due.
And if AGI doesn't arrive to justify all of this? The entire financial justification for these losses collapses.
That's The Bubble
This is what makes it a bubble and not just aggressive investment.
If the bet pays off, everyone wins: AGI changes everything, the companies that built it own the future, and investors make fortunes.
But if the bet doesn't pay off? If AGI isn't achievable through this approach? Then you have companies that spent years burning billions with nothing to show for it except really good autocomplete.
That's when bubbles pop.
What I Think
I use AI tools constantly. They're incredibly useful for what they actually do.
Code generation. Content drafting. Data processing. Pattern recognition.
Those are real, valuable capabilities. But they're not intelligence. They're sophisticated pattern matching.
And honestly? For most practical use cases, sophisticated pattern matching is enough.
I don't need AGI to automate a client's email workflow. I need a good language model that can follow instructions consistently.
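Here's roughly what that looks like in practice. This is a minimal sketch, assuming the OpenAI Python SDK; the model name, the instruction set, and the `draft_reply` helper are illustrative, not any real client's setup.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REPLY_INSTRUCTIONS = """You draft replies to client emails for a small consultancy.
Rules:
- Thank the sender and confirm receipt.
- If they ask for a call, propose Tuesday or Thursday afternoon.
- Never quote prices; say a detailed quote will follow separately.
Keep the reply under 120 words."""

def draft_reply(incoming_email: str) -> str:
    """Draft a reply that follows a fixed instruction set; a human reviews it before sending."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: any solid instruction-following model works
        messages=[
            {"role": "system", "content": REPLY_INSTRUCTIONS},
            {"role": "user", "content": incoming_email},
        ],
        temperature=0.3,  # keep the drafts consistent rather than creative
    )
    return response.choices[0].message.content
```

Everything that workflow needs is explicit instructions plus a human check at the end. No reasoning breakthrough required.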
The problem is the business model requires AGI to justify the spending. Without it, the math doesn't work.
The Real Risk
The risk isn't that AI goes away. The technology is real and useful.
The risk is that we're building an entire industry on the assumption that AGI is coming soon, and it's not.
When that becomes clear, when investors realize the breakthrough isn't arriving on the timeline they expected, that's when the bubble pops.
Not because AI failed. Because the expectations were wrong.
What This Means For Us
If you're building with AI tools, plan for the tools we have now, not the tools we're promised.
Build systems that work with sophisticated pattern matching, not systems that require actual intelligence.
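A hedged sketch of what that separation can look like, again assuming the OpenAI Python SDK; the categories, model name, and the `classify` and `handle` helpers are made up for illustration:

```python
from openai import OpenAI

client = OpenAI()
CATEGORIES = {"billing", "support", "sales", "other"}

def classify(ticket: str) -> str:
    """The model does the pattern matching; plain code decides what counts as a valid answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (
                f"Classify this ticket as one of {sorted(CATEGORIES)}. "
                f"Answer with one word only.\n\n{ticket}"
            ),
        }],
        temperature=0,
    )
    label = response.choices[0].message.content.strip().lower()
    # Deterministic guardrail: unexpected output falls back to a catch-all queue.
    return label if label in CATEGORIES else "other"

def handle(ticket: str) -> None:
    # The workflow itself is ordinary code; no step asks the model to reason or plan.
    queue = classify(ticket)
    print(f"Routing ticket to the '{queue}' queue")
```

The model only matches patterns. The routing, validation, and fallback all live in ordinary code, so nothing breaks if the model never gets any smarter.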
Because if AGI arrives in the next few years, great. Your systems will get better.
But if it doesn't? You've built something that works with the technology that actually exists, not the technology people hoped would exist.
That's the smart play in a bubble. Build for reality, not hype.
Next up in Part 5: The threat from below. Cheap local models are eating the market while the mega models burn cash chasing AGI.