The rise and fall of the AI bubble?
Why the current wave of generative AI is more gold rush than revolution
Opinion, 23 November 2025
by L.A. Davenport
I have argued in these virtual pages that, in reality, there is no such thing as progress, and that while the tools and technologies change, the course of history does not point ever-forwards, like an arrow pointing perpetually into the future, but is rather a series of circles, in which we are destined to repeat the same experiences, and mistakes, over and over again.
Of course, the improvements in healthcare and, more importantly, sanitation in recent centuries and decades have had a remarkable impact on the human condition, and feel like a progress of sorts, and an irreversible one at that. Yet it should be remembered that there have been periods in which the lot of humankind, from a health perspective, has been nowhere near as bad as we would like to imagine, only to be followed by eras in which commonly held knowledge was lost and people endured terrible lives overshadowed by the spectre of debilitating ill health and death.
And I do wonder whether the efforts of Donald Trump and Robert F. Kennedy, Jr, to swing an axe at healthcare institutions and structures in the USA herald the coming of a new dark age for humanity, especially when we take into consideration the lack of respect for public health and safety in many countries outside of the European region.
But I don’t want to talk about healthcare this week. I want to return to a topic I have covered several times: artificial intelligence. That’s not because I’m obsessed with the technology (although wider society is), but because it seems to sit at a crucial point between economics, culture, the labour force, the exercise and control of power, and our ever-evolving beliefs about what it means to be human.
A tech tumour
I’ve written about the threat AI poses to specialised careers and the unthinking enthusiasm with which some people and organisations have embraced it in a headlong rush, despite its obvious limitations. I’ve also looked at how we project our worst traits onto it and then act surprised when those projections bounce back at us with a steady, algorithmic stare. And I’ve argued that the problem with AI is not that it’s too smart, but that it reflects us too well.
What I hadn’t realised, until very recently, was quite how far we’d strayed into an entirely new place: bubble territory.
Reading Carole Cadwalladr’s recent column The Great AI Bubble was like seeing a new layer of complexity, and obfuscation, pulled back. The image she conjures of a trillion-dollar “tech tumour” metastasizing across the global economy is as grotesque as it is compelling. And like all speculative manias, from the South Sea Bubble to the dotcom boom (and subsequent bust), what fuels it is not simply optimism or innovation, but hype, fear of missing out (or FOMO), and money that needs somewhere, anywhere, to go.
Cadwalladr doesn’t hold back in her critique. She writes that what we are witnessing is not just a hype cycle, but “a circular economy of loans and debts and froth and hype…all entirely dependent upon one another.” In other words, it’s a closed doom loop that will not spiral upwards but inevitably and inexorably downwards. The money being thrown into this doesn’t build value; it simply reinforces itself, spinning faster and faster in a techno-utopian hamster wheel.
A Bloomberg diagram she cites lays bare the mechanics of this: Nvidia selling chips to AI firms that raise money based on future success they promise to deliver using Nvidia’s chips; OpenAI inking deals that only make sense if their tools fulfil promises they can’t yet keep (and may never); venture capital firms piling in based on vibes and PR, not fundamentals.
The stakes are high: a trillion dollars. Possibly more.
Déjà vu: from tulips to chatbots
Financial bubbles are as old as finance itself, of course. The Dutch tulip mania of the 1630s is the cautionary tale taught in schools, but it’s the South Sea Bubble of 1720 that might offer a better analogy. That was a scheme promising great rewards from overseas trade, but one in fact founded on misdirection, manipulation and an eager public keen to ride the wave of speculation. Sound familiar?
Or consider the dotcom boom, which Cadwalladr herself experienced firsthand as a journalist. “The media hadn’t just fallen for the bubble. It was the bubble,” she writes, noting how excitement over a travel start-up’s IPO swept through her own newsroom. When the crash came, it was swift and brutal.
The difference today is that the technology is more opaque, the promises more abstract, and the consequences potentially far greater. Back then, we were promised a better way to buy flights. Now, we are told that AI will solve global inequality, cure cancer, and—why not?—defeat death itself.
And yet, despite the huge sums invested, most of what generative AI can currently do, it seems, is produce passable marketing copy, fake nudes and erratic legal advice. Meanwhile, companies hoover up unimaginable quantities of data, burn through electricity, guzzle water, and demand more subsidies and freedom from regulation, all while claiming to be creating “the future.”
As Cadwalladr puts it, the emperor has $1 trillion and not a stitch to wear.
A mirror to ourselves
In a column way back in April 2023, I discussed how AI tools don’t create meaning so much as remix it. Chatbots don’t think; they pattern-match. What seems like intelligence is a trick of presentation, a linguistic trompe-l’œil, even if the picture created can be highly illuminating, and even time-saving. And what unsettles us isn’t the technology, but what it reveals about us.
That’s why the most fascinating, and damning, aspect of the current moment isn’t technical at all. It’s social. We are willing participants in the fantasy. We want the machines to be smarter than us; to relieve us of workload, uncertainty and perhaps even moral responsibility. We want to believe that the future is already here, and that it will be seamless.
But the truth is more complicated. The technology, while impressive, remains crude. The financial model is unsustainable. And the cultural infrastructure, such as journalism, education and regulation, that supports it seems barely able to keep up.
Where are we going?
To be clear, I’m not saying that AI is useless, or that it won’t transform aspects of our lives. I’m not even saying that it shouldn’t receive investment, perhaps even a lot. But there is a world of difference between meaningful innovation and a trillion-dollar frenzy. When money flows faster than understanding, and when CEOs react to basic questions like petulant children (see Altman’s meltdown embedded in Carole’s piece—it’s worth watching for the sheer squirm value), something is seriously amiss.
Will AI end up like 3D televisions—hyped, expensive and quickly forgotten? Or will it quietly integrate into the fabric of our lives, not as a revolution, but as an evolution? That remains to be seen.
But what seems increasingly clear is that many of the grandest claims made for AI—world-changing artificial general intelligence; fully automated white-collar labour; the dawn of a new human epoch—will not come to pass, or at least not in the ways we imagine.
And so the bubble may well burst. Perhaps gently, slowly, like an old balloon from a kid’s party. Perhaps with a bang. But in the end, what remains will be what always endures: human curiosity, creativity and the need to make sense of the world.
Let’s not lose sight of that in the froth.
© L.A. Davenport 2017-2025.