Why AI feels like dot-com déjà vu
November carried the early hallmarks of a potential market reversal. For much of the month, risk assets struggled as momentum faded, growth stocks came under pressure, and leadership narrowed sharply. US equities finished broadly flat, with the S&P 500 up just 0.2% and the NASDAQ down 1.6%, figures that masked a more difficult period beneath the surface. A late rally in the final week of November helped stabilise returns, preventing a more meaningful pullback and reinforcing how dependent markets have become on short bursts of optimism.
The month also highlighted a growing bifurcation within US technology stocks. While a small group of mega-cap names continued to attract capital, much of the broader tech complex struggled as valuations, earnings expectations and the path of AI monetisation came under increasing scrutiny. Outside the US, performance diverged further: Canada and parts of Europe held up, while Japan, Australia, emerging markets and China weakened. Taken together, November felt less like a pause and more like the early stages of a regime shift, with markets becoming increasingly selective and more sensitive to data, policy signals and execution risk.
AI: Is it a bubble?
Artificial intelligence will almost certainly touch every part of our lives. But, as with all transformative technologies, there’s a difference between rapid adoption and deep penetration. A lot of people try the new toy quickly, but it takes much longer for it to reshape the entire economy.
Historically, the internet, personal computers and smartphones were arguably even more transformative than AI has been so far, because they changed daily life everywhere. Today, it’s completely normal to see smartphones and satellite internet in some of the remotest communities on earth.
In 1990, less than 1% of Americans used the internet. Usage exploded through the mid-90s, growing in the US at an astonishing average of 56% per year. By the time the much-feared Y2K bug had come and gone with a whimper, over a third of Americans were online.
The stock market reacted exactly as you might expect when faced with a shiny new world-changing technology: it got carried away. The Nasdaq 100, dominated by technology companies, outperformed the S&P 500 by 19% per year throughout the 1990s. In 1998 and 1999 alone, it outperformed by 58% per year. Animal spirits were running high.
Then came the comedown. Over the next three years, as the dot-com bubble burst, those same tech stocks underperformed the market by 24% a year. The US technology sector collectively lost more than 85% of its market value between March 2000 and October 2002. And yet, throughout that crash, internet adoption kept rising, up another 18% per year, with almost 60% of the country using the web by the end of 2002.
Chart: From 1996 to 2002, US internet usage rose from 16% to 59% of the population, even as tech stocks rallied hard and then plummeted.
Source: Bloomberg, World Bank via FRED®
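
To put those annualised figures in perspective, here is a minimal Python sketch of what they compound to, using only the numbers quoted above:

    # Quick arithmetic on what the "per year" figures above compound to.
    # All inputs are taken from the text; relative returns compound multiplicatively.

    decade_outperformance = 1.19 ** 10      # 19% a year throughout the 1990s
    bust_residual = 0.76 ** 3               # -24% a year for three years
    recovery_needed = 1 / (1 - 0.85) - 1    # gain required to claw back an 85% loss

    print(f"1990s cumulative outperformance: {decade_outperformance:.1f}x")  # ~5.7x
    print(f"Relative value left after the bust: {bust_residual:.2f}x")       # ~0.44x
    print(f"Gain needed after an 85% drawdown: {recovery_needed:.0%}")       # ~567%

Compounding is what makes the round trip so brutal: a decade of 19% annual outperformance builds to roughly 5.7x, and an 85% drawdown then requires a gain of around 567% just to get back to even.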
With 20/20 hindsight, the lesson is obvious: investors priced in the potential long before the profits. The technology was real, and usage kept growing, but the commercial models were immature and the earnings weren’t there yet.
Fast-forward to today. A June 2025 survey of more than 5,000 Americans by Pew Research shows that 63% of respondents interact with AI at least several times per week, an adoption level the internet took more than a decade to reach. The excitement is real and the adoption curve is clearly steep.
But there is one important difference this time: the companies leading the AI race, the established cloud providers, are already some of the most profitable businesses in history. Their core businesses have been capital-light, enormously scalable and highly cash-generative.
However, the current AI arms race is changing that. Building and running AI data centres requires extraordinary amounts of capital, both to construct the facilities and to power and cool them. Electricity and water usage alone is enormous.
Recently, high-profile investors Jim Chanos (who exposed Enron’s accounting practices) and Michael Burry (who identified the subprime mortgage bubble) have raised questions about how the tech giants are accounting for these huge expenses. In short, the earnings impact of the surge in capital expenditure is being softened by extending the assumed “useful life” of hardware from around three years before COVID to six years or more today. Spreading the cost over a longer period boosts reported profits, even though the cash is spent upfront.
Tech executives say the longer depreciation is justified because older chips remain in heavy use. The counterargument? Nvidia now launches new generations annually. If hardware becomes economically obsolete every 12 months but is depreciated over six or more years, the numbers may not capture the true cost of staying competitive.
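
To see how much the useful-life assumption moves reported profits, here is a minimal sketch with an invented capex figure (not drawn from any company’s filings), comparing straight-line depreciation over three years versus six:

    # Illustrative only: straight-line depreciation of a hypothetical $30bn
    # hardware purchase. The capex number is invented for this example.

    capex_bn = 30.0  # cash spent upfront on servers and GPUs, in $bn

    for useful_life_years in (3, 6):
        # Straight-line: the same charge hits earnings every year of the asset's life
        annual_charge_bn = capex_bn / useful_life_years
        print(f"{useful_life_years}-year life: ${annual_charge_bn:.1f}bn charged against profit each year")

    # 3-year life: $10.0bn charged against profit each year
    # 6-year life: $5.0bn charged against profit each year

Doubling the assumed life halves the annual charge, so reported profit is $5bn a year higher in the early years even though the cash outflow was identical; and if the hardware really does need replacing on a shorter cycle, the deferred charges eventually catch up.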
Meanwhile, the established players are making significant investments in AI companies such as OpenAI and Anthropic. The Wall Street Journal recently put a startling number on what’s happening behind the scenes:
“… OpenAI’s quarterly loss alone equated to 65% of the increase in underlying earnings - before interest, tax, depreciation and amortisation - of Microsoft, Nvidia, Alphabet, Amazon and Meta combined. That ignores Anthropic, from which Amazon recorded a profit of $9.5 billion from its holding in the loss-making company in the quarter.”
In other words: a huge chunk of big tech’s reported profit growth is being offset, quietly, by the losses of the AI labs they’re funding.
Internet adoption in the US now sits above 93%, and the web is an integral part of the global economy. But early in the dot-com boom, investors simply believed the hype and bought at any price, unconcerned with revenues or fundamentals.
So the big question today is whether the AI boom is an echo of the past.
AI is real. AI will undoubtedly change everything. That much is clear. But have investors once again overestimated the near term? And are they prepared for the consequences if this technology follows the historical path from hype to disillusionment before embedding itself in the economic landscape for the long term?