
The Trillion-Dollar Flywheel: Are We Building an AI Bubble Too Big to Burst?

The global AI megacycle of 2025 stands as the most extraordinary technological and financial phenomenon in modern history. With total AI spending projected to surge past $1 trillion this year and annual growth exceeding 30%, the velocity of adoption is dismantling every precedent in scale, capital formation, and industrial transformation.

Beneath this explosive expansion, however, lies a precarious experiment in economic engineering — where supply chains, energy systems, and geopolitical alliances are being rewired at breakneck speed. This is no ordinary boom; it is a global restructuring of infrastructure, capital, and power unfolding in real time.

The Engineered Flywheel: AI’s Circular Capital Machine

OpenAI’s rise, in partnership with Nvidia, AMD, Oracle, and a growing constellation of strategic partners, reveals just how much the old rules have been shattered. The attached diagrams make clear: this is not a supply-and-demand marketplace, but a vast, self-reinforcing flywheel. Each company not only buys billions in technology and services from the others, but finances those purchases through reciprocal equity, debt, and vendor arrangements.

Nvidia invests $100B in OpenAI, in turn funding GPU purchases that flow straight back to Nvidia’s revenue column. Terms: Reciprocal equity/debt/vendor arrangements; contracts serve as collateral for further financing.

Oracle’s $300B cloud contract — effectively renting Nvidia chips to OpenAI — sparks immediate surges in both Oracle’s and Nvidia’s stock value. Terms: Long-term hyperscale GPU rental; vendor contract can be pledged as collateral, fueling infrastructure expansion.

AMD’s deal is even more radical: OpenAI commits to purchase up to $90B of AMD chips (6 gigawatts of Instinct MI450 AI GPUs by 2030, roughly 3 to 6 million units), and in exchange receives warrants for up to 160 million AMD shares at a $0.01 exercise price, a performance-based “equity-for-purchase” trade that could theoretically make OpenAI’s GPU spend almost free if valuations soar. Terms: “Equity-for-purchase” arrangement; if AMD’s valuation soars, OpenAI’s net GPU cost could approach zero.
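To see why the net cost "could approach zero," a quick back-of-the-envelope calculation helps. The sketch below uses only the deal figures quoted above plus purely hypothetical AMD share prices; it is illustrative arithmetic, not a model of the actual warrant terms or vesting milestones.

```python
# Back-of-the-envelope sketch of the "equity-for-purchase" arithmetic.
# Deal figures from the text: warrants for up to 160M AMD shares at a
# $0.01 exercise price against up to ~$90B of GPU purchases by 2030.
# The share prices below are illustrative assumptions, not forecasts.

WARRANT_SHARES = 160_000_000      # maximum shares under the warrant
EXERCISE_PRICE = 0.01             # dollars per share
GPU_SPEND = 90e9                  # headline purchase commitment, dollars

for assumed_price in (165, 300, 600):          # hypothetical AMD share prices
    warrant_value = WARRANT_SHARES * (assumed_price - EXERCISE_PRICE)
    offset = warrant_value / GPU_SPEND         # fraction of GPU spend offset
    print(f"AMD at ${assumed_price:>4}: warrant worth ${warrant_value/1e9:5.1f}B "
          f"≈ {offset:5.1%} of the $90B purchase commitment")
```

Only at very high assumed valuations does the warrant value approach the full purchase commitment, which is why the "almost free" outcome is theoretical rather than baked into the deal.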

  • The combined compute capacity target of 20+ gigawatts (GW) breaks down as Nvidia delivering 10 GW starting in 2026 with 4–5 million GPUs as part of a $100 billion partnership; AMD supplying 6 GW by 2030 through a $90 billion deal scaling 3–6 million Instinct MI450 GPUs; and Oracle contributing 4.5 GW via its $300 billion cloud contract deploying Nvidia GPUs.
  • This scale of deployment demands energy roughly equal to 20 large nuclear reactors — powering several major US metropolitan areas — and ranks among the largest private-sector energy projects ever (a rough arithmetic check follows this list).
  • Building 20 GW of AI compute is extraordinarily challenging due to US power grid capacity limits, multi-year interconnection delays, supply chain constraints, permitting complexity, and workforce shortages, equating to the difficulty of constructing multiple nuclear or large gas-fired power plants.
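As flagged in the list above, the headline numbers can be sanity-checked with simple arithmetic. The per-GPU power draw and per-reactor output in the sketch below are round-number assumptions, not figures from the deals themselves.

```python
# Rough sanity check on the headline capacity figures above.
# Per-GPU power draw and reactor output are round-number assumptions.

deployments_gw = {"Nvidia": 10, "AMD": 6, "Oracle": 4.5}   # from the text
total_gw = sum(deployments_gw.values())                     # ~20.5 GW

KW_PER_GPU = 2.0          # assumed: accelerator plus cooling/overhead, kW
REACTOR_GW = 1.0          # assumed output of one large nuclear reactor

implied_gpus = total_gw * 1e6 / KW_PER_GPU    # GW -> kW, then per-GPU
reactors = total_gw / REACTOR_GW

print(f"Total planned capacity: {total_gw:.1f} GW")
print(f"Implied GPU count at {KW_PER_GPU} kW each: ~{implied_gpus/1e6:.1f} million")
print(f"Equivalent large nuclear reactors: ~{reactors:.0f}")
```

On those assumptions the implied GPU count lands inside the 7–11 million range quoted in the deals, and the reactor comparison holds up.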

Each cycle sustains the next: OpenAI’s success boosts AMD, Oracle, and Nvidia valuations, which in turn drive further capital raises and infrastructure buildout. The result: vendor contracts become the new collateral, data center hardware becomes a financial asset class, and “circular revenue” passes for growth — Goldman Sachs warns that investors must now scrutinize these earnings with greater skepticism than ever before.​
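The mechanics of that circularity are easier to see in a deliberately crude toy model. The sketch below is not real accounting and every number in it is an illustrative placeholder; it only shows how capital supplied by a vendor can come back as that vendor's own reported revenue and then be capitalized by the market.

```python
# Toy illustration (not real accounting) of the circular flow described above.
# Every figure is an illustrative placeholder; timing, vesting, and revenue
# recognition rules are ignored entirely.

investment = 100e9      # vendor's equity/debt commitment to its customer
recycle_rate = 0.9      # assumed share of that capital spent back on the vendor's chips
revenue_multiple = 20   # assumed price-to-revenue multiple the market applies

circular_revenue = investment * recycle_rate     # customer buys from its own investor
implied_market_value = circular_revenue * revenue_multiple

print(f"${investment/1e9:.0f}B of vendor capital -> ${circular_revenue/1e9:.0f}B "
      f"booked as the vendor's own revenue")
print(f"Capitalized at {revenue_multiple}x, that circular revenue could be read by the "
      f"market as ~${implied_market_value/1e12:.1f}T of value")
```

Even on these toy assumptions the point stands: headline revenue alone says little about how much genuinely external demand sits behind it, which is exactly the scrutiny Goldman Sachs is calling for.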

Mining for AI: Repurposing Crypto Farms

As hyperscaler demand outpaces power supply, one of the most surprising trends of 2025 has been the aggressive conversion of crypto mining farms into AI-ready data centers. Bitcoin miners like Bitfarms, Hive Digital, and Core Scientific have pivoted to AI infrastructure, leveraging their existing high-capacity grid connections, cooling systems, and bare-metal automation to offer rapid GPU deployments and AI training workloads.​

Why? Building a new traditional data center in the US can take 2–4 years due to permitting and interconnection delays. But a retrofitted mining site — often already optimized for dense rack power and thermal management — can be up and running for AI in just 6–12 months. Miners are sitting on a generational goldmine, reshaping the economics of digital infrastructure and squeezing more value out of assets that once chased BTC hashes.​

With the AI bubble inflating and credit markets fueling capex, every square foot of data center land and every available megawatt are being snapped up in advance — sometimes without clear visibility on long-term energy supply or competitive risks. The shift to energy-optimized AI-ready sites could become the defining moat for future hyperscaler winners — while those locked into slow, expensive US buildouts risk stranding billions if the competitive tide turns.

The era of “build it and they will come” is over. Now it’s “lease before it’s built — or you’re locked out.” In this high-stakes game, crypto miners turned AI landlords may be as critical to the next wave as the cloud titans themselves.

The Global Bottleneck: US vs Asia

US power constraints are now so acute that analysts warn America could lose the AI race to China and Asia, whose governments and energy providers are leapfrogging grid upgrades and offering rapid incentives to capture the next wave of compute-intensive workloads.​

Asia is rapidly reshaping the global landscape for AI infrastructure and digital competitiveness, accelerating past established bottlenecks that now constrain the US. The International Renewable Energy Agency and the International Energy Agency both highlight that utilities across China, Southeast Asia, and the wider APAC region deployed a record 413 GW of new clean energy capacity in 2024 alone — representing over half the world’s renewable capacity additions. China, India, and Japan have not only built vast wind, solar, and hydropower fleets, but are now leveraging them directly to supply sovereign AI development at national scale.

China has rapidly established hundreds of AI data centers within 3–4 years, fueled by aggressive government stimulus and massive investments in power infrastructure — including hydro, nuclear, and renewables — which have resulted in high power reserve margins of 80–100%. Centralized planning has enabled streamlined permitting, bulk grid upgrades, and the creation of dedicated data center clusters in low-cost regions equipped with advanced cooling technologies like offshore wind-powered underwater centers. This coordinated approach has allowed China to quickly scale AI infrastructure despite some challenges like speculative investments and underutilized facilities.

In contrast, the U.S. faces a fragmented power grid with reserve margins typically below 15%, compounded by slow and complex permitting processes and multi-year delays for grid interconnections. U.S. operators must navigate a maze of utilities and regulators, volatile electricity prices, and critical workforce shortages, making the deployment of 20 GW-scale AI infrastructure a prolonged, capital-intensive effort. Unlike China, which views AI centers as energy absorbers to stimulate growth, the U.S. grid treats them as disruptive loads requiring extensive new generation, transmission, and storage, significantly stretching national infrastructure plans.
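For readers unfamiliar with the metric, reserve margin is simply spare generating capacity expressed as a fraction of peak demand. The capacity and peak-load figures in the snippet below are invented purely to show what an 80–100% margin versus a sub-15% margin looks like; they are not actual grid statistics.

```python
# Reserve margin = (installed capacity - peak demand) / peak demand.
# The capacity/peak figures below are illustrative only, chosen to show
# how an 80-100% margin differs from one below 15%.

def reserve_margin(capacity_gw: float, peak_demand_gw: float) -> float:
    """Spare generating capacity as a fraction of peak demand."""
    return (capacity_gw - peak_demand_gw) / peak_demand_gw

print(f"Illustrative China-style grid: {reserve_margin(1800, 1000):.0%}")  # 80%
print(f"Illustrative US-style region:  {reserve_margin(115, 100):.0%}")    # 15%
```

The practical difference is that a grid with large spare capacity can absorb a multi-gigawatt data center cluster as found load, while a thin-margin grid must first build new generation, transmission, and storage.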

Asia’s Energy Surge: Infrastructure on Fast Forward

  • Asia’s share of renewables grew to 29% of its electrical grid in 2024, with China alone spending $625 billion — 31% of global clean energy investment.​
  • APAC is projected to drive two-thirds of global electricity demand growth by 2030, giving it the economic tailwind to support hyperscale AI, cloud, and chip foundries at a scale unmatched in the West.​
  • Southeast Asian countries are rapidly closing the energy transition gap, with national targets for more than 40% renewables by the end of the decade, and favorable public investment frameworks to reinforce private capital.​
  • While the US faces permitting slowdowns, unpredictable rates, and grid backlogs, Asian governments have moved ahead with direct incentives, streamlined infrastructure investments, and rapid grid upgrades. Data centers in China, India, and other APAC economies are now powered by new wind, hydro, and small-scale solar, often coming online in months — not years — thanks to government interventions and public-private partnerships.​

Implications for the AI Race

With America’s data center power constraints now at crisis levels, energy — and not just compute — is the new limiting factor for global AI dominance. Asian nations, led by China, can now provision sovereign-scale compute access without the risk of rolling blackouts, grid pricing spikes, or protracted construction delays.

Yet it’s not just grid power holding back the US. On 10 October 2025, China imposed new restrictions on exports of rare earth elements, central to the manufacture of leading-edge AI semiconductors, data storage, and networking hardware. China produces 70% of the world’s rare earths and controls 90% of all refining and processing — now linking those exports explicitly to national security, military, and advanced technology uses.​

Rare earth exports destined for semiconductors at 14nm and below, or for memory chips with 256 layers or more, now require “case-by-case approval,” with an end-use test that could shut off supply even for foreign-assembled products containing trace amounts of Chinese-origin rare earths.
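As a rough illustration only, the stated trigger points can be expressed as a simple check. This is a loose paraphrase of the thresholds quoted above, not the text of the regulation, and the real licensing process is discretionary and far more detailed.

```python
# Loose paraphrase of the trigger points described above. Not the actual
# regulation: real reviews are case-by-case and consider end use in detail.

def needs_case_by_case_approval(logic_node_nm: float | None = None,
                                memory_layers: int | None = None,
                                contains_cn_rare_earths: bool = False) -> bool:
    """Return True if a shipment falls inside the stated review triggers."""
    advanced_logic = logic_node_nm is not None and logic_node_nm <= 14
    advanced_memory = memory_layers is not None and memory_layers >= 256
    return contains_cn_rare_earths and (advanced_logic or advanced_memory)

# Example: material destined for a 5 nm accelerator containing Chinese-origin rare earths
print(needs_case_by_case_approval(logic_node_nm=5, contains_cn_rare_earths=True))  # True
```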

China has also intensified restrictions on Nvidia chips in 2025 as part of a broader strategy to boost domestic semiconductor manufacturing and reduce reliance on U.S. technology. Chinese customs authorities have escalated inspections of Nvidia AI chip shipments at major ports, especially targeting advanced models like the Nvidia A100 and Pro6000 series. These actions align with efforts to enforce U.S. export controls and combat smuggling of over $1 billion worth of chips into China. Despite Nvidia releasing a China-specific AI chip, the RTX6000D, demand has been weak as local companies are encouraged to use home-grown alternatives. This crackdown reflects rising tensions between the U.S. and China over control of critical AI technology and chips vital to future AI capabilities.

All these moves, in direct response to mounting US controls on chip exports, make the supply of AI hardware more politically fraught than ever, threatening to fragment global innovation and escalate costs for Western manufacturers.

Unless the US mobilizes a new energy-industrial policy — combining renewables, nuclear, and AI-friendly regulatory reforms — it risks repeating the 5G telecom mistake: ceding the initiative to more agile, energy-secured competitors. The future of AI may be decided as much by terawatts and grid reliability as by model performance or venture capital flows. If the US rises to the challenge, it could reassert leadership in AI as it did in earlier tech cycles — otherwise, the new epicenter of digital transformation may shift decisively eastward.

The Dissolving Boundary: Venture Capital, Debt, and Energy

Venture capital’s record-shattering commitment to AI infrastructure this year — totaling $192.7 billion and capturing 60% of all US deal activity — stands as both a testament to AI’s transformative promise and a warning sign of unsustainable financial engineering. Private equity and VC funds, driven by a relentless mandate for near-term returns, are pushing capital into trillion-dollar data centers and bespoke chip fabs that require multi-decade payback horizons. This mismatch forces even the tech giants themselves to increasingly rely on debt — AI-linked debt now totals over $1.2 trillion, making up a staggering 14% of the US investment-grade bond market, eclipsing entire banking sectors for systemic risk exposure.​
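The horizon mismatch can be made concrete with rough numbers. The capex, cash-flow, and fund-life figures below are assumptions chosen only to illustrate the shape of the problem, not estimates for any actual facility or fund.

```python
# Toy illustration of the horizon mismatch: multi-decade infrastructure paybacks
# versus a typical ~10-year venture fund life. All figures are assumptions.

DATA_CENTER_CAPEX = 30e9        # assumed cost of one hyperscale AI campus, dollars
ANNUAL_NET_CASH_FLOW = 1.5e9    # assumed net operating cash flow per year, dollars
VC_FUND_LIFE_YEARS = 10         # typical closed-end fund horizon

simple_payback_years = DATA_CENTER_CAPEX / ANNUAL_NET_CASH_FLOW
print(f"Simple payback: {simple_payback_years:.0f} years "
      f"vs. a {VC_FUND_LIFE_YEARS}-year fund life")
```

When the asset pays back over twice the life of the fund that financed it, someone has to refinance in the middle, which is exactly where debt markets enter the picture.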

What this reveals is a fragile ecosystem built on circular capital flows and vendor financing loops, dependent on markets continuously embracing rising valuations and follow-on capital. As debt investors grow more cautious, and as credit markets recalibrate risk premiums with rising rates and economic uncertainty, the risk of a sharp contraction intensifies — one that could cascade through the largest players, leaving data centers half-built and valuations in freefall. The AI boom, for all its promise, walks a narrow path between historic innovation and the precipice of the biggest tech bubble the world has ever seen.

The Hidden Dangers: Fragile Growth, Leapfrogging Risk

The tight, circular interlock carries profound dangers. A single crack — whether regulatory, financial (markets tighten, rates rise), technological (the leapfrog of more efficient models, like DeepSeek’s compact architectures), or infrastructural (US grid fails) — can trigger a catastrophic unraveling. Even internal voices now warn that “profitability isn’t even in my top 10 priorities” (Sam Altman); cost discipline has been discarded for growth-at-all-costs — a boast that will turn to a curse if the capital tide recedes.

Parallel to this is the echo of pandemic-era overbuilding: companies constructed warehouses and hired at scale, only to reverse violently once conditions shifted. The current trillion-dollar data center and infrastructure investment spree could be left as a monument to wasted capital if innovation (from Asia or elsewhere) leapfrogs the army of hyperscale buildouts now underway.

The Trillion-Dollar Mirage: Financialized AI as a New Asset Class

Most troubling for the AI ecosystem’s long-term stability is that OpenAI’s financial foundations remain far from self-sustaining. Goldman Sachs highlights that OpenAI is approximately 75% dependent on external capital, framing it as a highly collateralized and financialized operation deeply reliant on the continual willingness of investors and markets to keep pumping liquidity into the machine.

Despite generating around $4.3 billion in revenue during the first half of 2025 — a 16% increase over all of 2024 — it is simultaneously burning $2.5 billion in cash, primarily due to heavy R&D costs, which totaled an extraordinary $6.7 billion over the same period. Projections indicate OpenAI will require $8.5 billion in cash burn for the full year 2025 alone, with profitability not expected until around 2029, underscoring a multi-year horizon before achieving positive cash flow. Meanwhile, significant portions of revenue must be shared with partners like Microsoft, dampening retained earnings.
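The cited figures imply a sharp acceleration in spending through the second half of the year, which a quick calculation makes explicit (all inputs are taken directly from the numbers above).

```python
# Quick arithmetic on the figures cited above (H1 2025): the implied
# second-half burn and the gap between R&D spend and revenue.

h1_revenue = 4.3e9        # first-half 2025 revenue, from the text
h1_cash_burn = 2.5e9      # first-half cash burn, from the text
h1_rd_spend = 6.7e9       # first-half R&D spend, from the text
full_year_burn = 8.5e9    # projected 2025 cash burn, from the text

implied_h2_burn = full_year_burn - h1_cash_burn
print(f"Implied second-half burn: ${implied_h2_burn/1e9:.1f}B "
      f"({implied_h2_burn / h1_cash_burn:.1f}x the first half)")
print(f"R&D alone exceeded revenue by ${(h1_rd_spend - h1_revenue)/1e9:.1f}B in H1")
```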

Oracle, whose market cap surged $244 billion after partnering with OpenAI, has already drawn credit risk warnings from Moody’s due to its “excessive reliance on a single customer.”

Nvidia’s rapid pivot from stable hyperscalers to riskier AI startups and sovereign wealth funds — financed through mounting debt — further exposes the fragility of an ecosystem balanced precariously on leverage, speculation, and the hope that rapidly expanding user adoption will sustain growth.

Furthermore, in the second quarter of fiscal 2026, Nvidia reported $46.7 billion in revenue, with its data center segment alone contributing $41.1 billion — an astonishing 88% of total sales. However, nearly 53% of that data center revenue is concentrated among just three key customers, with the largest single client accounting for more than 20% of total sales.

This customer concentration means Nvidia’s growth trajectory is heavily tethered to the capital expenditure plans and strategic decisions of a handful of hyperscalers and sovereign AI funds, including juggernauts like OpenAI, Microsoft, Amazon, Meta, and Oracle.

Any shift — whether one customer decides to develop its own silicon (Google, Meta, Amazon, Microsoft et al.), scales back spending amid economic pressures, or faces geopolitical disruption — would profoundly impact Nvidia’s revenue and, by extension, the entire AI chip supply chain.​

CEO Jensen Huang projects a $3–4 trillion AI infrastructure buildout by the decade’s end and estimates Nvidia’s share at around 70% of AI-related data center expenditures for the next five years. Despite bullish analyst forecasts and expanding market penetration, this dependence on a narrow client base introduces significant volatility, underscoring that Nvidia’s dominant position, while powerful, is intricately linked to the financial health and capital commitments of a select few major players within the AI ecosystem.​
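Two quick calculations put those figures in perspective. The individual customer shares below are an assumed split consistent with the figures quoted above (roughly 53% across three customers, the largest above 20% of sales); the customers are not identified publicly, so treat the breakdown as hypothetical.

```python
# Arithmetic implied by the figures above. The individual customer shares are
# an assumed split consistent with "~53% across three customers, largest >20%".

top_customer_shares = [0.23, 0.16, 0.14]        # hypothetical split of total revenue
hhi_contribution = sum(s**2 for s in top_customer_shares)  # sum of squared shares,
                                                           # a standard concentration measure
print(f"Top-3 share: {sum(top_customer_shares):.0%}, "
      f"concentration (HHI) contribution from top 3: {hhi_contribution:.3f}")

# Implied opportunity from the projected $3-4T buildout at a ~70% share
for buildout in (3e12, 4e12):
    print(f"70% of a ${buildout/1e12:.0f}T buildout ≈ ${0.7*buildout/1e12:.1f}T "
          f"of cumulative AI data center spend")
```

The upside and the fragility are two sides of the same arithmetic: a $2–3 trillion opportunity, routed through a handful of counterparties.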

The Case for a Sovereign Intervention: Avoiding the AI Infrastructure Meltdown

Amid the looming crisis in AI infrastructure — marked by explosive capital expenditure, mounting debt, and grid overload — there is a growing case for targeted sovereign intervention to stabilize the buildout and unlock AI’s potential before it collapses under its own weight.

Why Government or Sovereign Funds Could Step In

The scale and systemic risk of the trillion-dollar AI buildout far exceed what private markets can sustainably finance alone. Crucially, the US government or sovereign wealth funds could provide:

  • Baseload Power Investments: Massive injections into nuclear and renewable infrastructure capable of delivering stable, affordable energy to keep hyperscale data centers humming. This includes nuclear plant buildouts, grid modernization, and targeted subsidies to clean energy deployment for AI suppliers and cloud providers.​
  • Data Center Public-Private Partnerships: Enabling faster facility rollouts, zoning streamlining, and direct capital allocation to mitigate bottlenecks in data center construction — the commercial equivalent of the interstate highway system for technology.​
  • Debt and Equity Guarantees: Sovereign backstops for debt financing, reducing risk-premiums and interest rates for companies caught in the vicious cycle of circular financing, lowering the cost of capital for AI infrastructure expansion.​
  • Strategic Risk Sharing: Programs to diversify AI infrastructure ownership, enabling more resilient ecosystem design instead of concentrated, fragile vendor interdependencies.

The US Stands a Fighting Chance

The US cannot afford to cede the AI race to China after losing ground in 5G telecom and advanced semiconductor manufacturing. While China advances aggressively on sovereign AI infrastructures backed by state utilities and subsidies, the US maintains unmatched innovation ecosystems, world-class universities, and a mature capital market.

If the US acknowledges the energy and financing bottlenecks and acts swiftly to mobilize public-private cooperation and innovative financing models, it can transform the looming crisis into an unprecedented opportunity, ensuring:

  • AI infrastructure growth fueled sustainably by green energy breakthroughs.
  • Ecosystem resilience through financial market stabilization.
  • Expanding leadership in next-generation AI hardware, software, and applications.

Conclusion: AI’s Reckoning Is Near

The entire ecosystem dances on the edge of leverage, speculation, and hopes that user adoption will multiply fast enough to keep the story intact.​

The AI industry’s $1 trillion circular financial flywheel will not spin indefinitely without sovereign engagement. It is not merely a matter of market forces but of national strategic interest: preventing blackout-driven failures at the digital economy’s core, and leapfrogging competitors by harnessing government resources where private actors cannot.

The AI revolution is not merely a story of exponential innovation. It is a grand, precarious, and perhaps unsustainable financial experiment — a trillion-dollar loop that transforms debt into market cap, software into speculative capital, and infrastructure into intertwined collateral.

Will this evolve into a foundation for a new industrial era, making AI the core infrastructure of the 21st-century economy? Or will it end as a cautionary tale — another credit-fueled mirage where circular leverage and unchecked optimism produce the greatest tech bubble in history?

One thing is indisputable: never before have so many of the world’s most powerful companies, banks, and institutions placed such colossal, interconnected bets — on a technology, an ideal, and ultimately, each other.

The outcome will not just reset the future of AI.

It may reset the very architecture of capitalism!

Luke Thomas

Executive Strategy Advisor
