
AI Infrastructure Risk: The Conduit Financing Thesis

Research · 22 min read
AI infrastructure · conduit financing · systemic risk · technology capex

The AI infrastructure buildout represents the largest capital expenditure cycle in technology history. In 2024 alone, the five major hyperscalers committed over $200 billion to AI-related infrastructure. This paper examines the risk profile of the financing structures underpinning this buildout, drawing parallels to historical conduit financing models and identifying potential systemic vulnerabilities.

The Infrastructure Imperative

The economics of large language model training and inference create an insatiable demand for compute infrastructure. Each generation of frontier models requires approximately 3-5x the compute of its predecessor, while inference demand scales with adoption. This creates a dual exponential growth curve in infrastructure requirements.
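The dual exponential can be made concrete with a toy projection. The parameters below are illustrative assumptions, not figures from this paper: a new frontier generation every two years at 4x the training compute (the midpoint of the 3-5x range above), and inference demand compounding at 60% annually with adoption.

```python
# Sketch of the "dual exponential" demand curve described above.
# All parameters are illustrative assumptions; units are normalised
# so that year 0 = 1.0 for both curves.

def projected_compute_demand(years, train_mult=4.0, gen_interval=2,
                             inference_growth=0.60):
    """Return (year, training demand, inference demand) tuples."""
    demand = []
    for t in range(years + 1):
        training = train_mult ** (t // gen_interval)   # steps up per generation
        inference = (1 + inference_growth) ** t        # compounds with adoption
        demand.append((t, training, inference))
    return demand

for t, train, infer in projected_compute_demand(6):
    print(f"year {t}: training x{train:.0f}, inference x{infer:.1f}")
```

Under these assumptions, training demand rises 64x and inference demand roughly 17x over six years; the point is not the specific numbers but that both curves compound independently, so total infrastructure requirements grow faster than either alone.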

The major cloud providers — Microsoft, Google, Amazon, Meta, and Oracle — are responding with unprecedented capital deployment:

  • Data centre construction — New facilities requiring 18-24 months from groundbreaking to operational
  • Custom silicon — Proprietary chip design and manufacturing partnerships
  • Power infrastructure — Dedicated power generation, including nuclear and renewable commitments
  • Networking — High-bandwidth interconnect fabric for distributed training
  • Cooling systems — Liquid cooling infrastructure for high-density GPU deployments

The Financing Architecture

This infrastructure buildout is financed through several overlapping mechanisms:

Direct Balance Sheet

The hyperscalers fund the majority of capex from operating cash flows and existing credit facilities. This represents the lowest-risk financing layer, backed by diversified revenue streams and investment-grade credit profiles.

Project Finance

Individual data centre projects are increasingly structured as project finance vehicles, with dedicated SPVs holding the assets and debt secured against the specific facility. This isolates risk at the project level but creates complexity in cross-default and cross-collateralization provisions.

Conduit Structures

The most concerning development is the emergence of conduit financing models, where:

  1. Infrastructure developers pre-sell capacity to cloud providers under long-term contracts
  2. These contracts are securitised to raise construction financing
  3. The resulting securities are sold to institutional investors seeking infrastructure-like returns
  4. Leverage is applied at multiple points in the chain

This conduit model creates the classic maturity transformation problem — long-dated, illiquid assets funded by instruments that assume continuous market access.
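The maturity mismatch can be sketched numerically. The figures below are hypothetical, not drawn from any actual deal: a 15-year capacity contract funds construction with 3-year notes that must be rolled four times before the contract matures, and each roll reprices at the then-current spread.

```python
# Minimal sketch of maturity transformation in the conduit chain.
# Hypothetical structure: a 15-year asset funded with 3-year notes.

contract_years = 15
note_tenor = 3
rolls = contract_years // note_tenor - 1   # refinancings before contract maturity

def average_annual_coupon(base_rate, spread_widening_per_roll):
    """Average coupon over the contract life if each roll reprices wider."""
    total = 0.0
    for roll in range(rolls + 1):
        coupon = base_rate + roll * spread_widening_per_roll
        total += coupon * note_tenor
    return total / contract_years

# 200bp of spread widening at each roll takes a 5% starting coupon
# to a 9% average over the contract life:
print(f"{average_annual_coupon(0.05, 0.02):.1%}")
```

The vehicle's economics were underwritten at the starting coupon, but its realised cost of funds depends on four future refinancings it cannot control; this is the sense in which the structure "assumes continuous market access".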

Historical Parallels

The conduit financing of AI infrastructure bears structural similarity to several historical episodes:

Telecommunications (1998-2001)

The dot-com era saw massive over-investment in fibre optic infrastructure, much of it financed through structured vehicles. When demand failed to materialise at projected rates, the resulting defaults cascaded through the financing chain. Companies like WorldCom, Global Crossing, and Adelphia collapsed, and utilisation of installed fibre reached barely 3% at the trough.

Real Estate CDOs (2004-2008)

The pre-crisis real estate boom featured conduit structures (SIVs, CDOs, CDO-squareds) that created opacity around the ultimate credit risk. When underlying asset performance deteriorated, the lack of transparency amplified panic and prevented orderly price discovery.

Energy Infrastructure (2014-2016)

The shale boom generated massive infrastructure investment, much financed through MLPs and high-yield bonds. When commodity prices collapsed, the infrastructure proved over-built relative to demand, and financing vehicles suffered significant losses.

The Demand Risk

The critical assumption underlying AI infrastructure financing is sustained demand growth. Several factors could undermine this assumption:

Algorithmic Efficiency

Each new model architecture tends to be more efficient than its predecessor. GPT-4 required significantly less training compute per capability unit than GPT-3. If efficiency gains outpace demand growth, the installed infrastructure base could become over-built.

Inference Optimisation

Techniques like quantisation, distillation, and speculative decoding reduce inference compute requirements by 2-10x. As these techniques mature, the compute required per query will decline, potentially faster than query volume grows.
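The race between volume growth and efficiency gains reduces to a simple break-even condition: total inference compute shrinks whenever (1 + volume growth) × (1 − efficiency gain) < 1. The inputs below are illustrative.

```python
# Break-even check for the efficiency-vs-volume race described above.
# volume_growth: annual growth in query volume (e.g. 0.80 = +80%)
# efficiency_gain: annual reduction in compute per query (e.g. 2/3 = a 3x gain)

def net_compute_growth(volume_growth, efficiency_gain):
    """Annual growth in total inference compute (negative = shrinking)."""
    return (1 + volume_growth) * (1 - efficiency_gain) - 1

# Even 80% annual query growth is overwhelmed by a 3x efficiency gain:
shrinking = net_compute_growth(0.80, 2 / 3)
# ...while the same growth against a modest 20% gain still expands demand:
growing = net_compute_growth(0.80, 0.20)
print(f"3x efficiency: {shrinking:+.0%}, 20% efficiency: {growing:+.0%}")
```

This is why the 2-10x range cited above matters so much: at the top of that range, even very strong adoption growth leaves total compute demand falling.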

Regulatory Constraints

Emerging AI regulation in the EU, and potentially the US, could constrain deployment scenarios, reducing demand growth below current projections. Data sovereignty requirements may also fragment demand across geographies, creating regional over-capacity.

Customer Concentration

AI infrastructure demand is highly concentrated. A small number of companies — primarily frontier model developers and large enterprises — account for the majority of compute consumption. The loss of a single major customer could make a facility uneconomic.

The Power Bottleneck

Perhaps the most significant risk factor is power availability. AI data centres require 50-100MW per facility, with next-generation designs targeting 500MW+. This creates several vulnerabilities:

  • Grid capacity — Many regions lack the grid infrastructure to support new data centre loads
  • Permitting delays — Power plant construction and grid interconnection face regulatory timelines measured in years
  • Cost escalation — Competition for power capacity is driving electricity costs higher, undermining the unit economics of inference
  • Environmental constraints — Carbon commitments may conflict with the power demands of AI infrastructure
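The sensitivity of facility economics to power price is easy to quantify. The figures below are back-of-envelope assumptions, not operator data: a 100MW facility run at 70% average utilisation, comparing a contracted rate against a stressed spot rate.

```python
# Back-of-envelope power economics for a 100MW AI data centre.
# All inputs are hypothetical assumptions for illustration.

HOURS_PER_YEAR = 8_760

def annual_power_cost(mw, price_per_kwh, utilisation=0.70):
    """Annual electricity cost in dollars for a facility of `mw` megawatts."""
    kwh = mw * 1_000 * HOURS_PER_YEAR * utilisation   # MW -> kW, then kWh/year
    return kwh * price_per_kwh

contracted = annual_power_cost(100, 0.06)   # fixed long-term rate
stressed = annual_power_cost(100, 0.10)     # competitive spot market
print(f"annual cost escalation: ${stressed - contracted:,.0f}")
```

A four-cent move in the power price adds roughly $25M per year to a single 100MW facility's operating cost, which is why the final investment point below treats secured fixed-price power contracts as a structural advantage.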

Valuation Framework

Assessing the risk-adjusted value of AI infrastructure requires a multi-scenario approach:

Base Case (60% probability)

  • Demand growth continues at 40-50% annually through 2028
  • Efficiency gains partially offset volume growth
  • Infrastructure utilisation stabilises at 70-80%
  • Returns on invested capital settle at 8-12%

Bull Case (20% probability)

  • AGI breakthrough accelerates demand beyond projections
  • Enterprise adoption reaches mass market
  • Infrastructure becomes scarce, driving premium pricing
  • Returns exceed 15%

Bear Case (20% probability)

  • Efficiency gains outpace demand growth
  • Regulatory constraints limit deployment
  • Over-building creates 50%+ excess capacity
  • Returns fall below cost of capital
  • Conduit financing vehicles experience distress
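The three scenarios above imply a probability-weighted return. The point estimates within each stated range are my assumptions (midpoint of the base-case range, the bull threshold taken at 15%, and the bear case set to 0%), not figures from the scenarios themselves.

```python
# Probability-weighted return across the three scenarios, using the
# stated probabilities and illustrative point estimates within each
# stated return range (the point estimates are assumptions).

scenarios = {
    "base": (0.60, 0.10),   # 60%: midpoint of the 8-12% ROIC range
    "bull": (0.20, 0.15),   # 20%: "exceeds 15%" taken at the threshold
    "bear": (0.20, 0.00),   # 20%: below cost of capital, set to 0% here
}

expected_return = sum(p * r for p, r in scenarios.values())
print(f"probability-weighted return: {expected_return:.1%}")
```

The weighted result lands around 9%, near the bottom of the base-case range: the bull case's capped upside does not compensate for the bear case's losses, which is one way to see the asymmetry that the investment implications below exploit.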

Investment Implications

  1. Prefer equity over debt — In a sector with binary demand outcomes, equity participation captures the upside while debt bears asymmetric downside risk from over-building scenarios.

  2. Favour vertically integrated operators — Companies that control the full stack (silicon to application) can optimise utilisation across their infrastructure, reducing stranded asset risk.

  3. Monitor utilisation metrics — Data centre utilisation rates are the leading indicator of over-building. Current rates above 85% suggest near-term scarcity, but watch for the inflection.

  4. Hedge power risk — Power cost and availability represent the largest single risk factor. Operators with secured long-term power contracts at fixed prices hold a structural advantage.

  5. Avoid conduit financing exposure — Structured vehicles financing AI infrastructure carry risks that are inadequately compensated. The illiquidity premium is insufficient given the demand uncertainty and historical parallels.

Conclusion

The AI infrastructure buildout is real, necessary, and likely to generate substantial returns for well-positioned operators. However, the financing structures emerging around this buildout introduce risks that extend beyond the technology sector into the broader financial system.

The conduit financing thesis — that AI infrastructure can be financed through securitisation of capacity contracts — assumes demand persistence that is not guaranteed. When the underlying demand assumption is questioned, conduit structures amplify rather than absorb the resulting volatility.

Investors should approach AI infrastructure with enthusiasm for the technology and caution for the financing. The history of infrastructure booms suggests that the operators who build sustainably and finance conservatively will outperform those who optimise for speed and scale.

The $200 billion annual question is not whether AI infrastructure will be valuable — it almost certainly will — but whether the financing structures being built around it can withstand the inevitable demand fluctuations that characterise every technology cycle.