
Five-Layer AI Infrastructure Stack

  • Writer: Andrea Bonini
  • Feb 1
  • 8 min read

In a panel at the 2026 World Economic Forum in Davos, NVIDIA CEO Jensen Huang framed AI not as a single technology but as a “five-layer cake” of interdependent infrastructure. From the bottom up, the layers are: (1) Energy, (2) Chips & Computing Infrastructure, (3) Cloud Infrastructure, (4) AI Models, and (5) Applications. Each layer must be built and scaled to support the next, meaning investments made today in energy grids, semiconductor fabs, and data centers will ultimately enable tomorrow’s AI applications.

Semiconductor and infrastructure investments today are foundational to all future AI value.

The chips-and-compute layer commands the most immediate attention: it is in an unprecedented capital cycle as leading firms build new fabs, memory factories, and accelerators. A portfolio strategy should overweight companies that are essential at this layer (leading chipmakers, foundries, memory suppliers, and equipment manufacturers). Equally important is monitoring the application layer: the flood of VC into API-driven startups and rapid enterprise AI adoption indicate that spending on hardware and cloud should ultimately translate into profitable services. In practice, experienced managers will likely balance exposure across the stack, from Nvidia, AMD, and TSMC at one end to cloud and software providers (Microsoft, Amazon, etc.) at the other, including small-cap companies with developing capabilities, so the portfolio captures growth at each stage of the AI buildout.

Alpha accrues first to capacity owners (energy, chips, cloud) and later to application-layer winners. Valuations are elevated but supported by real earnings growth. The primary risks to monitor are US–China policy escalation, Taiwan supply-chain disruption, energy price shocks, and overbuild with a utilization inflection after 2026.

 

Layer 1 – Energy (Power & Cooling): AI’s power demands mean utilities and energy infrastructure are critical investments. Data centers already account for ~1.5% of global power, and this share could double by 2030 as AI rolls out. Governments are treating AI like a national asset, prompting spending on grids and generation. Investors might watch renewable-energy providers, grid-tech companies, and operators of large-scale power projects. (For example, U.S. data-center electricity consumption is projected to rise from ~5% of U.S. power currently to as much as 10% by 2028, implying 74–132 GW of new capacity.) Stable power generation and new investments in green energy and storage will underpin AI’s growth at this base layer.
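As a sanity check on those figures, the share-of-generation numbers can be converted into continuous load with simple arithmetic. The sketch below is illustrative only; the assumed ~4,200 TWh of annual U.S. generation is an input of this example, not a figure from the article.

```python
# Back-of-envelope: convert a share of annual U.S. generation into
# average continuous load. All inputs are illustrative assumptions.

US_GENERATION_TWH = 4200.0  # assumed annual U.S. electricity generation
HOURS_PER_YEAR = 8760.0

def avg_load_gw(annual_twh: float) -> float:
    """Convert annual energy (TWh) into average continuous load (GW)."""
    return annual_twh * 1000.0 / HOURS_PER_YEAR  # TWh -> GWh, then divide by hours

dc_now = avg_load_gw(US_GENERATION_TWH * 0.05)   # ~5% share today
dc_2028 = avg_load_gw(US_GENERATION_TWH * 0.10)  # ~10% share by 2028

print(f"Data-center average load at a 5% share: ~{dc_now:.0f} GW")
print(f"Data-center average load at a 10% share: ~{dc_2028:.0f} GW")
```

Note that average load understates the nameplate capacity that must actually be built (peaks, reserve margins, cooling overhead, and growth in total demand), which is one reason capacity-addition estimates such as the 74–132 GW range sit well above this simple average-load arithmetic.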

Recommendation: BUY. Visible demand, regulated returns, long-duration contracts, and strategic importance make this layer an early-cycle beneficiary with improving ROIC.

Rationale: AI compute is power-constrained. Data centers require continuous, large-scale electricity supply, making utilities, grid operators, and power equipment providers structural beneficiaries.

What matters for investors:
- Multi-year contracted demand from hyperscalers
- Regulated or quasi-regulated returns
- Grid expansion capex growing the rate base
- Strategic importance reduces demand cyclicality

Representative exposures: Regulated utilities with data-center concentration, renewable developers with long-term PPAs, power and cooling equipment suppliers.

Key risks: Rising interest rates (financing costs), regulatory lag in rate recovery, local permitting constraints.


Layer 2 – Chips (Semiconductors & Equipment): This layer is the core bottleneck and opportunity. Huang highlighted that nearly every big chipmaker is expanding: “TSMC just announced … [building] 20 new chip plants,” and partners (Foxconn/Wistron/Quanta) are building “30 new computer plants” for AI factories. Meanwhile, memory leaders are pouring capital into fabs – Micron plans roughly $200 billion in U.S. memory manufacturing and R&D (creating ~90,000 jobs). TSMC’s capex alone is unprecedented (~$52–56B in 2026), dwarfing Samsung’s foundry spend by ~5×. Investment implications: Semiconductors tied to AI are in a multiyear boom.

Public chip companies poised to benefit include Nvidia (leading AI GPUs), AMD (GPUs/CPUs), Intel (AI CPUs, and as a new foundry player under CHIPS Act funding), TSMC (leading contract foundry), and Samsung/Kioxia/Micron/SK Hynix (memory).

Semiconductor-equipment firms (ASML, Lam Research, KLA, Tokyo Electron, Nikon) also gain from this fab build-out. In effect, the chip layer’s capex trends (huge fab spending, new fabs everywhere) suggest durable tailwinds for semiconductor suppliers and foundry participants.

Recommendation: SELECTIVE BUY. Core AI enablers with strong earnings momentum, though valuation dispersion and geopolitical risk require selectivity and sizing discipline.

Rationale: AI workloads are compute-intensive, driving unprecedented demand for advanced logic, memory, and semiconductor equipment. This layer captures the highest near-term profit pool.

Sub-segments:
- Fabless designers (high margin, low capex)
- Foundries and memory (capital intensive, strategic)
- Equipment and EDA (picks-and-shovels with multi-year visibility)

What matters for investors:
- Order backlog and customer concentration
- Capex discipline vs subsidy dependence
- Technology leadership durability
- China exposure and export controls

Valuation note: Multiples are elevated for designers but more reasonable for manufacturers and equipment, partly due to geopolitical discount.

Key risks: Taiwan geopolitics, post-2027 overcapacity, policy-driven demand distortions.

 


Layer 3 – Cloud Data Centers (Hyperscalers & REITs): As AI workloads multiply, hyperscale cloud providers are beefing up capacity. Amazon, Microsoft, Google, and Meta dominate this layer: AWS, the leader, has already spent over $100 billion on data centers over the years, while global data-center capex ran roughly $600 billion in 2025.

Investors may target cloud and networking stocks: e.g. Amazon (AWS), Microsoft (Azure) and Alphabet (GCP) (as they deploy GPUs at scale), as well as equipment firms like Broadcom/Cisco (data-center switches/routers) and Marvell (NICs, optical).

Data-center REITs (Equinix, Digital Realty) and even telecom tower operators can benefit indirectly.

The key point: investments here enable AI delivery (via on-demand compute), and robust spending by these firms means strong, ongoing demand for high-end hardware (much of it built on Nvidia/AMD chips).

Recommendation: SELECTIVE BUY / HOLD. Strategic assets with durable moats; near-term margin dilution from capex offsets long-term pricing power and lock-in.

Rationale: Hyperscalers are deploying massive capital to secure AI leadership. Near-term margins are pressured, but long-term strategic value is increasing.

What matters for investors:
- AI workload pricing vs utilization
- Capex-to-revenue trajectory
- Incremental margins on AI services
- Regulatory and antitrust overhang

REIT angle: Data center REITs offer lower beta exposure with yield, benefiting from overflow demand and interconnection growth.

Key risks: Margin compression if AI pricing weakens, macro IT spending slowdown, power constraints.

 

Layer 4 – AI Models: The fourth layer – designing and training AI models – underpins the application layer above. Investors should note that “accelerated” AI servers (those with GPUs/TPUs for ML) are driving extraordinary power density in datacenters. Companies developing or hosting these models (e.g. OpenAI/Microsoft, Google/DeepMind, Meta) may be private or subsumed by big tech, but their activity benefits chipmakers and cloud firms. Dedicated AI semiconductor startups (Graphcore, Cerebras, SambaNova, etc.) and accelerators are also hotspots for VC funding. For portfolio managers, this layer highlights that any company enabling large-scale training or inference – from hardware to middleware – is part of the AI value chain. (While direct consumer demand is abstracted, strong growth here signals continued demand for AI hardware and services.)

Recommendation: SELECTIVE BUY (Indirect Exposure). Monetization is uncertain and the leaders largely private; exposure is best achieved through integrated platforms rather than pure-play model risk.

Rationale: Foundation models are strategically important but economically unproven as standalone businesses. Cost structures remain opaque and competition is intense.

Investment approach:
- Favor platforms that embed models into monetizable ecosystems
- Avoid pure-play exposure without earnings visibility

Key risks: Commoditization via open source, regulatory constraints, escalating training costs.

 

Layer 5 – Applications: This top layer is where AI creates real economic value and revenue. Huang emphasized that applications in finance, healthcare, manufacturing, etc. are “where economic benefit will happen”. Crucially, 2025 saw record VC investment in AI-driven applications – Huang noted most of last year’s funding flowed into “API-native” startups. In other words, many new companies are building AI products by “plugging in” to the underlying models (via APIs like OpenAI’s), accelerating monetization of AI. For investors, this means the software and service providers at the top layer will capture the payoffs from the infrastructure beneath. Public beneficiaries include the tech giants (Microsoft, Google, Amazon) offering AI services and enterprise AI platforms, as well as specialty software firms (e.g. C3.ai, UiPath, Palantir) and even semiconductor companies licensing IP. The VC trend implies a robust pipeline of revenue-generating AI apps – a positive signal that infrastructure investments (chips/cloud) are likely to be turned into real-world profits.

Recommendation: SELECTIVE BUY / UNDERWEIGHT. Winners will emerge, but broad sector exposure risks overpaying ahead of proof of pricing power and margin uplift.

Rationale: This layer captures ultimate economic value, but timing and distribution of gains are uncertain. Market pricing already assumes broad productivity uplift.

What matters for investors:
- Evidence of pricing power (AI add-ons, subscription uplift)
- Margin expansion, not just feature launches
- Data advantage and workflow integration

Winners likely: Enterprise software leaders, industrial automation, select healthcare and financial platforms.

Avoid: Firms with AI narratives unsupported by capex, data, or monetization.


Capex Trends and Key Companies

The AI buildout is driving massive capital expenditure across the semiconductor ecosystem. Table 1 below highlights selected large cap public companies and their AI-related investment roles:

| Company (Ticker) | Role in AI Stack | Recent/Planned AI-Related Investment |
| --- | --- | --- |
| Nvidia (NVDA) | AI GPUs and systems (Layer 2); also integrates AI into services (Layer 5) | Core GPU supplier for training/inference. Dominant share of the AI accelerator market. Continued R&D investment in new chips (e.g. Blackwell) and systems. |
| AMD (AMD) | CPUs/GPUs for AI servers (Layer 2) | Supplies MI300-series GPUs and EPYC CPUs for datacenters. Relies on TSMC capacity. Significant R&D spending. |
| Intel (INTC) | CPUs, GPUs, FPGAs (Layer 2); foundry fabs | Building new U.S. fabs (Arizona, Ohio) under the CHIPS Act. 2025 capex target ~$15–18B. Shifting toward foundry services for AI chips. |
| TSMC (TSM) | Contract foundry (Layers 2–3) | Plans ~20 new fabs globally. Capex guidance ~$52–56B for 2026 (30%+ increase). Serves Nvidia, Apple, etc. |
| Samsung (005930.KS) | Memory & foundry (Layers 2–3) | Expanding logic and memory fabs, but capex is much smaller than TSMC’s (2026 foundry spend ~1/5 of TSMC’s). Building a new U.S. memory fab; investing in AI DRAM/HBM. |
| Micron (MU) | Memory (DRAM/flash) (Layer 2) | Announced ~$200B U.S. expansion (6 fabs + R&D) to meet AI demand. Major new fab openings in Idaho, expansions in Virginia, up to 4 fabs in NY. |
| SK Hynix (000660.KS) | Memory (DRAM/flash) (Layer 2) | Boosting capacity in Korea (e.g. new P-T7 packaging plant). Record profits from AI demand; planning multi-year capex increases (per Q3 2025 results). |
| ASML (ASML) | Lithography equipment (Layer 2) | World’s only EUV lithography supplier. Expanding R&D and tool capacity as foundries (TSMC, Samsung, Intel) ramp volume. |
| Broadcom (AVGO) | Networking chips (Layer 3) | Supplies top-of-rack switches, storage controllers, ASICs. Benefiting from cloud/network expansion (~27% FY capex growth driven by AI/cloud). |
| Microsoft (MSFT) | Cloud/software (Layers 3, 5) | Spent $75B+ on datacenters (through 2025). Embedding OpenAI tech into Office and Azure; building Azure AI superclusters. |
| Amazon (AMZN) | Cloud/AI services (Layers 3, 5) | Spent $100B+ on AWS datacenters. Offers SageMaker and Bedrock AI APIs. Invested $4B in new cloud regions (Chile, etc.) to scale capacity. |
| Alphabet (GOOGL) | Cloud/AI (Layers 3, 5) | Spent ~$82B on data centers. Google Cloud offers Vertex AI; DeepMind’s models feed AI products (Gemini LLM, etc.). |

 

Strategic Enhancements for Portfolio Manager Review


1. IPO Watchlist – Model-Native Firms to Monitor:

  • Anthropic, Cohere, and Mistral are likely IPO candidates within 12–24 months.
  • Major cloud platforms (Amazon, Google, Microsoft) are equity partners or preferred deployment partners.
  • Portfolio teams should prepare to evaluate business models, compute intensity, and monetization paths as public filings emerge.

 

2. Anchors for Forecasting Forward Growth:

  • Nvidia FY27 consensus revenue: $130–140B vs. $60B in FY24.
  • Microsoft’s Azure AI guidance: 25–30% CAGR through FY26.
  • TSMC: Guided >30% growth in high-performance computing chips for FY26.
  • ASML: Order backlog continues to rise with EUV capacity bookings through 2028.
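The anchors above can be reduced to implied compound growth rates for cross-checking. A minimal sketch using the Nvidia figures, assuming the midpoint of the $130–140B FY27 consensus range:

```python
# Implied compound annual growth rate (CAGR) between two revenue points.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between start and end over `years` years."""
    return (end / start) ** (1.0 / years) - 1.0

# Nvidia: ~$60B FY24 revenue vs a $130-140B FY27 consensus (midpoint $135B).
nvda_cagr = cagr(60.0, 135.0, 3)
print(f"Implied Nvidia FY24->FY27 CAGR: ~{nvda_cagr:.0%}")
```

The same helper applies to the other anchors (e.g. checking a 25–30% Azure AI CAGR against reported segment revenue as filings come in).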

 

3. Capital Allocation Guidance by Layer:

| Layer | Suggested Weight | Rationale |
| --- | --- | --- |
| Layer 1: Energy | 20% | Long-cycle, defensive regulated-return profile with early-cycle demand certainty |
| Layer 2: Chips | 40% | Highest earnings leverage and capital-formation visibility |
| Layer 3: Cloud | 20% | Moat-rich hyperscalers, infrastructure scale |
| Layer 4: Models | 10% | IP-rich, but indirect access |
| Layer 5: Applications | 10% | Selective, based on proof of monetization |
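For rebalancing checks, the suggested weights can be expressed programmatically. The weights below come from the table; the per-layer return inputs are placeholders a manager would replace with their own estimates, not forecasts:

```python
# Layer weights from the allocation table; sanity-check they sum to 100%.
weights = {
    "Energy": 0.20,
    "Chips": 0.40,
    "Cloud": 0.20,
    "Models": 0.10,
    "Applications": 0.10,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"

def blended_return(weights: dict, layer_returns: dict) -> float:
    """Weighted expected portfolio return across the five layers."""
    return sum(w * layer_returns[layer] for layer, w in weights.items())

# Hypothetical per-layer expected returns -- placeholders, not forecasts.
example_returns = {
    "Energy": 0.08, "Chips": 0.15, "Cloud": 0.10,
    "Models": 0.12, "Applications": 0.09,
}
print(f"Blended expected return: {blended_return(weights, example_returns):.1%}")
```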

 

AIncrementum - Andrea Bonini - January 2026

 
 
 
