
A tracker of data center investments, compute acquisitions, and partnerships across the world's leading AI companies. Updated January 2026.
Source: McKinsey & Company, April 2025
Comparison of major AI infrastructure players
| Company | Infrastructure They're Selling | Target Customer | Data Center Build | Power Strategy | Infrastructure Philosophy |
|---|---|---|---|---|---|
| Microsoft | Flexible access to all frontier models | Enterprise, government | Nvidia GPUs + custom silicon (Maia) | Grid + long-term power purchase agreements (PPAs) | Don't overbuild. Stay flexible. Reliability > speed |
| Google | Best cost-performant AI compute | AI natives; internal Google workloads | Nvidia GPUs + TPUs | Grid | Vertical integration; turn infra excellence into product differentiation |
| AWS | "AI factory" focused on throughput and low cost per token | Everyone building AI | Custom chips + Nvidia GPUs | Grid + PPAs | Win on cost and supply certainty; not chasing bleeding-edge speed |
| OpenAI | End-to-end AI offering (APIs, apps, infra) | Enterprises wanting full-stack AI | Partner-led DCs (Stargate, Oracle, Crusoe); exploring custom chips | Partner-dependent (varies) | Secure massive compute fast; reduce hyperscaler dependency |
| xAI | Internal AI capability (vertically tied to Elon's ecosystem) | Elon's ecosystem | Nvidia GPUs | Grid + off-grid where needed | Speed above all else. Build now, optimise later |
| Meta | Open-source AI models (Llama) + internal compute | Internal use + developer community | Nvidia GPUs + custom MTIA chips | Grid + nuclear PPAs (6.6 GW by 2035) | Open-source leadership; build massive scale for internal products |
| Anthropic | Safety-focused AI APIs (Claude) | Enterprises wanting safe, reliable AI | Multi-cloud (AWS Trainium, GCP TPUs, Hut 8 partnership) | Partner-dependent (relies on cloud providers) | Multi-cloud diversification; rely on partners for data centre build and focus on research instead; safety-first development approach |