The AI Arms Race Moved to Atoms: $625 Billion, Superconductors, and the Fight for Electricity
Published February 12, 2026
Something weird happened to the AI race. It stopped being about software.
Two years ago, the competition was benchmarks. Who had the best model. Who scored highest on MMLU. Who could pass the bar exam. The battleground was code and training data and clever architecture tricks.
In 2026, the battleground is concrete, copper wire, and natural gas turbines.
The Numbers That Should Scare You
The four biggest hyperscalers — Amazon, Alphabet, Microsoft, and Meta — will spend a combined $625 billion or more on infrastructure this year. That’s up 36% from 2025, which was already up 73% from 2024.
Let that compound for a second. In 2023, Big Tech’s combined capex was around $150 billion. In three years, that number has more than quadrupled.
Here’s the individual breakdown:
| Company | 2026 Capex | What That Means |
|---|---|---|
| Amazon | ~$200B | Roughly eight times NASA’s entire annual budget |
| Alphabet | ~$185B | Spending 46% of revenue on infrastructure |
| Microsoft | ~$105B | More than the GDP of 100+ countries |
| Meta | ~$135B | 54% of revenue. Zuck is all-in. |
About 75% of this — roughly $470 billion — is specifically for AI compute, data centers, networking, and cooling. The rest is regular cloud and enterprise infra.
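The figures above hang together. A quick sanity check, using only the article’s own numbers (the per-company capex, the $150B 2023 baseline, and the ~75% AI share):

```python
# Sanity-check the capex arithmetic quoted above. All figures come from
# the article itself and are in billions of USD.
capex_2026 = {"Amazon": 200, "Alphabet": 185, "Microsoft": 105, "Meta": 135}

total_2026 = sum(capex_2026.values())      # combined 2026 spend
capex_2023 = 150                           # the article's 2023 baseline
growth_multiple = total_2026 / capex_2023  # growth over three years
ai_share = total_2026 * 0.75               # ~75% earmarked for AI

print(total_2026)                 # 625
print(round(growth_multiple, 2))  # 4.17 -> "more than quadrupled"
print(round(ai_share))            # 469 -> "roughly $470 billion"
```

The table sums to $625 billion, the three-year multiple is about 4.2x, and 75% of it lands at roughly $470 billion, matching the claims in the text.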
Goldman Sachs projects total AI-related infrastructure spending could exceed $1.3 trillion through 2027.
What They’re Actually Building
Today, Meta confirmed a roughly $10 billion data center project in Lebanon, Indiana. Expected online late 2027. Designed for about 1 gigawatt of power capacity.
One gigawatt. For one data center. For one company.
For reference, a typical nuclear power plant produces about 1 gigawatt. Meta is building the energy equivalent of a nuclear plant — just for AI compute in rural Indiana.
And they’re not alone. Microsoft has data center projects across the US, Europe, and Asia that collectively require more power than some small nations consume. Amazon is buying nuclear-powered data center capacity. Google is investing in geothermal and small modular reactors.
The AI race became an energy race while nobody was watching.
Samsung Just Shipped the Next Bottleneck Fix
Also today: Samsung began commercial shipments of HBM4 memory. This matters more than it sounds.
High Bandwidth Memory is the critical link between AI chips and the data they process. Every GPU in every data center needs it, and there hasn’t been enough. NVIDIA’s H100 and B200 chips have been supply-constrained not because NVIDIA can’t make chips, but because Samsung, SK Hynix, and Micron couldn’t make enough HBM fast enough.
HBM4 roughly doubles the bandwidth of HBM3E. More bandwidth means each GPU can process more data per cycle, which means fewer GPUs needed per workload, which means the $625 billion in capex goes slightly further.
Or — more likely — it means they’ll just train bigger models.
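The "fewer GPUs per workload" logic is simple to make concrete. A minimal sketch, assuming a purely memory-bandwidth-bound workload; the bandwidth and demand figures below are illustrative assumptions, not vendor specs:

```python
import math

def gpus_needed(workload_tb_per_s: float, bw_per_gpu_tb_per_s: float) -> int:
    """GPUs required to sustain a given aggregate memory bandwidth,
    assuming the workload is bandwidth-bound (not compute-bound)."""
    return math.ceil(workload_tb_per_s / bw_per_gpu_tb_per_s)

workload = 1000.0       # hypothetical aggregate demand, TB/s
hbm3e_bw = 5.0          # assumed per-GPU HBM3E bandwidth, TB/s
hbm4_bw = 2 * hbm3e_bw  # "roughly doubles the bandwidth"

print(gpus_needed(workload, hbm3e_bw))  # 200 GPUs
print(gpus_needed(workload, hbm4_bw))   # 100 GPUs
```

Double the per-GPU bandwidth and the GPU count for a bandwidth-bound job halves, which is exactly why memory, not chip fabrication, has been the binding constraint.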
Microsoft Is Testing Superconductors. Seriously.
Here’s the one that tells you where this is really going.
Microsoft is exploring high-temperature superconductors to reduce energy losses in data center power delivery. Not in a research paper. Not in a “maybe someday” blog post. In actual testing.
Current data center power distribution loses roughly 5-10% of electricity as heat in transmission cables and transformers before it even reaches a server. At the scale Microsoft operates, that’s billions of dollars in wasted energy per year.
Superconductors conduct electricity with zero resistance. Zero loss. If Microsoft can make this work at data center scale, it fundamentally changes the economics of AI infrastructure.
This is what the AI race looks like now. Not “our model scores 2% higher on a benchmark.” It’s “we’re researching exotic physics to save 7% on our electric bill because our electric bill is $8 billion.”
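The economics in that last sentence are worth making explicit. A rough sketch using the article's own ballpark figures (the $8 billion bill is the article's hypothetical; the loss rate is the midpoint of its quoted 5-10% range):

```python
# Illustrative savings from eliminating transmission losses with
# superconducting power delivery. Both inputs are assumptions taken
# from the article's ballpark figures.
annual_power_bill = 8_000_000_000  # $8B hypothetical electric bill
loss_rate = 0.07                   # midpoint of the quoted 5-10% loss

# If zero-resistance delivery eliminated that loss entirely:
annual_savings = annual_power_bill * loss_rate
print(f"${annual_savings / 1e6:.0f}M per year")  # $560M per year
```

Half a billion dollars a year, per hyperscaler, is more than enough to fund an exotic-physics materials program.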
The Pentagon Wants In
Meanwhile, U.S. defense leaders are pressing major AI companies to deploy tools on classified networks. The Pentagon wants AI on SIPR and JWICS — the military’s secret and top-secret networks.
The friction points are predictable: autonomous targeting concerns, domestic surveillance risks, liability questions, and the fact that most AI companies built their products for consumer and enterprise use, not for environments where a wrong answer could mean a missile strike.
But the money is there. Defense AI budgets are expanding aggressively, and the Pentagon sees AI infrastructure as a national security asset, not just a commercial one.
This is another front in the infrastructure war. Whoever builds the most compute capacity doesn’t just dominate commercially — they dominate strategically.
Anthropic Plays Politics
And in a move that’s either principled or cynically brilliant, Anthropic announced it’s investing $20 million in candidates who support stricter AI regulation and export controls ahead of the 2026 midterms.
Read between the lines: Anthropic wants regulations that it can comply with but smaller competitors can’t. Stricter rules favor incumbents with deep pockets and legal teams. Export controls limit Chinese competition.
This is the oldest play in Silicon Valley: climb the ladder, then pull it up behind you. “AI safety” becomes a regulatory moat. You can believe in responsible AI development AND recognize that the company spending $20 million on politicians has interests beyond altruism.
What This Means for Everyone Else
Here’s the part nobody wants to say out loud.
AI is becoming a physical infrastructure play. The model quality gap between the top labs is narrowing. GPT-5, Claude Opus, Gemini Ultra — they’re all very good. The differentiator increasingly isn’t intelligence. It’s capacity. Who can serve the most users. Who can process the most tokens per second. Who can offer the lowest latency.
That’s a hardware problem. An energy problem. A supply chain problem. A construction problem. It’s the kind of problem that favors companies that can spend $200 billion in a single year.
This is bad for competition and potentially great for open source. If the frontier moves to “who can build the biggest data center,” small AI startups can’t compete on infrastructure. But they CAN compete on efficiency. Every year, open-source models get smaller, faster, and more capable on consumer hardware. While Big Tech builds gigawatt data centers, the open-source community is figuring out how to run competitive models on a $500 GPU.
The future might not be one model that rules everything. It might be massive cloud models for enterprise scale AND small local models for individual use. The $625 billion bet and the garage hacker running Llama on a 4090 aren’t competing. They’re serving different futures.
The real question: who pays for this? Meta is spending 54% of revenue on infrastructure. Amazon may go free-cash-flow negative. These companies are betting that AI revenue will eventually justify these costs. If it doesn’t — if the ROI timeline stretches — the correction will make the SaaS wipeout look like a rounding error.
Bank of America’s paradox still holds: you can’t simultaneously believe AI will destroy all software AND that the AI infrastructure investment won’t pay off. One of those stories is wrong. We just don’t know which one yet.
But $625 billion says Big Tech has picked their side.
This is Kyber Intel. We track the shift from corporate gatekeeping to individual sovereignty in AI. Follow us on X @kyberintel for daily analysis.