Key Takeaways
1. Data centers are heat removal problems: all electricity becomes heat that must be extracted.
2. AI increased rack power density roughly 10x: from 5-15 kW to 60-140+ kW per rack.
3. Cooling technology determines facility design and maximum power density.
4. PUE measures efficiency: modern facilities achieve 1.1-1.2 (only 10-20% overhead).
The Rack: The Fundamental Unit
Walk into a data center and you'll see rows of tall black cabinets. These are racks—the fundamental building block.
- Traditional rack: 5-15 kW (20-40 servers at 200-500 W each)
- AI rack: 60-140+ kW (4-5 HGX servers at 10 kW+ each)
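The density jump is just server count times per-server draw. A minimal sketch, using illustrative wattages picked from within the ranges above (the exact per-server figures are assumptions, not from the text):

```python
# Sketch: rack power as server count x per-server draw.
# Per-server wattages below are illustrative picks within the
# text's ranges (200-500 W traditional, 10 kW+ for HGX-class).

def rack_kw(servers: int, watts_per_server: float) -> float:
    """Total rack power in kW."""
    return servers * watts_per_server / 1000.0

traditional = rack_kw(20, 300)    # 20 servers @ 300 W each
ai = rack_kw(5, 12_000)           # 5 HGX-class servers @ ~12 kW each
print(f"traditional ~{traditional:.1f} kW, AI ~{ai:.0f} kW")
print(f"density jump ~{ai / traditional:.0f}x")
```

With these picks the jump works out to roughly 10x, matching the takeaway above; different picks within the ranges give anywhere from ~4x to ~30x.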
The Heat Problem
Here's the inescapable physics: virtually all electricity a data center consumes ends up as heat. A 100 MW facility produces roughly 100 MW of heat that must be removed continuously.
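To get a feel for the scale, here is a back-of-envelope sketch of the chilled-water flow a 100 MW heat load implies, from the standard relation Q = m_dot x cp x dT. The 10 K coolant temperature rise is an assumed, illustrative figure, not from the text:

```python
# Sketch: chilled-water flow needed to carry away a facility's heat load.
# Assumptions (illustrative): 10 K coolant temperature rise, plain water.

WATER_SPECIFIC_HEAT = 4186.0  # J/(kg*K)
WATER_DENSITY = 1000.0        # kg/m^3

def water_flow_m3_per_s(heat_load_w: float, delta_t_k: float) -> float:
    """Volume flow from Q = m_dot * cp * dT."""
    mass_flow = heat_load_w / (WATER_SPECIFIC_HEAT * delta_t_k)  # kg/s
    return mass_flow / WATER_DENSITY

# A 100 MW facility with a 10 K rise across the cooling loop:
flow = water_flow_m3_per_s(100e6, 10.0)
print(f"{flow:.2f} m^3/s")  # ~2.39 m^3/s of water, continuously
```

That is on the order of an Olympic swimming pool's worth of water cycled through the loop every twenty minutes or so, around the clock.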
Cooling Technologies
The cooling approach determines everything: facility design, maximum density, and cost.
- Air cooling: traditional HVAC with raised floors and perforated tiles
- Direct-to-chip liquid cooling: cold plates on processors; becoming standard for AI
- Immersion cooling: servers submerged in dielectric fluid; supports the highest densities
PUE: Measuring Efficiency
Power Usage Effectiveness is total facility power divided by IT equipment power; it measures how much overhead a facility requires beyond the computing load itself. Lower is better, with 1.0 as the theoretical minimum.
- Traditional facility: PUE 1.6-2.0 (60-100% overhead)
- Modern AI facility: PUE 1.1-1.2 (10-20% overhead)
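The arithmetic is simple enough to sketch directly. The facility figures below are made-up illustrations, not measurements from the text:

```python
# Sketch: computing PUE and overhead share from metered power.
# Facility numbers below are made-up illustrations.

def pue(total_facility_kw: float, it_kw: float) -> float:
    """PUE = total facility power / IT equipment power."""
    return total_facility_kw / it_kw

def overhead_fraction(pue_value: float) -> float:
    """Overhead (cooling, power conversion, etc.) as a fraction of IT load."""
    return pue_value - 1.0

# A facility drawing 115 MW total to deliver 100 MW to IT equipment:
p = pue(115_000, 100_000)
print(f"PUE {p:.2f}, overhead {overhead_fraction(p):.0%}")  # PUE 1.15, overhead 15%
```

Note that overhead is expressed relative to IT load, which is why a PUE of 2.0 means 100% overhead: the facility spends as much power on cooling and distribution as on computing.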
Redundancy: The Cost of Reliability
Data centers can't go down. Redundancy ensures failures don't cause outages—but it comes at a cost.
- N+1: one spare unit beyond the minimum needed
- 2N: double everything (~2x the cost)
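The cost difference between the two schemes can be sketched as unit counts, assuming identical units sized so that N of them carry the full design load (the chiller count is an arbitrary example):

```python
# Sketch: unit counts and rough relative cost for N+1 vs 2N redundancy.
# Assumes identical units, with N units sized to carry the full load.

def n_plus_1(n_required: int) -> int:
    """N+1: one spare beyond the minimum."""
    return n_required + 1

def two_n(n_required: int) -> int:
    """2N: a fully duplicated second set."""
    return 2 * n_required

n = 8  # e.g., eight chillers needed to carry the design load
print(f"N+1: {n_plus_1(n)} units ({n_plus_1(n) / n:.2f}x)")  # 9 units, 1.12x
print(f"2N:  {two_n(n)} units ({two_n(n) / n:.2f}x)")        # 16 units, 2.00x
```

One consequence: N+1 overhead shrinks as the facility grows (one spare among eight units is far cheaper than one among two), while 2N always doubles the bill regardless of scale.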
The Scale of AI Data Centers
Modern AI facilities are among the largest infrastructure projects in the world.
And each of these is just one facility among the hundreds being built for AI.
Go Deeper
Chapter 3 of This Is Server Country explores how data centers evolved from server closets to gigawatt-scale infrastructure—covering cooling technologies, power distribution, redundancy systems, and the engineering trade-offs that shape modern AI facilities.
Learn more about the book →