
The Power Constraint

Why electricity, not capital or computing power, is the limiting factor in AI infrastructure growth.


Key Takeaways

  • Electricity, not computing power or capital, is the primary constraint limiting AI infrastructure growth
  • Peak capacity, not average consumption, drives infrastructure requirements—grids must serve maximum possible demand
  • Gigawatt-scale loads require transmission-level connections, which urban distribution networks cannot provide
  • Grid interconnection timelines (5-8 years) often exceed data center construction timelines (2-3 years)
  • Site selection follows transmission topology more than fiber networks, labor pools, or tax incentives

The Limiting Factor

The AI industry has announced over $1.1 trillion in data center investments. Companies have secured millions of GPUs. Billions of dollars flow into infrastructure daily. Yet one question keeps coming up: where will the power come from?

Not money. Not chips. Not real estate. Electricity.

This might seem strange. The United States generates about 4,000 terawatt-hours of electricity annually. Data centers currently consume perhaps 4-5% of that. Even aggressive growth projections rarely exceed 10-15% by 2030. The country has plenty of total power generation capacity.

But that framing misses the fundamental challenge. Electricity isn't fungible like money. You can't take power from California and use it in Virginia. You can't store it economically at scale. And you can't deliver gigawatt loads through distribution infrastructure designed for neighborhoods.

The constraint isn't total generation—it's local transmission capacity and the ability to connect new loads to the grid. And on that dimension, the United States faces a crisis.

Understanding Power Scale

Before we can understand the constraint, we need to understand the units. Power is measured in watts, and the scale ranges from tiny to incomprehensible.

A watt is small: an LED bulb might use 10 watts. Your laptop might draw 60 watts. These numbers are human-scale, easy to grasp.

A kilowatt is 1,000 watts. A typical American home uses about 1 to 2 kilowatts on average, though peak usage might reach 5 to 10 kilowatts when the air conditioning runs on a hot day.

A megawatt is 1,000 kilowatts, or one million watts. A megawatt serves roughly 750 to 1,000 homes (depending on region and season). A large wind turbine generates 2 to 3 megawatts. A small natural gas peaking plant might generate 20 to 50 megawatts.

A gigawatt is 1,000 megawatts, or one billion watts. A large coal or nuclear power plant generates 1 to 2 gigawatts. This is where we cross from comprehensible to infrastructure-scale.

Now consider the Saline Township data center project in Michigan: 1.4 gigawatts of planned capacity. That's not total project capacity over time; that's phase one. It's more than a large power plant produces. It's roughly one-quarter of the peak load of DTE Energy, which serves 2.2 million customers across southeastern Michigan.

One facility. On 250 acres of former farmland.

This is the scale challenge. We're not talking about adding a factory or even a large industrial complex. We're talking about adding loads equivalent to entire power plants, concentrated in single locations, and we're talking about doing it hundreds of times across the country.
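The scale comparisons above are back-of-envelope arithmetic, and it's worth making them explicit. A minimal sketch, using only the figures quoted in this section (750 to 1,000 homes per megawatt, 1.4 gigawatts at Saline); the implied DTE peak is derived from "roughly one-quarter," not stated directly:

```python
# Back-of-envelope scale arithmetic using the figures quoted above.
# The implied DTE peak load is derived from "roughly one-quarter"
# rather than stated directly in the text.

saline_gw = 1.4                # planned phase-one capacity, GW
homes_per_mw = (750, 1000)     # rough homes-served range per MW

saline_mw = saline_gw * 1000
homes_low = saline_mw * homes_per_mw[0]
homes_high = saline_mw * homes_per_mw[1]
print(f"Saline ~ {homes_low:,.0f} to {homes_high:,.0f} homes' worth of load")

# "Roughly one-quarter of DTE's peak" implies a system peak near:
implied_dte_peak_gw = saline_gw / 0.25
print(f"Implied DTE peak load: ~{implied_dte_peak_gw:.1f} GW")
```

On these rough numbers, one facility lands in the range of a million homes' worth of continuous demand.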

Peak vs. Average: Why Grids Must Overbuild

When discussing power consumption, people often focus on annual totals or average usage. "This data center will consume X terawatt-hours per year." But that's not how grid infrastructure works.

Grids must be built to serve peak demand—the highest simultaneous load that might occur. On a July afternoon when air conditioners run across a region, demand spikes. The grid must handle that spike, even though most hours of the year have lower demand.

Traditional loads are "peaky." Residential demand peaks in early evening. Commercial demand peaks during business hours. Industrial demand varies by shift schedules. Utilities understand these patterns and plan accordingly.

Data centers are different. They run 24/7 at high utilization. A 100-megawatt data center might operate at 90-95 megawatts continuously. There's no off-peak. There's no shoulder season. It's essentially constant peak demand.

From a grid planning perspective, this is both good and bad. Good because it's predictable—you know exactly what capacity you need. Bad because you need that capacity all the time. You can't share infrastructure across different usage patterns.

This distinction matters enormously when a region adds 500 megawatts or 1 gigawatt of data center load. That capacity must be available continuously, which means transmission lines, substations, and generation reserves must all expand to accommodate it.
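The peaky-versus-flat distinction is usually expressed as a load factor: average demand divided by peak demand, where the grid must be sized for the peak. A toy sketch with illustrative hourly profiles (the numbers are made up for demonstration, not real utility data):

```python
# Illustrative load-factor comparison (toy hourly profiles, not real data).
# Load factor = average demand / peak demand; the grid is sized for the peak.

def load_factor(hourly_mw):
    return sum(hourly_mw) / len(hourly_mw) / max(hourly_mw)

# A "peaky" load: low overnight, spiking in the early evening (24 hourly values, MW).
residential = [30] * 16 + [100] * 4 + [50] * 4

# A data center: essentially flat near its rated draw (a 100 MW facility at ~92 MW).
data_center = [92] * 24

print(f"residential load factor: {load_factor(residential):.2f}")
print(f"data center load factor: {load_factor(data_center):.2f}")
```

The peaky profile needs 100 MW of capacity to serve 45 MW of average demand; the data center uses essentially all the capacity built for it, all the time.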

And "expand" is not a quick process.

Why Not Cities? Transmission vs. Distribution

If data centers need power and fiber networks, why not build them in cities where both exist? This seems logical, and indeed, early data centers clustered in urban areas. But modern AI facilities can't follow that pattern.

The reason is the difference between transmission and distribution infrastructure.

Transmission is the high-voltage network that moves power long distances. Think 115 kV, 230 kV, 345 kV, even 765 kV lines. These are the large towers and corridors you see crossing rural areas. Transmission connects power plants to regional substations and can carry hundreds or thousands of megawatts.

Distribution is the local network that delivers power to end users. Think 12 kV to 35 kV lines on poles along streets, stepping down to 120/240 volts at homes. Distribution is designed for neighborhoods, office buildings, and light industrial use—loads measured in kilowatts to tens of megawatts.

Cities have extensive distribution networks. But they typically don't have direct access to transmission-level infrastructure. Transmission lines come to regional substations outside urban cores, then distribution networks branch out.

A 100-megawatt data center might fit on a distribution network, though it's pushing limits. A 500-megawatt facility cannot. A 1.4-gigawatt facility like Saline absolutely cannot. At that scale, you need to connect directly to transmission lines—the 230 kV or 345 kV network.
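The thresholds in this section can be summarized as a rough decision rule. A sketch only; the boundaries below are illustrative simplifications of the text, not utility tariff rules:

```python
# Rough mapping from load size to required grid connection level,
# based on the thresholds described above. Boundaries are illustrative.

def connection_level(load_mw):
    if load_mw < 10:
        return "distribution (12-35 kV)"
    if load_mw <= 100:
        return "distribution, but pushing its limits"
    return "transmission (115 kV and up)"

for mw in (5, 100, 500, 1400):   # 1,400 MW is Saline's phase one
    print(f"{mw:>5} MW -> {connection_level(mw)}")
```

Everything at AI mega-project scale falls into the last bucket, which is why site selection follows transmission corridors.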

This fundamentally changes site selection. You're not looking for fiber-rich urban areas anymore. You're looking for farmland next to transmission corridors. You're following the power lines, not the internet cables.

And transmission corridors don't follow population density. They follow power plant locations, regional demand centers, and historical grid topology. They cut through rural areas, agricultural land, and small towns—places that suddenly find themselves attractive to an industry that can write billion-dollar checks.

The Interconnection Challenge

Connecting a new load to the grid isn't just plugging in a cable. It's a complex, multi-year process called interconnection, and it's become a major bottleneck.

When you want to connect a gigawatt-scale facility, the regional transmission operator (RTO) must study the impact. Will the connection destabilize the grid? Do transmission lines need upgrades? Does protection equipment need modification? Will this affect power flows elsewhere in the network?

These studies take time. A feasibility study might take 6 months. A system impact study another 12 months. A facilities study another 6 to 12 months. And that's before construction begins on any required upgrades, which can take 2 to 4 years.

The interconnection queue in PJM—the regional operator serving 13 states and 67 million people—currently has about 40 gigawatts of data center requests pending. That's roughly equivalent to 40 large power plants' worth of load. In MISO (the Midwest operator), the queue has 300+ gigawatts of total requests (mostly renewable generation but increasingly data centers).

Processing these requests takes years. And the queue keeps growing faster than requests get approved.

Here's the fundamental mismatch: you can construct a data center in 2 to 3 years. But getting grid interconnection approval can take 5 to 8 years. The infrastructure timeline is longer than the building timeline.
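Summing the stages quoted above shows where those years go. A sketch using the study and upgrade durations from this section; real timelines vary by RTO and include queue waits between stages, which is how the all-in figure stretches to 5 to 8 years:

```python
# Adding up the interconnection stages quoted above (months, low-high ranges).
# Queue waits between stages are not included, which is why real all-in
# timelines run longer than this simple sum.

stages = {
    "feasibility study":   (6, 6),
    "system impact study": (12, 12),
    "facilities study":    (6, 12),
    "network upgrades":    (24, 48),
}

low = sum(lo for lo, hi in stages.values())
high = sum(hi for lo, hi in stages.values())
build = (24, 36)  # data center construction, months

print(f"interconnection: {low}-{high} months ({low / 12:.0f}-{high / 12:.1f} years)")
print(f"construction:    {build[0]}-{build[1]} months")
```

Even before queue delays, the grid-side sum exceeds the 2-to-3-year construction window.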

This creates perverse incentives. Some developers submit interconnection requests for sites they don't control, hoping to get queue position before competitors. Others submit multiple requests at different sites, planning to abandon whichever doesn't get approved quickly. This clogs the queue further, making the problem worse.

Some regions have reformed the process. ERCOT (Texas) has faster timelines, partly because the grid is simpler and partly because rules prioritize speed. This is one reason Texas has seen explosive data center growth—you can get approval in 18 to 24 months instead of 5 to 8 years.

But faster approval means less comprehensive studies, which can create reliability risks. It's not a free lunch—it's a trade-off between speed and caution.

Site Selection Logic: Following the Wires

Understanding the power constraint reframes how we think about data center site selection. The traditional factors—fiber network access, labor pool, tax incentives, land cost—still matter. But they're secondary to one question: where can we get gigawatts of power, and how fast?

This drives developers to farmland near transmission corridors. A cornfield in rural Iowa or Kansas might be 50 miles from the nearest city, but if there's a 345 kV transmission line nearby and an RTO willing to process interconnection quickly, it's more valuable than urban real estate.

Look at the geography of recent mega-projects. They're not in Silicon Valley or New York. They're in places like:

  • Northern Virginia - Data Center Alley, where transmission corridors converge and PJM has historically been accommodating (though increasingly constrained)
  • Central Iowa - Abundant wind power generation, existing transmission, favorable tax policy
  • Kansas - Access to SPP (Southwest Power Pool) transmission, low land costs, state incentives
  • West Texas - ERCOT grid, fast interconnection, wind/solar generation expansion
  • Rural Michigan - Saline Township and others, near transmission lines serving Detroit-area industry

What these locations have in common isn't tech ecosystems or coastal proximity. It's transmission access and interconnection feasibility.

Some developers are bypassing the grid entirely through "behind-the-meter" generation—building natural gas plants or solar arrays on-site. This avoids interconnection queues but creates other challenges: fuel logistics, emissions, higher costs, and community opposition.

Others are exploring nuclear power, including small modular reactors (SMRs), for on-site generation. These are years away from deployment, but the interest level is high. When your limiting factor is power, any solution gets consideration.

The Trillion-Dollar Question

The AI industry has capital. It has chip supply (though tight). It has land options. What it doesn't have is a clear path to accessing the 50 to 100 gigawatts of additional power capacity it needs over the next five years.

This isn't an unsolvable problem. The United States built out the rural electrification system in the 1930s-40s. It constructed the Interstate Highway System in the 1950s-60s. It has proven capable of infrastructure transformation when motivated.

But those efforts took decades and massive public investment. The current AI buildout is happening on a compressed timeline with primarily private capital, navigating a fragmented regulatory landscape where no single authority controls the outcome.

The power constraint will shape where data centers go, which companies can expand, and ultimately who wins the AI infrastructure race. It's not a temporary bottleneck—it's a fundamental restructuring of how we think about digital infrastructure.

When a project like Saline Township proposes consuming one-quarter of a major utility's capacity, that's not just a data center development. It's a question about regional economic priorities, grid investment strategy, and the balance between digital infrastructure and traditional industrial use.

These aren't questions with obvious answers. And they're being asked in township meeting rooms across rural America, where local officials face decisions with billion-dollar stakes and multi-decade consequences.

The power constraint is, ultimately, a political and policy challenge dressed up as an engineering problem. The engineering has solutions. Whether the political will exists to implement them at the required scale and speed remains to be seen.

Go Deeper

The power constraint and its implications for data center development are examined in depth in Chapter 4 of This Is Server Country, which traces how electricity became the limiting factor in AI infrastructure growth.

The book explores interconnection queues, transmission vs. distribution networks, site selection logic, and the policy challenges of building gigawatt-scale facilities in rural communities.

Learn more about the book →