
AI is often about scale, but what “scale” means is beginning to shift. For years, it meant throwing more GPUs at the problem, adding storage, and shoving through bigger datasets. None of that does you any good, though, if you can’t secure something even more fundamental: electricity. In part 1 of our Powering Data in the Age of AI series, we learned how energy went from a background expense to the hard upper bound on AI progress. Part 2 picks up where the industry realizes this isn’t just a technical limitation; it’s a question of control.
The most ambitious AI companies aren’t trying to use power more efficiently. They’re trying to own it. That transformation is rewriting the entire infrastructure playbook. The new frontier is energy systems built specifically for AI.
Small modular reactors, fusion contracts, private microgrids, long-duration storage, vertically integrated energy stacks: these are no longer concepts; they are requirements. This is the advent of compute sovereignty, where whoever owns the power behind intelligence holds intelligence itself.
From Energy Problem to Energy Control
Energy stopped being a background issue when tech companies discovered that the grid was never built for what they were trying to do. For decades, the logic was simple enough: build a data center, plug it into the grid, keep it cool. That worked, until it didn’t.
The equation shifted as AI models grew in size and training runs no longer took hours, but days or even weeks. Energy ceased to be a line item in the budget and became a hard constraint. Companies spent years trying to outrun the problem with efficiency gains: better chips, tighter cooling, smarter scheduling. Every new gain was supposed to make room for more aggressive workloads, but the savings were immediately eaten by larger models and nonstop compute. Ultimately, the bottleneck wasn’t within the walls of the data center. It was the socket.
That realization moved energy quietly from the facilities team’s spreadsheet to the forefront of boardroom strategy decks. The questions changed: How much power can we actually secure? Who decides that? What happens when we need to double it next year? And why are we letting someone else manage the one resource on which everything else, including the future we’re planning, depends?
Why Big Tech No Longer Trusts the Grid
The grid’s failures forced the industry to take energy seriously. In 2024, Dominion Energy, the utility that powers Northern Virginia’s data center hub, informed state regulators that it could not promise new power for AI data centers unless they agreed to share the cost of massive grid upgrades. That in itself was a warning shot.
Then came Loudoun County, the heart of Virginia’s data center corridor, which started to pump the brakes on approved or planned projects as existing substations reached capacity. The message from the utilities was blunt: they simply don’t have enough power to support GenAI’s meteoric rise.
The problem is global. In 2024, Ireland’s energy regulator made it clear that any new data center in Dublin would have to supply the bulk of its own generation or storage capacity rather than draw from the national grid. Singapore, which had frozen new data center approvals, resumed them only for projects with on-site or ultra-efficient power.
The Netherlands wouldn’t even negotiate; the government turned down Meta’s giant data center project on the grounds of excessive energy demand. These are major hubs of the global AI network, not emerging markets. The lesson was clear: electricity for AI data centers isn’t guaranteed, not even for tech giants in developed economies.
It also highlighted that public infrastructure could not keep pace with GenAI’s meteoric rise: it simply could not scale with AI workloads. That was the turning point. AI companies began to view energy not as something they buy, but as something they must control, or even own, as a means of self-preservation.
Nuclear as Strategy: SMRs and Fusion Move to Center Stage
It’s easy to read nuclear’s revival in AI infrastructure as a clean energy narrative. It is not. The real play is leverage: cutting out the last external dependency standing between compute giants and full-stack control.
When Microsoft struck its twenty-year deal to revive the dormant Three Mile Island Unit 1 reactor, it wasn’t because the math beat solar’s cost per kilowatt-hour. It was because the facility delivers 835 megawatts of stable baseload: no variability, no curtailment risk, and no dependence on grid operators. The energy is pre-allocated, site-bound, and politically insulated. That is a true asset in the AI era.
Small modular reactors (SMRs) go even further. They shrink the distance between power generation and compute execution. They can be deployed close to the load, containerized, and, perhaps most importantly, controlled. That’s why Amazon is actively exploring one at its cloud hub in eastern Washington.
The U.S. Department of Energy (DOE), which has openly supported SMR–AI colocation models, sees them as a way to guarantee “high-assurance loads” for AI infrastructure. However, turning SMRs from prototypes into production-grade infrastructure won’t happen overnight.
Licensing alone takes years, and early builds are expensive, especially when everything from fuel to fabrication has to be developed in parallel. The U.S. is still working on a stable domestic supply of high-assay low-enriched uranium (HALEU), which a lot of advanced reactors will need.
Then there’s the question of how these setups interact with the grid. Metering alone can be a hurdle: Amazon’s deal to colocate with the Susquehanna plant hit a wall when regulators balked at the metering arrangement, concerned that data centers might benefit from transmission systems without paying into them.
Fusion plays a different role: it offers regulatory escape. Fusion systems don’t fall under the same Nuclear Regulatory Commission licensing regime because they don’t sustain chain reactions or produce long-lived radioactive waste. That legal distinction is critical. It means fusion can move faster, face fewer political choke points, and avoid the decades-long permitting gridlock that has buried every traditional reactor plan since the 1980s.
Helion, the Sam Altman–backed fusion firm in Washington state, is promising electricity by 2028. The ambition goes beyond the date: Helion is trying to build an energy source that lives outside the old constraints. If it succeeds, the electricity won’t just be clean or cheap; it will be sovereign. No grid permissions. No curtailment. No external gatekeepers. This isn’t about owning power for the sake of sustainability. It’s about owning the one resource that determines who gets to build intelligence and who has to ask permission.
Nuclear, in both fission and fusion form, is becoming the quiet backbone of compute sovereignty, and the companies moving first aren’t just placing a bet; they are fortifying their future.
Building the AI Energy Stack
With the grid no longer seen as a reliable partner, AI companies are starting to act like infrastructure architects. The strategy now isn’t just to buy energy—it’s to build around it. Land, energy source, cooling, and latency are all being bundled into one integrated plan. Data center design has become a utility-scale problem, and the smartest companies are treating it like one.
The modern AI energy stack goes well beyond plugging into solar or buying a power purchase agreement (PPA). It’s layered and tailored to the workloads it’s meant to support. On-site generation might include solar, hydro, or nuclear, depending on what’s available and what the compute footprint demands. Google, for example, is investing in enhanced geothermal systems near its Nevada data center.
In other places, hyperscalers are co-locating next to hydropower or exploring SMRs for future-proofed baseload. Storage systems range from lithium-ion arrays to iron-air and hydrogen. On top of that, you’ll find smart orchestration: carbon-aware scheduling, predictive load shifting, even AI models forecasting their own demand to precondition the grid.
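To make the orchestration layer concrete, here is a minimal sketch of a carbon-aware scheduler: given hourly carbon-intensity forecasts for a set of regions, it picks the region and start hour that minimize a deferrable job’s average intensity. The region names and forecast numbers below are illustrative assumptions, not data from any real provider; production systems would pull live forecasts from grid operators or commercial carbon-intensity APIs.

```python
# Minimal carbon-aware scheduling sketch: choose where and when to run a
# deferrable job so that the average forecast carbon intensity (gCO2/kWh)
# over its runtime window is lowest. All forecast data here is invented.
from typing import Dict, List, Tuple

def best_window(forecasts: Dict[str, List[float]],
                job_hours: int) -> Tuple[str, int, float]:
    """Return (region, start_hour, avg_intensity) for the greenest window."""
    best = None
    for region, hourly in forecasts.items():
        for start in range(len(hourly) - job_hours + 1):
            avg = sum(hourly[start:start + job_hours]) / job_hours
            if best is None or avg < best[2]:
                best = (region, start, avg)
    return best

# Hypothetical 24-hour forecasts for two regions (gCO2/kWh).
forecasts = {
    "region-a": [420, 410, 390, 350, 300, 260, 220, 200, 190, 210, 250, 300,
                 340, 360, 380, 400, 430, 450, 460, 450, 440, 435, 430, 425],
    "region-b": [90, 85, 80, 78, 75, 74, 76, 80, 95, 110, 120, 115,
                 100, 95, 90, 88, 92, 105, 118, 122, 110, 100, 95, 92],
}

region, start, avg = best_window(forecasts, job_hours=4)
print(f"Run in {region} starting at hour {start} (~{avg:.0f} gCO2/kWh)")
```

The same greedy search generalizes: swap carbon intensity for spot price and it becomes cost-aware dispatch, or prune candidate regions by latency and data-residency constraints first.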
Some companies are taking it further, building private microgrids and what amounts to energy islands. For example, QScale in Quebec is pairing hydro with AI-optimized cooling. Microsoft’s fusion-backed ambitions with Helion suggest an endgame where generation, compute, and scheduling all sit inside the same fence line.
What’s especially new is how AI is starting to shape the curve of energy use. Instead of reacting to grid signals, workloads are being timed to align with carbon intensity or local supply. Google already does this across regions. Gridmatic is using market signals to dispatch load when it’s cheapest. DeepMind has even trained models to predict grid imbalances in advance. The result is a subtle inversion: AI used to be a problem for the grid. Now, it’s beginning to act like a stabilizer, and the companies that understand this will be better positioned to future-proof compute.
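The stabilizer role can be sketched in the same spirit. In the toy model below, a site with on-site batteries and pausable training jobs reacts to a forecast grid imbalance by shifting load or discharging storage. Every number, threshold, and name here is an invented assumption for illustration; it does not describe how DeepMind’s or Gridmatic’s systems actually work.

```python
# Toy demand-response loop: a negative imbalance means the grid is forecast
# to be short of supply, a positive one means surplus. The site shifts its
# flexible compute load (and battery) to lean against the imbalance.
from dataclasses import dataclass

@dataclass
class SiteState:
    battery_mwh: float    # energy currently stored on-site
    deferrable_mw: float  # compute load that can be paused or delayed

def dispatch(imbalance_mw: float, site: SiteState,
             threshold_mw: float = 50.0) -> str:
    """Map a forecast grid imbalance (MW) to a site-level action."""
    if imbalance_mw < -threshold_mw and site.battery_mwh > 0:
        return (f"pause {site.deferrable_mw:.0f} MW of deferrable jobs, "
                f"discharge battery into the shortfall")
    if imbalance_mw > threshold_mw:
        return "resume jobs and charge battery on surplus supply"
    return "run normally"

# Hypothetical hourly imbalance forecast (MW); real values would come from
# a trained forecasting model or the grid operator's own projections.
forecast = [-120.0, -60.0, 10.0, 180.0, 90.0, -30.0]
site = SiteState(battery_mwh=40.0, deferrable_mw=25.0)

for hour, imbalance in enumerate(forecast):
    print(f"hour {hour:02d}: {dispatch(imbalance, site)}")
```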