3 Comments
The AI Architect

Great historical framing here. The virtualization unlock really changed everything once utilization became the primary constraint. What stood out to me was the PUE optimization story, because we're seeing the inverse problem now with AI workloads: power density is so high that traditional cooling can't keep up. I worked on a hyperscale deployment last quarter, and the conversation shifted from PUE to literally how many megawatts per rack you can physically deliver.
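For anyone unfamiliar with the metrics in this thread: PUE is just total facility power divided by IT equipment power, and rack density is IT power spread over rack count. A minimal sketch, with purely illustrative numbers (the 60 MW / 50 MW / 500-rack facility below is hypothetical, not from the comment):

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT power.
    1.0 would mean zero overhead for cooling, power conversion, etc."""
    return total_facility_kw / it_kw

def rack_density_kw(it_kw: float, racks: int) -> float:
    """Average IT power delivered per rack."""
    return it_kw / racks

# Hypothetical facility: 60 MW total draw, 50 MW IT load, 500 racks.
print(pue(60_000, 50_000))           # 1.2
print(rack_density_kw(50_000, 500))  # 100.0 kW per rack
```

A 100 kW/rack figure is an order of magnitude above the single-digit-kW racks traditional air cooling was designed around, which is the shift the comment is describing.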

MCJ

Thanks for reading and sharing your personal insights. If you haven't already, check out Stepchange for more about the history of data centers. https://www.stepchange.show/

Sarath Nagaraj

Excellent breakdown—data centers are now the dominant incremental load in the U.S., growing 10x faster than overall electricity and forcing utilities to rethink everything. The PPA (Power Purchase Agreement) scramble and behind-the-meter rush highlight how power has become the scarcest input for AI scaling.

This lines up with the transformer/grid wall: 120–150 week lead times + copper intensity (thousands of tonnes per 100 MW site) + 3–5 year interconnection queues compound the delays into a hard physical cap.
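To make the compounding concrete, here is a rough sketch using the figures from the comment above (120–150 week transformer lead times, 3–5 year interconnection queues, thousands of tonnes of copper per 100 MW). The 2,000 t/100 MW midpoint and the example site sizes are my assumptions for illustration only:

```python
def critical_path_weeks(transformer_weeks: float, queue_years: float) -> float:
    """The slower of the two largely serial constraints dominates
    when a site can actually energize (simplifying assumption)."""
    return max(transformer_weeks, queue_years * 52)

def copper_tonnes(site_mw: float, tonnes_per_100mw: float = 2_000) -> float:
    """Copper demand scaled linearly from an assumed per-100-MW figure."""
    return site_mw / 100 * tonnes_per_100mw

# Worst-case transformer lead time (150 wk) vs. a 4-year queue:
print(critical_path_weeks(150, 4))  # 208.0 -> the queue dominates
# Copper for a hypothetical 300 MW campus at the assumed midpoint:
print(copper_tonnes(300))           # 6000.0 tonnes
```

Under these assumptions the interconnection queue, not the transformer, sets the floor on timelines, which bears on the question at the end of this comment.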

Your point on underbuilt transmission reinforces why hyperscalers are chasing nuclear/SMRs/on-site gas—anything to bypass the queue.

Full take on copper/transformer/geo risks amplifying this: https://geoconstraints.substack.com/p/the-120-week-wall-why-transformers

Which bottleneck do you see as the biggest near-term limiter—interconnection queues or transformer supply?