As artificial intelligence accelerates across industries, data centers are being forced to reckon with a challenge they can no longer ignore: power density. These new AI workloads, powered by dense clusters of GPUs, are transforming the compute landscape and revealing the limitations of current infrastructure.
During a recent AI Week panel hosted by Data Center Dynamics and sponsored by ZincFive, industry leaders from Vantage Data Centers, ZincFive, and Nomad Futurist joined the conversation. They shared insights on what's changing, what isn't working, and what it will take to keep up.
AI Workloads Are Breaking Traditional Power Models
In the cloud era, power demand ramped up gradually. With AI, demand can surge from zero to full capacity in milliseconds. Shawn Dyer of Vantage noted how quickly power profiles are shifting. "It used to be 40 kW racks. Now we're planning for 250 to 500 kW per rack, and utilities are asking what that means for the grid."
These rapid load spikes can introduce hundreds of megawatts of additional demand on a campus without warning. Traditional infrastructure, designed for steady-state workloads, is being pushed past its limits. Some operators are responding by overbuilding, but that approach is neither cost-effective nor sustainable.
Batteries May Hold the Key
Brandon Smith of ZincFive pointed to advanced battery chemistries as a critical part of the solution. Nickel-zinc batteries can rapidly discharge high levels of power in short bursts and recover just as quickly, which makes them well suited to smooth the spikes generated by AI clusters. Positioned close to the rack, they can prevent those fluctuations from affecting upstream infrastructure. “We need to start thinking about battery technology not just as a backup solution but as an enabler of AI,” Smith said.
The key will be integrating battery capabilities directly into the design and planning process, rather than treating them as a last-mile consideration.
The Real Bottleneck Is Not Just Power
AI growth is also stretching human capital and processes. Teams working on software, hardware, and facilities are often disconnected, working toward different objectives with little coordination. This misalignment hinders both deployment speed and long-term scalability. Nabeel Mahmood of Nomad Futurist stressed that collaboration is essential. “If we don’t build smarter, more resilient, and sustainable infrastructure, we’re heading into a potential catastrophe.”
Beyond internal coordination, there are external hurdles. Utility interconnection delays, long permitting timelines, and labor shortages are all making it difficult to meet aggressive AI infrastructure demands.
Collaboration Will Drive the Next Generation of AI Infrastructure
The panelists agreed that no single company or technology can solve this challenge alone. A new mindset is needed, one that prioritizes shared planning, open architectures, and constant collaboration between developers, vendors, and utilities.
Smith emphasized the need to break down silos across the power stack. Mahmood called for standardization and intelligent infrastructure design. Dyer stressed that without honest conversations and partnership, even the best ideas will fail to scale.
Preparing for What Comes Next
Today’s AI demands are already overwhelming traditional infrastructure. And with projections of gigawatt-scale campuses and even megawatt-level racks within the next decade, the stakes are only growing.
Meeting the moment requires new chemistry, new partnerships, and a shared commitment to building for the future, not just reacting to the present. AI is here to stay. The challenge now is to ensure the infrastructure powering it can keep up.
Stream the full podcast to hear the complete conversation.