Originally posted on Compu Dynamics
Artificial intelligence (AI) is transforming the world, and data centers are at the heart of this shift. At the recent Bisnow DICE Conference, industry experts Steve Altizer (CEO, Compu Dynamics), Dagi Berhane (Sr. Director, Global DC Architecture & Engineering, Salesforce), Julie Coates (VP, Life Cycle Management, 1547 Critical Systems Realty), and Shad Sechris (Director, Data Center Solutions, NSI Building Technology) explored how AI workloads are redefining data center design, operation, and management. In this Modern Data Center Journal podcast, our team analyzed the key insights from the discussion, focusing on the challenges of high-density demands, retrofitting legacy facilities, and creating adaptable, future-ready designs.
Here are the highlights from the data center panel discussion:
AI Workloads: Breaking the Limits of Data Center Design
The traditional data center, built for workloads consuming 5-15 kilowatts per rack, is now a relic of the past. AI applications are pushing power requirements to hundreds of kilowatts per rack, with megawatt-scale racks on the horizon. For instance, systems built around NVIDIA's Blackwell GPUs demand advanced power distribution and liquid cooling technologies, with some racks needing up to five cooling circuits.
This surge in density highlights new engineering challenges:
- Cooling and Power Distribution: Managing heat dissipation in such compact spaces requires highly efficient liquid cooling solutions, such as direct-to-chip or immersion cooling.
- Flexibility in Design: With no universal template for AI-ready data centers, adaptability is key. White space integration—blending power, cooling, and networking into a cohesive system—has become a cornerstone of modern data center design.
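To see why these densities force a move to liquid cooling, it helps to run the numbers: the heat a coolant loop can carry scales with flow rate and temperature rise. Here is a minimal back-of-envelope sketch; the rack power figures and the 10 K coolant temperature rise are illustrative assumptions, not data from the panel.

```python
# Back-of-envelope coolant flow estimate: how much water a rack's heat
# load requires, using Q = m_dot * cp * delta_T for the coolant loop.
# Rack power figures and the temperature rise are illustrative assumptions.

WATER_CP_KJ_PER_KG_K = 4.186   # specific heat of water
WATER_DENSITY_KG_PER_L = 1.0   # approximate density of water

def coolant_flow_lpm(rack_power_kw: float, delta_t_k: float) -> float:
    """Coolant flow (litres/minute) needed to remove rack_power_kw of
    heat with a coolant temperature rise of delta_t_k kelvin."""
    kg_per_s = rack_power_kw / (WATER_CP_KJ_PER_KG_K * delta_t_k)
    return kg_per_s / WATER_DENSITY_KG_PER_L * 60.0

# A legacy 10 kW rack vs. a hypothetical 130 kW AI rack, both at a 10 K rise:
legacy = coolant_flow_lpm(10, 10)
ai_rack = coolant_flow_lpm(130, 10)
print(f"legacy rack: {legacy:.0f} L/min, AI rack: {ai_rack:.0f} L/min")
```

A roughly 13x jump in required flow for the same temperature rise is one reason dense AI racks are split across multiple cooling circuits rather than served by a single oversized loop.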
To hear the discussion and continue reading the full article, please click here.