As demand for AI accelerates, data centers face increasing scrutiny from regulators, communities, and power providers over their energy use. More than passive infrastructure, these centers are actively shaping grid performance and environmental impact.
This development demands more than incremental upgrades or quick fixes. It calls for a fundamental rethinking of how data centers are designed so that the spaces are scalable, flexible, and responsive to rapidly changing technology and regulations. As we look ahead to how future data centers should be designed, developers need to consider not only the power demands of AI, but also the requirements of other computing systems, the rise of on-site power generation, Virtual Power Plants (VPPs), and evolving legislation.
Comparing Power and Efficiency Across Different Computing Systems
Data center efficiency begins with the ITE (Information Technology Equipment). The type and density of computing platforms (AI clusters, traditional hyperscale nodes, or high-performance computing) dictate both the energy use and cooling requirements of a facility. Factors like cabinet layout, floor type, and airflow strategies (e.g., CRAH units or fan coil walls) must be planned early to avoid downstream inefficiencies.
Cooling design is a multi-layered system, influenced by:
- Data hall configuration: ITE density, hot aisle containment, and cabinet spacing directly affect airflow and thermal efficiency. Misaligned layouts can cause unintended air recirculation, leading to higher energy use from both cooling and power systems.
- Infrastructure and distribution systems: Components like chilled water pumps, automated valves, and ductwork in ancillary rooms (UPS, mechanical, battery spaces) can contribute to significant energy consumption unless designed with intelligent control strategies like valve sequencing or integrated temperature regulation.
- Central plant systems: This is where most of the energy use occurs. Efficiency of air-cooled chillers has increased significantly over the years, from roughly 1.0 kW/ton to current high-efficiency models operating closer to 0.45 kW/ton and below. Design decisions at this level, including water temperature, compressor type, and heat exchanger design, can drastically influence a facility’s Power Usage Effectiveness (PUE).
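To make the chiller-efficiency numbers concrete, a back-of-the-envelope PUE estimate can show why the move from roughly 1.0 kW/ton to 0.45 kW/ton matters. The sketch below is illustrative only: the IT load, overhead figure, and the assumption that all IT power becomes heat removed by the chillers are simplifying assumptions, not a real facility model.

```python
# Rough, illustrative estimate of how chiller efficiency affects PUE.
# All load and overhead numbers are hypothetical; a real PUE model also
# includes fans, pumps, UPS losses, lighting, and part-load behavior.

KW_PER_TON = 3.517  # 1 ton of refrigeration = 3.517 kW of heat removed

def estimate_pue(it_load_kw: float, chiller_kw_per_ton: float,
                 other_overhead_kw: float) -> float:
    """Estimate PUE assuming all IT power becomes heat the chillers reject."""
    cooling_tons = it_load_kw / KW_PER_TON
    chiller_power_kw = cooling_tons * chiller_kw_per_ton
    total_facility_kw = it_load_kw + chiller_power_kw + other_overhead_kw
    return total_facility_kw / it_load_kw

# Same 1 MW IT load and 100 kW of other overhead, two chiller generations:
legacy = estimate_pue(1000, 1.0, 100)   # older ~1.0 kW/ton chillers
modern = estimate_pue(1000, 0.45, 100)  # high-efficiency chillers
print(round(legacy, 2), round(modern, 2))
```

Under these assumptions, the chiller upgrade alone moves the estimated PUE from about 1.38 to about 1.23, before any airflow or containment improvements are counted.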
For the foreseeable future, data centers will need a hybrid mix of air-cooled and liquid-cooled spaces. Except for specialty applications, there will always be some amount of air cooling required. To stay ahead, many operators are incorporating the latest International Energy Conservation Code (IECC) and ASHRAE 90.4 standards for energy-efficient HVAC and power distribution systems. These measures not only reduce carbon impact but also future-proof facilities against tightening regulations.
Therefore, data center engineers will need to continue to work closely with manufacturers and clients to provide integrated and flexible designs that meet the IECC and other pertinent industry standards.
The Evolution (and Reinvention) of Liquid Cooling
The data center world may feel like it’s surging toward the future, but when it comes to liquid cooling, design principles often go back to the earliest days of computing. Since the 1940s, each new generation of computer platforms has brought increased complexity and thermal management challenges.
Today’s systems may handle five to ten times the heat load of early mainframes, but the core principles haven’t changed. Modern AI computing platforms require high-density, rack-level cooling for GPUs, CPUs, and networking gear alike. At the same time, today’s systems can no longer rely on room air movement alone, making liquid cooling a necessity.
However, implementing liquid cooling at scale requires careful architectural planning. Designers must integrate leak containment, water reuse systems, and heat recapture technologies. Sustainable liquid cooling also depends on the use of closed-loop systems and the sourcing of low-impact coolants.
Navigating the Rise of State-Level Regulations
As electricity consumption for data centers rises, state and federal regulators are taking notice. New legislation is being introduced to better control, monitor, and forecast data center energy use and emissions.
This means that designers, developers, and engineers are now working in a shifting policy landscape. In some states, new bills are being proposed to guide responsible growth. For example, Illinois’ latest legislation supports a 20-year roadmap to deploy 15 GW of clean energy, with direct implications for high-density data center projects.
Legislation like this represents an opportunity for better alignment between grid operators, regulators, and developers. A lack of transparency in forecasting data center energy use has contributed to strained infrastructure planning. By integrating design strategy with regulatory foresight, data centers can avoid costly delays or capacity gaps.
Data Centers as Virtual Power Plants
Data centers are no longer just energy consumers; they’re becoming energy participants.
And a new model is taking hold: VPPs. A VPP is a software-coordinated, disaggregated approach to electricity production and distribution that aggregates many smaller generation and storage assets. These virtual systems can respond dynamically to demand and grid conditions, providing power where and when it’s needed without building new central infrastructure.
Data centers are particularly suited for VPPs because they offer large-scale, on-site energy production and storage capabilities, known as distributed energy resources (DERs), such as batteries, generators, fuel cells, and on-site renewables.
Designing data centers with VPP integration in mind unlocks new value and resilience, and more than half of U.S. states have introduced policies or utility programs that support VPP development. VPP capabilities include:
- Sizing and specifying DER assets to enable grid interaction
- Integrating intelligent energy management systems that communicate with ISOs and utilities
- Aligning electrical and mechanical infrastructure with future flexibility in load shedding and power export
- Anticipating regulatory incentives and technical requirements for VPP participation
VPPs represent a strategic shift that positions data centers as grid partners, not just grid users.
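The grid-interaction capabilities listed above ultimately come down to dispatch decisions: given a grid signal and the state of the site's DERs, should the facility export, shed load, charge storage, or hold? A minimal sketch of that logic follows. The class names, price thresholds, and action labels are illustrative assumptions, not a real ISO or utility interface.

```python
# Hypothetical sketch of VPP dispatch logic for a data center's DERs.
# Thresholds and the price signal are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class SiteState:
    it_load_kw: float      # current IT demand
    battery_soc: float     # battery state of charge, 0.0 to 1.0
    onsite_gen_kw: float   # available on-site generation

def dispatch(state: SiteState, grid_price_per_kwh: float) -> str:
    """Pick a simple VPP action from site state and a grid price signal."""
    if grid_price_per_kwh > 0.30 and state.battery_soc > 0.5:
        return "export"    # sell stored energy back during peak pricing
    if grid_price_per_kwh > 0.30:
        return "shed"      # curtail flexible load instead of exporting
    if grid_price_per_kwh < 0.05 and state.battery_soc < 0.9:
        return "charge"    # store cheap off-peak energy
    return "hold"

# Peak pricing with a well-charged battery favors exporting to the grid:
print(dispatch(SiteState(5000, 0.8, 500), 0.35))  # prints "export"
```

A production system would layer forecasting, telemetry, and the ISO's actual market rules on top of a decision core like this, but the design question for architects and engineers is the same: which assets can respond, how fast, and under what contractual terms.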
Data center design is at a pivotal moment. The forces of AI, energy constraints, and regulatory scrutiny are reshaping what it means to build and operate digital infrastructure. Flexibility is essential, and scalability must extend beyond compute to energy use itself.
Forward-looking data centers are moving beyond passive operations toward active grid participation, collaboration with grid operators and utilities, adaptive cooling strategies, and regulatory resilience. Operators that embrace these shifts will lead the market with smarter, more sustainable design.
# # #
About the Author
Bill Kosik, PE, CEM, LEED AP is a Mission Critical Sector Leader and Senior Mechanical Engineer at HED, an integrated architecture and engineering company. A Licensed Professional Engineer with over 25 years of experience, he is a subject matter expert in data center energy and water use reduction, HVAC efficiency, and systems optimization, providing leadership, technical analysis, and consultative services to clients across various sectors. He leverages this expertise to foster the development of new technical market programs and support energy efficiency implementation projects nationwide.