By Bob Walicki, Ecolab Senior RD&E Program Leader

The rapid evolution of artificial intelligence has moved from a software trend to a massive physical infrastructure challenge. While headlines often focus on the gigawatt-scale builds of hyperscalers, a significant portion of the AI boom is occurring in the “mid-market”: enterprise data centers, regional colocation hubs, and edge facilities. For these small-to-mid-scale (SMS) operators, the challenge of hosting high-performance graphics processing units (GPUs) and AI accelerators with thermal design power exceeding 1,000 watts is even more acute. Unlike hyperscalers with dedicated research teams, SMS players must find ways to adapt existing “brownfield” infrastructure to manage unprecedented heat without the luxury of starting from scratch.

The Mid-Market Liquid Cooling Transition

For decades, air cooling was the “flat and boring” standard for the computer rooms found in banks, universities, and regional hosting firms. However, as rack densities climb from a traditional 5 kW toward 50 kW or even 100 kW per rack, conventional air-conditioning methods are reaching a physical ceiling. In fact, 2026 is seeing a surge in retrofit activity as colocation sites struggle to run mixed densities efficiently side by side.
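A rough sizing sketch shows why air hits that ceiling. In the short Python example below, the delta-T values and fluid properties are illustrative assumptions rather than design guidance: removing 50 kW with air means moving thousands of cubic feet of air per minute, while a modest water loop carries the same heat at roughly 70 liters per minute.

    # Rough sizing sketch: coolant flow needed to remove 50 kW from a single rack.
    # Assumed values (delta-T, fluid properties) are illustrative, not design guidance.

    RACK_POWER_KW = 50.0

    def mass_flow_kg_s(power_kw, cp_kj_per_kg_k, delta_t_k):
        """Mass flow required so that power = m_dot * cp * delta_T."""
        return power_kw / (cp_kj_per_kg_k * delta_t_k)

    # Air: cp ~1.005 kJ/(kg*K), density ~1.2 kg/m^3, assume a 15 K rise across the rack.
    air_kg_s = mass_flow_kg_s(RACK_POWER_KW, 1.005, 15.0)
    air_m3_s = air_kg_s / 1.2
    print(f"Air:   {air_kg_s:.2f} kg/s, about {air_m3_s * 2118.88:.0f} CFM")

    # Water: cp ~4.186 kJ/(kg*K), density ~1000 kg/m^3, assume a 10 K rise across the loop.
    water_kg_s = mass_flow_kg_s(RACK_POWER_KW, 4.186, 10.0)
    print(f"Water: {water_kg_s:.2f} kg/s, about {water_kg_s * 60:.0f} L/min")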

The primary hurdle for SMS operators is not just the cooling capacity itself, but the operational complexity and capital investment required for a liquid-cooled transition. Many operators are now moving toward integrated cooling platforms that bridge the building’s traditional chilled-water loop and the new high-density server racks.

The CDU as a Bridge for Existing Facilities

At the center of this shift is the Coolant Distribution Unit (CDU). For a mid-market operator, the CDU acts as a critical thermal “bridge.” A liquid-to-liquid CDU effectively isolates the facility’s existing water loop from the sensitive, high-value electronics via a secondary fluid network (SFN).

This isolation is particularly valuable for colocation and enterprise sites because it allows managers to precisely control the fluid chemistry, flow rate, and temperature for a specific “GPU-heavy” cluster without needing to overhaul the entire building’s plumbing. In-rack CDUs, in particular, offer targeted cooling with a smaller footprint and simplified deployment, making them ideal for edge sites or regional high-density deployments where floor space is at a premium.
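The “bridge” concept can be made concrete with a short sketch. The Python example below estimates secondary-loop conditions for a hypothetical GPU cluster behind a liquid-to-liquid CDU; the cluster power, approach temperature, and temperature rises are assumptions for illustration, not vendor specifications.

    # Illustrative sketch of a liquid-to-liquid CDU acting as a thermal "bridge".
    # All numbers are assumptions for a hypothetical GPU cluster, not design guidance.

    CP_WATER = 4.186          # kJ/(kg*K), approximate, facility (primary) loop
    CP_COOLANT = 3.6          # kJ/(kg*K), assumed for a glycol-based secondary fluid

    def loop_flow_kg_s(power_kw, cp, delta_t_k):
        # power = m_dot * cp * delta_T, solved for mass flow
        return power_kw / (cp * delta_t_k)

    cluster_kw = 200.0        # four hypothetical 50 kW racks behind one CDU
    primary_supply_c = 25.0   # assumed facility chilled-water supply temperature
    approach_k = 3.0          # assumed heat-exchanger approach across the CDU

    # The secondary loop can only get as cold as the primary supply plus the approach.
    secondary_supply_c = primary_supply_c + approach_k

    primary_flow = loop_flow_kg_s(cluster_kw, CP_WATER, 10.0)     # 10 K rise, facility side
    secondary_flow = loop_flow_kg_s(cluster_kw, CP_COOLANT, 8.0)  # 8 K rise, server side

    print(f"Secondary supply temperature: {secondary_supply_c:.1f} C")
    print(f"Primary flow:   {primary_flow:.2f} kg/s")
    print(f"Secondary flow: {secondary_flow:.2f} kg/s")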

Reliability Through Precision Chemistry

For smaller teams with fewer on-site cooling specialists, the coolant formulation itself becomes a strategic reliability factor. Standard water or traditional glycols often lack the long-term material compatibility required for modern direct-to-chip cooling loops, where incompatible metals can trigger galvanic corrosion.

SMS operators are increasingly adopting next-generation coolants that match high-performance specifications while offering a lower carbon footprint. To manage these complex fluids, advanced telemetry – such as Ecolab’s 3D TRASAR™ technology – can now be built directly into smart CDUs. This “connected coolant” approach monitors pH, conductivity, and glycol concentration in real time, allowing smaller teams to shift from reactive maintenance to proactive adjustments. By automating these checks, operators can extend maintenance intervals and significantly reduce the risk of early-life failures.
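The logic behind such monitoring can be illustrated with a simple threshold check. In the sketch below, the acceptable bands and reading fields are hypothetical examples only; they do not represent the actual parameters or interface of 3D TRASAR or any other product.

    # Minimal sketch of threshold-based coolant telemetry checks.
    # The acceptable bands and reading fields are hypothetical examples only;
    # they do not reflect any specific product's specifications or API.

    ACCEPTABLE_BANDS = {
        "ph": (8.0, 10.0),                    # assumed band for an inhibited coolant
        "conductivity_us_cm": (0.0, 2000.0),  # assumed conductivity ceiling
        "glycol_pct": (20.0, 30.0),           # assumed target concentration window
    }

    def check_reading(reading):
        """Return a list of parameters drifting outside their acceptable band."""
        alerts = []
        for name, (low, high) in ACCEPTABLE_BANDS.items():
            value = reading.get(name)
            if value is None:
                alerts.append(f"{name}: no data")
            elif not (low <= value <= high):
                alerts.append(f"{name}: {value} outside [{low}, {high}]")
        return alerts

    # Example: a glycol concentration drifting low triggers a proactive alert
    # long before it becomes a corrosion or freeze-protection problem.
    print(check_reading({"ph": 9.1, "conductivity_us_cm": 850.0, "glycol_pct": 18.5}))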

Stewardship as a Strategic Requirement

As data centers increasingly embed themselves in metro and suburban locations to support low-latency AI, they face rising community scrutiny regarding resource use. Small-to-mid-scale operators must now balance Power Usage Effectiveness (PUE) with Water Usage Effectiveness (WUE) to maintain their social license to operate.
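As a reminder of what those two metrics capture, PUE is total facility energy divided by IT energy, while WUE is site water consumption divided by IT energy, expressed in liters per kilowatt-hour. The short calculation below uses illustrative annual figures for a hypothetical site, not benchmarks for any real facility.

    # Illustrative PUE / WUE calculation for a hypothetical site over one year.
    # The input figures are examples only, not benchmarks for any real facility.

    it_energy_kwh = 8_000_000        # energy delivered to IT equipment
    total_energy_kwh = 10_400_000    # total facility energy, including cooling and losses
    site_water_liters = 12_000_000   # annual water consumed on site for cooling

    pue = total_energy_kwh / it_energy_kwh   # dimensionless; 1.0 is the ideal
    wue = site_water_liters / it_energy_kwh  # liters per IT kWh; lower is better

    print(f"PUE: {pue:.2f}")         # 1.30
    print(f"WUE: {wue:.2f} L/kWh")   # 1.50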

A roadmap for this shift is visible in programs like Microsoft’s Community-First AI Infrastructure initiative, launched in early 2026. This framework commits to five core pillars, including concrete promises to minimize operational water consumption and replenish more water than facilities withdraw. For SMS players, following these stewardship best practices is not just about ethics; it is about securing permits and ensuring long-term operational resilience in power- and water-constrained regions.

Future-Proofing with “Cooling as a Service”

To overcome the “high CapEx” barrier of liquid cooling, many operators are turning to service-led models like Cooling as a Service (CaaS). These models convert complex thermal management stacks into predictable, auditable outcomes. By leveraging specialized vendors who handle commissioning, fluid analysis, and real-time monitoring, SMS data centers can scale their AI capabilities as quickly as the platforms change, without over-engineering their facilities for an uncertain future.

Ultimately, the transition to liquid cooling is not just for the giants of the industry. By integrating smart hardware, precision chemistry, and service-based models, small-to-mid-scale operators can bridge the density gap and reliably host the next generation of mission-critical AI workloads.

# # #

About the Author

Bob Walicki is an innovation leader with nearly 20 years of experience in research, development and engineering at Ecolab, a global leader in water and infection prevention solutions. He is currently responsible for driving innovation for Ecolab’s Global High Tech Data Centers segment. Most of Bob’s career has focused on solving customer problems related to industrial water treatment and utilization across many industries, including mining and mineral processing, through the application of novel chemistries as well as intelligent automation and digital solutions. He holds a Bachelor of Science degree in Chemistry from the University of Notre Dame as well as a Master of Science and a PhD in Physical Chemistry from the University of Chicago.