Liquid Cooling Deployments Triple as AI Server Densities Exceed Air Cooling Limits
Direct-to-chip and immersion cooling installations surged 200% year-over-year as rack densities for AI workloads push beyond 100 kW per rack.
The data centre liquid cooling market is experiencing explosive growth as artificial intelligence workloads push server rack densities well beyond the practical limits of traditional air cooling. Industry research firm Dell'Oro Group estimates that liquid cooling deployments tripled in the 12 months ending Q1 2026, with the market now valued at $4.2 billion annually - up from $1.4 billion in 2024. The trajectory is staggering: the market is projected to reach $32 billion by 2028, driven by the universal adoption of liquid cooling in new AI-capable facilities.
The acceleration is driven primarily by NVIDIA's latest GPU architectures, particularly the GB200 NVL72 server configuration, which draws up to 120 kW per rack and makes direct liquid cooling a baseline requirement, not an option. With hyperscalers deploying these systems at scale for large language model training, the installed base of liquid-cooled racks in North America alone has reached an estimated 85,000 units, representing roughly 15% of all enterprise and cloud compute capacity by power draw. Ecolab's recent $4.75 billion acquisition of CoolIT Systems from KKR validates the market's trajectory and signals that liquid cooling has graduated from niche technology to essential infrastructure.
The physics behind this shift is straightforward. Traditional air cooling becomes impractical above approximately 30-40 kW per rack due to the volume of air required and the energy needed to move it. A standard CRAC (Computer Room Air Conditioning) unit can handle roughly 40-60 kW of heat rejection; a single AI training rack generating 120 kW would require 2-3 dedicated air handling units, consuming enormous floor space and energy. Liquid coolant, by contrast, has roughly 3,500 times the volumetric heat capacity of air, enabling dramatically more efficient heat removal in a fraction of the physical space.
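The arithmetic behind these figures can be checked with the basic heat-transport relation Q = ṁ·cp·ΔT. The sketch below uses standard textbook fluid properties; the temperature rises assumed for each loop (15 K air-side, 10 K water-side) are illustrative values chosen for this example, not figures from the article.

```python
# Back-of-envelope comparison: moving 120 kW of heat with air vs. water.
# Fluid properties are standard near-room-temperature values; the delta-T
# figures (15 K air-side, 10 K water-side) are illustrative assumptions.

RACK_HEAT_W = 120_000  # heat load of one AI training rack (W)

AIR_CP = 1_005    # specific heat of air, J/(kg*K)
AIR_RHO = 1.2     # density of air, kg/m^3
WATER_CP = 4_186  # specific heat of water, J/(kg*K)
WATER_RHO = 1_000 # density of water, kg/m^3

def volumetric_flow(heat_w, cp, rho, delta_t):
    """Flow (m^3/s) needed to carry heat_w at a given coolant temperature rise."""
    mass_flow = heat_w / (cp * delta_t)  # from Q = m_dot * cp * dT
    return mass_flow / rho

air_flow = volumetric_flow(RACK_HEAT_W, AIR_CP, AIR_RHO, delta_t=15)
water_flow = volumetric_flow(RACK_HEAT_W, WATER_CP, WATER_RHO, delta_t=10)

print(f"Air:   {air_flow:.2f} m^3/s (~{air_flow * 2118.88:,.0f} CFM)")
print(f"Water: {water_flow * 1000:.2f} L/s (~{water_flow * 15850.3:,.1f} GPM)")

# The ~3,500x figure is a ratio of volumetric heat capacities (cp * density)
ratio = (WATER_CP * WATER_RHO) / (AIR_CP * AIR_RHO)
print(f"Water stores ~{ratio:,.0f}x more heat per unit volume than air")
```

Under these assumptions a 120 kW rack needs several cubic metres of air per second but only a few litres of water per second, which is why the air-cooled version consumes so much fan energy and floor space.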
Two primary liquid cooling approaches are competing for market share. Direct-to-chip (or direct liquid cooling, DLC) systems circulate coolant through cold plates attached directly to processors and GPUs, removing heat at the source while leaving the rest of the server air-cooled. This approach is favoured by most hyperscalers because it can be retrofitted into existing rack infrastructure with relatively modest facility modifications. Full immersion cooling, where entire servers are submerged in dielectric coolant, offers even better thermal performance (supporting 100+ kW per rack with PUE improvements of 0.3-0.5) but requires purpose-built tanks and fundamentally different maintenance procedures. Companies like GRC, LiquidCool Solutions, and Submer lead the immersion segment, while CoolIT, Asetek, and Vertiv dominate direct-to-chip.
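The PUE deltas quoted for immersion cooling translate directly into energy cost: facility power equals IT load times PUE, so the overhead eliminated is simply IT load times the PUE improvement. A minimal sketch, assuming a hypothetical 10 MW IT load, a 1.5 starting PUE, and an $80/MWh electricity price (none of which come from the article):

```python
# What a 0.3-0.5 PUE improvement is worth at a fixed IT load.
# IT_LOAD_MW, the 1.5 baseline PUE, and PRICE_PER_MWH are assumptions
# for illustration, not figures reported in the article.

IT_LOAD_MW = 10.0      # assumed IT load of the facility
HOURS_PER_YEAR = 8_760
PRICE_PER_MWH = 80.0   # assumed electricity price, USD/MWh

def annual_savings_usd(it_load_mw, pue_before, pue_after):
    """Annual energy cost saved by lowering PUE at constant IT load."""
    delta_mw = it_load_mw * (pue_before - pue_after)  # overhead power removed
    return delta_mw * HOURS_PER_YEAR * PRICE_PER_MWH

for pue_after in (1.2, 1.1):
    saved = annual_savings_usd(IT_LOAD_MW, pue_before=1.5, pue_after=pue_after)
    print(f"PUE 1.5 -> {pue_after}: ~${saved / 1e6:.1f}M per year saved")
```

At this scale each 0.1 of PUE improvement is worth roughly $0.7M per year, which helps explain why operators tolerate the purpose-built tanks and unfamiliar maintenance procedures that immersion requires.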
The supply chain is struggling to keep pace. Cooling infrastructure vendors report order backlogs stretching 12-18 months for coolant distribution units (CDUs), rear-door heat exchangers, and facility-level liquid cooling infrastructure. The plumbing required - including coolant piping, manifolds, quick-connect fittings, and leak detection systems - represents a new category of data centre infrastructure that most operators and contractors have limited experience with. Training certified technicians to install and maintain liquid cooling systems has become a bottleneck in its own right.
For colocation providers, the liquid cooling transition presents both a competitive opportunity and an existential challenge. Operators who invest early in liquid-ready infrastructure can command premium pricing from AI tenants - some providers report 20-40% price premiums for liquid-cooled cabinets compared to air-cooled equivalents. However, retrofitting existing air-cooled facilities for liquid cooling is expensive and disruptive, requiring new piping runs, CDU installations, floor reinforcement (liquid-cooled systems are heavier), and upgrades to electrical distribution. Greenfield developments are increasingly designed with liquid cooling as the primary thermal management system from day one.
The implications for data centre real estate are significant. Facilities without liquid cooling capability face potential obsolescence as tenants increasingly require GPU-dense configurations. The value differential between "liquid-ready" and "air-only" data centres is widening, with some investors reporting 15-25% valuation premiums for facilities with operational liquid cooling infrastructure. For developers and investors, understanding the cooling technology landscape is no longer optional - it is fundamental to underwriting data centre assets in the AI era.