Data centers could account for 44% of U.S. electric load growth through 2028 and consume up to 9% of the country’s power supply by 2030, raising concerns about their impact on U.S. power availability and costs. Up to 40% of data center electricity use goes to cooling, according to the National Renewable Energy Laboratory, making greater cooling efficiency a key strategy for reducing energy consumption. Cooling is also integral to data center design, influencing how these facilities are developed, built and renovated.
The second half of 2024 saw several notable announcements related to data center cooling systems, which protect high-performance processors and servers, enabling the advanced computations that artificial intelligence requires. In December, Microsoft and Schneider Electric separately released designs for high-efficiency liquid cooling systems to support increasingly powerful AI chips. Microsoft’s water-based design operates on a closed loop, eliminating waste from evaporation, while Schneider Electric’s data center reference design uses a non-water refrigerant. Earlier in 2024, Vertiv and Compass Datacenters showcased their “first-of-a-kind” liquid-air hybrid system, which they expected to deploy early this year.
Here’s what trends and developments data center cooling experts say they’re watching for 2025 and beyond.
Two-phase liquid cooling will break into the mainstream
Most data center professionals say they’re dissatisfied with their current cooling solutions, according to AFCOM’s 2024 State of the Data Center Industry report. Thirty-five percent of respondents said they regularly make adjustments due to inadequate cooling capacity, and 20% said they were actively seeking new, scalable systems.
Many data center cooling experts predict data center developers and operators will increasingly turn to two-phase, direct-to-chip cooling technology to improve cooling performance. These systems cycle the working fluid — typically a non-water refrigerant — between liquid and vapor states in a process that “plays a pivotal role in heat removal,” according to direct-to-chip liquid cooling system designer Accelsius.
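To see why the phase change matters, a rough back-of-the-envelope comparison helps; the fluid properties below are representative of low-pressure dielectric refrigerants in general, not figures from Accelsius or any specific product. Warming a single-phase liquid coolant with a specific heat of about 1.2 kJ/(kg·K) through a 10 K temperature rise absorbs roughly

$$ q_{\text{sensible}} = c_p \,\Delta T \approx 1.2 \times 10 = 12 \text{ kJ per kilogram of coolant,} $$

while boiling that same kilogram of a refrigerant with a latent heat of vaporization near 180 kJ/kg absorbs

$$ q_{\text{latent}} = h_{fg} \approx 180 \text{ kJ,} $$

roughly an order of magnitude more heat per unit of fluid pumped through the cold plates.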
2025 will be a “year of implementation” for two-phase systems as data center professionals get more comfortable with the technology, Accelsius CEO Josh Claman said in an interview. More sophisticated data centers with higher computing needs are more likely to seek out two-phase cooling, Claman said.
Traditional air cooling reaches its physical limit at server rack densities of about 70 kilowatts, the benchmark for state-of-the-art AI training facilities today, said Sarah Renaud, vice president of consulting services at ENCOR Advisors, a commercial real estate firm that works with data center clients.
Because future racks will be even denser, “two-phase is the future,” Renaud said. “It can handle higher power densities and heat fluxes, meaning it’s better-suited for handling AI workloads.”
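A simplified estimate shows where that air-cooling ceiling comes from; the numbers assume typical air properties (density of about 1.2 kg/m³, specific heat of about 1.0 kJ/(kg·K)) and a 15 K temperature rise across the servers, so actual designs will vary. Removing 70 kilowatts from a single rack with air alone would call for a volumetric flow of roughly

$$ \dot{V} = \frac{Q}{\rho \, c_p \, \Delta T} = \frac{70}{1.2 \times 1.0 \times 15} \approx 3.9 \text{ m}^3\text{/s,} $$

on the order of 8,000 cubic feet per minute through one cabinet, enough to strain fans, plenum space and acoustics, and a big part of why liquid takes over at these densities.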
Hybrid cooling will expand, but supply chain risks loom
Two-phase immersion cooling provides a lower 10-year total cost of ownership for data center operators than direct-to-chip or single-phase immersion cooling, according to a March 2024 study by Chemours, the Syska Hennessy Group and cooling system designer LiquidStack. But its high upfront costs, the long operational life of legacy cooling systems and variable cooling needs within individual data centers mean two-phase will continue to coexist alongside other technologies for some time, experts say.
“Almost no new [data center] builds will be exclusively air-cooled nor exclusively liquid [because] not all applications require intense liquid cooling — think of archived data that is rarely accessed versus generative AI,” Renaud said. “You can cool those [less demanding] racks more cost-effectively with air.”
Microsoft’s closed-loop, water-based cooling system “appears to align with an incremental strategy” that supports its near-term needs while “allowing [its] infrastructure to readily pivot to accommodate advanced cooling technologies like direct-to-chip two-phase when the time comes,” said Nick Schweissguth, director of product and commercial enablement at LiquidStack.
But data center operators’ hybrid cooling plans could be complicated by supply chain issues, which anticipated Trump administration tariffs may worsen, Schweissguth said. Direct-to-chip coolant distribution units, which pump and condition the fluid that carries heat away from processors, are particularly at risk, he noted.
With CDU demand set to surge in 2025, “companies vying to capture the direct-to-chip market will ultimately prevail based on their ability to produce at scale and build bulletproof relationships with suppliers,” Schweissguth said.
Building and system design will evolve to enable 24/7 uptime
Operators expect far more out of state-of-the-art AI data centers than they did from previous generations of these facilities, said Steven Carlini, vice president of innovation and data center at Schneider Electric.
Whereas earlier facilities might have variable workloads averaging 30% or 40% of total processing capacity, AI facilities typically run at 100% capacity for weeks or months when training models, necessitating more rugged and redundant design, Carlini said.
“It takes the variability out of the equation, but you have to be very sure you design the cooling system to support that,” he said.
Carlini described a near future in which higher rack power densities require heavier cooling infrastructure, which in turn places new structural demands on data center design. The designs his team has worked on recently, for example, involve “huge” pipes with “big steel cages over the supercluster” or two-story floor plans, with the first level flush on a concrete slab to handle the added weight.
“All that water has to go somewhere,” he said.
“Slow but steady” retrofit activity ahead
Retrofitting an operating data center to accommodate more powerful processors is a major technical and logistical challenge, leading some to conclude it’s easier to build new, Accelsius’ Claman said.
But new buildings are significantly more resource-intensive, complicating corporate sustainability goals, he noted. And existing data centers often have more robust power supplies. “That’s why they are where they are, and it’s not easy for them to move,” he said.
The majority of an operating data center’s asset value lies in its power supply and supporting infrastructure, such as electrical, plumbing and other technical systems, according to JLL’s 2025 Global Data Center Outlook. These assets are particularly valuable given the challenges of securing power for new developments. Thus, retrofits such as transitioning existing data centers to liquid cooling will “be a viable solution and an opportunity to increase asset value,” JLL’s outlook says.
Meta is transitioning its existing data centers to liquid cooling “because they say they ‘have to,’” while colocation giant Equinix said in December 2023 that it would expand liquid cooling to 100 of its data center facilities, Renaud noted.
Claman predicted a “slow but steady” pace of retrofits and “a more balanced conversation” around their benefits. Schneider Electric is betting on this trend as well, recently partnering with Nvidia on the release of three retrofit reference designs for data center operators looking to boost performance without redesigning their facilities from scratch.
The rapid increase in computing power means data centers on the bleeding edge today may rapidly fall behind, further complicating the already formidable challenge of designing facilities with both air and liquid cooling infrastructure, Carlini said.
“Ten years ago, you’d try to [design data centers with] more capacity than you need and grow into it, but now you don’t know what [power] density you need to build at,” he said.
Facilities in Northern climates might get an edge
Air still handles 20% to 30% of the cooling load, even in newer data centers, according to Carlini. That’s driving efficiency-minded developers to site more facilities in “the attic,” the informal industry term for cooler Northern regions, Renaud and Claman say.
“The market talks a lot about a ‘free cooling zone’” in the Northern United States, Northern Europe and Canada, Claman said.
In cooler weather, energy use for air cooling systems could drop by as much as 95%, according to Renaud. “We are seeing a trend of hybrid colocation strategies in which data that does not require frequent access can be stored in more remote and colder locations,” leaving higher-access-frequency facilities to operate in warmer, more established data center hubs like northern Virginia, she said.
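For a rough sense of the upside, and purely as an illustrative upper bound rather than a forecast: if cooling accounts for up to 40% of a facility’s electricity use, as NREL estimates, and a cool climate cuts air-cooling energy by as much as 95%, then a fully air-cooled facility could in the best case trim roughly 0.40 × 0.95 ≈ 38% of its total power draw.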
Cold-climate sites also are less likely to need water-hungry evaporative cooling systems, which are common in warmer, drier climates and have raised concerns around data centers’ environmental impacts, Claman said. He predicted a move toward closed-loop cooling systems that can take advantage of seasonal free cooling.
“There is a lot of scrutiny around emptying aquifers to cool data centers,” he said.