
JetCool CEO on liquid cooling and the modern data center

In this Q&A, JetCool's CEO talks about the growing interest in liquid cooling and how it's being used to divert heat away from compute so that bigger workloads can run consistently.

Things are heating up as GPUs draw more power and generate more heat to run AI workloads. As a result, liquid cooling is becoming increasingly popular in data centers.

Liquid cooling keeps CPUs and GPUs running at maximum performance while reducing the energy spent on fans and on total data center cooling. It typically uses a cold plate or heat sink mounted on operating CPUs or GPUs: heat generated by the chip transfers to the plate, and the flow of coolant absorbs that heat and carries it away from the compute to be rejected through a radiator or heat exchanger.
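
For a rough sense of the heat transfer involved, here is a minimal back-of-the-envelope sketch in Python. The 700-watt chip and 10-degree coolant temperature rise are illustrative assumptions, not figures from JetCool or Dell.

```python
# Illustrative sketch (not JetCool's method): estimate the coolant flow needed
# to carry away a given chip heat load, using Q = m_dot * c_p * dT for water.

def required_flow_lpm(heat_watts: float, delta_t_c: float) -> float:
    """Liters per minute of water needed to absorb `heat_watts`
    while warming by `delta_t_c` degrees Celsius."""
    cp_water = 4186.0      # J/(kg*K), specific heat of water
    density = 0.997        # kg/L at roughly 25 C
    mass_flow = heat_watts / (cp_water * delta_t_c)   # kg/s
    return mass_flow / density * 60.0                 # L/min

# Example: a 700 W GPU cold plate with a 10 C coolant temperature rise
print(f"{required_flow_lpm(700, 10):.2f} L/min")      # ~1.0 L/min per GPU
```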

One company, JetCool, has been focused on liquid cooling since its days at MIT's Lincoln Laboratory, where its attention was primarily on the aerospace industry. JetCool's method, called microconvective liquid cooling, differs from traditional designs in that it uses small fluid jets to target hot spots on CPUs and GPUs. In 2019, JetCool was spun out of MIT by two team members, including founder and CEO Bernie Malouin. More recently, JetCool partnered with Dell Technologies to bring its SmartPlate System to Dell servers, including the Dell PowerEdge R760, R660 and R7625.

Here, Malouin looks at the rise of liquid cooling for enterprise data centers and what the future might hold for liquid cooling.

Editor's note: The following was edited for length and clarity.

Some experts see liquid cooling as the future design for servers, where nearly all servers use it due to increased power demands and heat generated by GPUs. Do you agree with this take or see a different future?

Bernie Malouin, founder and CEO, JetCool

Bernie Malouin: When you think about GPU-based computing and AI, I think that is going to be largely met with liquid cooling. More generalized compute, you can do without it. However, I think the majority of [servers] are going to be liquid-cooled as we talk about the next five to seven years. A lot of that is driven by artificial intelligence and GPUs, where the power keeps going up.

Will liquid cooling be able to keep up in a way that's sustainable? Google recently reported that its greenhouse gas emissions went up 48% compared to 2019, driven by data center energy consumption and supply chain emissions.

Malouin: We're seeing some interesting trends with power reduction. Through our partnership with Dell, we're reducing total IT power by between 10% and 20%. Aside from power, what folks don't realize is that data centers consume a lot of water. A typical data center globally will consume 300,000 gallons of water a day just for evaporative cooling. We're seeing a renewed interest in warm water cooling or hot water cooling to reduce the burden on water consumption while also saving electricity. JetCool is reducing water consumption by up to 99% for many of these facilities due to our ability to handle cooling fluids that are 50 degrees Celsius [122 F] and higher, which is fairly unusual in liquid cooling today.
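
To put those water figures in context, here is a quick illustrative calculation based only on the 300,000-gallon-per-day figure and the 90% to 99% reduction range quoted in this interview; everything else is an assumption for the example.

```python
# Back-of-the-envelope check of the water figures quoted above (illustrative only).
GALLONS_PER_DAY = 300_000   # evaporative cooling for a typical facility, per the interview

for reduction in (0.90, 0.99):
    saved = GALLONS_PER_DAY * reduction
    print(f"{reduction:.0%} reduction -> {saved:,.0f} gallons/day saved, "
          f"{saved * 365 / 1e6:.1f}M gallons/year")
# 90% -> 270,000 gal/day (~98.6M gal/year); 99% -> 297,000 gal/day (~108.4M gal/year)
```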

Up to 99% is not 99%. Is it hard to recapture all the water?

Malouin: There are some cases where we have totally eliminated the water consumption for the facility. It's usually 90%-plus. And 99% is not an outlier number for us; it's a pretty ordinary number in terms of reducing water consumption.

JetCool's approach to liquid cooling uses microconvection versus the more typical microchannel approach.

What is hot water cooling? Is this a technique where the water, even after absorbing heat from the chips, is still cooler than the GPUs?

Malouin: It's a balance. The fluid that you're going to use to cool the GPU is going to be at a lower temperature than the GPU itself. But you want it to be as close to that GPU temperature as you can get, because then the fluid is easier to cool using the outside ambient temperature. The fluid can be cooled outside without running refrigeration cycles and without running evaporative cooling boosts. The warmer the fluid is, the more effective [outside cooling] is. Even on a hot day in the southwest United States, you can still cool off a fluid if it's 50 degrees Celsius. It rarely gets to 50 degrees Celsius in the United States.
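
A simple way to see the argument: the warmer the return coolant, the more temperature headroom a dry cooler has against ambient air. The coolant and ambient temperatures in the sketch below are assumptions chosen for illustration, not JetCool specifications.

```python
# Illustrative sketch: the hotter the return coolant, the larger the temperature
# difference available to a dry cooler (ambient air), so heat can be rejected
# without chillers or evaporative assist. Numbers are assumptions, not vendor data.

def dry_cooler_headroom(coolant_out_c: float, ambient_c: float) -> float:
    """Approach temperature available for air-side heat rejection (degrees C)."""
    return coolant_out_c - ambient_c

for coolant in (35, 50):                 # typical "warm" vs "hot" water loops
    for ambient in (25, 45):             # mild day vs hot desert day
        print(f"coolant {coolant} C, ambient {ambient} C -> "
              f"{dry_cooler_headroom(coolant, ambient):+} C headroom")
# A 50 C loop still has +5 C of headroom on a 45 C day; a 35 C loop does not.
```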

Water comes in through a pipe, runs across operating GPUs or CPUs, gets heated up, travels outside of where the server lives, and the ambient air around the exposed piping cools off the hot water? And the power used to cycle water through these pipes is negligible compared with what it takes to power the servers?

Malouin: Yes, less than 2% of the power that the servers are using, so it's quite small.
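
As a rough plausibility check on that figure, hydraulic pump power can be estimated as pressure drop times volumetric flow divided by pump efficiency. The flow rate, pressure drop, efficiency and server power below are all assumed values for illustration.

```python
# Rough sanity check (assumed numbers, not vendor data): hydraulic pump power is
# pressure drop * volumetric flow / pump efficiency, compared to server power.

def pump_power_w(flow_lpm: float, pressure_drop_kpa: float, efficiency: float = 0.5) -> float:
    flow_m3s = flow_lpm / 1000 / 60             # L/min -> m^3/s
    return pressure_drop_kpa * 1000 * flow_m3s / efficiency

server_power_w = 10_000                          # a dense liquid-cooled server, assumed
pump_w = pump_power_w(flow_lpm=20, pressure_drop_kpa=150)
print(f"pump ~{pump_w:.0f} W = {pump_w / server_power_w:.1%} of server power")
# ~100 W against a 10 kW server, i.e. about 1%, consistent with the 'under 2%' figure
```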


How can liquid cooling keep up with the increases in GPU power usage?

Malouin: GPU power is increasing rapidly. Traditional GPUs of a year and a half ago were 300 watts. But [the Nvidia] H100 comes out at 700 watts, and [the Nvidia] Blackwell has been announced for 1,200 watts. This puts a strain on the system, and GPUs are taking up a more significant fraction of the power budget. Power budgets are increasing, but they're becoming harder to stay within because everybody wants more power and it just isn't available. This is where it becomes critical to be efficient with the rest of the power consumption. If you can reduce the other ancillary loads, like HVAC equipment and fan cooling, that leaves more of your available power budget for GPU computing. We've seen this through our partnership with Dell. In particular, with those servers, [we've] reduced power consumption on average by 15%, … which you can now allocate toward additional computing or GPUs.
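
Here is that budget arithmetic in miniature. The rack power envelope is an assumption for the example; the 15% savings and the 700-watt GPU figure come from the interview.

```python
# Illustrative only: how a 15% cut in server power frees budget for extra GPUs.
# The rack budget is an assumption for the example, not a quoted figure.

rack_budget_w = 40_000            # assumed rack power envelope
it_load_w = 40_000                # rack currently using its full envelope
savings = 0.15 * it_load_w        # 15% reduction cited for the Dell partnership
gpu_w = 700                       # Nvidia H100-class accelerator, per the interview

extra_gpus = int(savings // gpu_w)
print(f"{savings:.0f} W freed -> room for ~{extra_gpus} more 700 W GPUs per rack")
# 6,000 W freed -> roughly 8 additional 700 W GPUs
```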

Will IT be able to overcome the new power demands of GPUs through cooling? There is only so much power you can run to a rack.

Malouin: That's true. Ultimately, you're limited by how much power you can put to the rack and the power you can get from the grid. With the appetite of AI, those are going to be some absolute limitations for the next couple of years.

Is there a way to use liquid cooling technology, or JetCool specifically, to cool older equipment and free up power that way?

Malouin: JetCool solutions can go into older [equipment] with some limitations in terms of the chipsets we support. We don't support every chipset dating back 10 years. But we support the latest generations and the ones before that, [as well as] those that are retrofittable into liquid-cooled solutions.

Looking to the future, what advances are you seeing to address growing heat and power consumption?

Malouin: One thing we are working on at JetCool is thinking about the future of compute. We've been thinking about that for a while -- since our aerospace days at MIT. As the technology advances, even at the current levels of liquid cooling, how can we do better? We're taking our cold plates and shrinking them to fit into a layer of silicon itself. We've built our cold plates into a layer of silicon, with our jets targeting different core regions of these devices, which can then be integrated into the silicon substrate.

We see the future of compute where cooling and devices don't have to be two separate things. Cooling and devices can be the same thing with good integration, good scalability and high performance.

Adam Armstrong is a TechTarget Editorial news writer covering file and block storage hardware and private clouds. He previously worked at StorageReview.com.
