Researchers at the Lawrence Berkeley National Laboratory (Berkeley Lab) are spearheading critical advancements in data center cooling technology as the artificial intelligence sector drives global energy demand to unprecedented heights. According to reports released in mid-December 2025, the lab is finalizing the architecture for its next flagship system, "Doudna," scheduled for installation in late 2026. This system represents a pivotal shift in infrastructure, utilizing direct-to-chip liquid cooling combined with ambient air systems to manage the intense thermal output of modern AI microchips. The breakthroughs come as energy analysts warn that data centers could consume nearly 12% of total U.S. power by 2028.
The urgency for these innovations has intensified following the explosive growth of hyperscale facilities. As AI models require increasingly powerful chips, traditional air conditioning is proving insufficient and energy-inefficient. Berkeley Lab's new approach bypasses standard HVAC systems entirely for its high-performance machines, marking a significant evolution in how the tech industry approaches sustainable computing.
From Perlmutter to Doudna: A Timeline of Efficiency
The trajectory of Berkeley Lab's cooling innovations can be traced through its supercomputing systems. The Perlmutter system, installed at the National Energy Research Scientific Computing Center (NERSC) in 2021, set the stage for current developments. According to Berkeley Lab News Center, both Perlmutter and the upcoming Doudna system utilize a hybrid approach: liquid direct-to-chip cooling paired with ambient air cooling.
The mechanics of this system are designed to minimize energy waste. Recent reports describe a system where water exchange pipes transport cool water from exterior cooling towers directly to the machine room. Crucially, the system sends warm water back to the towers to be cooled naturally, eliminating the need for energy-hungry mechanical refrigeration or traditional HVAC systems. This closed-loop efficiency is essential for the Doudna system, which is expected to shoulder massive AI computational workloads upon its debut in late 2026.
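The heat-transport job the water loop performs can be sketched with a basic heat balance, Q = m·cp·ΔT: the heat carried away equals the mass flow of water times its specific heat times the temperature rise from supply to return. The load and temperature figures below are assumed round numbers for illustration, not Doudna specifications.

```python
# Rough heat-balance sketch for a direct-to-chip water loop.
# Q = m_dot * c_p * dT: heat removed equals mass flow rate times the
# specific heat of water times the supply/return temperature difference.
# The 1 MW load and 10 K rise below are illustrative assumptions.

C_P_WATER = 4.186  # kJ/(kg*K), specific heat of liquid water

def flow_rate_kg_s(heat_kw: float, delta_t_k: float) -> float:
    """Water mass flow (kg/s) needed to carry away heat_kw at a given dT."""
    return heat_kw / (C_P_WATER * delta_t_k)

# Example: a 1 MW row of cabinets with a 10 K supply-to-return rise.
flow = flow_rate_kg_s(heat_kw=1000.0, delta_t_k=10.0)
print(f"Required flow: {flow:.1f} kg/s (roughly {flow:.1f} L/s of water)")
```

The same arithmetic explains why the return water can be cooled "naturally" at the towers: the warmer the return water relative to outdoor air, the more heat the towers shed without mechanical refrigeration.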
The Energy Equation: AI's Growing Appetite
The push for liquid cooling is driven by simple physics and daunting economics. AI servers run hotter and draw more power than their predecessors. A 2024 report from Berkeley Lab highlighted that computing capacity in North American data centers under construction reached a record-high 6,350 MW by the end of 2024, more than double the figure from the previous year.
"With a more efficient cooling system, the data center is now able to allocate additional power towards its IT loads, enabling more computing to be performed to support the needs of LBNL's researchers," stated a report from the Better Buildings Initiative regarding the lab's success with liquid cooling retrofits.
Essentially, every watt saved on cooling is a watt that can be used for processing power. This efficiency is critical as the Department of Energy (DOE) seeks strategies to meet demand, including onsite power generation and storage solutions to mitigate strain on the national grid.
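This trade-off is commonly expressed through Power Usage Effectiveness (PUE), the ratio of total facility power to IT power: under a fixed power budget, lowering PUE frees watts for compute. The sketch below uses illustrative PUE values and a hypothetical 10 MW budget; these are assumptions, not Berkeley Lab figures.

```python
# Illustrative sketch: how cooling efficiency (expressed as PUE) changes
# the IT load a facility can support under a fixed power budget.
# PUE = total facility power / IT power, so IT power = total / PUE.
# The PUE values and 10 MW budget are illustrative assumptions.

def it_capacity_kw(facility_power_kw: float, pue: float) -> float:
    """IT power (kW) available given total facility power and PUE."""
    return facility_power_kw / pue

budget_kw = 10_000  # hypothetical 10 MW facility power budget
air_cooled = it_capacity_kw(budget_kw, pue=1.6)     # assumed air-cooled PUE
liquid_cooled = it_capacity_kw(budget_kw, pue=1.1)  # assumed liquid-cooled PUE

print(f"Air-cooled IT capacity:    {air_cooled:,.0f} kW")
print(f"Liquid-cooled IT capacity: {liquid_cooled:,.0f} kW")
print(f"Extra compute headroom:    {liquid_cooled - air_cooled:,.0f} kW")
```

Under these assumed numbers, the same 10 MW budget supports roughly 45% more IT load at the lower PUE, which is the "every watt saved on cooling" argument in concrete terms.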
Water Usage and Environmental Trade-offs
While electricity usage grabs headlines, water consumption remains a complex variable in the sustainability equation. Moving to liquid cooling changes the resource profile of a data center. According to Vertiv's analysis of Berkeley Lab's 2024 data, Water Usage Effectiveness (WUE) is expected to rise slightly in projected scenarios, reaching between 0.45 and 0.48 L/kWh. This indicates that while energy efficiency improves, the industry must carefully manage water resources as part of the broader environmental impact.
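The WUE metric cited above is defined as site water consumed (liters) divided by IT energy delivered (kWh). A minimal sketch of the calculation, using made-up annual water and energy totals chosen only to land inside the projected 0.45-0.48 L/kWh band:

```python
# Minimal sketch of Water Usage Effectiveness (WUE):
# WUE = site water consumed (liters) / IT energy delivered (kWh).
# The annual totals below are hypothetical illustrations, not lab data.

def wue_l_per_kwh(water_liters: float, it_energy_kwh: float) -> float:
    """Water Usage Effectiveness in liters per kilowatt-hour."""
    return water_liters / it_energy_kwh

# Hypothetical year: 40 GWh of IT load, 18.8 million liters of tower water.
annual_it_kwh = 40_000_000
annual_water_l = 18_800_000

print(f"WUE = {wue_l_per_kwh(annual_water_l, annual_it_kwh):.2f} L/kWh")
```

Evaporative cooling towers trade electricity for water in exactly this way, which is why WUE can tick upward even as energy efficiency improves.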
Expert Perspectives on "Uncharted Territory"
The transition to AI-centric infrastructure is viewed by experts as both a crisis and a catalyst for innovation. Arman Shehabi from Berkeley Lab noted that the sector is entering "uncharted territory," which paradoxically offers great opportunities for energy efficiency improvements. The lab's work has already been recognized nationally, with two of its technologies receiving R&D 100 Awards in 2024.
Industry trends support the lab's direction. Immersion cooling systems are becoming standard for new builds, while retrofitting remains a challenge for older facilities. The growth of hyperscale data centers (massive facilities with capacities ranging from 100 MW to 1,000 MW) has accelerated the adoption of these innovative designs to maximize infrastructure efficiency.
Outlook: The Road to 2028
As the industry prepares for the arrival of the Doudna system in late 2026, the focus will likely remain on refining liquid cooling techniques to handle higher densities. With projections suggesting data centers could account for 12% of U.S. power consumption by 2028, the technologies currently being tested at Berkeley Lab will serve as a blueprint for the commercial sector. The success of these systems in balancing IT load capacity with energy conservation will determine whether the AI boom can sustain its momentum without overwhelming the nation's energy infrastructure.