Amazon Liquid Cooling for Nvidia GPUs has taken center stage as AWS engineers introduce a groundbreaking cooling system to handle the intense energy demands of next-generation Nvidia AI chips. As artificial intelligence workloads surge, Amazon’s new In-Row Heat Exchanger (IRHX) delivers a scalable and efficient solution to support massive GPU clusters powering generative AI.

AWS Tackles AI’s Power-Hungry Demands
Artificial intelligence applications have placed unprecedented demands on data center infrastructure. Nvidia’s cutting-edge GPUs, including the new Blackwell line, deliver unmatched computing power but require equally powerful cooling systems. Traditional air-based cooling methods no longer meet the thermal requirements of modern AI chips.
Amazon recognized this technological gap and moved quickly to find a solution that could match its massive scale and high-performance needs.
The Challenge with Traditional Liquid Cooling
AWS vice president of compute and machine learning services, Dave Brown, explained the complexity in a video posted on YouTube. Amazon evaluated the construction of entirely new data centers optimized for liquid cooling. However, Brown noted that this strategy posed serious challenges.
“The facilities would have taken too long to build,” Brown stated. “Standard commercial cooling equipment either took up too much floor space or dramatically increased water consumption.”
These options also failed to meet Amazon’s unique capacity needs. Some smaller-scale providers could use limited liquid cooling, but AWS operates on a scale that dwarfs most competitors.
Amazon Designs the In-Row Heat Exchanger (IRHX)
Instead of waiting for suitable third-party hardware, AWS engineers developed a customized solution. The In-Row Heat Exchanger (IRHX) integrates seamlessly with both existing and new AWS data centers. Unlike large-scale cooling setups, IRHX slots between server racks, delivering efficient thermal regulation without requiring structural redesigns.
This advancement allows AWS to deploy the most powerful Nvidia GPUs without sacrificing space, water efficiency, or deployment speed.
Amazon Launches P6e Instances with IRHX Support
AWS customers can now access the new system through P6e instances, designed to support Nvidia’s GB200 NVL72 supercomputing architecture. This platform houses 72 Nvidia Blackwell GPUs within a single rack, interconnected to optimize training and inference for large language models and other complex AI tasks.
These clusters deliver unprecedented power and efficiency to companies relying on large-scale AI development.
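To see why this density pushes past air cooling, a back-of-envelope estimate of the per-rack heat load is useful. The per-GPU power figure and overhead factor below are illustrative assumptions, not AWS- or Nvidia-published specifications:

```python
# Back-of-envelope estimate of rack-level heat load for a dense GPU rack.
# Per-GPU power and overhead factor are illustrative assumptions only.

GPUS_PER_RACK = 72       # Nvidia GB200 NVL72: 72 Blackwell GPUs per rack
WATTS_PER_GPU = 1_000    # assumed ~1 kW per GPU (illustrative)
OVERHEAD_FACTOR = 1.3    # assumed CPUs, networking, power conversion, fans

rack_heat_watts = GPUS_PER_RACK * WATTS_PER_GPU * OVERHEAD_FACTOR
print(f"Estimated rack heat load: {rack_heat_watts / 1000:.1f} kW")
# Under these assumptions the rack dissipates roughly 93.6 kW -- well
# beyond the ~10-20 kW per rack that conventional air cooling handles.
```

Even with generous error bars on the assumed numbers, the conclusion holds: a single NVL72-class rack dissipates heat on the order of an entire row of traditional servers, which is why liquid cooling becomes a requirement rather than an optimization.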
AWS Enters Competitive Territory with Microsoft and CoreWeave
Nvidia had previously partnered with cloud providers like Microsoft and CoreWeave to offer its NVL72 platform. Now, AWS joins the ranks with its own infrastructure-ready offering, further intensifying the cloud computing arms race.
As the world’s leading cloud infrastructure provider, AWS enters this competition with a technological edge—its proprietary cooling solution ensures high performance while minimizing operational footprint.
Amazon’s In-House Hardware Strategy
AWS has long invested in developing its own infrastructure hardware. The cloud giant has designed custom chips for both general-purpose computing and machine learning, along with proprietary storage and networking solutions.
This approach reduces Amazon’s reliance on third-party vendors and allows for tighter integration across its services. By controlling both hardware and software layers, Amazon can optimize cost, performance, and scalability.
IRHX: Efficient, Scalable, and Eco-Conscious
Amazon designed the IRHX not just for performance but also for sustainability. Traditional liquid-cooling systems can consume significant amounts of water, especially at hyperscale. The IRHX minimizes water consumption and leverages efficient heat-transfer technology, making it more environmentally friendly.
In addition, the modular design allows AWS to deploy cooling selectively and flexibly across various data centers, increasing adaptability while reducing waste.
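The efficiency of any closed coolant loop like this comes down to the basic heat-transfer relation Q = ṁ · c_p · ΔT: the coolant flow needed scales with the heat load and inversely with the temperature rise you allow across the rack. The sketch below uses assumed figures for illustration; none are AWS design numbers:

```python
# Sketch of the heat-exchanger sizing relation Q = m_dot * c_p * delta_T.
# All figures are illustrative assumptions, not AWS design parameters.

CP_WATER = 4186.0      # specific heat of water, J/(kg*K)
heat_load_w = 100_000  # assumed 100 kW rack heat load
delta_t_k = 10.0       # assumed coolant temperature rise across the rack, K

# Coolant mass flow required to carry the heat away in a closed loop
flow_kg_per_s = heat_load_w / (CP_WATER * delta_t_k)
print(f"Required coolant flow: {flow_kg_per_s:.2f} kg/s")
# ~2.39 kg/s (roughly 2.4 L/s of water) circulating in a sealed loop --
# recirculated rather than evaporated, unlike open evaporative cooling.
```

The key sustainability point follows from the math: because the same water circulates continuously through the loop and the heat exchanger, consumption stays low compared with evaporative systems that lose water on every pass.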
Financial Impact and Strategic Advantage
The rollout of IRHX strengthens AWS’s competitive positioning at a time when AI workloads are driving massive cloud spending. In Q1 2025, AWS posted its highest operating margin since 2014, with custom infrastructure playing a crucial role.
By investing in in-house innovations like IRHX, Amazon improves performance, reduces vendor dependency, and increases profit margins, all while meeting rising customer demand.
Microsoft’s Parallel Efforts
While AWS pushes ahead, rival Microsoft is not far behind. In 2023, Microsoft developed its own hardware solutions to support the Maia AI chips, including specialized cooling systems named Sidekicks. These initiatives reflect a broader industry trend: the need for vertically integrated systems to manage next-gen AI infrastructure.
As leading cloud providers race to support increasingly complex models, proprietary hardware, including custom cooling, is emerging as a critical competitive differentiator.
The Rise of Dense AI Infrastructure
The AI boom has reshaped cloud hardware design. Unlike traditional servers, modern AI workloads require high-density clusters with massive interconnectivity. Nvidia’s NVL72 platform packs tremendous computing power into compact physical units. But without sufficient cooling, this density becomes a liability.
Amazon’s IRHX solves this issue elegantly, providing necessary heat removal without overhauling entire data centers.
Customers Gain Instant Access to Next-Gen AI Tools
With the launch of P6e instances, AWS customers can now harness Nvidia’s Blackwell architecture at scale. These instances suit use cases ranging from LLM training to scientific simulation and financial modeling.
More importantly, developers and enterprises no longer need to wait for new infrastructure. IRHX integrates directly into AWS’s existing network, accelerating deployment timelines and eliminating barriers to innovation.
AWS Demonstrates Infrastructure Leadership
This strategic move reaffirms Amazon’s leadership in cloud infrastructure. While others explore custom cooling and chip designs, AWS has successfully launched a production-ready solution that meets current and future demands.
IRHX shows Amazon’s commitment to innovation, cost efficiency, and customer-centric design.

Looking Ahead: AI’s Growing Infrastructure Demands
As generative AI models grow more complex, cloud providers will need even more sophisticated infrastructure. Cooling systems like IRHX represent just one piece of the puzzle. Networking, chip interconnects, memory design, and storage all play vital roles.
Amazon’s continued investment in infrastructure positions it well for the next wave of AI innovation.
Final Thoughts
The development and deployment of Amazon’s IRHX liquid-cooling system marks a significant milestone in the evolution of cloud infrastructure. By solving a critical problem for high-performance GPUs, AWS delivers more than a technical fix: it offers a scalable, efficient, and future-proof foundation for generative AI.
With Amazon Liquid Cooling for Nvidia GPUs, AWS sets a new standard for handling the power demands of advanced computing, showing that innovation often starts with tackling the heat.
