Power, Cooling, and AI’s Impact on Data Center Design

PTC'25 Panel Recap: Scaling GPU Infrastructure – How to Speed Up Without Losing Our Cool?

Feb 11, 2025

The hype around artificial intelligence (AI) in data centers has never been greater, and this panel discussion at the Pacific Telecommunications Council (PTC) Conference explored how the industry is transforming its infrastructure to drive AI. Industry leaders gathered to tackle the seismic shifts in data center infrastructure brought on by the surge in AI-driven workloads. Attendees were eager to hear how companies are preparing for the next wave of high-performance computing and what it takes to support GPU-driven infrastructure at scale.

Moderated by Michael Elias, Senior Equity Research Analyst at TD Cowen, the panel featured top executives shaping the future of digital infrastructure:

  • Chris Sharp, CTO, Digital Realty
  • Gil Santaliz, CEO, NJFX
  • Jaime Leverton, CEO, Jaime Leverton Ventures & Advisory
  • Bjorn Brynjulfsson, CEO, Borealis Data Center

Elias set the stage by directing the first question to Chris Sharp: What is required to run GPU-powered infrastructure, and how is Digital Realty adapting to meet the needs of high-performance computing?

Sharp began by acknowledging the unprecedented scale of AI’s growth and how it is fundamentally changing the way data centers operate. “The quantums that we’re talking about today, the momentum, the pacing—it’s astronomical,” he stated. “Digital Realty currently has 2.7 to 2.8 gigawatts of IT load deployed, with an additional three gigawatts coming online in rapid succession.”

This doubling of capacity underscores the massive infrastructure shift required to support AI as a workload. He attributed the surge to transformer models, the AI architecture behind ChatGPT, which have intensified power density and cooling demands. “Liquid cooling is now essential. Over 50% of Digital Realty’s capacity is modular, enabling densification up to 150 kW per rack for Nvidia deployments,” said Sharp.
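
For a rough sense of what that density implies for liquid cooling, the sketch below estimates the coolant flow needed to carry 150 kW away from a single rack using the standard heat-transfer relationship Q = m·cp·ΔT. The 150 kW figure comes from the panel; the water properties and the 10 °C supply-to-return temperature rise are illustrative assumptions, not Digital Realty specifications.

    # Back-of-the-envelope coolant flow for a 150 kW rack (illustrative only).
    rack_heat_kw = 150.0      # per-rack IT load cited on the panel
    delta_t_c = 10.0          # assumed supply-to-return temperature rise (deg C)
    cp_water = 4.186          # specific heat of water, kJ/(kg*K)
    density_water = 0.998     # kg per litre at roughly 20 deg C

    # Q = m_dot * cp * delta_T  ->  m_dot = Q / (cp * delta_T)
    mass_flow_kg_s = rack_heat_kw / (cp_water * delta_t_c)
    volume_flow_lpm = mass_flow_kg_s / density_water * 60

    print(f"~{mass_flow_kg_s:.1f} kg/s of water, roughly {volume_flow_lpm:.0f} L/min per rack")

At that assumed temperature rise, a single rack needs on the order of 200 litres of coolant per minute, which is why piping, pumps, and floor loading all change once densities reach this level.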

The shift toward AI infrastructure brings additional challenges. AI racks can weigh up to 5,000 pounds, requiring reinforced floor structures to support them. The demand for AI workloads has also increased cabling requirements fivefold, adding to infrastructure complexity and cost. Sharp added, “Network efficiency has become a critical factor, with InfiniBand imposing distance limitations that require precise architectural design.”

“AI infrastructure extends beyond compute power. Connectivity through carrier-neutral facilities and subsea cables is just as crucial in enabling low-latency AI workloads,” said Gil Santaliz, underscoring that secure, resilient connectivity is essential to AI deployments.

By facilitating high-capacity traffic between subsea and terrestrial networks, NJFX enables AI workloads to function with minimal latency and maximum efficiency. Without this seamless connectivity, even the most advanced AI models risk becoming isolated and ineffective, unable to deliver real-time insights where they are needed most. “Connectivity is key,” Santaliz emphasized. “NJFX sits uniquely between Virginia and Boston, offering direct access to four subsea cables connecting Denmark, Norway, Ireland, and Brazil. This strategic positioning enables us to provide AI inference as a platform for global collaboration.”

Santaliz noted that AI inference, which lets trained models deliver results in real time, is a space where it remains uncertain who will lead the charge. “The population has to get access to these language models that are trained for production,” he said. “We’re going to see smaller, five-megawatt edge deployments serving population centers of 100 million people within five milliseconds.”
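
To put the five-millisecond figure in perspective, the sketch below converts a latency budget into an approximate serving radius over fiber. The propagation speed of light in optical fiber (roughly 200 km per millisecond) is standard; treating the five milliseconds as a round-trip budget and ignoring switching and queuing delays are simplifying assumptions for illustration.

    # Rough latency-to-distance conversion for an edge inference site (illustrative).
    fiber_speed_km_per_ms = 200.0   # approx. speed of light in fiber (~2/3 of c)
    round_trip_budget_ms = 5.0      # latency target cited on the panel

    # A round trip covers the distance twice, so the usable radius is half.
    one_way_ms = round_trip_budget_ms / 2
    radius_km = one_way_ms * fiber_speed_km_per_ms

    print(f"~{radius_km:.0f} km serving radius before switching and queuing delays")

In other words, a five-megawatt inference site could in principle serve users within roughly a 500 km fiber radius, which is why Santaliz frames these deployments around major population centers.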

Deploying AI inference at this scale is not without challenges. “This isn’t a cheap sport,” Santaliz noted. “The cost of generators, transformers, and switchgear has doubled in the last 24 months due to supply chain issues. Bringing in five megawatts of power, plus the infrastructure to support liquid cooling, requires significant investment.”

Despite these hurdles, he believes a successful model will emerge. “Hyperscalers will keep inference within their existing campuses, while others may deploy in the New York metro area. The economics are staggering, but those who do it right will set the standard for the rest of the industry.” Santaliz also pointed to industry leaders like Digital Realty and Equinix as key players in this evolution, but emphasized that more capacity is needed. “The land grab has already happened—so now, where do you put these inference nodes?”

As AI continues to evolve, the panelists agreed that balancing compute power with strategic connectivity and infrastructure investment will be critical to meeting the demands of AI inference at scale.

The moderator then turned to Jaime Leverton, asking how organizations are navigating the infrastructure changes required for AI and high-performance computing. Leverton noted that many companies are seeking expert guidance rather than trying to figure it out in-house. “Things are moving so rapidly that it’s not possible to do this on your own unless you’re working on it day in and day out,” she explained. “I see most organizations looking for external expertise, which I believe is a smart approach.”

Leverton also mentioned the increasing migration of power resources from Bitcoin mining to the data center space. “I was recently the CEO of a large, publicly traded Bitcoin mining company, and I’ve noticed more capital and operators from that sector shifting into traditional data centers,” she said. “It’s an interesting trend that isn’t widely discussed, but it’s shaping the infrastructure landscape.”

Jumping back into the discussion, Gil Santaliz expanded on the importance of power sourcing for AI workloads. “The power demands of GPU infrastructure are driving workloads closer to the source of energy,” he noted. “Moving data is more cost-effective than moving energy, which is why we’re seeing shifts like workloads migrating from continental Europe to the Nordics, where power is more abundant.”

Bjorn Brynjulfsson described how industries are repurposing existing infrastructure for AI expansion. “In Finland, we’re seeing old paper mills transformed into high-capacity data centers, tapping into hundreds of megawatts of available power. These types of conversions are creating new opportunities,” he explained. “For example, in Iceland, we’ve upgraded older facilities to support 50 kW per rack using direct air cooling while maintaining US standards.”

As AI scales, infrastructure will continue to adapt, blending new and traditional approaches to ensure efficiency and sustainability.

Sharp provided insights into the future of AI-driven infrastructure, emphasizing the need for efficient cooling solutions and modular liquid cooling integration. “Sufficient allocation of capital is what wins,” he stated. “At the core of our business, we have to be mindful of technological obsolescence while ensuring scalability.” No single design works universally, requiring companies to adapt based on location and climate conditions. Sharp cited a DGX deployment in the Nordics, where free air cooling significantly improved power usage effectiveness (PUE). “What’s unique about that facility is the ability to integrate liquid cooling efficiently, something that remains rare at scale,” he added.
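
Since PUE is the metric Sharp points to, a quick illustration of how it is computed helps: PUE is total facility power divided by IT equipment power, so anything that shrinks cooling overhead, such as free air cooling in a cold climate, pulls the ratio toward 1.0. The numbers below are purely hypothetical and are not Digital Realty facility data.

    # Illustrative PUE comparison -- example numbers, not facility data.
    def pue(it_load_kw: float, cooling_kw: float, other_overhead_kw: float) -> float:
        """PUE = total facility power / IT equipment power."""
        total = it_load_kw + cooling_kw + other_overhead_kw
        return total / it_load_kw

    # Hypothetical 10 MW IT load: mechanical chillers vs. free air cooling.
    print(f"Chiller-based cooling: PUE = {pue(10_000, 4_000, 1_000):.2f}")  # 1.50
    print(f"Free air cooling:      PUE = {pue(10_000, 1_000, 500):.2f}")    # 1.15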

Sharp emphasized that AI is not just a technology; it is transforming every industry. “Enterprises that embrace AI will gain a major competitive edge,” he said. “Those that fail to adapt will struggle as AI reshapes workflows and business models.” He compared AI’s rise to the SaaS revolution, predicting that enterprises will increasingly buy AI-driven services rather than develop in-house solutions.

Sharp also pointed to private AI deployments as a growing trend, with companies leveraging AI to gain deeper insights from their vast datasets. “Hyperscalers are leading in GPU deployments, but enterprises are realizing the importance of controlling their own data,” he explained. “We’ve moved from data lakes to data oceans, and having access to those insights is a major differentiator.”

With AI models becoming more sophisticated, the discussion underscored the importance of strategic cooling, efficient data management, and long-term infrastructure investments to support AI’s rapid expansion.

Shifting to Brynjulfsson, the moderator asked how data centers can plan for a 20-year lifespan amid accelerating rack densities. He acknowledged the challenge of making long-term infrastructure decisions in a rapidly changing environment. “It’s tricky. You’re making investments with long-term impacts, and we’re currently building a solution targeting 130 kW per rack on the DLC side,” he explained. “However, people are already talking about 250 kW per rack, so we have to think ahead.”

When asked if planning for a 20-year timeframe is even realistic, Bjorn acknowledged that some infrastructure will remain, but upgrades will be inevitable. “We previously optimized for blockchain in 2019, using high power in a small footprint,” he shared. “But we knew we’d transition to HPC eventually, so we made our facilities upgradeable. We didn’t anticipate going beyond 150 kW per rack at the time, but we’ve had to add more busbars and power feeds.”
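
One way to see why rising rack densities force extra busbars and power feeds is to look at the current a single rack draws. The three-phase power formula below is standard; the 415 V distribution voltage and 0.95 power factor are illustrative assumptions, not Borealis figures.

    import math

    # Approximate per-rack current for a balanced three-phase load (illustrative).
    def rack_current_amps(rack_kw: float, line_voltage_v: float = 415.0,
                          power_factor: float = 0.95) -> float:
        """I = P / (sqrt(3) * V_line * PF)."""
        return rack_kw * 1000 / (math.sqrt(3) * line_voltage_v * power_factor)

    for kw in (50, 150, 250):
        print(f"{kw:>3} kW rack -> ~{rack_current_amps(kw):.0f} A at 415 V three-phase")

Moving from 50 kW to 250 kW per rack roughly quintuples the current each rack draws, which quickly exhausts the distribution capacity a facility was originally built with.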

Brynjulfsson noted that in some cases it may make more sense to build from scratch rather than continually upgrade existing facilities, making planning a crucial part of any data center evolution strategy.

A critical issue emerged: the strain on utilities to meet the surging power demands of AI workloads. The industry is facing a paradox: while there is a significant land grab for AI deployments, power availability is the real bottleneck. Speculative demand for sites without secured power risks stalling genuine, scalable AI infrastructure projects.

The key takeaway? If you have power and an operational facility, you’re in a strong position. Securing generators, transformers, and utility connections has become a multi-year process, and companies that fail to plan ahead may find themselves unable to execute AI deployments on schedule.

The industry must balance power constraints, infrastructure preparedness, and software optimizations to enable AI’s next phase. The future will likely see greater integration between AI inference and training models, shifting workloads dynamically based on available capacity.

“2024 has been a year of rapid land acquisitions, but 2025 will be defined by efficiencies—determining where AI inference should live and how it should scale,” Sharp added.

Panelists emphasized that AI inference is still in its infancy, and its architecture is evolving rapidly. No two models operate the same way, and as inference workloads become more specialized, they will require tailored infrastructure solutions. AI’s future will not be a one-size-fits-all model but rather a mix of edge inference, centralized processing, and optimized power distribution.

 

 

 
