Building Standards for AI Infrastructure

At the AI Infra Summit in Santa Clara, leaders from NJFX, CoreSite, OpenAI, Actnano, and Cirrascale examined how liquid cooling, latency, and connectivity will shape the next generation of AI infrastructure. Their consensus: purpose-built, carrier-neutral data centers are critical as AI enters production mode.
Santa Clara, CA – Artificial intelligence may be advancing at breathtaking speed, but the limiting factor is no longer just compute power. It’s infrastructure. At the AI Infra Summit in Santa Clara, executives from NJFX, CoreSite, OpenAI, Actnano, and Cirrascale tackled the pressing challenges of scaling AI responsibly.
The discussion, moderated by Dave Driggers, CEO and CTO of Cirrascale Cloud Services, brought together:
- Gil Santaliz, CEO & Founder of NJFX
- Eric Dela Pena, Director of Sales Engineering at CoreSite
- Reza Khiabini, Member of Technical Staff at OpenAI
- Taymur Ahmad, Founder & CEO at Actnano
Together, they examined how colocation providers, new facility design, and industry standards will define the next stage of AI infrastructure.
By 2026, when NVIDIA’s next-generation chips are released, liquid cooling will no longer be optional—it will be the baseline.
“We’re going to need a new level of density in these facilities—mixing and matching workloads, but with hardened infrastructure,” said Dave Driggers of Cirrascale. “Telcos eventually set those standards; the difference is we have to get there much faster than they did.”
Eric Dela Pena of CoreSite emphasized the collective responsibility: “We’re really going to gain adoption on all of this as a community. We need to settle on what is going to be the standard—and how we come together to build data centers capable of supporting the future of AI workloads.”
How Inference Is Shaping Infrastructure
Inference workloads, unlike training, are highly latency-sensitive. To deliver real-time responses, compute must be pushed closer to users and data sources. This shift is giving rise to Inference Optimal Locations (IOLs).
“Every millisecond matters,” said Reza Khiabini of OpenAI. “You can’t ship inference halfway across the country and expect real-time performance. We need infrastructure close to the edge, near the users.”
Gil Santaliz of NJFX tied this directly to connectivity: “Inference is about connectivity and production data. The ecosystem sits in carrier hotels today, but we have to rethink how those facilities can responsibly support AI workloads.”
Carrier hotels have historically been vital for interconnection, but panelists agreed they are ill-suited for liquid-cooled AI at scale.
These multi-story, multi-tenant buildings face four critical challenges:
- Leak mitigation in shared cooling environments
- Structural load capacity limits for dense racks
- Power density requirements beyond design specs
- Physical and cyber security vulnerabilities
Santaliz used a memorable analogy: “If you own a home, you control everything. But if you live in a condominium, you share centralized systems and must be mindful of your neighbors. That’s the reality of multi-tenant infrastructure.”
Some operators are tethering expansions to existing facilities to buy time. But panelists stressed the limits of this approach.
Taymur Ahmad of Actnano warned of risks that cannot be ignored: “Cooling introduces challenges like condensation and leaks. If you don’t design with protective technologies, you risk outages that no operator wants to face.”
Driggers added: “You can’t build production AI on stopgaps. Tethering may help in the short term, but it’s not the long-term solution.”
The Path Forward: Purpose-Built, Connectivity-Rich Facilities
The future of AI infrastructure lies in purpose-built facilities designed for density, liquid cooling, and direct interconnection.
“We can support three or four unique requests; no one can handle 10 or 20,” said Santaliz. “We start customers at a megawatt and let them grow. It’s a boutique approach—doing an exceptional job for a few, not trying to be everything for everyone.”
Panelists agreed that North America will require next-generation AI facilities in four key regions:
- Northeast – to serve the main population corridor
- Southeast – close to fast-growing population hubs
- Southwest – balancing hyperscale demand and edge growth
- Northwest – providing resiliency and redundancy
“Liquid cooling should be the standard,” Ahmad added. “But at the edge, we may also need custom approaches. The design has to fit the workload and location.”
AI Entering Production Mode
The panel closed with a clear message: AI is no longer in the experimental stage—it is entering production mode.
“AI has left the lab,” said Driggers. “This is about scaling production workloads, and that means scaling infrastructure in a way that hasn’t been done before.”
Santaliz reinforced the point: “Inference is about connectivity, production data, and working with the masses. That’s why colocation providers matter more than ever.”
The conversation in Santa Clara underscored a turning point: AI is no longer just a software story—it’s an infrastructure story.
Cooling, density, and connectivity will define the winners. And colocation providers—once seen as landlords—are emerging as strategic partners in enabling AI’s future.
As the panel made clear, the future of AI will be written not just in code, but in concrete, steel, fiber, and water.