
The Fiber Exchange Redefining Global Network Strategy

Global Network Strategy

Connecting Cable Landing Stations Is the Future of Global Network Strategy

The cable landing station is no longer a passive endpoint. It has become the defining infrastructure node of the AI era — and the carriers who recognize this are building tomorrow's network architectures today.

Updated: April 2026
Category: Infrastructure & Strategy
Author: NJFX Editorial
5 subsea cable systems at NJFX Wall, NJ (up from 4 in 2023)
35+ global & domestic network operators (carrier-neutral ecosystem)
10 MW dedicated power milestone reached (10th anniversary, 2026)
$900B global AI infrastructure spend by 2029 (IDC forecast)

From access point to infrastructure core

In today's hyper-connected world, the role of the cable landing station is transforming at a pace the industry has never seen before. No longer simply an access point for undersea cables, the CLS is emerging as a critical hub that networks increasingly rely on for primary connectivity — and now, for AI-era compute delivered at the network edge.

Recent industry analyses reinforce this shift, underscoring how direct interconnection at CLSs drives lower latency, enhanced resiliency, and cost efficiency. At NJFX, we have always been forward-thinking. As the first carrier-neutral CLS and colocation facility in the U.S., we've been empowering companies with direct, secure access to major subsea cables spanning North America, Europe, South America, and the Caribbean.

That model is now accelerating dramatically. Networks are beginning to leverage CLSs as their primary point of interconnection — while still using other Points of Presence to ensure network diversity and redundancy. And the newest wave of demand isn't purely connectivity. It's compute, inference, and AI.

The cable landing station is no longer just a network access point — it's becoming the infrastructure core for tomorrow's digital economy.

Gil Santaliz, CEO, NJFX

A fundamental shift — confirmed by 2025–2026 developments

The trajectory we've long championed at NJFX is now being validated at a global scale. Data Center Dynamics calls this the era of "dynamic scalability, strategic redundancy, and adaptive infrastructure" — where CLSs evolve from interconnection points into foundational nodes of integrated global network infrastructure.

The industry is also witnessing a decisive move toward open, neutral CLS models. Traditionally, single-operator control of a landing station created barriers and bottlenecks. That model is giving way to carrier-neutral facilities that allow multiple network providers to share infrastructure, fostering competition, reducing costs, and enabling far more dynamic interconnection options.

The hybrid CLS emerges

The most transformative development of 2025 is the rise of the hybrid cable landing station — facilities that integrate subsea connectivity with AI processing capabilities, enabling localized compute power right at network entry points. This evolution supports the surging demand for distributed inference deployments, moving beyond traditional hyperscaler-only models.

Hybrid CLSs reduce the need for long-haul data transport, cutting latency for AI-driven applications while optimizing bandwidth utilization. This is exactly the model NJFX has spent a decade building toward — and why our campus in Wall, NJ is exceptionally well-positioned for what comes next.

Global AI Infrastructure Spend $334B in 2025

Exceeding $900B by 2029 — driving massive new demand for subsea edge infrastructure.
IDC Worldwide Quarterly AI Infrastructure Tracker.

Recent milestones powering the future

The last eighteen months have brought a series of landmark developments to our campus — each a concrete expression of the CLS-as-infrastructure-core thesis playing out in real time.

January 2025

Leading European bank joins the NJFX ecosystem

A major multinational institution with a presence spanning 65 countries selected NJFX to enhance cloud connectivity and explore advanced AI applications.

Financial Services
2025

Boldyn deploys Ciena WaveLogic 6 at NJFX

Delivering wavelength services scalable to 1.6 Tb/s — the bandwidth and optical networking capabilities essential for AI-intensive workloads at the subsea edge.

AI Infrastructure
Mid-2025

Colt connects Apollo South to NJFX

Colt Technology Services extended its global fiber reach directly into New Jersey, roughly 3 miles from the Apollo South cable landing station — setting new standards for network resilience.

Connectivity
2026

NJFX celebrates 10 Years / 10 MW

A decade as North America's first carrier-neutral CLS campus, marked by reaching 10 MW of dedicated power capacity — purpose-built for the AI infrastructure era.

Anniversary Milestone

Hyperscalers are betting on the subsea edge

The largest technology companies in the world are making enormous bets that validate the CLS-centric network thesis. Google's America-India Connect initiative — part of a $15 billion AI infrastructure investment through 2030 — is connecting four continents through subsea cable infrastructure, with open cable landing stations positioned as strategic hubs for a new global data corridor.

This is the exact architecture NJFX pioneered on the U.S. East Coast: multiple independent subsea systems, diverse terrestrial backhaul, and a carrier-neutral environment where global and domestic networks interconnect freely — without recurring cross-connect fees, without backhaul bottlenecks, and without single points of failure.

As data volumes skyrocket and real-time AI applications become the norm, what was once a forward-looking thesis has become urgent infrastructure policy. The network industry's center of gravity is moving to the edge — to where the cables land.

Direct interconnection at cable landing stations represents more than just a trend. It is a fundamental shift in network architecture and strategy.

NJFX, Original Publication

Why direct CLS interconnection wins

The strategic benefits of connecting directly at the cable landing station — rather than backhauling through legacy carrier hotels in NYC, Ashburn, or Miami — are clearer than ever in 2026.

Lower latency for AI workloads

Proximity to diverse network nodes minimizes delays — mission-critical as AI inference applications require sub-millisecond response times at scale.

Improved network resilience

Direct CLS interconnection combined with diversified PoPs ensures robust continuity — eliminating single points of failure that plague legacy city-center routes.

Direct global market access

Immediate connectivity to five subsea cables spanning North America, Europe, South America, and the Caribbean — all from a single carrier-neutral campus.

Cost-effective scalability

Reducing backhauling and eliminating recurring cross-connect fees translates directly into economic benefits for expanding network infrastructure at any scale.


A decade of building toward this moment

2016

NJFX opens North America's first carrier-neutral CLS campus

Wall Township, NJ — the first colocation facility in the U.S. designed specifically at a cable landing station, with carrier-neutral access from day one.

2019

First-ever CLS-to-CLS terrestrial connection

Windstream connected NJFX to Telxius at Virginia Beach — the first carrier-neutral CLS-to-CLS route on the U.S. East Coast, with access to over 500 Tbps of combined subsea capacity.

2024

NJFX acquires Subcom bore pipe assets

NJFX acquired Subcom's bore pipe assets, securing ownership of our front-haul systems. This milestone delivers greater resilience and trust for our customers with direct control over critical last-mile infrastructure.

2025

AI-ready infrastructure buildout

10 MW water-cooled data hall. Ciena WaveLogic 6 at 1.6 Tb/s. European bank and Colt partnerships. Fifth subsea cable connected. NJFX positioned as a hybrid CLS for distributed AI inference.

2026

10 Years / 10 MW — the AI infrastructure decade

A decade of carrier-neutral CLS leadership and a 10 MW power milestone — at the exact moment a global AI infrastructure boom puts the subsea edge at the center of digital economy strategy.

The CLS is the infrastructure of the AI decade

As data volumes skyrocket and real-time applications become the norm, the network industry is undergoing rapid — and irreversible — evolution. Direct interconnection at cable landing stations is a fundamental shift in how global network infrastructure is designed, deployed, and operated.

The next generation of CLS facilities will integrate AI processing with subsea connectivity, serving as anchor points for distributed inference at a global scale. They will reward the carriers and enterprises who chose to co-locate here early — bypassing legacy chokepoints in favor of a more direct, resilient, and economically sound model.

NJFX leads this transformation. Our campus in Wall, NJ — 64 feet above sea level, Category 5 hurricane-resistant, DHS-designated Protected Critical Infrastructure — is where the subsea edge meets the terrestrial backbone, and where the AI infrastructure decade is being built, one connection at a time.


NJFX Editorial Team

Published by NJFX — North America's first carrier-neutral Cable Landing Station campus, Wall Township, New Jersey. Updated April 2026.


NJFX at ITW 2026

ITW 2026

Join NJFX at the ultimate convergence of the global connectivity and digital infrastructure ecosystem in National Harbor.

Catch the Team After the Event:

Events

NJFX at ITW 2026

Our CEO and GM will be in NYC networking with industry thought leaders, unleashing the Power of AI in Enterprise Networking and Security Infrastructure.

Meet Us »

PTC’26

Our CEO and GM will be in NYC networking with industry thought leaders, unleashing the Power of AI in Enterprise Networking and Security Infrastructure.

Meet Us »

AI Summit New York

Our CEO and GM will be in NYC networking with industry thought leaders, unleashing the Power of AI in Enterprise Networking and Security Infrastructure.

Meet Us »


Pilot Fiber Expands Enterprise Connectivity at NJFX’s Premium Cable Landing Station Campus

Press Release

Pilot Fiber Expands Enterprise Connectivity at NJFX's Premium Cable Landing Station Campus

Partnership brings high-capacity IP transit, cloud connectivity, and wavelength services to one of the most strategically connected interconnection hubs in North America.

Pilot Fiber: 1,000+ on-net buildings (NY metro)
Pilot Fiber: 47 data centers (NY & NJ)
NJFX: 250+ Tbps live traffic through the CLS
NJFX: 100% uptime at the CLS for 10 years

Pilot Fiber, a leading provider of high-capacity connectivity solutions for businesses across the New York metro region, today announced the availability of its enterprise network services at the NJFX Cable Landing Station campus in Wall Township, New Jersey — one of the most advanced carrier-neutral interconnection hubs in North America. The partnership follows Pilot's recent acquisition of ExteNet Systems' enterprise fiber business, supporting the company's commitment to connecting customers throughout New York and New Jersey.

Through its presence at NJFX, Pilot now delivers a range of high-capacity connectivity solutions to enterprises, cloud providers, and network operators co-located within the facility.

Services Now Available at NJFX
IP Transit · Cloud Connect · Wavelength Services · Ethernet Transport

The expansion reflects growing enterprise demand for high-bandwidth, low-latency connectivity as organizations increasingly move data-intensive workloads between data centers, cloud environments, and global networks.

"Today's data infrastructure is being pushed by entirely new classes of workloads — from generative AI and real-time analytics to financial trading and media distribution. By partnering with NJFX, we're ensuring that enterprises operating in one of the most strategically connected facilities in North America have access to Pilot's diverse and resilient network that connects over 1,000 on-net buildings throughout the New York metro area."

— Joe Fasone, CEO  ·  Pilot Fiber

Located in Wall Township, New Jersey, the NJFX campus serves as a critical gateway between North America, Europe, and South America. More than 250 terabits per second of live traffic flows through the campus, reflecting the rapid shift toward high-capacity environments and the accelerating adoption of 400G network infrastructure.

The carrier-neutral cable landing station provides direct access to multiple subsea cable systems — including Seabras-1 and HAVFRUE/AEC-2 — and dozens of global network operators, creating a dense interconnection ecosystem that reduces latency and enables greater control over routing, performance, and network resiliency.

"NJFX was built to create the most resilient and interconnected cable landing station ecosystem in the United States. Pilot's presence strengthens the diversity of network providers available within the campus and expands connectivity options for the global enterprises, financial institutions, and cloud platforms operating here."

— Gil Santaliz, CEO  ·  NJFX

Pilot's expansion into NJFX reinforces its continued investment in supporting enterprise connectivity across the region. As of March 2026, Pilot's network is connected to 13 data centers in New Jersey and an additional 34 across the New York metro area.


About the Companies
About Pilot Fiber

Pilot Fiber delivers enterprise-grade connectivity to businesses across the New York metro area through complete network ownership and in-house operational expertise. Since 2014, the company has built a network of 300+ miles of modern fiber infrastructure throughout New York City, serving 3,500+ businesses across 1,000+ buildings with a full suite of services: Dedicated Internet Access, Ethernet Transport, Dark Fiber, Wavelength, and IP Transit. Pilot's end-to-end ownership model delivers 5–15 day installations in on-net buildings — versus the industry standard of 30–90 days — with transparent pricing, 24/7 support, and no contracts required.

pilotfiber.com →
About NJFX

NJFX is a Tier 3, carrier-neutral Cable Landing Station and colocation campus in Wall Township, New Jersey, where subsea and terrestrial networks converge. The campus provides diverse, low-latency connectivity with direct access to Europe, South America, and the Caribbean through multiple subsea systems and more than 25 terrestrial routes. NJFX supports a highly interconnected ecosystem of over 35 global network operators, cloud providers, and enterprises. The facility is built for resiliency, with four diverse points of entry, dual utility feeds, and N+1 redundancy across critical systems. With a decade of 100% uptime, NJFX delivers secure, scalable infrastructure where compute meets connectivity — including a new 10MW AI-ready data hall now expanding on campus.

njfx.net →


The Inference Inflection Point Is Here

NJFX · Field Intelligence · NVIDIA GTC 2026

The Inference Inflection Point Is Here.

Jensen Huang said it plainly at NVIDIA GTC in San José: AI is leaving the training era behind. What comes next runs on telecom infrastructure — and NJFX is built for exactly this moment.

Wall, NJ · March 2026 · NJFX Team
We just watched the AI industry announce its next chapter, and it runs on fiber, power, and proximity to the network edge.

NVIDIA GTC is where the industry maps its future. This year's message was unmistakable: the massive investment in AI training is paying off, and now the work shifts to inference at scale — getting AI out of centralized data centers and into the distributed infrastructure where it can actually reach users, devices, and data in real time.

For NJFX, a carrier-neutral cable landing station at the intersection of transatlantic subsea fiber and the U.S. backbone, the implications are immediate. The infrastructure that delivers AI inference at global scale has to start somewhere. It starts here.

Telecom is one of the world's infrastructures. It will be completely reinvented as a future AI infrastructure platform.
Jensen Huang, CEO — NVIDIA GTC 2026

NVIDIA Is Rebuilding the World's AI Infrastructure — Starting with Telecom

Huang didn't position telecom as a supporting player. He called it a foundational partner — an industry that will be completely reinvented as the platform through which AI inference reaches the world. The mechanism for this reinvention is something NVIDIA is calling AI Grids: a distributed inference architecture that turns existing network real estate into active compute infrastructure.

NVIDIA has already partnered with six network operators — primarily in the United States — to begin building out these grids. Some are starting by activating existing wired edge sites. Others are layering in AI-RAN and AI factory deployments. The path varies, but the destination is the same: a geographically distributed AI inference fabric, linked by high-speed data connections, running closer to the end user than centralized cloud ever could.

The key requirement for any node in this system? High-speed, low-latency connectivity. That's not a future investment for telecom — it's what the industry has built over decades. And it's exactly what NJFX was designed to provide.

The Inference Inflection Point

We have moved past the era of building AI models and into the era of running them everywhere. Inference — not training — is where the next decade of AI value is created and delivered.

AI Grids: Distributed by Design

NVIDIA's AI Grid architecture links any network node — fixed, wireless, or CDN edge — into a unified inference fabric connected by high-speed data links telecom companies already own.

Monetizing What Already Exists

Operators can begin lighting up existing wired edge sites as AI grid nodes today — turning stranded real estate, power, and connectivity into active, revenue-generating infrastructure without building from scratch.

Telecom at the Center

For the first time, telecom is not the pipe that carries AI — it is the platform that runs it. Network operators are being repositioned as the physical layer of the global AI inference economy.

Built at the Edge of the Network. Ready for the AI Grid.

NJFX is not a traditional data center. It is a carrier-neutral cable landing station — the physical point where transatlantic subsea fiber meets the U.S. network, and where the global internet literally comes ashore.

Most colocation facilities are inland. They depend on others to connect them to the world. NJFX is the connection. That distinction, which has always made NJFX a critical node in the global connectivity ecosystem, now makes it a natural anchor point for distributed AI inference infrastructure.

The AI Grid model NVIDIA is building requires nodes connected by the highest-capacity, lowest-latency links available. Subsea cable systems are those links. When AI inference needs to move between the Americas, Europe, and beyond — it moves through Wall, New Jersey, or it doesn't move at all.

NJFX offers carrier-neutral access, open interconnection, Tier 3 data center reliability, and direct proximity to the cables that carry the world's data. This is precisely what NVIDIA describes as the foundation of an AI Grid node — and NJFX has spent a decade building it.

10 yrs of carrier-neutral subsea interconnection — longer than most AI grid conversations have existed
10 MW of power capacity at the cable landing station — the energy AI inference demands, already in place
6+ network operators in NVIDIA's AI Grid program — NJFX serves the infrastructure they all depend on
1 carrier-neutral landing station at this location — an irreplaceable position in the global network

This Is Not a Future Opportunity. It Is Now.

The shift Huang described at GTC is already underway. Network operators are making decisions right now about where their AI grid nodes will be, which subsea routes they'll rely on, and which interconnection facilities anchor their distributed inference strategy.

NJFX is the answer to those questions for the transatlantic and LATAM markets. The infrastructure exists. The fiber is live. The power is available. The interconnection ecosystem is carrier-neutral and open. What NVIDIA described as the ideal foundation for an AI Grid node is not a vision at NJFX — it is the current state of operations.

The inflection point isn't coming. It's here. And it runs through Wall, New Jersey.

Where the internet begins is where AI inference begins. NJFX is that place, and we're ready to connect the next era of AI infrastructure to the world.

Talk to the NJFX Team →


Not all Data Centers are Created Equal

Infrastructure Intelligence

Not All Data Centers Are Created Equal

From legacy converted buildings to purpose-built campuses, high-density AI facilities, and the edge — understanding what separates a data center from a true interconnection hub is the difference between good infrastructure and great strategy.

By the NJFX Team · Infrastructure Strategy · 7 min read

The word "data center" gets used to describe everything from a room full of servers in a retrofitted warehouse to a purpose-built, carrier-neutral interconnection hub sitting directly on a live submarine cable. The difference matters enormously to how your data moves, how your latency performs, and how resilient your network actually is.

The data center industry has evolved rapidly, but not uniformly. Today, the market encompasses a wide and often misunderstood range of facility types — each designed with different priorities, different customers, and radically different infrastructure DNA.

Understanding where a facility sits on this spectrum isn't just academic. For enterprises, cloud providers, content networks, and carriers, choosing the wrong type of facility can mean paying for capacity that doesn't actually serve your traffic the way you think it does.

Gen01

Converted & Traditional Facilities

The Legacy Retrofit

The earliest era of commercial data centers wasn't purpose-built — it was improvised. Office buildings, industrial warehouses, and even old telephone switching stations were retrofitted with raised floors, basic cooling systems, and whatever connectivity could be pulled in from the street. These facilities met the demand of a market that hadn't yet decided what a data center should actually be.

Many of these converted facilities are still operating today. They tend to serve smaller regional colocation markets, enterprise tenants with legacy infrastructure commitments, or cost-sensitive workloads that don't require high power density or carrier diversity. The economics can work — older buildings often carry lower lease costs — but the infrastructure trade-offs are real and compounding.

Power distribution in converted buildings is typically constrained by the original electrical design of the structure. Cooling is often bolted on rather than integrated, resulting in hot spots, inefficiency, and limited headroom for density upgrades. Connectivity depends entirely on which carriers chose to extend fiber to the location, often leaving tenants with limited provider choice.

Strengths: low entry cost, established locations, suitable for low-density workloads. Trade-offs: aging power infrastructure, limited cooling headroom, constrained carrier diversity, poor scalability.

Best Suited For

  • Legacy enterprise applications
  • Low-density storage workloads
  • Cost-sensitive regional colocation
  • Disaster recovery and archival

Watch Out For

  • Single-carrier connectivity risk
  • Power density ceilings below 5kW/rack
  • Cooling inefficiency and PUE overhead
  • Limited physical expansion options
Gen02

Purpose-Built Colocation

The Enterprise Standard

The purpose-built colocation facility became the defining model of the modern data center industry. Designed from the ground up with data center operations in mind, these facilities introduced engineering rigor that retrofitted buildings simply couldn't replicate: N+1 or 2N power redundancy, precision cooling with structured airflow management, Tier III and Tier IV uptime certifications, and physical security infrastructure built to standards far beyond a commercial office environment.

Purpose-built colos became the backbone of enterprise IT infrastructure through the 2000s and 2010s. Enterprises could outsource the complexity of running their own data rooms while maintaining control over their hardware, their software, and their connectivity choices. Carrier-neutral facilities in this category — where multiple providers compete for cross-connect business — became particularly valuable, as they enabled tenants to build multi-provider network architectures without owning the real estate themselves.

The standard power density for purpose-built colo has historically ranged from 5 to 10 kW per rack, with higher-density zones available in newer builds. This works for most enterprise workloads, but the arrival of GPU-accelerated computing has exposed the limits of facilities not designed for the thermal profiles of AI infrastructure.

Strengths: Tier III / IV redundancy, carrier-neutral options, standardized power & cooling, proven security compliance. Trade-offs: power density limits for AI, geographic clustering in major metros, often far from subsea entry points.

Best Suited For

  • Enterprise hybrid IT infrastructure
  • Multi-carrier network interconnection
  • Regulated industries requiring compliance
  • Dedicated hosting and managed services

Watch Out For

  • Connectivity path to global subsea networks
  • Power headroom for evolving density needs
  • Latency to end users in non-metro markets
  • Lock-in to metro fiber ecosystems
Gen03

High-Power & Hyperscale Facilities

Built for the AI Era

The emergence of large language models, GPU clusters, and cloud hyperscalers fundamentally changed what the market demands from data center infrastructure. High-power and hyperscale facilities represent the industry's response: massive campuses designed around compute density rather than multi-tenant flexibility, capable of delivering 30 to 100+ kW per rack and supporting liquid cooling architectures that air-cooled facilities simply cannot match.

Hyperscale facilities — the campuses operated by or built for AWS, Microsoft, Google, and their peers — are purpose-engineered for scale. They operate on campus models where power, cooling, and networking are vertically integrated and optimized for the specific workloads running inside. The economics work at hyperscale because the volume of compute justifies the capital investment in custom infrastructure.

High-power AI-oriented colocation is a newer category: facilities built with the power density and cooling capacity of hyperscale infrastructure, but operated as multi-tenant colocation environments. These facilities serve AI labs, ML engineering teams, and enterprises running inference workloads at scale. They are an important and fast-growing segment — but like all data centers, they depend on the connectivity infrastructure that gets traffic to and from their compute. That connectivity story is often the weakest part of the pitch.

Strengths: 30–100 kW+ per rack, liquid cooling capable, massive scalability, purpose-built for AI workloads. Trade-offs: often single-tenant or hyperscaler-controlled, limited carrier-neutral ecosystem, connectivity frequently an afterthought.

Best Suited For

  • GPU cluster training and inference
  • Cloud hyperscaler deployments
  • Large-scale ML model hosting
  • High-performance compute (HPC)

Watch Out For

  • Ingress/egress costs at hyperscaler scale
  • Limited control over network routing
  • Distance from subsea entry points
  • Power procurement and grid dependency
Gen04

Edge & Local Data Centers

Proximity at the Last Mile

Edge computing emerged as an answer to a latency problem that centralized data centers — no matter how well-designed — cannot solve by themselves: the distance between where data is processed and where users actually are. Edge data centers are intentionally distributed, positioned in secondary and tertiary markets, inside mobile network operator facilities, and increasingly at the cell tower level to bring compute closer to the end point of traffic.

For use cases where milliseconds matter — content delivery, real-time gaming, autonomous vehicle communication, industrial IoT, and augmented reality — edge infrastructure is not optional. A hyperscale facility in Northern Virginia cannot serve a user in Denver with sub-10ms round-trip times. An edge node in Denver can. The distributed model sacrifices the economies of scale that large campuses enjoy, but it trades that for the latency performance that certain applications require.

The strategic challenge with edge infrastructure is backhaul: how does data get from the edge node to the core network, and from the core network to the global internet? Edge nodes are, by definition, downstream from the primary network. They depend on the health and capacity of the networks connecting them back to the backbone — which means the quality of the backbone matters enormously, even if users never see it directly.

Strengths: minimal end-user latency, distributed geographic coverage, CDN and real-time application support. Trade-offs: limited compute density, dependent on backhaul quality, no direct subsea or backbone access, high operational complexity at scale.

Best Suited For

  • CDN node deployment and caching
  • Real-time gaming and streaming
  • Autonomous systems and IoT
  • Mobile network operator (MNO) integration

Watch Out For

  • Backhaul dependency and single-path risk
  • Inconsistent power and cooling standards
  • Fragmented management across nodes
  • Limited carrier choice at edge locations

Each of these facility types serves a real market need. But a critical question often goes unasked: where does your data actually come from, and how does it get there?

Most conversations about data centers focus on what happens inside the building — power density, cooling efficiency, uptime certifications. Fewer conversations address what happens outside: how fiber arrives, from where, and how many independent paths exist to the global internet.

For most data centers, connectivity is a secondary consideration. Carriers bring fiber in, operators sell cross-connects, and customers assume the network is "good enough." In many cases, it is. But for organizations whose traffic is genuinely global — reaching users across the Atlantic, into Latin America, or across Southeast Asia — the origin of that fiber changes everything.

Approximately 97% of intercontinental internet traffic travels over submarine fiber optic cables. The landing stations where those cables come ashore are among the most strategically significant pieces of telecommunications infrastructure on the planet — yet most data centers have no direct relationship with them at all.

A traditional cable landing station is exactly what it sounds like: the physical point where a submarine cable comes ashore. Historically, these were passive — protected, secured, and largely inaccessible. The cable arrived, connected to a carrier's network, and traffic flowed out through that single relationship.

That model worked when the internet was simpler.

Today's global traffic demands require direct, competitive access to the subsea layer — not just the ability to buy bandwidth from a carrier who happens to have a relationship with a cable. The difference between reaching a cable station passively through a carrier and sitting directly on that infrastructure is the difference between renting a phone and owning the switchboard.

Most data centers connect to the internet.
NJFX sits at the point where the internet begins.

NJFX is not a traditional data center. It is not a cable landing station in the passive, legacy sense. It is something the industry hadn't fully defined before it existed: a carrier-neutral, active interconnection hub built directly on a live submarine cable landing station where subsea meets land, and where carriers, content providers, cloud networks, and enterprises can connect directly, competitively, and on their own terms.

Located in Wall Township, New Jersey, NJFX sits at the landing point for multiple submarine cable systems including the highest-capacity routes connecting North America to Europe and Latin America. But unlike a legacy cable landing station, NJFX operates as an open, neutral exchange where customers can cross-connect directly to any cable system, any carrier, or any other party in the facility.

10+ Submarine Cable Systems
Tier III Purpose-Built Facility
100% Carrier Neutral
NJ: Lowest-Latency Route to Europe

This matters because subsea capacity is not a commodity when you can access it directly. Bandwidth purchased from a carrier who has already traversed cable infrastructure carries overhead — in cost, in latency, and in dependency on that single supplier's routing decisions. At NJFX, participants access the cable systems directly, making their own routing decisions, negotiating their own capacity, and building genuinely resilient, multi-path global networks.

The rise of cloud computing led many organizations to believe that geography had been abstracted away. For applications where latency tolerances are wide and traffic stays on terrestrial networks, that assumption largely holds. But for global content delivery, real-time financial data, international voice and video, and any application where the Atlantic or Pacific is part of the path, physics still wins.

Light travels through fiber at roughly two-thirds the speed of light in a vacuum. That means every kilometer matters. New Jersey's geography — sitting directly on the Eastern Seaboard at the closest point between North America and Europe — isn't a marketing claim. It is a physical reality that translates directly into measurable latency advantage for every packet crossing that ocean.
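To make that concrete, here is a minimal sketch of the propagation-delay arithmetic. The ~1.47 group refractive index is a typical value for silica fiber, and the 6,000 km route length is an assumed illustrative figure for a New Jersey-to-Western-Europe subsea path, not a measured NJFX route.

```python
# Propagation-delay sketch (illustrative assumptions, not measured route data).
C_KM_PER_MS = 299_792.458 / 1_000   # speed of light in vacuum, km per millisecond
FIBER_GROUP_INDEX = 1.47            # typical silica fiber, roughly 2/3 the speed of light

def one_way_delay_ms(route_km: float) -> float:
    """One-way propagation delay over a fiber route of the given length."""
    km_per_ms_in_fiber = C_KM_PER_MS / FIBER_GROUP_INDEX   # ~204 km per millisecond
    return route_km / km_per_ms_in_fiber

assumed_transatlantic_km = 6_000    # assumed NJ-shore-to-Western-Europe subsea path
print(f"100 km of fiber: ~{one_way_delay_ms(100):.2f} ms one way")
print(f"{assumed_transatlantic_km} km of fiber: ~{one_way_delay_ms(assumed_transatlantic_km):.0f} ms one way, "
      f"~{2 * one_way_delay_ms(assumed_transatlantic_km):.0f} ms round trip")
```

At that speed, each kilometer of fiber adds roughly 5 microseconds of one-way delay, which is why shortening or removing backhaul shows up directly in round-trip numbers.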

Edge data centers have emerged to address the last-mile latency problem. High-power facilities have emerged to address the compute-density problem. Neither solves the global routing problem. NJFX addresses something different and more foundational: the point at which global traffic first touches land, and the terms on which it does so.

As AI workloads scale and become increasingly distributed, the strategic importance of where data enters and exits terrestrial networks only grows. The question isn't simply where to colocate servers. It's where to anchor your global routing strategy.

Hyperscale facilities offer compute density. Purpose-built colocation offers reliability and redundancy. Edge facilities offer proximity to users. NJFX offers something none of those provide on their own: direct, neutral access to the global subsea infrastructure that all of them ultimately depend on.

The most sophisticated global networks don't just choose where to put their servers. They choose where to control their connectivity — and they anchor that control at the point where subsea infrastructure meets the terrestrial network. That point is NJFX.

Not all data centers are equal. They weren't designed to be. A converted warehouse in a metro market serves a different purpose than a Tier 3 colocation campus, which serves a different purpose than a hyperscale AI facility, which serves a different purpose than a local edge node. Understanding these differences is the first step toward building infrastructure strategy that actually matches your traffic patterns and business requirements.

What NJFX represents is the layer below all of those — the interconnection foundation that makes global networks function. Unlike the passive cable landing stations of the past, NJFX is an active, open, and competitive environment where participants don't just arrive at the subsea layer. They own their place in it.

That's not a subtle distinction. For the networks that understand, it's a strategic advantage.

See Why the World's Networks Connect at NJFX

Learn how direct access to submarine cable infrastructure changes the economics and performance of global connectivity.

Connect With Our Team


Where Compute Meets Connectivity


Where Compute Meets Connectivity: Emerging Digital Infrastructure

10MW New Capacity Available
1–5MW Emerging Deployment Range
#1 First Liquid-Cooled CLS
35+ Carrier Partners On-Net
Infrastructure · March 2026 · 6 min read

For decades, digital infrastructure evolved in two parallel worlds. Data centers provided the compute. Networks delivered the connectivity.

Today, that separation is disappearing.

As artificial intelligence, algorithmic trading, and enterprise data processing accelerate, organizations are deploying compute infrastructure in the 1–5 megawatt range closer to the networks that move their data. The result is a new model — one where compute and connectivity operate as a single ecosystem rather than two independent layers.

Dimension | Isolated Model (Compute ≠ Connectivity) | Converged Model (Compute + Connectivity)
Deployment Size | Hyperscale-only or shared colocation | 1–5MW dedicated enterprise deployments
Network Access | Single-carrier, provisioned after siting | Carrier-neutral, on-net at point of compute
Latency Profile | High — data travels off-campus to exchange | Minimal — compute sits at the exchange point
Infrastructure Siting | Optimized for power cost and land availability | Optimized for network convergence and latency
Cooling Architecture | Air-cooled, legacy density limits | Air-cooled + liquid-cooled, 100 kW+ per rack capable
Operational Model | Two vendors, two contracts, two SLAs | Single campus, unified ecosystem

For years, organizations accepted the complexity of managing compute and connectivity as separate disciplines. That tradeoff made sense when workloads were predictable and network-bound tasks were the exception. Today's AI inference, algorithmic trading, and real-time analytics have inverted the equation — and a new class of infrastructure is emerging to match.


01 —

The 1–5MW Deployment Model Is Emerging

Historically, megawatt-scale deployments were associated almost exclusively with hyperscale cloud providers. But a new class of infrastructure users is reshaping where — and how — compute is being deployed.

These are not hyperscalers building hundred-megawatt campuses. They are financial institutions, trading firms, and enterprises running mission-critical workloads that demand both density and proximity to global networks.

  • Financial institutions are deploying AI clusters to analyze market data in real time — workloads where both compute density and sub-millisecond network access are non-negotiable requirements.
  • Trading firms are expanding capacity to support increasingly sophisticated algorithms. In this market, infrastructure location and latency are a direct competitive variable, not a secondary consideration.
  • Enterprises are building private AI environments to process proprietary datasets and run advanced analytics outside of shared public cloud — retaining control over performance, cost, and data sovereignty.

Many of these deployments are landing in the 1–5MW range — large enough to support meaningful GPU clusters, yet flexible enough for dedicated, purpose-controlled infrastructure. This is not the hyperscale market. It is something more strategic, and harder to replicate once built in the wrong location.

Key Takeaway: The 1–5MW deployment is not a stepping stone toward hyperscale. It is a deliberate strategic choice for organizations that need performance and control without the complexity of shared infrastructure.
02 —

Why Connectivity Now Matters More Than Ever

Compute power alone is no longer the defining factor of digital infrastructure. For industries like financial services and real-time analytics, the speed at which data moves between networks, exchanges, cloud environments, and users is equally critical.

This shift is driving a fundamental rethinking of where infrastructure is sited. Rather than positioning compute in isolated campuses optimized for power cost and land availability, organizations increasingly need their GPU clusters and analytics systems deployed where global networks converge — near major exchange points, cable landing stations, and direct fiber interconnects.

  • Latency is a revenue variable. In financial services, the distance between a compute cluster and a network exchange is measured in microseconds — and those microseconds translate directly to execution quality and trading performance.
  • AI inference is network-bound. Large-scale AI workloads require constant, high-throughput data movement between compute, storage, and end users. Placing GPU infrastructure far from network hubs creates a structural bottleneck that additional compute cannot resolve.
  • Hybrid cloud architectures demand proximity. Organizations running workloads across private infrastructure and cloud on-ramps benefit directly from placing compute near carrier-neutral facilities — reducing both latency and egress cost simultaneously.
  • Subsea connectivity is the global backbone. For organizations processing international data flows, proximity to a cable landing station provides access to the lowest-latency international routes — a structural advantage unavailable in inland or campus-based deployments.
Key Takeaway: Connectivity determines how fast your data moves and how much you pay to move it. Infrastructure sited at a network convergence point — rather than downstream from one — is a fundamentally different class of asset.
03 —

The Architecture That Brings Both Together

Until recently, the organizations best positioned to co-locate compute and connectivity were the hyperscalers — with the capital and scale to build dedicated campuses adjacent to major cable systems. For everyone else, compute and connectivity remained separate decisions, managed through separate vendors, with compounding operational complexity.

That model is changing. Purpose-built facilities at strategic network hubs are now making the converged architecture accessible to enterprise and financial services deployments in the 1–5MW range.

  • Carrier-neutral colocation allows organizations to cross-connect directly to dozens of network providers, cloud on-ramps, and CDN partners — removing single-carrier dependency and enabling competitive routing at the point of compute.
  • Direct subsea cable access provides the lowest-latency path to international markets, bypassing the transit hops that add cost and latency to commodity internet routes.
  • High-density liquid cooling enables GPU-dense deployments at 100 kW+ per rack — the prerequisite for modern AI and HPC workloads — within the same campus that hosts the network interconnects.
  • Unified operational environment eliminates the coordination overhead of managing compute at one location and network services at another, consolidating SLAs, support, and physical security into a single relationship.
Key Takeaway: The converged model is not a product feature — it is an architectural decision. Organizations that build compute at a network hub gain a structural performance and cost advantage that is difficult to reverse-engineer after the fact.

NJFX — The Convergence Is Already Taking Shape

10MW. One Campus. Compute and Connectivity in a Single Ecosystem.

At NJFX, the infrastructure described above is not a roadmap — it is operational reality. Our Wall Township, NJ campus — the world's first liquid-cooled Cable Landing Station — is entering its next phase of development with 10MW of available capacity designed to support a new generation of deployments that combine high-density compute with direct, carrier-neutral access to global networks.

Rather than forcing organizations to bridge the gap between separate compute and connectivity vendors, NJFX brings both together in a single carrier-neutral campus — with 35+ on-net carrier partners, direct subsea cable access, and liquid-cooled GPU infrastructure ready for deployments in the 1–5MW range and beyond. This combination does not exist anywhere else in the world.

  • Structural engineering completed — Rooftop steel grid designed and certified for industrial chiller load ratings
  • Steel installation complete — Heavy-gauge structural framework installed across the full 10MW footprint
  • Utility power confirmed — 10MW load letter secured from utility partner with near-term energization schedule
  • Carrier ecosystem active — 35+ on-net carrier partners with carrier-neutral meet-me room operational
  • Chiller installation underway — Industrial liquid cooling equipment delivery and commissioning in progress
  • Data hall energization — Full 10MW capacity ready for customer deployment
10MW Available Capacity
100kW+ Per Rack Liquid Cooling
35+ Carriers On-Net
#1 Liquid-Cooled CLS

The Infrastructure of Tomorrow, Available Today

The next phase of digital infrastructure will not be defined solely by larger data centers or faster networks. It will be defined by how effectively the two work together — and how close that integration gets to zero latency, zero distance, and zero operational complexity.

As demand for AI workloads, financial analytics, and enterprise compute continues to grow, the 1–5MW deployment at a strategic connectivity hub is quickly becoming a defining architecture of the modern digital economy. Organizations that make this decision early will build a structural performance advantage that is difficult to replicate after the fact.

The question is not whether compute and connectivity need to converge. They already are. The question is whether your infrastructure is positioned where that convergence is happening.

Ready to Learn More?

Contact the NJFX team to discuss how our purpose-built campus can support your compute and connectivity needs.

Talk to an Expert


Built for What’s Next: Traditional Digital Infrastructure vs. Purpose-Built Facilities

NJFX Cooling Infrastructure

Built for What’s Next: Traditional Digital Infrastructure vs. Purpose-Built Facilities

10 MW Data Hall Capacity
100 kW+ Per Rack Liquid Cooling
#1 First Liquid-Cooled CLS
35+ Carrier Partners On-Net

As AI and high-performance computing push computational demands to new extremes, one question has become unavoidable: is your existing data center built for what you need today — let alone tomorrow?

Metric | Traditional Facility | Purpose-Built Facility
Cooling Type | Air-cooled (CRAC/CRAH) | Liquid / immersion cooling
Max Rack Density | 5–10 kW per rack | 100 kW+ per rack
Floor Load Rating | 150–250 lbs/sq ft | 300–500+ lbs/sq ft
PUE (Efficiency) | 1.5–2.0+ | 1.1–1.3
Network Connectivity | Limited, single-carrier | Carrier-neutral, dark fiber
Power Redundancy | N or N+1 (legacy sizing) | 2N with HPC-scale UPS
For decades, traditional data centers served the industry well. They housed rows of standard servers, managed predictable workloads, and operated within well-understood power and cooling envelopes. But the rules have changed. Modern workloads are denser, hotter, heavier, and more bandwidth-hungry than anything legacy facilities were designed to accommodate.

Purpose-built facilities — designed from the ground up with modern compute in mind — are closing the gap. Below, we break down the key differences across six critical dimensions.


01 — Air vs. Liquid

Perhaps the single most pressing challenge facing traditional data centers today is thermal management. Legacy facilities were designed around air-cooled systems — computer room air conditioners (CRACs), hot/cold aisle containment, and raised floor plenums. For standard servers drawing 1–5 kW per rack, this approach was sufficient.

Modern AI accelerators and high-density compute nodes tell a very different story. A single rack equipped with today’s GPU clusters can exceed 100 kW — more than 20 times what legacy air-cooling systems were designed to handle. Trying to cool these loads with air alone is not just inefficient; in many cases, it is physically impossible.
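A rough airflow sketch, using the standard sensible-heat rule of thumb for air (BTU/hr ≈ 1.08 × CFM × ΔT in °F), shows why. The 20 °F supply-to-return temperature rise below is an assumed value chosen for illustration.

```python
# Airflow required to remove rack heat with air alone (rule-of-thumb estimate).
WATTS_TO_BTU_PER_HR = 3.412
ASSUMED_DELTA_T_F = 20.0            # assumed supply-to-return temperature rise, in °F

def required_cfm(rack_watts: float, delta_t_f: float = ASSUMED_DELTA_T_F) -> float:
    """Cubic feet per minute of air needed to carry away the rack's heat load."""
    btu_per_hr = rack_watts * WATTS_TO_BTU_PER_HR
    return btu_per_hr / (1.08 * delta_t_f)      # sensible-heat approximation

for kw in (5, 10, 100):
    print(f"{kw:>3} kW rack -> ~{required_cfm(kw * 1_000):,.0f} CFM")
# ~790 CFM at 5 kW, ~1,580 CFM at 10 kW, ~15,800 CFM at 100 kW
```

Pushing nearly sixteen thousand CFM through a single rack footprint is far beyond what raised-floor plenums and CRAC units were sized for, which is why liquid cooling removes heat at the processor rather than trying to move that much air.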

What Purpose-Built Facilities Offer
  • Direct Liquid Cooling (DLC): Coolant delivered directly to processor heat sinks via manifolds integrated into the rack, removing heat at the source.
  • Immersion Cooling: Servers submerged in non-conductive dielectric fluid, enabling extreme heat dissipation with near-silent operation and significant energy savings.
  • Rear-Door Heat Exchangers: A transitional approach integrating liquid cooling at the rack level without requiring a full facility retrofit.
  • Precision-Engineered CDUs: Coolant Distribution Units sized for high-density deployments with redundant circuits and real-time monitoring.
Key Takeaway

Air cooling is reaching its physical ceiling. Purpose-built facilities engineered with liquid cooling aren’t a luxury — they are a prerequisite for next-generation compute.

02 — Structural Engineering

Weight is an often-overlooked constraint that becomes immediately apparent when deploying modern infrastructure. Traditional data center floors were typically designed to support 150–250 lbs per square foot — adequate for the 1U and 2U servers of a prior era.

Today’s high-density configurations tell a different story. A fully loaded GPU server chassis can weigh over 100 lbs on its own. Multiply that across a fully populated 42U rack — adding cabling, PDUs, and networking gear — and a single rack can approach 2,000–3,000 lbs. Legacy raised flooring systems can buckle or fail under these loads, creating serious safety and liability risks.
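A simple back-of-the-envelope check makes the point; the rack weight and footprint below are assumed illustrative values, and real floor systems spread load differently, so this is a sketch rather than a structural analysis.

```python
# Illustrative floor-load check for a fully populated high-density rack (assumed values).
rack_weight_lbs = 3_000                     # assumed loaded 42U rack with cabling and PDUs
footprint_sqft = 0.6 * 1.2 * 10.764         # assumed 600 mm x 1200 mm footprint, ~7.75 sq ft

load_psf = rack_weight_lbs / footprint_sqft
legacy_rating_psf = 250                     # upper end of the legacy 150-250 lbs/sq ft range

print(f"Load over the rack footprint: ~{load_psf:.0f} lbs/sq ft")   # ~387 lbs/sq ft
print(f"Versus a {legacy_rating_psf} lbs/sq ft legacy rating: {load_psf / legacy_rating_psf:.1f}x over")
```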

How Purpose-Built Facilities Address This
  • Reinforced concrete slab construction rated for 300–500+ lbs per square foot, engineered for high-density deployments.
  • Structural steel framing with independent rack anchor points that distribute weight loads directly to the building foundation.
  • Elimination of raised flooring in high-density zones, removing a critical failure point and improving airflow predictability.
  • Per-rack weight ratings documented and enforced during facility design, preventing overload scenarios before they occur.
Key Takeaway

Ignoring floor load capacity is a safety and liability issue. Purpose-built facilities engineer weight distribution into the foundation — not as an afterthought.

03 — Power Density & Electrical Infrastructure

Traditional data centers were built on the assumption that average rack densities would remain in the 5–10 kW range. Their electrical infrastructure — PDUs, busways, UPS systems, and generator capacity — reflects that assumption. Retrofitting these systems is possible, but costly, disruptive, and often limited by the physical constraints of the existing building.
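A quick circuit-level sketch illustrates the gap. The voltages, breaker sizes, and 80% continuous derating below are common but assumed values, not the specification of any particular facility.

```python
# Rack power-circuit arithmetic (assumed voltages, breaker sizes, 80% continuous derating).
import math

def single_phase_kw(volts: float, amps: float, derate: float = 0.8) -> float:
    """Usable power from a single-phase circuit."""
    return volts * amps * derate / 1_000

def three_phase_kw(volts_ll: float, amps: float, derate: float = 0.8) -> float:
    """Usable power from a three-phase circuit (line-to-line voltage)."""
    return math.sqrt(3) * volts_ll * amps * derate / 1_000

legacy_circuit_kw = single_phase_kw(208, 30)   # ~5 kW: the 5-10 kW-per-rack era
hd_circuit_kw = three_phase_kw(415, 60)        # ~34.5 kW per high-density feed

print(f"Legacy 208 V / 30 A circuit: ~{legacy_circuit_kw:.1f} kW")
print(f"3-phase 415 V / 60 A circuit: ~{hd_circuit_kw:.1f} kW")
print(f"Feeds for a 100 kW rack: {math.ceil(100 / hd_circuit_kw)} (before A/B redundancy)")
```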

Purpose-Built Power Advantages
  • High-density power distribution: 3-phase PDUs rated for 30–60A circuits per rack, with in-rack branch circuit monitoring at the outlet level.
  • Scalable UPS architecture: Modular lithium-ion UPS systems that can be right-sized and expanded without significant downtime.
  • Generator capacity designed for peak load, sized on fully populated high-density configurations — not historical averages.
  • On-site power redundancy (2N or N+1) eliminating single points of failure across the entire electrical pathway.
  • Renewable energy integration: On-site solar, battery storage, and grid interconnects engineered into the original design.
Key Takeaway

Power infrastructure is the lifeblood of any data center. Purpose-built facilities deliver the electrical capacity and redundancy that modern AI and HPC workloads demand — without compromise.

04 — Network Architecture

In the age of cloud computing, hybrid infrastructure, and distributed AI workloads, connectivity is no longer a secondary consideration — it is a primary design criterion. Traditional facilities were often located based on real estate availability, with network connectivity provisioned after the fact. This leaves organizations dependent on limited carriers, exposed to single points of network failure, and far from the internet exchange points that minimize latency.

What Purpose-Built Facilities Deliver
  • Carrier-neutral meet-me rooms (MMRs) enabling direct cross-connects to dozens of network providers, cloud on-ramps (AWS Direct Connect, Azure ExpressRoute, Google Cloud Interconnect), and CDN providers.
  • Dark fiber access with diverse entry points: Physically separate conduit pathways entering the building from different directions, eliminating the risk of a single fiber cut.
  • On-net access to major cloud providers, reducing latency and cost for hybrid cloud architectures by keeping traffic off the commodity internet.
  • High-density fiber infrastructure designed for 400G, 800G, and beyond — not retrofitted from legacy copper or lower-grade fiber plant.
  • Low-latency positioning near major IXPs and metropolitan fiber hubs, making network performance a competitive advantage.
Key Takeaway

Connectivity determines how fast your data moves and how much you pay to move it. Purpose-built facilities treat network access as a core infrastructure investment — not an add-on.

05 — Security & Compliance

Regulatory requirements for data handling, privacy, and security continue to grow more complex. Purpose-built facilities increasingly differentiate on security architecture and compliance posture as primary design objectives — not checkbox items added after commissioning.

  • Multi-factor biometric access control at every secure perimeter, with full audit logging and video retention.
  • Man-trap vestibule entry systems preventing tailgating and ensuring individual credentialing at every access point.
  • Seismically braced, blast-resistant construction where regulatory or geographic requirements demand it.
  • Dedicated compliance zones: Physically isolated cages, suites, or modules purpose-built for HIPAA, FedRAMP, PCI-DSS, and similar frameworks.
  • 24/7/365 on-site security staffing with documented incident response procedures and regular third-party audits.
06 — Operational Efficiency

Power Usage Effectiveness (PUE) — the ratio of total facility power to the power delivered to IT equipment — is a fundamental efficiency metric. Legacy facilities commonly operate at PUEs of 1.5 to 2.0 or higher, meaning for every watt delivered to compute, an additional 0.5–1.0 watts is consumed by overhead systems like cooling and lighting.

Purpose-built modern facilities routinely achieve PUEs of 1.1–1.3, driven by efficient cooling design, LED lighting, and intelligent power management. Over the lifetime of a facility, this difference translates to millions of dollars in operating cost and a dramatically reduced environmental footprint.
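As an illustration of that operating-cost gap, the sketch below compares annual energy spend for the same IT load at a legacy PUE of 1.8 versus a purpose-built PUE of 1.2. The 5 MW IT load and $0.12/kWh rate are assumptions chosen for the example, not NJFX figures.

```python
# Annual facility energy cost at different PUE values (illustrative assumptions only).
HOURS_PER_YEAR = 8_760

def annual_cost_usd(it_load_mw: float, pue: float, usd_per_kwh: float) -> float:
    """Total facility energy cost per year: IT load scaled up by the PUE overhead."""
    facility_kw = it_load_mw * 1_000 * pue
    return facility_kw * HOURS_PER_YEAR * usd_per_kwh

assumed_it_mw, assumed_rate = 5.0, 0.12        # assumed 5 MW IT load at $0.12/kWh
legacy = annual_cost_usd(assumed_it_mw, 1.8, assumed_rate)
modern = annual_cost_usd(assumed_it_mw, 1.2, assumed_rate)

print(f"PUE 1.8: ${legacy:,.0f} per year")     # ~$9.5M
print(f"PUE 1.2: ${modern:,.0f} per year")     # ~$6.3M
print(f"Difference: ${legacy - modern:,.0f} per year for the same compute")   # ~$3.2M
```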

2.0+
Legacy PUE
1.1–1.3
Purpose-Built PUE
50%+
Efficiency Gain
Key Takeaway

Sustainability is no longer just about corporate responsibility — it is a cost and competitive advantage. Purpose-built facilities deliver measurably better efficiency from day one.

Steel on the Roof. The World’s First Liquid-Cooled CLS Is Taking Shape.

The structural framework for our 10MW data hall chiller system is installed and ready — a milestone years in the making.

The photographs below tell the story of what purpose-built really looks like. At our Wall Township, NJ campus, the structural steel framework is fully installed on the roof of our 10MW data hall — engineered by Bala from the slab up to carry the industrial-scale chiller systems that will make NJFX the world’s first Cable Landing Station with native liquid cooling for GPU-dense AI workloads.

Steel framework complete on the roof of the 10MW data hall — load-rated and ready for chiller installation

Wind-resistant screening bars engineered to withstand 155 mph Category 5 hurricane conditions — purpose-built for resilience from the outside in

Every beam, every connection — engineered for the loads that liquid cooling demands

This team built something no one has built before — a Cable Landing Station ready for liquid-cooled GPU infrastructure. Our customers can now land subsea traffic and run AI workloads in the same carrier-neutral campus. That combination doesn’t exist anywhere else in the world, and we’ve been building toward it from day one.

— Gil Santaliz, CEO & Founder, NJFX

What you see in these images isn’t just steel. It’s the product of structural engineering, utility coordination, load calculations, and construction sequencing — all completed to support future infrastructure. The wind-resistant screening bars are rated to withstand 155 mph Category 5 hurricane conditions, and the structural steel grid is load-certified for industrial chiller tonnage. This is the kind of preparation that separates purpose-built facilities from legacy retrofits: when the equipment arrives, the building is ready for it.

Bringing a 10MW data hall online within a Cable Landing Station is not a single event. It is a precise sequence of interdependent milestones, each one enabling the next. NJFX has been ready at every step:

  • Structural engineering completed — Rooftop steel grid designed and certified for industrial chiller load ratings
  • Steel installation complete — Heavy-gauge structural framework installed and inspected across the full footprint
  • Louvered screening structure installed — Airflow management canopy in place above the mechanical yard
  • Utility power confirmed — 10MW load letter secured from utility partner with near-term energization schedule
  • Chiller installation underway — Industrial cooling equipment delivery and commissioning in progress
  • Data hall energization — Full 10MW capacity ready for customer deployment
The Infrastructure of Tomorrow, Available Today

Traditional data center infrastructure was designed for a different era of computing. That infrastructure served the industry well — but the demands of AI, high-density compute, and modern cloud-native workloads have fundamentally changed what “good” looks like.

Purpose-built facilities are not simply upgraded versions of their predecessors. They are purpose-engineered environments where every system — cooling, power, structure, connectivity, security — is designed as an integrated whole to support the most demanding workloads available today and the ones coming tomorrow.

For organizations evaluating their infrastructure strategy, the question is no longer whether purpose-built facilities offer advantages. The question is how long you can afford to operate without them.

Ready to Learn More?

Contact the NJFX team to discuss how our purpose-built campus can support your workloads.

Talk to an Expert

Uninterrupted by Design

100% Uptime Through Every Season

February 26, 2026

When winter storms sweep across New Jersey, they often leave behind closed highways, delayed transportation and neighborhoods bracing for potential outages. Snow accumulates along the coast, winds intensify, and communities prepare for disruption.

Inside NJFX’s purpose-built cable landing station and colocation campus, operations continue without pause.

For ten consecutive years, NJFX has maintained 100 percent uptime. Through blizzards, coastal storms, heat waves and extreme weather events, the facility has operated continuously. The record reflects careful engineering and disciplined execution rather than favorable conditions.

Reliability at NJFX is intentional. It is designed into every system and reinforced by a team trained to perform under pressure.

Infrastructure Designed for Continuity

At the foundation of NJFX’s operations is a layered redundancy model. The facility operates with N+1 configuration across critical infrastructure systems. Every essential component has reserve capacity available if the primary system requires support.
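
As a simple way to picture the N+1 principle, the sketch below checks whether a bank of units can still carry its design load after the loss of any single unit; the capacities shown are hypothetical and used only for illustration, not NJFX equipment data.

```python
# Illustrative N+1 capacity check -- hypothetical unit capacities.

def is_n_plus_1(unit_capacities_kw: list[float], design_load_kw: float) -> bool:
    """Return True if the design load is still covered after losing the single
    largest unit -- the essence of N+1 redundancy."""
    if not unit_capacities_kw:
        return False
    surviving_capacity = sum(unit_capacities_kw) - max(unit_capacities_kw)
    return surviving_capacity >= design_load_kw

# Example: four 3,500 kW units serving a 10,000 kW design load.
print(is_n_plus_1([3500, 3500, 3500, 3500], 10_000))  # True -- any three units carry the load
```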

Dual utility feeds provide diversified power sources to the campus. On-site generation systems are positioned to engage immediately if external supply is compromised. Power is conditioned and distributed through engineered pathways that eliminate single points of failure.

Environmental systems operate with equal precision. Cooling infrastructure is calibrated to maintain stable performance during fluctuating seasonal conditions. Sensors continuously monitor temperature, humidity and load levels throughout the facility. Real-time data allows operators to detect and address anomalies before they escalate.
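
For illustration only, the sketch below shows the kind of threshold check that such monitoring depends on; the environmental bands and readings are assumptions, not NJFX set points.

```python
# Illustrative environmental threshold check -- assumed bands, not NJFX set points.
THRESHOLDS = {
    "temperature_c": (18.0, 27.0),  # assumed recommended inlet temperature band
    "humidity_pct": (20.0, 80.0),   # assumed relative humidity band
}

def out_of_band(readings: dict[str, float]) -> list[str]:
    """Return a description of every reading outside its configured band."""
    alerts = []
    for name, value in readings.items():
        low, high = THRESHOLDS[name]
        if not low <= value <= high:
            alerts.append(f"{name}={value} outside [{low}, {high}]")
    return alerts

print(out_of_band({"temperature_c": 29.5, "humidity_pct": 45.0}))
# ['temperature_c=29.5 outside [18.0, 27.0]']
```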

Preventative maintenance follows a disciplined schedule. Equipment is tested under load. Systems are inspected routinely. Contingency protocols are reviewed regularly. Preparedness is not seasonal. It is constant.

Operations as a Coordinated System

While infrastructure provides the framework, operational excellence sustains performance.

NJFX maintains a fully staffed, 24-hour on-site operations team. Monitoring platforms run continuously, providing live visibility into power distribution, cooling metrics and system health. Alerts are structured to ensure immediate response if performance thresholds shift.

The team functions with defined roles and cross-trained expertise. Procedures are documented clearly. Communication protocols are established and practiced. Each member understands not only individual responsibilities but also how their work supports the broader system.

When severe weather is forecast, preparations begin early. Backup systems are verified. Fuel levels are confirmed. Staffing rotations are adjusted to guarantee on-site presence regardless of road conditions. Supplies are stocked in advance of anticipated storms.

During recent blizzards that moved through New Jersey, NJFX remained fully operational. While travel advisories were issued across the region, the campus stayed accessible. Driveways and walkways were cleared consistently. Access control systems remained active. Security operations continued without interruption.

Customers who could not travel safely relied on NJFX’s remote hands services. On-site technicians performed equipment inspections, verified connections and executed tasks on behalf of client teams. This approach minimized risk for customers while maintaining continuity of service.

The coordination between systems and personnel operates with precision. Each process supports the next. Monitoring informs action. Redundancy supports stability. The team executes with discipline.

A Decade of Performance Under Pressure

Ten years of uninterrupted uptime in critical infrastructure is a measurable achievement. In an environment where seconds of downtime can affect financial markets, cloud platforms and enterprise networks, consistency carries weight.

NJFX’s resilience has been tested repeatedly by seasonal extremes along the Atlantic coast. Heavy snowfall, ice accumulation and high winds have challenged regional utilities and transportation systems. Inside the facility, operations have remained stable.

Connectivity moves through NJFX regardless of conditions outside. Subsea cables converge with terrestrial networks at the site, carrying traffic across continents and throughout the United States. Enterprises depend on that continuity.

The reliability is not situational. It is structural and operational.

Never Down

The NJFX campus may appear quiet from the outside, particularly during a snowstorm. Inside, systems continue running with precision. Power flows through redundant paths. Cooling systems maintain environmental stability. Monitoring screens remain active around the clock.

Infrastructure and team operate as one coordinated unit.

For ten years, through every season and under every condition, NJFX has delivered uninterrupted performance. The weather may shift. The demand for connectivity may increase. The standard remains the same.

NJFX operates without interruption.

The Grid Can’t Move as Fast as AI

North America Data Center Vacancy Holds at 1% as Power Constraints Reshape Infrastructure Growth

February 23, 2026

In the accelerating race to build artificial intelligence infrastructure, the constraint is no longer capital. It is electricity.

Across North America, power has become the defining variable in data center development. In many major markets, developers may secure allocations on paper, but delivery timelines stretch to 2028 and beyond. Interconnection queues grow longer. Transformer procurement can take years. Substation expansions require regulatory review and coordinated investment.

The grid moves deliberately. AI does not.

A Market Running Ahead of Supply

The market data makes clear just how tight conditions have become.

According to JLL, the North America data center sector has reached an inflection point. Vacancy is locked at a record-low 1 percent for the second consecutive year.

In practical terms, that figure signals a market with virtually no buffer. There is no meaningful excess capacity waiting on the sidelines. No cushion to absorb sudden spikes in demand. The industry is expanding at historic levels yet it remains effectively full.

This is not a temporary fluctuation. It reflects sustained, structural demand driven by cloud computing, enterprise digital transformation and, increasingly, artificial intelligence.

At the same time, JLL reports that 64 percent of the 35-gigawatt construction pipeline now extends beyond traditional mature markets, underscoring how developers are being forced to look outward in search of deliverable megawatts.

Even with that geographic expansion, available capacity remains limited to small, fragmented blocks, offering little flexibility for hyperscalers or AI deployments that require large, contiguous power allocations.

Most tenants securing space today are contracting for deliveries in 2027 or 2028, further evidence that forward demand is deep and durable.

The AI Effect on Infrastructure

The scale of infrastructure required to support AI is unprecedented.

High-density GPU clusters demand significant electrical loads and advanced cooling systems. Deployments that once required 5 or 10 megawatts now require multiples of that. Campus-scale planning has become the norm rather than the exception.

At the same time, capital spending among leading technology companies continues to rise sharply, with hundreds of billions of dollars allocated toward AI chips, servers and data center expansion. Demand is not the issue. Deliverable power is.

Electric grids were not designed to accommodate exponential spikes in concentrated industrial load on short notice. Transmission upgrades, generation capacity and substation builds require multi-year planning cycles. The physical infrastructure that supports digital growth cannot be scaled at the same velocity as software innovation.

Pre-Leasing as a Market Signal

When tenants are signing contracts two to three years ahead of occupancy, it reflects more than optimism. It reflects caution.

The JLL data shows a market operating in forward-commitment mode. Companies are securing capacity early to avoid being pushed further down increasingly crowded interconnection timelines.

The statistic speaks clearly: inventory that can be delivered in the near term is scarce.

Power certainty has become a deciding factor. In this constrained environment, confirmed delivery timelines carry outsized importance.

NJFX’s carrier-neutral connectivity hub has secured 10 megawatts of additional power with delivery scheduled by the end of this year. The allocation is backed by a formal load letter from its utility partner, confirming energization within a defined timeframe.

A load letter signals committed capacity and scheduled delivery. Not a projection tied to future grid upgrades, but power aligned with near-term availability. In a market where some projects face energization dates extending to 2028 or later, confirmation within the current year provides clarity for organizations planning AI and high-density deployments.

A New Definition of Readiness

For years, readiness in the data center industry meant shovel-ready land, proximity to fiber and access to tax incentives.

Today, readiness is measured differently.

It is defined by confirmed megawatts, documented delivery schedules and infrastructure aligned with grid reality. It is the difference between theoretical capacity and energized capacity. Between long-term projections and near-term execution.

In a market where most new deployments are contracted years ahead and available inventory is fragmented, the facilities positioned to support the next phase of AI growth will not simply be those under construction.

They will be those that can turn on the power — on time.

In this environment, megawatts are not just a utility metric. They are the new currency of digital infrastructure.

All Hazard Preparation, Safety, Recovery Training (Gray Zone)

Gray Zone Threats

February 14, 2026

Gray Zone Threats (GZT) refer to ambiguous, malicious activities that fall short of open warfare but undermine stability and integrity, often involving cyberattacks, disinformation, espionage, or covert operations. In New Jersey’s Communications Sector, these threats pose significant risks to critical infrastructure, including telecommunications networks, internet services, and emergency communication systems.

Impact on Critical Infrastructure:

Disruption of Communications: GZTs can compromise or disrupt communication channels, hindering emergency response and coordination during crises.

Cyberattacks and Espionage: Malicious actors may infiltrate network systems to steal sensitive information or disable critical services, creating vulnerabilities that can be exploited further.

Disinformation Campaigns: Spread of false information can undermine public trust and interfere with operational decision-making.

Undermining Confidence: Persistent gray zone activities erode confidence in the resilience of communication systems, potentially undermining social and economic stability.

Mitigation Strategies:

  • Strengthening cybersecurity defenses.
  • Enhancing cooperation between government and private sector.
  • Implementing robust monitoring and response plans.
  • Promoting resilience and redundancy in communication networks.

Overall, Gray Zone Threats challenge the security and reliability of New Jersey’s Communications Sector, requiring vigilant, adaptive, and coordinated efforts to safeguard critical infrastructure.

The following sections outline Gray Zone Threats in more detail, including examples:

Gray Zone threats refer to strategies and activities that lie between traditional war and peaceful competition. They are characterized by ambiguity, deniability, and often involve non-military tools to achieve strategic objectives without triggering a full-scale conflict.

Overview of Gray Zone Threats

  • Definition: Actions intended to erode stability or influence an opponent without crossing the threshold of open warfare. They often involve cyberattacks, disinformation, economic pressure, and covert operations.
  • Characteristics:
    • Ambiguous attribution
    • Plausible deniability for actors involved
    • Use of asymmetric tactics
    • Slow, persistent, and adaptable operations

Application to Critical Infrastructure (CI)

  • Threats to CI include cyberattacks on communications, water systems, energy grids, transportation, and healthcare infrastructure.
  • Gray Zone tactics may involve:
    • Cyber intrusions causing disruptions or damage
    • Supply chain interference
    • Disinformation campaigns to undermine public trust
    • Covert influence operations targeting policy or operational decision-makers

Application to the Communications Sector

  • The communications sector is critical for societal functioning, making it a prime target.
  • Gray Zone tactics include:
    • Disrupting or degrading communication networks via cyber means
    • Spreading misinformation or disinformation through social media and other platforms
    • Exploiting vulnerabilities in telecommunications infrastructure
    • Using clandestine influence campaigns to sway public opinion or political outcomes

Implications

  • These threats challenge traditional detection and attribution methods.
  • Need for robust cybersecurity, intelligence sharing, and resilience planning.
  • Emphasis on understanding and countering ambiguous threats to protect national security and public safety.

Current Gray Zone threats facing the United States encompass a range of activities aimed at undermining national security, economic stability, and societal cohesion without provoking full-scale conflict. Key threats include:

  1. Cyberattacks and Cyberespionage
    • State-sponsored hacking campaigns targeting government agencies, critical infrastructure, and private sector entities.
    • Examples include intrusion attempts on energy grids, financial systems, and supply chains.
  2. Disinformation and Influence Campaigns
    • Efforts by Peer or Near-Peer Adversaries (Russia, China, Iran) to spread misinformation via social media platforms.
    • Aimed at sowing discord, influencing elections, and eroding public trust.
  3. Economic Coercion and Sanctions Evasion
    • Use of economic pressure, sanctions, and covert financial activities to influence U.S. policy.
    • Activities like cryptocurrency-based money laundering to evade detection.
  4. Covert Operations and Espionage
    • Attempts to clandestinely gather intelligence or influence policy through agents, private allies, or proxy groups.
  5. Maritime and Territorial Ambitions
    • Near-peer (China) actions in the Indo-Pacific involve gray zone tactics such as land reclamation and harassment without direct military confrontation.
  6. Information Warfare
    • Manipulating online narratives, promoting propaganda, and leveraging social divisions to weaken societal cohesion.
  7. Biological and Environmental Manipulation
    • Potential manipulation of environmental factors or biological agents to create instability or influence regions indirectly.

These threats often blend military, intelligence, cyber, economic, and informational domains, making them complex and challenging to counter. The U.S. government continues to enhance its resilience and detection capabilities to address these evolving Gray Zone activities.

Gray Zone threats to Subsea Infrastructure and Cable Landing Stations (CLSs)

These assets remain particularly concerning targets because of their critical role in global communications, economic stability, and national security. Gray Zone threats leverage ambiguity, deniability, and asymmetric tactics to undermine or disrupt them without triggering traditional military responses. A detailed overview includes:

  1. Cyberattacks on Subsea Infrastructure and CLSs
    • Targeted Cyber Intrusions: Adversaries may attempt to infiltrate network management systems of subsea cables and landing stations, causing disruptions, data interception, or sabotage.
    • Supply Chain Exploitation: Compromising hardware or software components during manufacturing or deployment to introduce vulnerabilities.
    • Advanced Persistent Threats (APTs): State-sponsored groups could establish long-term access to monitor traffic or prepare for future disruptions, blending espionage with potential sabotage.
  2. Covert Physical Operations
    • Undersea Cable Tampering or Sabotage: Gray Zone actors might engage in clandestine cutting, tapping, or interference with subsea cables, often under cover of darkness or through disguised vessels, aiming to degrade communications without immediate attribution.
    • Vessel Incursions or Harassment: Disguised or non-military vessels may loiter near cable landing sites or anchor points, gathering intelligence or attempting physical interference.
  3. Disinformation and Influence Campaigns
    • Misleading Narratives: Propaganda to undermine confidence in the security or reliability of subsea communications, potentially creating panic or mistrust.
    • Operational Deception: Disinformation about the state of infrastructure or the presence of threats, complicating detection and response efforts.
  4. Exploitation of Vulnerabilities
    • Network Vulnerabilities: Use of cyber exploits targeting known weaknesses in SCADA (Supervisory Control and Data Acquisition) or other control systems managing cable operations.
    • Legitimacy Exploitation: Leveraging legal or diplomatic channels to create plausible deniability or delays in response, e.g., claiming routine maintenance or environmental concerns.
  5. Economic and Strategic Leverage
    • Restricting or Disabling Critical Links: Gray Zone actors might threaten or partially disable subsea cables to exert economic or political pressure, even while overtly denying involvement.
    • Influence in Policy and Decision-Making: Using cyber and influence tactics to sway regulatory or strategic decisions affecting subsea infrastructure.

Implications and Countermeasures:

  • Enhanced Cybersecurity: Robust encryption, intrusion detection systems, continuous monitoring, and regular audits of infrastructure management systems.
  • Physical Security & Surveillance: Deployment of underwater sensors, maritime patrols, and vessel monitoring around key cable landing sites.
  • International Cooperation: Sharing intelligence, best practices, and coordinated responses among nations and private operators.
  • Resilience Planning: Diversification of routes, redundant systems, and rapid repair capabilities to minimize impact from Gray Zone disruptions.
  • Attribution and Response Readiness: Developing capabilities to attribute attacks accurately and respond proportionally within the Gray Zone framework to deter future activities.

In summary, Gray Zone threats to Subsea Infrastructure and Cable Landing Stations are multifaceted, involving cyber, physical, informational, and strategic tactics. Addressing these requires a comprehensive, layered approach encompassing technological, diplomatic, and operational measures to safeguard these critical components of global communications.

Overall Analysis:

There are many recent examples of how Gray Zone tactics are employed across multiple domains and are often interconnected—cyber operations complement disinformation efforts, and economic pressures support territorial ambitions. They are designed to undermine U.S. influence, destabilize societal cohesion, and assert strategic advantages without provoking direct military conflict. The ongoing evolution of these threats requires comprehensive resilience, advanced detection, and international cooperation.
