The word "data center" gets used to describe everything from a room full of servers in a retrofitted warehouse to a purpose-built, carrier-neutral interconnection hub sitting directly on a live submarine cable. The difference matters enormously to how your data moves, how your latency performs, and how resilient your network actually is.
The data center industry has evolved rapidly, but not uniformly. Today, the market encompasses a wide and often misunderstood range of facility types — each designed with different priorities, different customers, and radically different infrastructure DNA.
Understanding where a facility sits on this spectrum isn't just academic. For enterprises, cloud providers, content networks, and carriers, choosing the wrong type of facility can mean paying for capacity that doesn't actually serve your traffic the way you think it does.
Converted & Traditional Facilities
The earliest era of commercial data centers wasn't purpose-built — it was improvised. Office buildings, industrial warehouses, and even old telephone switching stations were retrofitted with raised floors, basic cooling systems, and whatever connectivity could be pulled in from the street. These facilities met the demand of a market that hadn't yet decided what a data center should actually be.
Many of these converted facilities are still operating today. They tend to serve smaller regional colocation markets, enterprise tenants with legacy infrastructure commitments, or cost-sensitive workloads that don't require high power density or carrier diversity. The economics can work — older buildings often carry lower lease costs — but the infrastructure trade-offs are real and compounding.
Power distribution in converted buildings is typically constrained by the original electrical design of the structure. Cooling is often bolted on rather than integrated, resulting in hot spots, inefficiency, and limited headroom for density upgrades. Connectivity depends entirely on which carriers chose to extend fiber to the location, often leaving tenants with limited provider choice.
Best Suited For
- Legacy enterprise applications
- Low-density storage workloads
- Cost-sensitive regional colocation
- Disaster recovery and archival
Watch Out For
- Single-carrier connectivity risk
- Power density ceilings below 5 kW/rack
- Cooling inefficiency and PUE overhead
- Limited physical expansion options
Purpose-Built Colocation
The purpose-built colocation facility became the defining model of the modern data center industry. Designed from the ground up with data center operations in mind, these facilities introduced engineering rigor that retrofitted buildings simply couldn't replicate: N+1 or 2N power redundancy, precision cooling with structured airflow management, Tier III and Tier IV uptime certifications, and physical security infrastructure built to standards far beyond a commercial office environment.
Purpose-built colos became the backbone of enterprise IT infrastructure through the 2000s and 2010s. Enterprises could outsource the complexity of running their own data rooms while maintaining control over their hardware, their software, and their connectivity choices. Carrier-neutral facilities in this category — where multiple providers compete for cross-connect business — became particularly valuable, as they enabled tenants to build multi-provider network architectures without owning the real estate themselves.
The standard power density for purpose-built colo has historically ranged from 5 to 10 kW per rack, with higher-density zones available in newer builds. This works for most enterprise workloads, but the arrival of GPU-accelerated computing has exposed the limits of facilities not designed for the thermal profiles of AI infrastructure.
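The mismatch is easy to see with rough numbers. The sketch below assumes an 8-GPU training server drawing on the order of 10 kW and four such servers in a rack; both figures are illustrative assumptions rather than vendor specifications.

```python
# Illustrative rack-density arithmetic; the per-server draw is an assumed
# round number, not a vendor specification.
LEGACY_RACK_KW = 10       # upper end of a traditional colo power budget per rack
GPU_SERVER_KW = 10        # rough draw of one 8-GPU training server
SERVERS_PER_AI_RACK = 4   # a modest dense-cluster layout

ai_rack_kw = GPU_SERVER_KW * SERVERS_PER_AI_RACK
print(f"AI rack draw: {ai_rack_kw} kW")                                # 40 kW per rack
print(f"Legacy rack budgets consumed: {ai_rack_kw // LEGACY_RACK_KW}")  # 4
```

Even at that conservative layout, a single AI rack consumes the power budget of four traditional racks before the matching cooling load is accounted for.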
Best Suited For
- Enterprise hybrid IT infrastructure
- Multi-carrier network interconnection
- Regulated industries requiring compliance
- Dedicated hosting and managed services
Watch Out For
- Connectivity path to global subsea networks
- Power headroom for evolving density needs
- Latency to end users in non-metro markets
- Lock-in to metro fiber ecosystems
High-Power & Hyperscale Facilities
The emergence of large language models, GPU clusters, and cloud hyperscalers fundamentally changed what the market demands from data center infrastructure. High-power and hyperscale facilities represent the industry's response: massive campuses designed around compute density rather than multi-tenant flexibility, capable of delivering 30 to 100+ kW per rack and supporting liquid cooling architectures that air-cooled facilities simply cannot match.
Hyperscale facilities — the campuses operated by or built for AWS, Microsoft, Google, and their peers — are purpose-engineered for scale. They operate on campus models where power, cooling, and networking are vertically integrated and optimized for the specific workloads running inside. The economics work at hyperscale because the volume of compute justifies the capital investment in custom infrastructure.
High-power AI-oriented colocation is a newer category: facilities built with the power density and cooling capacity of hyperscale infrastructure, but operated as multi-tenant colocation environments. These facilities serve AI labs, ML engineering teams, and enterprises running inference workloads at scale. They are an important and fast-growing segment — but like all data centers, they depend on the connectivity infrastructure that gets traffic to and from their compute. That connectivity story is often the weakest part of the pitch.
Best Suited For
- GPU cluster training and inference
- Cloud hyperscaler deployments
- Large-scale ML model hosting
- High-performance compute (HPC)
Watch Out For
- Ingress/egress costs at hyperscaler scale
- Limited control over network routing
- Distance from subsea entry points
- Power procurement and grid dependency
Edge & Local Data Centers
Edge computing emerged as an answer to a latency problem that centralized data centers — no matter how well-designed — cannot solve by themselves: the distance between where data is processed and where users actually are. Edge data centers are intentionally distributed, positioned in secondary and tertiary markets, inside mobile network operator facilities, and increasingly at the cell tower level to bring compute closer to the end point of traffic.
For use cases where milliseconds matter — content delivery, real-time gaming, autonomous vehicle communication, industrial IoT, and augmented reality — edge infrastructure is not optional. A hyperscale facility in Northern Virginia cannot serve a user in Denver with sub-10ms round-trip times. An edge node in Denver can. The distributed model sacrifices the economies of scale that large campuses enjoy, but it trades that for the latency performance that certain applications require.
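A quick propagation-delay estimate makes that point concrete. The sketch below uses a rough great-circle distance and a typical fiber velocity factor; both are illustrative assumptions, not measured route data.

```python
# Lower-bound round-trip time over a fiber path; figures are illustrative assumptions.
SPEED_OF_LIGHT_KM_S = 300_000    # speed of light in a vacuum, km/s
FIBER_VELOCITY_FACTOR = 0.67     # signals in fiber travel at roughly 2/3 of c

def min_rtt_ms(route_km: float) -> float:
    """Best-case round-trip time for a fiber route of the given length."""
    one_way_s = route_km / (SPEED_OF_LIGHT_KM_S * FIBER_VELOCITY_FACTOR)
    return 2 * one_way_s * 1_000

print(min_rtt_ms(2_400))   # Northern Virginia to Denver, ~2,400 km direct: ~23.9 ms
print(min_rtt_ms(1_000))   # ~1,000 km is roughly the reach of a 10 ms RTT budget
```

Real routes add switching, queuing, and non-ideal fiber paths on top of that floor, so the practical gap is even wider.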
The strategic challenge with edge infrastructure is backhaul: how does data get from the edge node to the core network, and from the core network to the global internet? Edge nodes are, by definition, downstream from the primary network. They depend on the health and capacity of the networks connecting them back to the backbone — which means the quality of the backbone matters enormously, even if users never see it directly.
Best Suited For
- CDN node deployment and caching
- Real-time gaming and streaming
- Autonomous systems and IoT
- Mobile network operator (MNO) integration
Watch Out For
- Backhaul dependency and single-path risk
- Inconsistent power and cooling standards
- Fragmented management across nodes
- Limited carrier choice at edge locations
Each of these facility types serves a real market need. But a critical question often goes unasked: where does your data actually come from, and how does it get there?
Most conversations about data centers focus on what happens inside the building — power density, cooling efficiency, uptime certifications. Fewer conversations address what happens outside: how fiber arrives, from where, and how many independent paths exist to the global internet.
For most data centers, connectivity is a secondary consideration. Carriers bring fiber in, operators sell cross-connects, and customers assume the network is "good enough." In many cases, it is. But for organizations whose traffic is genuinely global — reaching users across the Atlantic, into Latin America, or across Southeast Asia — the origin of that fiber changes everything.
Approximately 97% of intercontinental internet traffic travels over submarine fiber optic cables. The landing stations where those cables come ashore are among the most strategically significant pieces of telecommunications infrastructure on the planet — yet most data centers have no direct relationship with them at all.
A traditional cable landing station is exactly what it sounds like: the physical point where a submarine cable comes ashore. Historically, these were passive — protected, secured, and largely inaccessible. The cable arrived, connected to a carrier's network, and traffic flowed out through that single relationship.
That model worked when the internet was simpler.
Today's global traffic demands require direct, competitive access to the subsea layer — not just the ability to buy bandwidth from a carrier who happens to have a relationship with a cable. The difference between reaching a cable station passively through a carrier and sitting directly on that infrastructure is the difference between renting a phone and owning the switchboard.
Most data centers connect to the internet.
NJFX sits at the point where the internet begins.
NJFX is not a traditional data center. It is not a cable landing station in the passive, legacy sense. It is something the industry hadn't fully defined before it existed: a carrier-neutral, active interconnection hub built directly on a live submarine cable landing station where subsea meets land, and where carriers, content providers, cloud networks, and enterprises can connect directly, competitively, and on their own terms.
Located in Wall Township, New Jersey, NJFX sits at the landing point for multiple submarine cable systems, including the highest-capacity routes connecting North America to Europe and Latin America. But unlike a legacy cable landing station, NJFX operates as an open, neutral exchange where customers can cross-connect directly to any cable system, any carrier, or any other party in the facility.
This matters because subsea capacity is not a commodity when you can access it directly. Bandwidth purchased from a carrier who has already traversed cable infrastructure carries overhead — in cost, in latency, and in dependency on that single supplier's routing decisions. At NJFX, participants access the cable systems directly, making their own routing decisions, negotiating their own capacity, and building genuinely resilient, multi-path global networks.
The rise of cloud computing led many organizations to believe that geography had been abstracted away. For applications where latency tolerances are wide and traffic stays on terrestrial networks, that belief mostly holds. But for global content delivery, real-time financial data, international voice and video, and any application where the Atlantic or Pacific is part of the path, physics still wins.
Light travels through fiber at roughly two-thirds of its speed in a vacuum. That means every kilometer matters. New Jersey's geography — sitting directly on the Eastern Seaboard, among the closest major U.S. landing points to Europe — isn't a marketing claim. It is a physical reality that translates directly into measurable latency advantage for every packet crossing that ocean.
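Applying the same propagation math to the transatlantic leg shows what that geography is worth. The route lengths below are rough assumptions used for illustration, not figures for any specific cable system.

```python
# One-way propagation delay over assumed subsea route lengths.
FIBER_KM_PER_MS = 300_000 * 0.67 / 1_000   # roughly 201 km of fiber per millisecond

def one_way_delay_ms(route_km: float) -> float:
    """Best-case one-way delay for a fiber route of the given length."""
    return route_km / FIBER_KM_PER_MS

print(one_way_delay_ms(5_500))   # ~27.4 ms for a ~5,500 km transatlantic route
print(one_way_delay_ms(6_500))   # ~32.3 ms for a ~6,500 km route
print(one_way_delay_ms(200))     # ~1 ms added each way per 200 km of extra backhaul
```

Every kilometer of terrestrial detour between the landing point and the interconnection fabric is paid on every packet, in both directions.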
Edge data centers have emerged to address the last-mile latency problem. High-power facilities have emerged to address the compute-density problem. Neither solves the global routing problem. NJFX addresses something different and more foundational: the point at which global traffic first touches land, and the terms on which it does so.
As AI workloads scale and become increasingly distributed, the strategic importance of where data enters and exits terrestrial networks only grows. The question isn't simply where to colocate servers. It's where to anchor your global routing strategy.
Hyperscale facilities offer compute density. Purpose-built colocation offers reliability and redundancy. Edge facilities offer proximity to users. NJFX offers something none of those provide on their own: direct, neutral access to the global subsea infrastructure that all of them ultimately depend on.
The most sophisticated global networks don't just choose where to put their servers. They choose where to control their connectivity — and they anchor that control at the point where subsea infrastructure meets the terrestrial network. That point is NJFX.
Not all data centers are equal. They weren't designed to be. A converted warehouse in a metro market serves a different purpose than a Tier III colocation campus, which serves a different purpose than a hyperscale AI facility, which serves a different purpose than a local edge node. Understanding these differences is the first step toward building an infrastructure strategy that actually matches your traffic patterns and business requirements.
What NJFX represents is the layer below all of those — the interconnection foundation that makes global networks function. Unlike the passive cable landing stations of the past, NJFX is an active, open, and competitive environment where participants don't just arrive at the subsea layer. They own their place in it.
That's not a subtle distinction. For the networks that understand, it's a strategic advantage.
See Why the World's Networks Connect at NJFX
Learn how direct access to submarine cable infrastructure changes the economics and performance of global connectivity.
Connect With Our Team