Newsroom



Verizon, NJFX, Tata, Althea, and TeleCall IoT Market Panel

Brought together by GCCM in Miami, the panel focused on the importance of IoT connectivity and the challenges and opportunities it presents.

Source: Carrier Community

October 10, 2024

Topic Points:

• How IoT players are addressing the demand for global IoT connectivity

• The key business models and services that will drive revenues

• Building an ecosystem that serves true digital transformation, going beyond the optimization needs of IoT

• Creating new revenue opportunities for MNOs

• The main challenges in defending infrastructure and services: how new technologies (IoT, big data, 5G, and more) will change the security arena

• The role 5G can play in the private networks and IoT solutions being introduced to the market by the players

• The strategic partnerships needed to build for future demands and meet current requirements



What is a cable landing station?

Cables may do the running, but cable landing stations do the heavy lifting

By Niva Yadav 

Full Article: Data Center Dynamics

September 3, 2024

Communication between countries was once reliant on ships, pen and paper, and subject to unfortunate weather conditions. Then came the subsea cable.

After some delays, the first trans-Atlantic subsea cable became operational in 1859. And though its tenure was brief, its impact has survived to today.

The more than 450 subsea cables in service today span more than 1.3 million kilometers and enable the transmission of data within seconds. However, these mighty cables would be rendered obsolete if not for the estimated 1,400 cable landing stations (CLS) connecting them to dry land.

A CLS is a facility, usually located along a coastline or shoreline, responsible for taking data from a subsea cable and connecting it to terrestrial infrastructure.

It may be the wires that run across the sea, but arguably, it is the cable landing stations doing all the heavy lifting.

What is a cable landing station facility?

The CLS carries traffic from subsea cables to infrastructure, such as satellite links, fiber optic cables, and microwave towers. From there, the data makes its way to customers and/or data centers where it is stored, processed, and distributed.


Quinault Indian Nation CLS in Washington, US – Quinault Indian Nation | Toptana Technologies

Gil Santaliz, CEO of CLS operator NJFX, says that the stations are “where cables physically come out of the ocean,” as well as receiving power to transport data across the sea, with the data handed off to terrestrial networks within that country.

The amount of data being transmitted across those fibers can be as much as 40TB per second, and Santaliz says financially and logistically data cannot be transmitted from terrestrial cables to subsea ones without meeting at the CLS first.

The CLS provides the data with the shortest and most efficient route in and out of the country. Brian Lavallée, senior director at networking business Ciena, says: “Without submarine cables, and the data centers they interconnect, there would be no Internet today, as continents would be isolated islands of terrestrial connectivity.”

The facilities are also responsible for monitoring the cable status, ensuring there are no outages and that the cable is operating at peak performance.

Architecturally speaking, Thomas Fabre, senior director of network investment at Exa Infrastructure, says the landing stations we see today are not too dissimilar from small data center buildings located on the shoreline.

That is not to say a CLS is always a conventional permanent building. Modular data center provider DXN has been known to deploy modular cable landing stations in shipping container-type pods in locations across Australia and the Micronesian islands.


Barcelona cable landing station – Barcelona Cable Landing Station | AFR-IX Telecom

NJFX, in comparison, has an entire cable landing station campus, explains Santaliz. The company’s current CLS campus has seven doors and layers of security before reaching the actual facility. The building is “hidden in plain sight,” manned 24/7, and has more than 50 cameras across the site.

What happens inside a CLS?

When the cable leaves the ocean, it enters the beach manhole, which Exa’s Fabre describes as a “very large chamber, no more than 2x2x2 meters, where the cable is secured onto land.”

The cable then “continues to what we call the cable landing station, through a duct, which in the industry we call a fronthaul system,” he says.

Whilst Fabre says the fronthaul system can be located up to 20km away from the beach manhole, operators typically want the system as close to the manhole as possible to reduce potential for breakages or other errors.

The fronthaul system separates the cable to retrieve the copper core and the fiber. The copper core is channeled to the dry plant inside the CLS. Here, it is fed through the power-feeding equipment (PFE).

The PFE does what it says on the tin. Santaliz says the copper core will “hit that PFE and that copper core is going to be energized. That’s how you make sure the signal gets regenerated every 50km or so to make sure it travels across the ocean.”

Repeaters, explains Fabre, are laid “every 60 to 80 km along the subsea cable” and “amplify the optical signals so that it can reach from one shore to another.”
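As a rough back-of-the-envelope illustration of what those spacings imply, consider a hypothetical trans-Atlantic route; the 6,500 km length below is an assumed round number for illustration, not a figure quoted in the article:

```python
# Rough repeater count for a hypothetical 6,500 km trans-Atlantic route.
# The route length is an assumption; real systems vary.
route_km = 6_500
for spacing_km in (50, 60, 80):  # spacings quoted by Santaliz and Fabre
    print(f"~{route_km // spacing_km} repeaters at {spacing_km} km spacing")
```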

The CLS also has repeaters, found inside the wet plant, to allow the fiber pairs to be rerouted to other destinations and other landing points.

Where there is a fronthaul system, there is also a backhaul system. The backhaul system offloads the data to terrestrial networks connected to the CLS, ending up eventually at a data center where the data is stored, processed, and redistributed.

What is an open cable landing station?

To understand what an open CLS is, we need to go back in time, says Fabre.

He says: “Historically, the cables were greeted by a consortium of telecommunications companies, and back in those days, there were only incumbent telcos, like BT, Orange, Telefonica, and Telecom Italia.

“They would team up together and share the cost of the investment where the cable landed in a particular country. For instance, in the UK, it would have been BT who operated the CLS close to the shore and provided a backhaul to send the traffic from the CLS to the largest city where there is a data center.”

This was what was known as a closed system. In contrast, an open cable landing station is when a cable lands at the CLS and has multiple routes out of the station via various network operators and different terrestrial fiber options. Fabre says an open cable landing station can be thought of “like a carrier-neutral data center.”


Inside NJFX’s cable landing station – New Jersey Fiber Exchange

Lavallée adds that open cable systems “allow cable operators to choose the wet plant – the network equipment on the seabed – from one vendor and the submarine line terminal equipment from another.” This creates a more competitive environment to accelerate technological innovation, he explains.

Santaliz added that these buildings were typically only made for one cable system. Modern CLS can host multiple cables in one facility.

NJFX’s business model means that the owner of the CLS does not necessarily take an ownership stake in the cable; the CLS owner remains neutral.

“In our case, the cable can leave 30, 40, or even 50 ways out of the cable landing stations. And our numbers are about 35 network operators and 20 plus terrestrial fiber cables coming in the other side of that cable station,” he says.

Santaliz argues this provides the cable system with diversity and resilience to traverse entire countries.

Where are cable landing stations located?

Cable landing stations cannot be just plonked anywhere. They must be located on stretches of coastline where cables can land safely without getting damaged.

Santaliz says cables can get damaged when rocks fall from underwater mountain ranges. “You have seismic activity, and then cables end up getting cut and rubbing on the edge of these mountain ranges,” he explains. “They get frayed, and they get compromised.”

Operators, therefore, have to be selective about where to construct their landing stations. Typically, they are built along coastlines with gently sloping, sandy seabeds, so the cables can be buried.


Marine traffic is also a risk to cables, so criteria are in place to ensure the cables will not be disturbed by ships or trawler operations.

Santaliz explains there is also a risk in choosing a new location to build a cable landing station.

“You might not be lucky,” he says. “And you might have cables that are being cut every year.”

Somewhere like New Jersey, in comparison, is a stable and proven location to land subsea cables, explains Santaliz.

“New Jersey has been hosting cables since the 1800s. In the US, it’s a proven territory where cables can cross the Atlantic Ocean,” he says.

Santaliz adds that Myrtle Beach in South Carolina looks like a good spot to build a cable landing station, but “the jury is still out” because the city’s existing CLS is only a year old.

US data center firm DC Blox developed the first CLS on Myrtle Beach, and has plans for a second in the pipeline. Google and Meta have both reserved slots at the facility.

How close does a CLS have to be to a data center?

Cable landing stations have been typically located in close proximity to data centers, allowing for the easy transmission of data.

Lavallée says that landing stations have been historically close to network hubs to facilitate efficient connectivity to population centers, but now the focus is on being close to hyperscale data centers.

Fabre notes that in the data center hub of Marseille, there is “a technical solution” to land your cable directly at a data center. However, the disadvantage to this solution is creating a “monopoly” on that cable, whereby it becomes a closed system again, only having access to one operator.

NJFX says its campus avoids such disadvantages by operating what it calls “a cable landing station colocation campus.” It incorporates a CLS and a data center in one site, and the backhaul system is therefore not required to trolley the data to the nearest data center. Instead, the processing and distribution all take place directly at the CLS.

But what happens if there is no data center near the CLS? In this instance, an open CLS is all the more important, Fabre says.

Exa Infrastructure is currently expanding its CLS in Genoa, a city with very little data center presence.

 


Genoa – Getty Images

“Most of the traffic in Genoa comes from Milan, which is some 150km away,” Fabre says. Adding more routes out of the Genoa CLS ensures Internet resiliency for the city. If one cable between Genoa and Milan were to go down, there would be other routes to secure connectivity.

Another reason for expanding the Genoa CLS lies in the fact that cables now require bigger backhaul systems to offload larger amounts of data. Fabre adds that previous cables only had eight fiber pairs. Nowadays, we are seeing 16 to 24 fiber pairs per cable.

What next for the CLS?

The first dedicated cable landing station was built in 1850 on Valentia Island, Ireland. Today, Egypt, Marseille, Tokyo, and Singapore are some of the biggest hubs hosting the most cables.

Marseille was traditionally thought of as the gateway to Europe and was a preferred location for cables because it is also a hub for data centers. In Asia, Singapore has 29 cables planned and operational landing on the small island. The country announced plans to double its submarine capacity in June last year. Egypt is also a key location for submarine cables. Around 17 percent of the world’s Internet traffic passes through the country.


The SeaMeWe-6 cable landing in Marseille – Rohitash Bhaskar on LinkedIn

For Santaliz, AI is going to change the CLS game. The NJFX campus was designed as a data center that hosts cables, which means it already has the capability to host AI applications at the CLS, and other operators may soon catch on.

He says: “The next logical step in the development of AI will be Edge AI. And it just so happens that NJFX was designed as a data center, so we’ll have 5MW of IT capacity to support the AI [inference] workloads of one or two customers.”

Fabre says we are seeing more and more fiber pairs per submarine cable, adding that such demand requires increasingly large backhaul systems. He also says it is becoming rare to see the launch of cables without a hyperscaler on board, particularly in Europe. He said part of this is because operators still want to be as close to a data center as possible, and having a hyperscaler on board the project means you are guaranteed to have somewhere to backhaul the traffic to.

What is clear is that for as long as subsea cables traverse the seas, cable landing stations are not going anywhere.



How US data centers are handling GenAI’s massive demand for power and water

Published by: Katie Tarasov

Full Article: CNBC
August 26, 2024

Thanks to the artificial intelligence boom, new data centers are springing up as quickly as companies can build them. This has translated into huge demand for power to run and cool the servers inside. Now concerns are mounting about whether the U.S. can generate enough electricity for the widespread adoption of AI, and whether our aging grid will be able to handle the load.

“If we don’t start thinking about this power problem differently now, we’re never going to see this dream we have,” said Dipti Vachani, head of automotive at Arm. The chip company’s low-power processors have become increasingly popular with hyperscalers like Google, Microsoft, Oracle and Amazon— precisely because they can reduce power use by up to 15% in data centers.

Nvidia’s latest AI chip, Grace Blackwell, incorporates Arm-based CPUs it says can run generative AI models on 25 times less power than the previous generation.

“Saving every last bit of power is going to be a fundamentally different design than when you’re trying to maximize the performance,” Vachani said.

This strategy of reducing power use by improving compute efficiency, often referred to as “more work per watt,” is one answer to the AI energy crisis. But it’s not nearly enough.

One ChatGPT query uses nearly 10 times as much energy as a typical Google search, according to a report by Goldman Sachs. Generating an AI image can use as much power as charging your smartphone.

This problem isn’t new. Estimates in 2019 found that training one large language model produced as much CO2 as the entire lifetime of five gas-powered cars.

The hyperscalers building data centers to accommodate this massive power draw are also seeing emissions soar. Google’s latest environmental report showed greenhouse gas emissions rose nearly 50% from 2019 to 2023 in part because of data center energy consumption, although it also said its data centers are 1.8 times as energy efficient as a typical data center. Microsoft’s emissions rose nearly 30% from 2020 to 2024, also due in part to data centers.

And in Kansas City, where Meta is building an AI-focused data center, power needs are so high that plans to close a coal-fired power plant are being put on hold.

Chasing Power

There are more than 8,000 data centers globally, with the highest concentration in the U.S. And, thanks to AI, there will be far more by the end of the decade. Boston Consulting Group estimates demand for data centers will rise 15%-20% every year through 2030, when they’re expected to comprise 16% of total U.S. power consumption. That’s up from just 2.5% before OpenAI’s ChatGPT was released in 2022, and it’s equivalent to the power used by about two-thirds of the total homes in the U.S.

CNBC visited a data center in Silicon Valley to find out how the industry can handle this rapid growth, and where it will find enough power to make it possible.

“We suspect that the amount of demand that we’ll see from AI-specific applications will be as much or more than we’ve seen historically from cloud computing,” said Jeff Tench, Vantage Data Center’s executive vice president of North America and APAC.

Many big tech companies contract with firms like Vantage to house their servers. Tench said Vantage’s data centers typically have the capacity to use upward of 64 megawatts of power, or as much power as tens of thousands of homes.

“Many of those are being taken up by single customers, where they’ll have the entirety of the space leased to them. And as we think about AI applications, those numbers can grow quite significantly beyond that into hundreds of megawatts,” Tench said.

Santa Clara, California, where CNBC visited Vantage, has long been one of the nation’s hot spots for clusters of data centers near data-hungry clients. Nvidia’s headquarters was visible from the roof. Tench said there’s a “slowdown” in Northern California due to a “lack of availability of power from the utilities here in this area.”

Vantage is building new campuses in Ohio, Texas and Georgia.

“The industry itself is looking for places where there is either proximate access to renewables, either wind or solar, and other infrastructure that can be leveraged, whether it be part of an incentive program to convert what would have been a coal-fired plant into natural gas, or increasingly looking at ways in which to offtake power from nuclear facilities,” Tench said.

Vantage Data Centers is expanding a campus outside Phoenix, Arizona, to offer 176 megawatts of capacity – Vantage Data Centers

Some AI companies and data centers are experimenting with ways to generate electricity on site.

OpenAI CEO Sam Altman has been vocal about this need. He recently invested in a solar startup that makes shipping-container-sized modules that have panels and power storage. Altman has also invested in nuclear fission startup Oklo, which aims to make mini nuclear reactors housed in A-frame structures, and in the nuclear fusion startup Helion.

Microsoft signed a deal with Helion last year to start buying its fusion electricity in 2028. Google partnered with a geothermal startup that says its next plant will harness enough power from underground to run a large data center. Vantage recently built a 100-megawatt natural gas plant that powers one of its data centers in Virginia, keeping it entirely off the grid.

Hardening The Grid

The aging grid is often ill-equipped to handle the load even where enough power can be generated. The bottleneck occurs in getting power from the generation site to where it’s consumed. One solution is to add hundreds or thousands of miles of transmission lines. 

“That’s very costly and very time-consuming, and sometimes the cost is just passed down to residents in a utility bill increase,” said Shaolei Ren, associate professor of electrical and computer engineering at the University of California, Riverside.

One $5.2 billion effort to expand lines to an area of Virginia known as “data center alley” was met with opposition from local ratepayers who don’t want to see their bills increase to fund the project.

Another solution is to use predictive software to reduce failures at one of the grid’s weakest points: the transformer.

“All electricity generated must go through a transformer,” said VIE Technologies CEO Rahul Chaturvedi, adding that there are 60 million-80 million of them in the U.S.

The average transformer is also 38 years old, so they’re a common cause for power outages. Replacing them is expensive and slow. VIE makes a small sensor that attaches to transformers to predict failures and determine which ones can handle more load so it can be shifted away from those at risk of failure. 

Chaturvedi said business has tripled since ChatGPT was released in 2022, and is poised to double or triple again next year.

VIE Technologies CEO Rahul Chaturvedi holds up a sensor on June 25, 2024, in San Diego. VIE installs these on aging transformers to help predict and reduce grid failures. – VIE Technologies

Cooling Servers Down

Generative AI data centers will also require 4.2 billion to 6.6 billion cubic meters of water withdrawal by 2027 to stay cool, according to Ren’s research. That’s more than the total annual water withdrawal of half of the U.K.

“Everybody is worried about AI being energy intensive. We can solve that when we get off our ass and stop being such idiots about nuclear, right? That’s solvable. Water is the fundamental limiting factor to what is coming in terms of AI,” said Tom Ferguson, managing partner at Burnt Island Ventures.

Ren’s research team found that every 10-50 ChatGPT prompts can burn through about what you’d find in a standard 16-ounce water bottle.

Much of that water is used for evaporative cooling, but Vantage’s Santa Clara data center has large air conditioning units that cool the building without any water withdrawal.

Another solution is using liquid for direct-to-chip cooling.

“For a lot of data centers, that requires an enormous amount of retrofit. In our case at Vantage, about six years ago, we deployed a design that would allow for us to tap into that cold water loop here on the data hall floor,” Vantage’s Tench said.

Companies like Apple, Samsung and Qualcomm have touted the benefits of on-device AI, keeping power-hungry queries off the cloud, and out of power-strapped data centers.

“We’ll have as much AI as those data centers will support. And it may be less than what people aspire to. But ultimately, there’s a lot of people working on finding ways to un-throttle some of those supply constraints,” Tench said.

______

NJFX Future Ready Infrastructure

NJFX has strategically developed an infrastructure capable of supporting AI customers and high-density applications, positioning itself as a leader in the industry. With a 5MW data center designed specifically for high-density AI compute requirements, NJFX offers advanced liquid cooling solutions that can be implemented upon request, optimizing both power efficiency and environmental sustainability. This robust infrastructure not only meets the demands of AI workloads but also ensures seamless connectivity, creating an unrivaled hub for edge AI inference.

In line with this commitment to AI innovation, NJFX is hosting a significant AI event in October, in collaboration with Bulk Infrastructure, EXA Infrastructure, and Supermicro. This event will showcase solutions tailored for AI applications in the enterprise and financial markets, further solidifying NJFX’s role as a critical player in the AI ecosystem. Through these efforts, NJFX continues to demonstrate its dedication to providing the infrastructure and expertise needed to drive the future of AI.



The Difference Between Deep Learning Training and Inference

Published by: Michael Copeland

Full Article: NVIDIA
August 22, 2024

The trained neural network is put to work out in the digital world using what it has learned — to recognize images, spoken words, a blood disease, predict the next word or phrase in a sentence, or suggest the shoes someone is likely to buy next, you name it — in the streamlined form of an application. This speedier and more efficient version of a neural network infers things about new data it’s presented with based on its training. In the AI lexicon this is known as “inference.”

So let’s break down the progression from AI training to AI inference, and how they both function.

Training a Deep Neural Network

While the goal is the same – knowledge — the educational process, or training, of a neural network is (thankfully) not quite like our own. Neural networks are loosely modeled on the biology of our brains — all those interconnections between the neurons. Unlike our brains, where any neuron can connect to any other neuron within a certain physical distance, artificial neural networks have separate layers, connections, and directions of data propagation.

When training a neural network, training data is put into the first layer of the network, and individual neurons assign a weighting to the input — how correct or incorrect it is — based on the task being performed.

To learn more, check out NVIDIA’s AI inference solutions for the data center, self-driving cars, video analytics and more.

In an image recognition network, the first layer might look for edges. The next might look for how these edges form shapes — rectangles or circles. The third might look for particular features — such as shiny eyes and button noses. Each layer passes the image to the next, until the final layer, where the final output, determined by the total of all those weightings, is produced.

But here’s where the training differs from our own. Let’s say the task was to identify images of cats. The neural network gets all these training images, does its weightings and comes to a conclusion of cat or not. What it gets in response from the training algorithm is only “right” or “wrong.”

Deep Learning Training Is Compute Intensive

And if the algorithm informs the neural network that it was wrong, it doesn’t get informed what the right answer is. The error is propagated back through the network’s layers and it has to guess at something else. In each attempt it must consider other attributes — in our example attributes of “catness” — and weigh the attributes examined at each layer higher or lower. Then it guesses again. And again. And again. Until it has the correct weightings and gets the correct answer practically every time. It’s a cat.
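A minimal training-loop sketch of that guess-check-adjust cycle is below, written in PyTorch (the article does not specify a framework). The tiny two-layer classifier and the random “cat or not” labels are placeholders for illustration only, not the network or data described here:

```python
import torch
import torch.nn as nn

# Toy stand-in for an image classifier: 64 input features, two classes (cat / not cat).
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Fake training set: 256 random "images" with random labels, purely illustrative.
inputs = torch.randn(256, 64)
labels = torch.randint(0, 2, (256,))

for epoch in range(20):
    logits = model(inputs)          # forward pass: the network makes its guess
    loss = loss_fn(logits, labels)  # how wrong was the guess?
    optimizer.zero_grad()
    loss.backward()                 # propagate the error back through the layers
    optimizer.step()                # adjust the weights, then guess again
```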

Training can teach deep learning networks to correctly label images of cats in a limited set, before the network is put to work detecting cats in the broader world.

Now you have a data structure and all the weights in there have been balanced based on what it has learned as you sent the training data through. It’s a finely tuned thing of beauty. The problem is, it’s also a monster when it comes to consuming compute. For example, GPT-3 with 175 billion parameters requires roughly 300 zettaflops, which is 300,000 billion billion math operations across the entire training cycle. Try getting that to run on a smartphone.

That’s where inference comes in.

Congratulations! Your Neural Network Is Trained and Ready for Inference

What you had to put in place to get your properly weighted neural network to learn — in our education analogy all those pencils, books, teacher’s dirty looks — is now way more than you need to get any specific task accomplished.

If anyone is going to make use of all that training in the real world, and that’s the whole point, what you need is a speedy application that can retain the learning and apply it quickly to data it’s never seen. That’s inference: taking smaller batches of real-world data and quickly coming back with the same correct answer (really a prediction that something is correct).

There are two main approaches to taking that hulking neural network and modifying it for speed and improved latency in applications that run across other networks.

How AI Inferencing Works

How is inferencing used? Just turn on your smartphone. Inferencing is used to put deep learning to work for everything from speech recognition to categorizing your snapshots.

The first approach looks at parts of the neural network that don’t get activated after it’s trained. These sections just aren’t needed and can be “pruned” away. The second approach looks for ways to fuse multiple layers of the neural network into a single computational step.

It’s akin to the compression that happens to a digital image. Designers might work on huge, beautiful images that are millions of pixels wide and tall, but when they go to put them online, they’ll turn them into a JPEG. It’ll be almost exactly the same, indistinguishable to the human eye, but at a smaller resolution. Similarly with inference, you’ll get almost the same accuracy of the prediction, but simplified, compressed and optimized for runtime performance.
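A closely related technique to the first approach is magnitude-based weight pruning, which zeroes out the smallest weights rather than whole inactive sections. A minimal sketch, assuming a generic PyTorch linear layer standing in for one layer of a trained network (the 30 percent threshold is arbitrary and chosen only for illustration):

```python
import torch
import torch.nn as nn

layer = nn.Linear(512, 512)  # stand-in for one layer of a trained network

with torch.no_grad():
    magnitudes = layer.weight.abs()
    # "Prune" the smallest 30% of weights by zeroing them out.
    threshold = torch.quantile(magnitudes.flatten(), 0.30)
    mask = (magnitudes >= threshold).float()
    layer.weight.mul_(mask)

print(f"kept {int(mask.sum())} of {mask.numel()} weights")
```

In practice, inference toolchains apply pruning, layer fusion, and other optimizations automatically; the snippet only illustrates the underlying idea.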

What that means is we all use inference all the time. Your smartphone’s voice-activated assistant uses inference, as do image search and spam filtering applications. Facebook’s image recognition and Amazon’s and Netflix’s recommendation engines all rely on inference.

GPUs, thanks to their parallel computing capabilities — or ability to do many things at once — are good at both training and inference.

Systems trained with GPUs allow computers to identify patterns and objects as well as — or in some cases, better than — humans (see “Accelerating AI with GPUs: A New Computing Model”).

After training is completed, the networks are deployed into the field for “inference” — classifying data to “infer” a result. Here too, GPUs — and their parallel computing capabilities — offer benefits, where they run billions of computations based on the trained network to identify known patterns or objects.

The parallel computing of GPUs also provides multi-factor speedups in traditional machine learning, using algorithms like gradient-boosted decision trees, for both training and inference.

You can see how these models and applications will just get smarter, faster and more accurate. Inference will bring new applications to every aspect of our lives. It seems the same admonition applies to AI as it does to our youth — don’t be a fool, stay in school. Inference awaits.



What Is Edge AI and How Does It Work?

Recent strides in the efficacy of AI, the adoption of IoT devices and the power of edge computing have come together to unlock the power of edge AI.

Published by: Tiffany Yeung

Full Article: NVIDIA
August 8, 2024

Countless analysts and businesses are talking about and implementing edge computing, which traces its origins to the 1990s, when content delivery networks were created to serve web and video content from edge servers deployed close to users.

Today, almost every business has job functions that can benefit from the adoption of edge AI. In fact, edge applications are driving the next wave of AI computing in ways that improve our lives at home, at work, in school and in transit.

Learn more about what edge AI is, its benefits and how it works, examples of edge AI use cases, and the relationship between edge computing and cloud computing.

What Is Edge AI? 

Edge AI is the deployment of AI applications in devices throughout the physical world. It’s called “edge AI” because the AI computation is done near the user at the edge of the network, close to where the data is located, rather than centrally in a cloud computing facility or private data center.

Since the internet has global reach, the edge of the network can connote any location. It can be a retail store, factory, hospital or devices all around us, like traffic lights, autonomous machines and phones.

Edge AI: Why Now? 

Organizations from every industry are looking to increase automation to improve processes, efficiency and safety.

To help them, computer programs need to recognize patterns and execute tasks repeatedly and safely. But the world is unstructured and the range of tasks that humans perform covers infinite circumstances that are impossible to fully describe in programs and rules.

Advances in edge AI have opened opportunities for machines and devices, wherever they may be, to operate with the “intelligence” of human cognition. AI-enabled smart applications learn to perform similar tasks under different circumstances, much like real life.

The efficacy of deploying AI models at the edge arises from three recent innovations.

  1. Maturation of neural networks: Neural networks and related AI infrastructure have finally developed to the point of allowing for generalized machine learning. Organizations are learning how to successfully train AI models and deploy them in production at the edge.
  2. Advances in compute infrastructure: Powerful distributed computational power is required to run AI at the edge. Recent advances in highly parallel GPUs have been adapted to execute neural networks.
  3. Adoption of IoT devices: The widespread adoption of the Internet of Things has fueled the explosion of big data. With the sudden ability to collect data in every aspect of a business — from industrial sensors, smart cameras, robots and more — we now have the data and devices necessary to deploy AI models at the edge. Moreover, 5G is providing IoT a boost with faster, more stable and secure connectivity.

Why Deploy AI at the Edge? What Are the Benefits of Edge AI? 

Since AI algorithms are capable of understanding language, sights, sounds, smells, temperature, faces and other analog forms of unstructured information, they’re particularly useful in places occupied by end users with real-world problems. These AI applications would be impractical or even impossible to deploy in a centralized cloud or enterprise data center due to issues related to latency, bandwidth and privacy.

The benefits of edge AI include:

  • Intelligence: AI applications are more powerful and flexible than conventional applications that can respond only to inputs that the programmer had anticipated. In contrast, an AI neural network is not trained how to answer a specific question, but rather how to answer a particular type of question, even if the question itself is new. Without AI, applications couldn’t possibly process infinitely diverse inputs like texts, spoken words or video.
  • Real-time insights: Since edge technology analyzes data locally rather than in a faraway cloud delayed by long-distance communications, it responds to users’ needs in real time.
  • Reduced cost: By bringing processing power closer to the edge, applications need less internet bandwidth, greatly reducing networking costs.
  • Increased privacy: AI can analyze real-world information without ever exposing it to a human being, greatly increasing privacy for anyone whose appearance, voice, medical image or any other personal information needs to be analyzed. Edge AI further enhances privacy by containing that data locally, uploading only the analysis and insights to the cloud. Even if some of the data is uploaded for training purposes, it can be anonymized to protect user identities. By preserving privacy, edge AI simplifies the challenges associated with data regulatory compliance.
  • High availability: Decentralization and offline capabilities make edge AI more robust since internet access is not required for processing data. This results in higher availability and reliability for mission-critical, production-grade AI applications.
  • Persistent improvement: AI models grow increasingly accurate as they train on more data. When an edge AI application confronts data that it cannot accurately or confidently process, it typically uploads it so that the AI can retrain and learn from it. So the longer a model is in production at the edge, the more accurate the model will be.

How Does Edge AI Technology Work?

Lifecycle of an edge AI application.

For machines to see, perform object detection, drive cars, understand speech, speak, walk or otherwise emulate human skills, they need to functionally replicate human intelligence.

AI employs a data structure called a deep neural network to replicate human cognition. These DNNs are trained to answer specific types of questions by being shown many examples of that type of question along with correct answers.

This training process, known as “deep learning,” often runs in a data center or the cloud due to the vast amount of data required to train an accurate model, and the need for data scientists to collaborate on configuring the model. After training, the model graduates to become an “inference engine” that can answer real-world questions.

In edge AI deployments, the inference engine runs on some kind of computer or device in far-flung locations such as factories, hospitals, cars, satellites and homes. When the AI stumbles on a problem, the troublesome data is commonly uploaded to the cloud for further training of the original AI model, which at some point replaces the inference engine at the edge. This feedback loop plays a significant role in boosting model performance; once edge AI models are deployed, they only get smarter and smarter.
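A schematic sketch of that feedback loop is below. The function names, the confidence threshold, and the random placeholder results are all assumptions made for illustration; they do not come from the article or any specific product:

```python
import random

CONFIDENCE_THRESHOLD = 0.80  # below this, the sample is considered "troublesome"

def run_inference(sample):
    """Placeholder for the on-device inference engine: returns (label, confidence)."""
    return random.choice(["defect", "ok"]), random.random()

def upload_for_retraining(sample):
    """Placeholder for sending a hard example back to the cloud training pipeline."""
    print("queued for cloud retraining:", sample)

def edge_loop(sensor_samples):
    for sample in sensor_samples:
        label, confidence = run_inference(sample)
        if confidence >= CONFIDENCE_THRESHOLD:
            print(f"acted locally on {sample}: {label} ({confidence:.2f})")
        else:
            # Troublesome data goes back to the cloud so the model can be retrained.
            upload_for_retraining(sample)

edge_loop([f"frame_{i}" for i in range(5)])
```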

What Are Examples of Edge AI Use Cases? 

AI is the most powerful technology force of our time. We’re now at a time where AI is revolutionizing the world’s largest industries.

Across manufacturing, healthcare, financial services, transportation, energy and more, edge AI is driving new business outcomes in every sector, including:

  • Intelligent forecasting in energy: For critical industries such as energy, in which discontinuous supply can threaten the health and welfare of the general population, intelligent forecasting is key. Edge AI models help to combine historical data, weather patterns, grid health and other information to create complex simulations that inform more efficient generation, distribution and management of energy resources to customers.
  • Predictive maintenance in manufacturing: Sensor data can be used to detect anomalies early and predict when a machine will fail. Sensors on equipment scan for flaws and alert management if a machine needs a repair so the issue can be addressed early, avoiding costly downtime.
  • AI-powered instruments in healthcare: Modern medical instruments at the edge are becoming AI-enabled with devices that use ultra-low-latency streaming of surgical video to allow for minimally invasive surgeries and insights on demand.
  • Smart virtual assistants in retail: Retailers are looking to improve the digital customer experience by introducing voice ordering to replace text-based searches with voice commands. With voice ordering, shoppers can easily search for items, ask for product information and place online orders using smart speakers or other intelligent mobile devices.

What Role Does Cloud Computing Play in Edge Computing? 

AI applications can run in a data center like those in public clouds, or out in the field at the network’s edge, near the user. Cloud computing and edge computing each offer benefits that can be combined when deploying edge AI.

The cloud offers benefits related to infrastructure cost, scalability, high utilization, resilience from server failure, and collaboration. Edge computing offers faster response times, lower bandwidth costs and resilience from network failure.

There are several ways in which cloud computing can support an edge AI deployment:

  • The cloud can run the model during its training period.
  • The cloud continues to run the model as it is retrained with data that comes from the edge.
  • The cloud can run AI inference engines that supplement the models in the field when high compute power is more important than response time. For example, a voice assistant might respond to its name, but send complex requests back to the cloud for parsing.
  • The cloud serves up the latest versions of the AI model and application.
  • The same edge AI often runs across a fleet of devices in the field with software in the cloud.

Learn more about the best practices for hybrid edge architectures.

The Future of Edge AI 

Thanks to the commercial maturation of neural networks, proliferation of IoT devices, advances in parallel computation and 5G, there is now robust infrastructure for generalized machine learning. This is allowing enterprises to capitalize on the colossal opportunity to bring AI into their places of business and act upon real-time insights, all while decreasing costs and increasing privacy.

We are only in the early innings of edge AI, and still the possible applications seem endless.


AI Power Prices Shaping NY NJ Data Centers

AI, Power Prices Push New York Data Centers Down A Unique Path

The artificial intelligence-driven data center boom will hit New York eventually, but it will look very different from other major data center markets.

Published by: Dan Rabb, Data Centers

Full Article: BISNOW
July 28, 2024

New York has yet to see the AI-driven explosion of data center development that has emerged in other top industry hubs, due in large part to the New York market’s high energy prices. AI will be a significant growth catalyst for data centers in New York, industry leaders said at Bisnow’s DICE: Northeast, July 18 at the Astor Ballroom in Manhattan, but that growth is likely to manifest differently than in any other primary market. 

Rather than massive cloud and AI campuses that account for the bulk of the industry’s recent growth, they say New York will see increased demand for colocation facilities and data centers specializing in access to fiber networks that connect the market to AI infrastructure largely being built elsewhere.  

“This market is doing really well for a lot of reasons that have nothing to do with power,” said Bob DeSantis, CEO of colocation provider 365 Data Centers. “New York just has so much volume. It’s expensive, but there’s already such a desire to have proximity that you add a little AI to that demand, and it overcomes any issues on the power side.”

The New York data center market, which includes New Jersey and Connecticut, is the seventh-largest data center hub in the U.S. As of the beginning of the year, the region had more than 700 megawatts of total data center inventory, the majority of it in New Jersey, and around 100 megawatts under construction, according to JLL. New York data centers have seen robust demand growth over the past two years, led by the financial services sector, with a growing share of leasing from major cloud providers. 

While the fundamentals of the area’s AI landscape are strong, New York hasn’t had the kind of unprecedented inventory growth and development pipeline seen in other primary markets. 

In Northern Virginia, Atlanta and Hillsboro, Oregon, data center inventory grew by 107%, 118% and 334%, respectively, between 2020 and the end of 2023, according to CBRE. It’s a record pace of growth that was first driven by surging demand for cloud services but has accelerated further as tech firms — led by Amazon, Microsoft, Google and Meta — engage in an AI arms race that is expected to surpass $1T in total infrastructure spending. Yet in the same time period, data center inventory in the New York Tri-State area grew by just 29%, the slowest pace among primary markets. 

The reason the AI data center bump has seemingly skipped New York comes down to the price of power, executives said at DICE: Northeast. Energy costs have always been a top siting consideration in a sector where facility size is measured in megawatts rather than square feet, but AI has dramatically increased the amount of power used by the largest data centers and has subsequently made power pricing paramount for hyperscale developers.

Companies like Amazon and Microsoft are building their largest AI data centers anywhere they can find the cheapest power. Markets attracting major hyperscale investment like Atlanta, Dallas and Chicago had average power rates last year of less than 7 cents per kilowatt hour, according to JLL. New York, by contrast, was more than twice as expensive, with an average rate of 16 cents. New Jersey, which has the cheapest power in the Tri-State market, is still relatively expensive at 11 cents per kilowatt hour.  
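To put those per-kilowatt-hour figures in perspective, here is a rough annual energy bill for a hypothetical 50 MW facility running continuously at a flat rate; the facility size and the flat-rate assumption are illustrative only and not drawn from JLL’s data:

```python
# Rough annual power cost for a hypothetical 50 MW data center at the quoted rates.
facility_mw = 50
kwh_per_year = facility_mw * 1_000 * 24 * 365  # MW -> kW, times hours per year

rates = {"Atlanta / Dallas / Chicago": 0.07, "New Jersey": 0.11, "New York": 0.16}
for market, rate in rates.items():
    print(f"{market}: ~${kwh_per_year * rate / 1e6:.0f}M per year")
```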

 “The large deployments of AI are largely in areas where power costs are down and power is more readily available from utilities,” said Phillip Koblence, co-founder and chief operating officer of colocation firm NYI. “But the New York market, this is where a lot of the data that is being manipulated by AI is created because this is where the eyeballs are and this is where the internet is most evolved.” 

Indeed, New York and the broader Northeast region is the country’s densest population center and therefore has the largest volume of consumers watching Netflix, interacting with ChatGPT and generating real-time data through phones, Apple Watches and other smart devices. Perhaps more importantly, the disproportionate number of financial institutions, major corporations and other large organizations that are based in New York represent an enormous amount of data that is only going to increase — along with its performance requirements — with AI adoption.  

The view of Midtown from Brookfield’s One Manhattan West

Much of this growing flood of data will be processed in cheaper power markets, but first it has to get there. This means more demand for carrier hotels and connectivity-focused colocation facilities, many of which have proprietary private networks that minimize the delay, known as latency, in moving data from New York to other major data center hubs.

Connectivity has always been a big part of New York’s digital infrastructure ecosystem, experts say, but it is primed for significant growth along with AI adoption. 

“The AI boom is going to inherently benefit our market, but the driver of market growth here is going to be entirely based on connectivity to enable AI,” Koblence said. “All this AI and digital infrastructure growth is enabled by data being created in your pockets and on your rings and your watches and being transported to these large AI farms in places where the power is cheaper.”

Not all AI deployments will be located outside the New York market. Industry leaders expect AI adoption will boost demand for colocation facilities in the Tri-state area beyond what is expected elsewhere. 

This is largely due to the outsized presence of financial services firms in the New York data center ecosystem, along with health care organizations like hospital systems and pharmaceutical companies and major educational and research institutions, said 365’s DeSantis. While many companies utilize public cloud from companies like Amazon Web Services for their AI infrastructure, these sectors have huge amounts of proprietary or private data for which the public cloud presents a security or compliance risk, pushing them toward colocation providers.  

“There’s a lot of proprietary applications that those type of industries run, and there’s a lot of personal information,” DeSantis said. “Those aren’t cloud-first strategy types of data sets. Those are colocation types of data sets.”

Many of these colocation AI deployments for New York-based enterprises are going to New Jersey due to the lower power cost and other pricing advantages, and DICE panelists indicated they expect this trend to accelerate.

Digital Realty, Equinix, CoreSite and Iron Mountain plan to add a combined 145 megawatts in New Jersey by 2027, according to JLL. Other providers are building facilities in New York outside the city, such as DataBank’s development in Rockland County.

This enterprise demand for colocation capacity exists in the more expensive New York market largely due to the financial services sector, said Jeffrey Moerdler, a longtime data center and telecom attorney and a member at Mintz.

Financial firms are executing latency-sensitive trades and other transactions where hundredths of a second make a difference. Achieving this kind of low latency performance requires having the company’s computing infrastructure as close as possible. 

“So much of the financial services industry, the brokerage industry and trading are in New York, and much of that data can’t be pushed out of the region and sent to Iowa,” Moerdler said. “It has to stay here and be processed regionally because of the latency problem.”


Tampnet partners with NJFX, increasing diversity for USA and European customers

Press Release

May 15th, 2024

Stavanger, May 15th, 2024 – Tampnet, the foremost provider of offshore high-capacity networks, is excited to announce the establishment of a Point of Presence (PoP) at NJFX’s carrier-neutral cable landing station in Wall, New Jersey. NJFX was strategically selected as the connectivity hub and 4G/5G core site to enable low-latency communications to the emerging wind farms along the East Coast of America. This new PoP at NJFX further enhances Tampnet Carrier’s position to deliver connectivity to the US market and to customers transmitting data between US and European sites.

This  collaboration further underscores Tampnet’s commitment to delivering top-tier connectivity solutions to NJFX customers spanning industries such as Oil & Gas, Wind Energy, Maritime, and the Carrier market.

Tampnet’s unwavering dedication to innovation and sustainability is reflected in its efforts towards a carbon-neutral future. By transitioning to energy-efficient 4G and 5G technology, Tampnet is spearheading the digital transformation in the offshore industry, ensuring safer and more efficient operations through advanced wireless sensors for condition monitoring, predictive maintenance, and remote operations.

Cato Lammenes, VP and Head of Tampnet Carrier, said: “With the addition of NJFX to our American footprint, this new connection hub supports our strategy for increased diversity within our 4G/5G core as well as providing additional services and routes for our global clients transmitting data between the European regions and the USA.”

Establishing a PoP within NJFX’s dynamic ecosystem grants Tampnet and its clientele direct, on-demand access to key submarine cable systems including Havfrue/AEC-2, Seabras-1, TGN1, and TGN2. This translates to unparalleled connectivity across the Americas, Europe, and the Caribbean.

“We are delighted by this strategic collaboration with Tampnet, solidifying their presence within our thriving ecosystem,” comments Felix Seda, General Manager at NJFX. “Tampnet’s choice of NJFX as their core US connectivity hub is a testament to our commitment to providing unmatched connectivity solutions on the East Coast.”

By establishing a foothold at the NJFX facility, Tampnet aims to fortify its network capabilities and meet the evolving connectivity needs of its clientele. This symbiotic partnership promises enhanced connectivity options, further catalyzing the digital evolution across global industries.

About Tampnet:

Tampnet, founded in 2001 in Stavanger, Norway, operates the world’s largest offshore high-capacity communication network, serving clients in Oil & Gas, Wind Energy, Maritime, and Carrier sectors.

Tampnet Carrier’s unique network routes traverse 8 countries, connecting over 40 core data centres across 12 markets throughout Europe and the United States. Dual-path capability between Norway, Europe and the UK is their key differentiator, providing diverse routing through Great Britain and via Sweden and Denmark. This high-speed terrestrial and subsea network enables low latency, reliability, redundancy and secure connectivity solutions for the most demanding industries. The NORFEST subsea route brings greater resiliency, flexibility and scalability to Nordic infrastructure, with direct connectivity to 10 key cities along the Norwegian coast and to Nordic data centre hubs powered by renewable energy.

With a steadfast commitment to sustainability, Tampnet upgrades infrastructure to energy-efficient 4G and 5G technology, striving towards a carbon-neutral future.

For more information and media inquiries:

Cato Lammenes

Email:  [email protected]

Website:  www.tampnet.com

About NJFX:

Located in Wall, New Jersey, NJFX is the innovative leader in carrier-neutral colocation and subsea infrastructure, setting a new standard for interconnecting carrier-grade networks outside any major U.S. city. Our campus hosts over 35 global and U.S. operators, including multinational banks that rely on us for their “never down” network strategies. The NJFX campus is also where the major cloud operators have their global backbones physically connecting to transatlantic cables to Europe and South America. NJFX customers requiring transparency and true diversity can interconnect at a layer one level with their preferred network connectivity partners.

 

For more information and media inquiries:

Emily Newman

Email: [email protected]

Website: njfx.net

 



WSJ Covers Red Sea Conflict Threatening Key Subsea Cables

Red Sea Conflict Threatens Key Internet Cables

Maritime attacks complicate repairs on underwater cables that carry the world’s web traffic

Article by Drew Fitzgerald

Full Story here:  Wall Street Journal
March 3, 2024


Conflict in the Middle East is drawing fresh attention to one of the internet’s deepest vulnerabilities: the Red Sea.

Most internet traffic between Europe and East Asia runs through undersea cables that funnel into the narrow strait at the southern end of the Red Sea. That chokepoint has long posed risks for telecom infrastructure because of its busy ship traffic, which raises the likelihood of an accidental anchor drop striking a cable. Attacks by Iran-backed Houthis in Yemen have made the area more dangerous.

The latest warning sign came Feb. 24, when three submarine internet cables running through the region suddenly dropped service in some of their markets. The cuts weren’t enough to disconnect any country but instantly worsened web service in India, Pakistan and parts of East Africa, said Doug Madory, director of internet analysis at network research firm Kentik.

It wasn’t immediately clear what caused the cutoffs. Some telecom experts pointed to the cargo ship Rubymar, which was abandoned by its crew after it came under Houthi attack on Feb. 18. The disabled ship had been drifting in the area for more than a week even after it dropped its anchor. It later sank.

Yemen’s Houthi-backed telecom ministry in San’a issued a statement denying responsibility for the submarine cable failures and repeating that the government is “keen to keep all submarine telecom cables…away from any possible risks.” The ministry didn’t comment on the Rubymar attack.

Mauritius-based cable owner Seacom, which owns one of the damaged lines, said fixing it will demand “a fair amount of logistics coordination.” Its head of marketing, Claudia Ferro, said repairs should start early in the second quarter, though complications from permitting, regional unrest and weather conditions could move that timeline. 

“Our team thinks it is plausible that it could have been affected by anchor damage, but this has not been confirmed yet,” Ferro said. 

Cable ships’ lumbering speed makes draping new lines near contested waters a dangerous and expensive task. The cost to insure some cable ships near Yemen surged earlier this year to as much as $150,000 a day, according to people familiar with the matter.

Yemen’s nearly decadelong civil war further complicates matters. Houthi rebels control much of the western portion of the country along the Red Sea, while the country’s internationally recognized government holds the east. Companies building cables in the region have sought licenses from regulators on both sides of the conflict to avoid antagonizing either authority, other people familiar with the matter say.

The mounting cost of doing business also threatens tech giants’ efforts to expand the internet. The Google-backed Blue Raman system and Facebook’s 2Africa cable both pass through the region and remain under construction. Two more telecom company-backed projects also are scheduled to build lines through the Red Sea.

Most of the internet’s intercontinental data traffic moves by sea, according to network research firm TeleGeography. Submarine cables can be simpler and less expensive to build than overland routes, but going underwater comes with its own risks. Cable operators report about 150 service faults a year mostly caused by accidental damage from fishing and anchor dragging, according to the International Cable Protection Committee, a U.K.-based industry group.

“Having alternative paths around congested areas such as the Red Sea has always been important, though perhaps magnified in times of conflict,” ICPC general manager Ryan Wopschall said.

Several internet companies have considered ways to diversify their connections between Europe, Africa and Asia. Routes across Saudi Arabia, for instance, could skirt the waters around Yemen altogether. But many national regulators charge high fees or impose other hurdles that make sticking to tried-and-true routes more attractive. 

“The industry, as with any industry, reacts to the conditions set upon it, and routing in Yemen waters is a result of this,” Wopschall said.

Benoit Faucon contributed to this article.

Write to Drew FitzGerald at [email protected]

 

WSJ Covers Red Sea Conflict Threatening Key Subsea Cables Read More »

NJFX Edge AI Inference New Jersey

Why operators and enterprises will need an AI data center strategy

Why operators and enterprises will need an AI data center strategy

Ivo Ivanov (CEO at DE-CIX), Data Center Dynamics
February 1, 2024

As Mobile World Congress (MWC) 2024 draws near, the integration and impact of artificial intelligence (AI) in our digital economy cannot be overstated.

AI has always been a hot topic in the mobile industry, but this year it’s more than just an emerging trend; it’s a central pillar in the evolving landscape of telecommunications.

The democratization of generative AI tools such as ChatGPT and PaLM, and the sheer availability of high-performance Large Language Models (LLMs) and machine learning algorithms, means that digital players are now queuing up to explore their value and potential use cases.

The race to uncover and extract this value means that many market participants are now getting directly involved in using or building digital infrastructure.

The likes of Apple and Netflix walked this path almost a decade ago, and now banks, automotive companies, logistics enterprises, fintech operators, retailers, and healthcare specialists are all embarking on the same journey. The benefits are simply too good to pass up.

Crucially, we’re not just talking about enterprises owning a bit of code or developing new AI use cases; we’re talking about these companies having a genuine stake in the infrastructure they’re using. That means their attention is turning to things like data sovereignty, network performance, latency, security, and connection speed. They need to make sure that the AI use cases they’re pursuing are going to be well accommodated long into the future.

The need for network controllability

Enterprises are no longer mere spectators in the AI arena; they are active stakeholders in the infrastructure that powers their AI applications.

For instance, a retail company employing AI for personalized customer experiences must command not only the algorithms but also the underlying data handling and processing frameworks to ensure real-time, effective customer engagement.

This shift toward controllability underscores the importance of data security, compliance adaptability, and operational customization.

It’s about having the capability to quickly adjust to evolving market demands and regulatory environments, as well as optimizing systems for peak performance.

In essence, controllability is becoming a fundamental requirement for enterprises, signifying a shift from passive participation to proactive management in the network landscape.
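
As a loose illustration of what that proactive management can mean in practice, the Python sketch below checks a deployment against an explicit network policy covering data residency, latency, and encryption. The field names, limits, and values are illustrative assumptions, not an industry standard.

from dataclasses import dataclass

@dataclass
class NetworkPolicy:
    data_residency: str          # jurisdiction the data must stay in, e.g. "EU"
    max_latency_ms: float        # latency budget for the AI workload
    encryption_in_transit: bool

@dataclass
class Deployment:
    jurisdiction: str            # where the workload actually runs
    measured_latency_ms: float
    tls_enabled: bool

def violations(policy, dep):
    # Return a list of policy violations; an empty list means the deployment is compliant.
    issues = []
    if dep.jurisdiction != policy.data_residency:
        issues.append("data residency mismatch")
    if dep.measured_latency_ms > policy.max_latency_ms:
        issues.append("latency budget exceeded")
    if policy.encryption_in_transit and not dep.tls_enabled:
        issues.append("unencrypted transport")
    return issues

policy = NetworkPolicy(data_residency="EU", max_latency_ms=20.0, encryption_in_transit=True)
deployment = Deployment(jurisdiction="EU", measured_latency_ms=12.5, tls_enabled=True)
print(violations(policy, deployment) or "deployment satisfies policy")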

Low latency is no longer optional

In the high-stakes world of AI, where milliseconds can determine outcomes, latency becomes a make-or-break element.

For example, in the financial sector, where AI is used for high-frequency trading, even a slight delay in data processing can result in significant performance losses. Similarly, for healthcare providers using AI for real-time patient monitoring, latency directly impacts the quality of care and patient outcomes.

Enterprises are therefore prioritizing low-latency networks to ensure that their AI applications function at optimal efficiency and accuracy. This focus on reducing latency is about more than speed; it’s about creating a seamless, responsive experience for end-users and maintaining a competitive edge in an increasingly AI-driven market.
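
To make the idea of a latency budget concrete, here is a minimal Python sketch that tracks per-request latencies and flags when the 99th percentile breaches a budget. The 15 ms budget and the simulated samples are assumptions for illustration; real deployments would feed in measured values.

import random
import statistics

BUDGET_P99_MS = 15.0  # illustrative latency budget for an AI inference path

def p99(samples):
    # 99th-percentile latency of the collected samples.
    ordered = sorted(samples)
    return ordered[max(0, int(round(0.99 * len(ordered))) - 1)]

# Simulated per-request latencies in milliseconds; in practice these would come
# from real measurements of the network path serving the AI application.
latencies = [abs(random.gauss(8.0, 2.0)) for _ in range(1000)] + [30.0, 42.0]

print(f"p50: {statistics.median(latencies):.1f} ms")
print(f"p99: {p99(latencies):.1f} ms")
if p99(latencies) > BUDGET_P99_MS:
    print("tail latency exceeds budget -- investigate the path to the AI endpoint")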

As AI technologies continue to advance, the ability of enterprises to manage and minimize latency will become a key factor in harnessing the full potential of these innovations.

Localization will become mission-critical

Previously only talked about in the context of content delivery networks (CDNs) and cloud models, localization now plays a crucial role in AI performance and compliance. A striking example of this is Dubai’s journey in localizing Internet routes.

From almost no local Internet routes a decade ago to achieving 90 percent today, Dubai has dramatically reduced latency from 200 milliseconds to a mere three milliseconds for accessing global content.
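
The article does not break down where those milliseconds go, but simple physics explains most of the gain: light in optical fiber travels at roughly 200,000 km per second, so shortening the physical path directly lowers the floor on round-trip time. The back-of-the-envelope Python sketch below uses illustrative distances, not Dubai's actual routes.

FIBER_SPEED_KM_PER_S = 200_000  # light in optical fiber travels at roughly two thirds of c

def min_round_trip_ms(one_way_km):
    # Minimum round-trip propagation delay, ignoring routing, queuing, and processing.
    return 2 * one_way_km / FIBER_SPEED_KM_PER_S * 1000

for label, km in [("content served from roughly 5,000 km away", 5_000),
                  ("content served from a local exchange roughly 300 km away", 300)]:
    print(f"{label}: at least {min_round_trip_ms(km):.1f} ms")

With these assumed distances, the propagation floor alone drops from about 50 ms to about 3 ms, before any routing or processing overhead is counted.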

This shift highlights the performance benefits of localization, but there are legal imperatives too. With regions like Europe and India enforcing strict data sovereignty laws, managing data correctly within specific jurisdictions has become more important as data volumes have increased.

The deployment of AI models, and by proxy the networks accommodating them, must therefore align with local market needs, demanding a sophisticated level of localization that businesses are now paying attention to.

Multi-cloud interoperability

AI is also reshaping how enterprises approach cloud computing, especially in the context of multi-cloud environments. AI’s intensive training and processing often occur within a specific cloud infrastructure.

Yet the ecosystem is more intricate: the numerous applications that either feed data to, or draw data from, these AI models are likely to be distributed across different cloud platforms.

This scenario underscores the critical need for seamless interoperability and low-latency communication between these cloud environments.

A robust multi-cloud strategy, therefore, isn’t just about leveraging diverse cloud services; it’s about ensuring these services work in harmony as they facilitate AI operations.
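
One way to picture that harmony is a dispatcher that keeps AI compute next to the data whenever it can and otherwise falls back to the best-connected alternative. The Python sketch below is a minimal illustration under assumed endpoint names and latencies; it is not any vendor's actual routing logic.

# Measured (hypothetical) round-trip latencies from the application to each cloud's AI endpoint.
ENDPOINTS = {
    "cloud-a": 4.0,
    "cloud-b": 11.0,
    "cloud-c": 27.0,
}

def pick_endpoint(data_location):
    # Keep compute next to the data when possible; otherwise use the
    # lowest-latency interconnected endpoint.
    if data_location in ENDPOINTS:
        return data_location
    return min(ENDPOINTS, key=ENDPOINTS.get)

print(pick_endpoint("cloud-b"))   # data already lives in cloud-b
print(pick_endpoint("on-prem"))   # falls back to the lowest-latency cloud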

All of these factors (controllability, latency, localization, and cloud interoperability) will become increasingly important to enterprises as use cases develop. Take self-driving cars, for instance. Latency and the real-time exchange of data are obviously critical here, but so are cloud interoperability and data sovereignty.

A business cannot serve an AI-powered driver assistance system from one region if the car is in another. These systems also learn and adapt to individual driving patterns, and handle sensitive personal information, making compliance with regulations like the General Data Protection Regulation (GDPR) in the EU not just a legal obligation but a trust-building imperative.

Networking and interconnections

If data center operators want to win business from these AI-hungry, data-driven enterprises, they need to move their focus beyond mere servers, power, and cooling space.

Forward-looking data centers are now evolving to support their enterprise customers more effectively by providing direct connectivity to cloud services.

This is ideally achieved through housing or providing direct access to interconnection platforms in the form of an Internet Exchange (IX) and/or Cloud Exchange.

This will allow different networks to interconnect and exchange traffic directly and efficiently, bypassing the public Internet, which reduces latency, improves bandwidth, and enhances overall network performance and security.

Enterprises are more invested than ever in the connectivity infrastructure powering their services, and to win customers, data centers are going to need to take a more collaborative and customizable approach to data handling and delivery.

This isn’t just a response to immediate challenges; it’s a proactive blueprint for a future where AI’s potential is fully realized.

Why operators and enterprises will need an AI data center strategy Read More »

The New Wave of SMART Cables

New Wave of SMART Cables

By Srikapardhi, TelecomTalk
January 31, 2024

Once operational, the system will provide not only a supplementary telecom cable to New Caledonia, extending to Australia and Fiji, but also a vital component in environmental monitoring.

Prima, in collaboration with Alcatel Submarine Networks (ASN), announces the signing of a contract for the establishment of the first SMART subsea cable system. OMS will be responsible for the marine installation of this system, which is set to be deployed and operational in 2026. The system will enhance digital connectivity and seismic monitoring in the Pacific region, the joint statement said.

Collaborative Innovation

The integration of four advanced Climate Change Nodes (CC Nodes) into the subsea cable system will facilitate real-time monitoring of seismic activities and efficient tsunami detection, particularly in the seismically volatile New Hebrides Trench. Additionally, this technology is expected to transform warning systems across the Pacific, enhancing security and preparedness against natural disasters.
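
The announcement does not describe the CC Nodes' detection pipeline, but a classic trigger used in seismology is the STA/LTA ratio, which compares a short-term average of signal amplitude against a long-term average. The Python sketch below runs that idea on a synthetic trace; the window lengths, threshold, and data are all illustrative assumptions.

import random

def sta_lta(signal, sta_len=10, lta_len=100):
    # Short-term average / long-term average ratio for each sample; 0 while the
    # long window is still filling.
    ratios = []
    for i in range(len(signal)):
        if i < lta_len:
            ratios.append(0.0)
            continue
        sta = sum(abs(x) for x in signal[i - sta_len:i]) / sta_len
        lta = sum(abs(x) for x in signal[i - lta_len:i]) / lta_len
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

# Synthetic trace: background noise with a short burst of higher amplitude (an "event").
trace = ([random.gauss(0, 1) for _ in range(500)]
         + [random.gauss(0, 8) for _ in range(50)]
         + [random.gauss(0, 1) for _ in range(100)])

THRESHOLD = 3.0
triggers = [i for i, r in enumerate(sta_lta(trace)) if r > THRESHOLD]
print(f"first trigger at sample {triggers[0]}" if triggers else "no event detected")

In a real system, a trigger like this would typically run continuously close to the sensors so that alerts are not delayed by backhaul.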

Environmental Monitoring Advancements

Prima emphasised the key supporters of this project, including the French Government for its “unwavering commitment and encouragement”, the Government of Vanuatu that entrusted Prima with the implementation of this hybrid cable, and OPT NC that supported the project, especially in the Lifou landing.

For its part, ASN collaborated with the SMART Joint Task Force (JTF), crediting its consistent support and expertise in developing SMART cable projects. “By merging telecommunications with environmental monitoring technologies, this endeavour will substantially enhance the safety, connectivity, and scientific insight of the Pacific region,” the joint statement said.

Prima is a telecommunications and data infrastructure company based in Port Vila, Vanuatu. Alcatel Submarine Networks (ASN) offers an extensive service portfolio including project management, installation, and commissioning, along with marine and maintenance operations performed by ASN’s wholly-owned fleet of cable ships.

What are SMART Cables?

Instrumenting the deep ocean has been a challenge for ocean scientists for decades.

The Science Monitoring And Reliable Telecommunications (SMART) Subsea Cables initiative seeks to revolutionize deep ocean observing by equipping transoceanic telecommunications cables with sensors, at a modest incremental cost. These sensors provide novel and persistent insights into the state of the ocean: they monitor climate change indicators including ocean heat content, circulation, and sea level rise; provide early warning for earthquakes and tsunamis; and track seismic activity for earth structure and related hazards.
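
For a rough sense of what "ocean heat content" means in practice, it is commonly estimated by integrating temperature anomalies over depth, weighted by seawater density and specific heat. The Python sketch below is a back-of-the-envelope calculation with hypothetical values, not SMART sensor data.

RHO = 1025.0   # typical seawater density, kg/m^3
CP = 3990.0    # typical specific heat of seawater, J/(kg*K)
LAYER_THICKNESS_M = 100.0

# Hypothetical warming anomaly (in kelvin) for five 100 m layers of the upper ocean.
temperature_anomalies = [0.30, 0.22, 0.15, 0.08, 0.04]

ohc_anomaly_j_per_m2 = sum(RHO * CP * dT * LAYER_THICKNESS_M for dT in temperature_anomalies)
print(f"Ocean heat content anomaly: {ohc_anomaly_j_per_m2:.2e} J per square metre")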

The Joint Task Force

The SMART Subsea Cables initiative is led by a Joint Task Force (JTF) made up of three United Nations organizations: the International Telecommunications Union (ITU), the World Meteorological Organization (WMO), and the Intergovernmental Oceanographic Commission (IOC) of the United Nations Educational, Scientific and Cultural Organization (UNESCO). The JTF is responsible for charting a path for the implementation of SMART monitoring capabilities into new cable installations worldwide.

International Program Office

The SMART International Program Office (IPO) is the executive branch of the JTF and is responsible for carrying out its recommendations in pursuit of broad SMART adoption. In this role, the IPO provides oversight and management as the unifying executive agency that bridges the many stakeholder communities relevant to SMART implementation.

The New Wave of SMART Cables Read More »
