Industry News

AI Inferencing happening at NJFX in New Jersey Data Center

The Difference Between Deep Learning Training and Inference

What’s the Difference Between Deep Learning Training and Inference?

Published by: Michael Copeland

Full Article: NVIDIA
August 22, 2024

Once trained, the neural network is put to work out in the digital world using what it has learned — to recognize images or spoken words, spot a blood disease, predict the next word or phrase in a sentence, or suggest the shoes someone is likely to buy next, you name it — in the streamlined form of an application. This speedier and more efficient version of a neural network infers things about new data it’s presented with based on its training. In the AI lexicon this is known as “inference.”

So let’s break down the progression from AI training to AI inference, and how they both function.

Training a Deep Neural Network

While the goal is the same – knowledge — the educational process, or training, of a neural network is (thankfully) not quite like our own. Neural networks are loosely modeled on the biology of our brains — all those interconnections between the neurons. Unlike our brains, where any neuron can connect to any other neuron within a certain physical distance, artificial neural networks have separate layers, connections, and directions of data propagation.

When training a neural network, training data is put into the first layer of the network, and individual neurons assign a weighting to the input — how correct or incorrect it is — based on the task being performed.

To learn more, check out NVIDIA’s AI inference solutions for the data center, self-driving cars, video analytics and more.

In an image recognition network, the first layer might look for edges. The next might look for how these edges form shapes — rectangles or circles. The third might look for particular features — such as shiny eyes and button noses. Each layer passes the image to the next, until the final layer and the final output determined by the total of all those weightings is produced.

But here’s where the training differs from our own. Let’s say the task was to identify images of cats. The neural network gets all these training images, does its weightings and comes to a conclusion of cat or not. What it gets in response from the training algorithm is only “right” or “wrong.”

Deep Learning Training Is Compute Intensive

And if the algorithm informs the neural network that it was wrong, it isn’t told what the right answer is. The error is propagated back through the network’s layers, and the network has to guess at something else. On each attempt it must consider other attributes — in our example, attributes of “catness” — and weigh the attributes examined at each layer higher or lower. Then it guesses again. And again. And again. Until it has the correct weightings and gets the correct answer practically every time. It’s a cat.
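
To make that guess-and-correct cycle concrete, here is a minimal sketch of it in PyTorch. It is illustrative only: the layer sizes, the 64x64 image dimension, the dataset stand-ins and the hyperparameters are placeholder assumptions, not NVIDIA's code.

```python
# Minimal sketch of the training loop described above (PyTorch).
import torch
import torch.nn as nn

# A tiny convolutional network: early layers respond to edges,
# later layers to shapes and higher-level features.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # two outputs: cat / not cat (64x64 input assumed)
)

loss_fn = nn.CrossEntropyLoss()  # turns "right or wrong" into a number
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def train_step(images, labels):
    """One guess-and-correct cycle: forward pass, error, backprop, reweight."""
    logits = model(images)            # weighted evidence for cat / not cat
    loss = loss_fn(logits, labels)    # how wrong the guess was
    optimizer.zero_grad()
    loss.backward()                   # propagate the error back through the layers
    optimizer.step()                  # nudge every weighting up or down
    return loss.item()

# Usage: repeat over many labeled images until the answers are almost always right.
images = torch.randn(8, 3, 64, 64)    # stand-in batch of 64x64 RGB images
labels = torch.randint(0, 2, (8,))    # stand-in labels: 1 = cat, 0 = not cat
print(train_step(images, labels))
```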

Training can teach deep learning networks to correctly label images of cats in a limited set, before the network is put to work detecting cats in the broader world.

Now you have a data structure, and all the weights in there have been balanced based on what the network learned as you sent the training data through. It’s a finely tuned thing of beauty. The problem is, it’s also a monster when it comes to consuming compute. For example, GPT-3, with 175 billion parameters, required roughly 300 zettaFLOPs (about 300,000 billion billion math operations) over the entire training cycle. Try getting that to run on a smartphone.
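
That figure can be sanity-checked with a widely used rule of thumb: training costs roughly 6 operations per parameter per training token. Assuming the roughly 300 billion training tokens reported for GPT-3:

```python
# Back-of-the-envelope check of the "300 zettaFLOPs" figure using the
# common C ~= 6 * N * D approximation (~6 ops per parameter per token).
params = 175e9   # GPT-3 parameters
tokens = 300e9   # training tokens reported for GPT-3
total_ops = 6 * params * tokens
print(f"{total_ops:.2e} operations ~= {total_ops / 1e21:.0f} zettaFLOPs")
# -> 3.15e+23 operations ~= 315 zettaFLOPs, in line with "roughly 300"
```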

That’s where inference comes in.

Congratulations! Your Neural Network Is Trained and Ready for Inference

What you had to put in place to get your properly weighted neural network to learn — in our education analogy all those pencils, books, teacher’s dirty looks — is now way more than you need to get any specific task accomplished.

If anyone is going to make use of all that training in the real world, and that’s the whole point, what you need is a speedy application that can retain the learning and apply it quickly to data it’s never seen. That’s inference: taking smaller batches of real-world data and quickly coming back with the same correct answer (really a prediction that something is correct).
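
In code, the switch from training to inference is small. A minimal sketch, reusing the tiny network from the training example above; the weights filename is a placeholder:

```python
# Inference in a nutshell (PyTorch): load the finished weightings,
# switch off the training machinery, and predict on never-seen data.
import torch

model.load_state_dict(torch.load("cat_weights.pt"))  # placeholder file
model.eval()                     # freeze training-only behavior
with torch.no_grad():            # no backpropagation bookkeeping needed
    new_image = torch.randn(1, 3, 64, 64)  # stand-in for a never-seen photo
    probs = model(new_image).softmax(dim=-1)
print(f"P(cat) = {probs[0, 1]:.2f}")  # a prediction that something is correct
```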

There are two main approaches to taking that hulking neural network and modifying it for speed and improved latency in applications that run across other networks.

How AI Inferencing Works

How is inferencing used? Just turn on your smartphone. Inferencing is used to put deep learning to work for everything from speech recognition to categorizing your snapshots.

The first approach looks at parts of the neural network that don’t get activated after it’s trained. These sections just aren’t needed and can be “pruned” away. The second approach looks for ways to fuse multiple layers of the neural network into a single computational step.
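
Both approaches exist as off-the-shelf utilities in common frameworks. Here is a hedged sketch using PyTorch's pruning and module-fusion helpers; the module names ("conv", "bn", "relu") and sizes are assumptions for illustration, not a specific production recipe.

```python
# Sketch of the two inference optimizations described above (PyTorch).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

net = nn.Sequential()
net.add_module("conv", nn.Conv2d(3, 16, 3, padding=1))
net.add_module("bn", nn.BatchNorm2d(16))
net.add_module("relu", nn.ReLU())
net.add_module("flatten", nn.Flatten())
net.add_module("fc", nn.Linear(16 * 32 * 32, 2))  # assumes 32x32 inputs

# Approach 1: prune weights that contribute least (smallest magnitude).
prune.l1_unstructured(net.fc, name="weight", amount=0.5)  # zero out 50%
prune.remove(net.fc, "weight")                            # make it permanent

# Approach 2: fuse conv + batchnorm + relu into one computational step.
net.eval()  # fusion is an inference-time optimization
fused = torch.ao.quantization.fuse_modules(net, [["conv", "bn", "relu"]])
```

The fused model computes the same function as the original three layers, just in fewer passes over the data, which is exactly the latency win described above.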

It’s akin to the compression that happens to a digital image. Designers might work on huge, beautiful images a million pixels wide and tall, but when they go to put one online, they’ll turn it into a JPEG. It’ll be almost exactly the same, indistinguishable to the human eye, but at a smaller resolution. Similarly with inference, you’ll get almost the same accuracy of prediction, but simplified, compressed and optimized for runtime performance.

What that means is we all use inference all the time. Your smartphone’s voice-activated assistant uses inference, as do image search and spam filtering applications. Facebook’s image recognition and Amazon’s and Netflix’s recommendation engines all rely on inference.

GPUs, thanks to their parallel computing capabilities — or ability to do many things at once — are good at both training and inference.

Systems trained with GPUs allow computers to identify patterns and objects as well as — or in some cases, better than — humans (see “Accelerating AI with GPUs: A New Computing Model”).

After training is completed, the networks are deployed into the field for “inference” — classifying data to “infer” a result. Here too, GPUs — and their parallel computing capabilities — offer benefits, where they run billions of computations based on the trained network to identify known patterns or objects.

The parallel computing of GPUs also provides multi-factor speedups in traditional machine learning, using algorithms like gradient-boosted decision trees, for both training and inference.
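
As a rough illustration of that last point, here is what GPU-accelerated gradient-boosted trees look like with XGBoost. The synthetic data is a stand-in, and the exact flag depends on the library version, as noted in the comments.

```python
# Hedged sketch: GPU-accelerated gradient-boosted decision trees.
import numpy as np
import xgboost as xgb

X = np.random.rand(10_000, 20)                 # stand-in feature matrix
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)      # stand-in labels

clf = xgb.XGBClassifier(
    n_estimators=200,
    tree_method="hist",
    device="cuda",   # train on the GPU (XGBoost >= 2.0; older
)                    # releases used tree_method="gpu_hist" instead)
clf.fit(X, y)
print(clf.predict(X[:5]))  # inference also runs on the GPU
```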

You can see how these models and applications will just get smarter, faster and more accurate. Inference will bring new applications to every aspect of our lives. It seems the same admonition applies to AI as it does to our youth — don’t be a fool, stay in school. Inference awaits.


Edge AI in New Jersey NJFX facility

What is Edge AI

What Is Edge AI and How Does It Work?

Recent strides in the efficacy of AI, the adoption of IoT devices and the power of edge computing have come together to unlock the power of edge AI.

Published by: Tiffany Yeung

Full Article: NVIDIA
August 8, 2024

Countless analysts and businesses are talking about and implementing edge computing, which traces its origins to the 1990s, when content delivery networks were created to serve web and video content from edge servers deployed close to users.

Today, almost every business has job functions that can benefit from the adoption of edge AI. In fact, edge applications are driving the next wave of AI computing in ways that improve our lives at home, at work, in school and in transit.

Learn more about what edge AI is, its benefits and how it works, examples of edge AI use cases, and the relationship between edge computing and cloud computing.

What Is Edge AI? 

Edge AI is the deployment of AI applications in devices throughout the physical world. It’s called “edge AI” because the AI computation is done near the user at the edge of the network, close to where the data is located, rather than centrally in a cloud computing facility or private data center.

Since the internet has global reach, the edge of the network can connote any location. It can be a retail store, factory, hospital or devices all around us, like traffic lights, autonomous machines and phones.

Edge AI: Why Now? 

Organizations from every industry are looking to increase automation to improve processes, efficiency and safety.

To help them, computer programs need to recognize patterns and execute tasks repeatedly and safely. But the world is unstructured and the range of tasks that humans perform covers infinite circumstances that are impossible to fully describe in programs and rules.

Advances in edge AI have opened opportunities for machines and devices, wherever they may be, to operate with the “intelligence” of human cognition. AI-enabled smart applications learn to perform similar tasks under different circumstances, much like real life.

The efficacy of deploying AI models at the edge arises from three recent innovations.

  1. Maturation of neural networks: Neural networks and related AI infrastructure have finally developed to the point of allowing for generalized machine learning. Organizations are learning how to successfully train AI models and deploy them in production at the edge.
  2. Advances in compute infrastructure: Powerful distributed computational power is required to run AI at the edge. Recent advances in highly parallel GPUs have been adapted to execute neural networks.
  3. Adoption of IoT devices: The widespread adoption of the Internet of Things has fueled the explosion of big data. With the sudden ability to collect data in every aspect of a business — from industrial sensors, smart cameras, robots and more — we now have the data and devices necessary to deploy AI models at the edge. Moreover, 5G is providing IoT a boost with faster, more stable and secure connectivity.

Why Deploy AI at the Edge? What Are the Benefits of Edge AI? 

Since AI algorithms are capable of understanding language, sights, sounds, smells, temperature, faces and other analog forms of unstructured information, they’re particularly useful in places occupied by end users with real-world problems. These AI applications would be impractical or even impossible to deploy in a centralized cloud or enterprise data center due to issues related to latency, bandwidth and privacy.

The benefits of edge AI include:

  • Intelligence: AI applications are more powerful and flexible than conventional applications that can respond only to inputs that the programmer had anticipated. In contrast, an AI neural network is not trained how to answer a specific question, but rather how to answer a particular type of question, even if the question itself is new. Without AI, applications couldn’t possibly process infinitely diverse inputs like texts, spoken words or video.
  • Real-time insights: Since edge technology analyzes data locally rather than in a faraway cloud delayed by long-distance communications, it responds to users’ needs in real time.
  • Reduced cost: By bringing processing power closer to the edge, applications need less internet bandwidth, greatly reducing networking costs.
  • Increased privacy: AI can analyze real-world information without ever exposing it to a human being, greatly increasing privacy for anyone whose appearance, voice, medical image or any other personal information needs to be analyzed. Edge AI further enhances privacy by containing that data locally, uploading only the analysis and insights to the cloud. Even if some of the data is uploaded for training purposes, it can be anonymized to protect user identities. By preserving privacy, edge AI simplifies the challenges associated with data regulatory compliance.
  • High availability: Decentralization and offline capabilities make edge AI more robust since internet access is not required for processing data. This results in higher availability and reliability for mission-critical, production-grade AI applications.
  • Persistent improvement: AI models grow increasingly accurate as they train on more data. When an edge AI application confronts data that it cannot accurately or confidently process, it typically uploads it so that the AI can retrain and learn from it. So the longer a model is in production at the edge, the more accurate the model will be.

How Does Edge AI Technology Work?

Lifecycle of an edge AI application.

For machines to see, perform object detection, drive cars, understand speech, speak, walk or otherwise emulate human skills, they need to functionally replicate human intelligence.

AI employs a data structure called a deep neural network to replicate human cognition. These DNNs are trained to answer specific types of questions by being shown many examples of that type of question along with correct answers.

This training process, known as “deep learning,” often runs in a data center or the cloud due to the vast amount of data required to train an accurate model, and the need for data scientists to collaborate on configuring the model. After training, the model graduates to become an “inference engine” that can answer real-world questions.

In edge AI deployments, the inference engine runs on some kind of computer or device in far-flung locations such as factories, hospitals, cars, satellites and homes. When the AI stumbles on a problem, the troublesome data is commonly uploaded to the cloud for further training of the original AI model, which at some point replaces the inference engine at the edge. This feedback loop plays a significant role in boosting model performance; once edge AI models are deployed, they only get smarter and smarter.
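
A minimal sketch of that feedback loop follows. The confidence threshold, the model file and the stubbed upload step are illustrative assumptions standing in for whatever cloud pipeline a real deployment would use.

```python
# Sketch of the edge AI inference-and-feedback loop described above.
import torch

CONFIDENCE_THRESHOLD = 0.8  # below this, the edge device asks for help

model = torch.jit.load("edge_model.pt")  # compact inference engine (placeholder file)
model.eval()

def handle_sample(sample: torch.Tensor) -> int:
    """Run local inference; queue hard samples for cloud retraining."""
    with torch.no_grad():
        probs = model(sample.unsqueeze(0)).softmax(dim=-1)
    confidence, label = probs.max(dim=-1)
    if confidence.item() < CONFIDENCE_THRESHOLD:
        upload_for_retraining(sample)     # the feedback loop to the cloud
    return label.item()

def upload_for_retraining(sample: torch.Tensor) -> None:
    # Placeholder: in a real deployment this would send the sample to cloud
    # storage, where it joins the next training run; the retrained model
    # later replaces "edge_model.pt" on the device.
    pass
```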

What Are Examples of Edge AI Use Cases? 

AI is the most powerful technology force of our time. We’re now at a time where AI is revolutionizing the world’s largest industries.

Across manufacturing, healthcare, financial services, transportation, energy and more, edge AI is driving new business outcomes in every sector, including:

  • Intelligent forecasting in energy: For critical industries such as energy, in which discontinuous supply can threaten the health and welfare of the general population, intelligent forecasting is key. Edge AI models help to combine historical data, weather patterns, grid health and other information to create complex simulations that inform more efficient generation, distribution and management of energy resources to customers.
  • Predictive maintenance in manufacturing: Sensor data can be used to detect anomalies early and predict when a machine will fail. Sensors on equipment scan for flaws and alert management if a machine needs a repair so the issue can be addressed early, avoiding costly downtime.
  • AI-powered instruments in healthcare: Modern medical instruments at the edge are becoming AI-enabled with devices that use ultra-low-latency streaming of surgical video to allow for minimally invasive surgeries and insights on demand.
  • Smart virtual assistants in retail: Retailers are looking to improve the digital customer experience by introducing voice ordering to replace text-based searches with voice commands. With voice ordering, shoppers can easily search for items, ask for product information and place online orders using smart speakers or other intelligent mobile devices.

What Role Does Cloud Computing Play in Edge Computing? 

AI applications can run in a data center like those in public clouds, or out in the field at the network’s edge, near the user. Cloud computing and edge computing each offer benefits that can be combined when deploying edge AI.

The cloud offers benefits related to infrastructure cost, scalability, high utilization, resilience from server failure, and collaboration. Edge computing offers faster response times, lower bandwidth costs and resilience from network failure.

There are several ways in which cloud computing can support an edge AI deployment:

  • The cloud can run the model during its training period.
  • The cloud continues to run the model as it is retrained with data that comes from the edge.
  • The cloud can run AI inference engines that supplement the models in the field when high compute power is more important than response time. For example, a voice assistant might respond to its name, but send complex requests back to the cloud for parsing (a pattern sketched in the code after this list).
  • The cloud serves up the latest versions of the AI model and application.
  • The same edge AI often runs across a fleet of devices in the field with software in the cloud.
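
A minimal sketch of that edge-or-cloud split, with a placeholder endpoint URL and a toy heuristic standing in for a real on-device model:

```python
# Hedged sketch of the hybrid pattern from the voice-assistant bullet above:
# answer simple requests on-device, defer heavy ones to the cloud.
import requests

CLOUD_ENDPOINT = "https://example.com/ai/parse"  # placeholder URL

def respond(request_text: str) -> str:
    if is_simple(request_text):
        return local_model_answer(request_text)  # low latency, no network hop
    # High compute matters more than response time: send it to the cloud.
    reply = requests.post(CLOUD_ENDPOINT, json={"text": request_text}, timeout=5)
    return reply.json()["answer"]

def is_simple(text: str) -> bool:
    # Stand-in heuristic; a real assistant might use a small on-device model.
    return len(text.split()) < 4

def local_model_answer(text: str) -> str:
    return f"(on-device reply to: {text})"  # placeholder local inference
```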

Learn more about the best practices for hybrid edge architectures.

The Future of Edge AI 

Thanks to the commercial maturation of neural networks, proliferation of IoT devices, advances in parallel computation and 5G, there is now robust infrastructure for generalized machine learning. This is allowing enterprises to capitalize on the colossal opportunity to bring AI into their places of business and act upon real-time insights, all while decreasing costs and increasing privacy.

We are only in the early innings of edge AI, and still the possible applications seem endless.


AI Power Prices Shaping NY NJ Data Centers

AI, Power Prices Push New York Data Centers Down A Unique Path

The artificial intelligence-driven data center boom will hit New York eventually, but it will look very different from other major data center markets.

Published by: Dan Rabb, Data Centers

Full Article: BISNOW
July 28, 2024

New York has yet to see the AI-driven explosion of data center development that has emerged in other top industry hubs, due in large part to the New York market’s high energy prices. AI will be a significant growth catalyst for data centers in New York, industry leaders said at Bisnow’s DICE: Northeast, July 18 at the Astor Ballroom in Manhattan, but that growth is likely to manifest differently than in any other primary market. 

Rather than massive cloud and AI campuses that account for the bulk of the industry’s recent growth, they say New York will see increased demand for colocation facilities and data centers specializing in access to fiber networks that connect the market to AI infrastructure largely being built elsewhere.  

“This market is doing really well for a lot of reasons that have nothing to do with power,” said Bob DeSantis, CEO of colocation provider 365 Data Centers. “New York just has so much volume. It’s expensive, but there’s already such a desire to have proximity that you add a little AI to that demand, and it overcomes any issues on the power side.”

The New York data center market, which includes New Jersey and Connecticut, is the seventh-largest data center hub in the U.S. As of the beginning of the year, the region had more than 700 megawatts of total data center inventory, the majority of it in New Jersey, and around 100 megawatts under construction, according to JLL. New York data centers have seen robust demand growth over the past two years, led by the financial services sector, with a growing share of leasing from major cloud providers. 

While the fundamentals of the area’s AI landscape are strong, New York hasn’t had the kind of unprecedented inventory growth and development pipeline seen in other primary markets. 

In Northern Virginia, Atlanta and Hillsboro, Oregon, data center inventory grew by 107%, 118% and 334%, respectively, between 2020 and the end of 2023, according to CBRE. It’s a record pace of growth that was first driven by surging demand for cloud services but has accelerated further as tech firms — led by Amazon, Microsoft, Google and Meta — engage in an AI arms race that is expected to surpass $1T in total infrastructure spending. Yet in the same time period, data center inventory in the New York Tri-State area grew by just 29%, the slowest pace among primary markets. 

The reason the AI data center bump has seemingly skipped New York comes down to the price of power, executives said at DICE: Northeast. Energy costs have always been a top siting consideration in a sector where facility size is measured in megawatts rather than square feet, but AI has dramatically increased the amount of power used by the largest data centers and has subsequently made power pricing paramount for hyperscale developers.

Companies like Amazon and Microsoft are building their largest AI data centers anywhere they can find the cheapest power. Markets attracting major hyperscale investment like Atlanta, Dallas and Chicago had average power rates last year of less than 7 cents per kilowatt hour, according to JLL. New York, by contrast, was more than twice as expensive, with an average rate of 16 cents. New Jersey, which has the cheapest power in the Tri-State market, is still relatively expensive at 11 cents per kilowatt hour.  

 “The large deployments of AI are largely in areas where power costs are down and power is more readily available from utilities,” said Phillip Koblence, co-founder and chief operating officer of colocation firm NYI. “But the New York market, this is where a lot of the data that is being manipulated by AI is created because this is where the eyeballs are and this is where the internet is most evolved.” 

Indeed, New York and the broader Northeast region is the country’s densest population center and therefore has the largest volume of consumers watching Netflix, interacting with ChatGPT and generating real-time data through phones, Apple Watches and other smart devices. Perhaps more importantly, the disproportionate number of financial institutions, major corporations and other large organizations that are based in New York represent an enormous amount of data that is only going to increase — along with its performance requirements — with AI adoption.  

The view of Midtown from Brookfield’s One Manhattan West

Much of this growing flood of data will be processed in cheaper power markets, but first it has to get there. This means more demand for carrier hotels and connectivity-focused colocation facilities, many of which have proprietary private networks that move data from New York to other major data center hubs with less delay, known as latency.

Connectivity has always been a big part of New York’s digital infrastructure ecosystem, experts say, but it is primed for significant growth along with AI adoption. 

“The AI boom is going to inherently benefit our market, but the driver of market growth here is going to be entirely based on connectivity to enable AI,” Koblence said. “All this AI and digital infrastructure growth is enabled by data being created in your pockets and on your rings and your watches and being transported to these large AI farms in places where the power is cheaper.”

Not all AI deployments will be located outside the New York market. Industry leaders expect AI adoption will boost demand for colocation facilities in the Tri-State area beyond what is expected elsewhere. 

This is largely due to the outsized presence of financial services firms in the New York data center ecosystem, along with health care organizations like hospital systems and pharmaceutical companies and major educational and research institutions, said 365’s DeSantis. While many companies utilize public cloud from companies like Amazon Web Services for their AI infrastructure, these sectors have huge amounts of proprietary or private data for which the public cloud presents a security or compliance risk, pushing them toward colocation providers.  

“There’s a lot of proprietary applications that those type of industries run, and there’s a lot of personal information,” DeSantis said. “Those aren’t cloud-first strategy types of data sets. Those are colocation types of data sets.”

Many of these colocation AI deployments for New York-based enterprises are going to New Jersey due to the lower power cost and other pricing advantages, and DICE panelists indicated they expect this trend to accelerate.

Digital Realty, Equinix, CoreSite and Iron Mountain plan to add a combined 145 megawatts in New Jersey by 2027, according to JLL. Other providers are building facilities in New York outside the city, such as DataBank’s development in Rockland County.

This enterprise demand for colocation capacity exists in the more expensive New York market largely due to the financial services sector, said Jeffrey Moerdler, a longtime data center and telecom attorney and a member at Mintz.

Financial firms are executing latency-sensitive trades and other transactions where hundredths of a second make a difference. Achieving this kind of low latency performance requires having the company’s computing infrastructure as close as possible. 

“So much of the financial services industry, the brokerage industry and trading are in New York, and much of that data can’t be pushed out of the region and sent to Iowa,” Moerdler said. “It has to stay here and be processed regionally because of the latency problem.”
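
The physics behind that constraint is easy to estimate: light in optical fiber travels at roughly two-thirds of its speed in a vacuum, so distance alone puts a hard floor under round-trip time. A rough calculation, with approximate distances:

```python
# Why latency-sensitive trading data "can't be sent to Iowa": propagation
# delay alone. Distances are rough illustrative figures.
SPEED_IN_FIBER_KM_S = 200_000  # ~2/3 the speed of light in vacuum

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

print(f"NYC <-> northern NJ (~80 km): {round_trip_ms(80):.2f} ms")    # ~0.8 ms
print(f"NYC <-> Iowa (~1,800 km):     {round_trip_ms(1800):.2f} ms")  # ~18 ms
# Tens of extra milliseconds per round trip are fatal when trades are
# decided in hundredths of a second.
```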


Tampnet partners with NJFX, increasing diversity for USA and European customers


Press Release

May 15th, 2024

Stavanger, May 15th, 2024 – Tampnet, the foremost provider of offshore high-capacity networks, is excited to announce the establishment of a Point of Presence (PoP) at NJFX’s carrier-neutral cable landing station in Wall, New Jersey. NJFX was strategically selected as the connectivity hub and 4G/5G core site to enable low-latency communications to the emerging wind farms along the East Coast of America. This new PoP at NJFX further enhances Tampnet Carrier’s position to deliver connectivity to the US market and customers transmitting data between US and European sites.

This  collaboration further underscores Tampnet’s commitment to delivering top-tier connectivity solutions to NJFX customers spanning industries such as Oil & Gas, Wind Energy, Maritime, and the Carrier market.

Tampnet’s unwavering dedication to innovation and sustainability is reflected in its efforts towards a carbon-neutral future. By transitioning to energy-efficient 4G and 5G technology, Tampnet is spearheading the digital transformation in the offshore industry, ensuring safer and more efficient operations through advanced wireless sensors for condition monitoring, predictive maintenance, and remote operations.

Cato Lammenes, VP and Head of Tampnet Carrier, said: “With the addition of NJFX to our American footprint, this new connection hub supports our strategy for increased diversity within our 4G/5G core as well as providing additional services and routes for our global clients transmitting data between the European regions and the USA.”

Establishing a PoP within NJFX’s dynamic ecosystem grants Tampnet and its clientele direct, on-demand access to key submarine cable systems including Havfrue/AEC-2, Seabras-1, TGN1, and TGN2. This translates to unparalleled connectivity across the Americas, Europe, and the Caribbean.

“We are delighted by this strategic collaboration with Tampnet, solidifying their presence within our thriving ecosystem,” comments Felix Seda, General Manager at NJFX. “Tampnet’s choice of NJFX as their core US connectivity hub is testament to our commitment to providing unmatched connectivity solutions on the East Coast.”

By establishing a foothold at the NJFX facility, Tampnet aims to fortify its network capabilities and meet the evolving connectivity needs of its clientele. This symbiotic partnership promises enhanced connectivity options, further catalyzing the digital evolution across global industries.

About Tampnet:

Tampnet, founded in 2001 in Stavanger, Norway, operates the world’s largest offshore high-capacity communication network, serving clients in Oil & Gas, Wind Energy, Maritime, and Carrier sectors.

Tampnet Carrier’s unique network routes traverse 8 countries, connecting over 40 core data centres across 12 markets throughout Europe and the United States. Dual-path capability between Norway, continental Europe and the UK is their key differentiator, providing diverse routing through Great Britain and via Sweden and Denmark. This high-speed terrestrial and subsea network enables low-latency, reliable, redundant and secure connectivity solutions for the most demanding industries. The NORFEST subsea route brings greater resiliency, flexibility and scalability to Nordic infrastructure, with direct connectivity to 10 key cities along the Norwegian coast and to Nordic data centre hubs powered by renewable energy.

With a steadfast commitment to sustainability, Tampnet upgrades infrastructure to energy-efficient 4G and 5G technology, striving towards a carbon-neutral future.

For more information and media inquiries:

Cato Lammenes

Email:  [email protected]

Website:  www.tampnet.com

About NJFX:

Located in Wall, New Jersey, NJFX is the innovative leader in carrier-neutral colocation and subsea infrastructure, setting a new standard for interconnecting carrier-grade networks outside any major U.S. city. Our campus hosts over 35 global and U.S. operators, including multinational banks that rely on us for their “never down” network strategies. The NJFX campus is also where the major cloud operators have their global backbones physically connecting to transatlantic cables to Europe and South America. NJFX customers requiring transparency and true diversity can interconnect at a layer one level with their preferred network connectivity partners.

 

For more information and media inquiries:

Emily Newman

Email: [email protected]

Website: njfx.net

 


Red Sea conflict

WSJ Covers Red Sea Conflict Threatening Key Subsea Cables

Red Sea Conflict Threatens Key Internet Cables

Maritime attacks complicate repairs on underwater cables that carry the world’s web traffic

Article by Drew Fitzgerald

Full Story here:  Wall Street Journal
March 3, 2024


Conflict in the Middle East is drawing fresh attention to one of the internet’s deepest vulnerabilities: the Red Sea.

Most internet traffic between Europe and East Asia runs through undersea cables that funnel into the narrow strait at the southern end of the Red Sea. That chokepoint has long posed risks for telecom infrastructure because of its busy ship traffic, which raises the likelihood of an accidental anchor drop striking a cable. Attacks by Iran-backed Houthis in Yemen have made the area more dangerous.

The latest warning sign came Feb. 24, when three submarine internet cables running through the region suddenly dropped service in some of their markets. The cuts weren’t enough to disconnect any country but instantly worsened web service in India, Pakistan and parts of East Africa, said Doug Madory, director of internet analysis at network research firm Kentik.

It wasn’t immediately clear what caused the cutoffs. Some telecom experts pointed to the cargo ship Rubymar, which was abandoned by its crew after it came under Houthi attack on Feb. 18. The disabled ship had been drifting in the area for more than a week even after it dropped its anchor. It later sank.

Yemen’s Houthi-backed telecom ministry in San’a issued a statement denying responsibility for the submarine cable failures and repeating that the government is “keen to keep all submarine telecom cables…away from any possible risks.” The ministry didn’t comment on the Rubymar attack.

Mauritius-based cable owner Seacom, which owns one of the damaged lines, said fixing it will demand “a fair amount of logistics coordination.” Its head of marketing, Claudia Ferro, said repairs should start early in the second quarter, though complications from permitting, regional unrest and weather conditions could move that timeline. 

“Our team thinks it is plausible that it could have been affected by anchor damage, but this has not been confirmed yet,” Ferro said. 

Cable ships’ lumbering speed makes draping new lines near contested waters a dangerous and expensive task. The cost to insure some cable ships near Yemen surged earlier this year to as much as $150,000 a day, according to people familiar with the matter.

Yemen’s nearly decadelong civil war further complicates matters. Houthi rebels control much of the western portion of the country along the Red Sea, while the country’s internationally recognized government holds the east. Companies building cables in the region have sought licenses from regulators on both sides of the conflict to avoid antagonizing either authority, other people familiar with the matter say.

The mounting cost of doing business also threatens tech giants’ efforts to expand the internet. The Google-backed Blue Raman system and Facebook’s 2Africa cable both pass through the region and remain under construction. Two more telecom company-backed projects also are scheduled to build lines through the Red Sea.

Most of the internet’s intercontinental data traffic moves by sea, according to network research firm TeleGeography. Submarine cables can be simpler and less expensive to build than overland routes, but going underwater comes with its own risks. Cable operators report about 150 service faults a year mostly caused by accidental damage from fishing and anchor dragging, according to the International Cable Protection Committee, a U.K.-based industry group.

“Having alternative paths around congested areas such as the Red Sea has always been important, though perhaps magnified in times of conflict,” ICPC general manager Ryan Wopschall said.

Several internet companies have considered ways to diversify their connections between Europe, Africa and Asia. Routes across Saudi Arabia, for instance, could skirt the waters around Yemen altogether. But many national regulators charge high fees or impose other hurdles that make sticking to tried-and-true routes more attractive. 

“The industry, as with any industry, reacts to the conditions set upon it, and routing in Yemen waters is a result of this,” Wopschall said.

Benoit Faucon contributed to this article.

Write to Drew FitzGerald at [email protected]

 


NJFX Edge AI Inference New Jersey

Why operators and enterprises will need an AI data center strategy


Ivo Ivanov (CEO at DE-CIX), Data Center Dynamics
February 1, 2024


As Mobile World Congress (MWC) 2024 draws near, the integration and impact of artificial intelligence (AI) in our digital economy cannot be overstated.

AI has always been a hot topic in the mobile industry, but this year it’s more than just an emerging trend; it’s a central pillar in the evolving landscape of telecommunications.

The democratization of generative AI tools such as ChatGPT and PaLM, and the sheer availability of high-performance Large Language Models (LLMs) and machine learning algorithms, means that digital players are now queuing up to explore their value and potential use cases.

The race to uncover and extract this value means that many market participants are now getting directly involved in using or building digital infrastructure.

The likes of Apple and Netflix walked this path almost a decade ago, and now banks, automotive companies, logistics enterprises, fintech operators, retailers, and healthcare specialists are all embarking on the same journey. The benefits are simply too good to pass up.

Crucially, we’re not just talking about enterprises owning a bit of code or developing new AI use cases; we’re talking about these companies having a genuine stake in the infrastructure they’re using. That means their attention is turning to things like data sovereignty, network performance, latency, security, and connection speed. They need to make sure that the AI use cases they’re pursuing are going to be well accommodated long into the future.

The need for network controllability

Enterprises are no longer mere spectators in the AI arena; they are active stakeholders in the infrastructure that powers their AI applications.

For instance, a retail company employing AI for personalized customer experiences must command not only the algorithms but also the underlying data handling and processing frameworks to ensure real-time, effective customer engagement.

This shift toward controllability underscores the importance of data security, compliance adaptability, and operational customization.

It’s about having the capability to quickly adjust to evolving market demands and regulatory environments, as well as optimizing systems for peak performance.

In essence, controllability is becoming a fundamental requirement for enterprises, signifying a shift from passive participation to proactive management in the network landscape.

Low latency is no longer optional

In the high-stakes world of AI, where milliseconds can determine outcomes, latency becomes a make-or-break element.

For example, in the financial sector, where AI is used for high-frequency trading, even a slight delay in data processing can result in significant performance losses. Similarly, for healthcare providers using AI for real-time patient monitoring, latency directly impacts the quality of care and patient outcomes.

Enterprises are therefore prioritizing low-latency networks to ensure that their AI applications function at optimal efficiency and accuracy. This focus on reducing latency is about more than speed; it’s about creating a seamless, responsive experience for end-users and maintaining a competitive edge in an increasingly AI-driven market.

As AI technologies continue to advance, the ability of enterprises to manage and minimize latency will become a key factor in harnessing the full potential of these innovations.

Localization will become mission-critical

Previously only talked about in the context of content delivery networks (CDNs) and cloud models, localization now plays a crucial role in AI performance and compliance. A striking example of this is Dubai’s journey in localizing Internet routes.

From almost no local Internet routes a decade ago to achieving 90 percent today, Dubai has dramatically reduced latency from 200 milliseconds to a mere three milliseconds for accessing global content.

This shift highlights the performance benefits of localization, but there are legal imperatives too. With regions like Europe and India enforcing strict data sovereignty laws, managing data correctly within specific jurisdictions has become more important as data volumes have increased.

The deployment of AI models, and by proxy the networks accommodating them, must therefore align with local market needs, demanding a sophisticated level of localization that businesses are now paying attention to.

Multi-cloud interoperability

AI is also reshaping how enterprises approach cloud computing, especially in the context of multi-cloud environments. AI’s intensive training and processing often occur within a specific cloud infrastructure.

Yet the ecosystem is more intricate, as the numerous applications that either feed data to, or draw data from, these AI models are likely distributed across different cloud platforms.

This scenario underscores the critical need for seamless interoperability and low-latency communication between these cloud environments.

A robust multi-cloud strategy, therefore, isn’t just about leveraging diverse cloud services; it’s about ensuring these services work in harmony as they facilitate AI operations.

All of these factors (controllability, latency, localization, and cloud interoperability) will become increasingly important to enterprises as use cases develop. Take self-driving cars, for instance. Latency and the real-time exchange of data are obviously critical here, but so are cloud interoperability and data sovereignty.

A business cannot serve an AI-powered driver assistance system from one region if the car is in another. These systems also learn and adapt to individual driving patterns, and handle sensitive personal information, making compliance with regulations like the General Data Protection Regulation (GDPR) in the EU not just a legal obligation but a trust-building imperative.

Networking and interconnections

If data center operators want to win business from these AI-hungry, data-driven enterprises, they need to move their focus beyond mere servers, power, and cooling space.

Forward-looking data centers are now evolving to support their enterprise customers more effectively by providing direct connectivity to cloud services.

This is ideally achieved through housing or providing direct access to interconnection platforms in the form of an Internet Exchange (IX) and/or Cloud Exchange.

This will allow different networks to interconnect and exchange traffic directly and efficiently, bypassing the public Internet, which reduces latency, improves bandwidth, and enhances overall network performance and security.

Enterprises are more invested than ever in the connectivity infrastructure powering their services, and to win customers, data centers are going to need to take a more collaborative and customizable approach to data handling and delivery.

This isn’t just a response to immediate challenges; it’s a proactive blueprint for a future where AI’s potential is fully realized.


The New Wave of SMART Cables

New Wave of SMART Cables

By Srikapardhi, TelecomTalk
January 31, 2024

Once operational, the system will provide not only a supplementary telecom cable to New Caledonia, extending to Australia and Fiji, but also a vital component in environmental monitoring.

Prima, in collaboration with Alcatel Submarine Networks (ASN), announces the signing of a contract for the establishment of the first SMART subsea cable system. OMS will be responsible for the marine installation of this system, which is set to be deployed and operational in 2026. The system will enhance digital connectivity and seismic monitoring in the Pacific region, the joint statement said.

Collaborative Innovation


The integration of four advanced Climate Change Nodes (CC Nodes) into the subsea cable system will facilitate real-time monitoring of seismic activities and efficient tsunami detection, particularly in the seismically volatile New Hebrides Trench. Additionally, this technology is expected to transform warning systems across the Pacific, enhancing security and preparedness against natural disasters.

Environmental Monitoring Advancements

Prima emphasised the key supporters of this project, including the French Government for its “unwavering commitment and encouragement”, the Government of Vanuatu that entrusted Prima with the implementation of this hybrid cable, and OPT NC that supported the project, especially in the Lifou landing.

For its part, ASN also collaborated with the SMART Joint Task Force (JTF) for their consistent support and expertise in developing SMART cable projects. “By merging telecommunications with environmental monitoring technologies, this endeavour will substantially enhance the safety, connectivity, and scientific insight of the Pacific region,” the joint statement said.

Prima is a telecommunications and data infrastructure company based in Port Vila, Vanuatu. Alcatel Submarine Networks (ASN) offers an extensive service portfolio including project management, installation, and commissioning, along with marine and maintenance operations performed by ASN’s wholly-owned fleet of cable ships.

What are SMART Cables?

Instrumenting the deep ocean has been a challenge for ocean scientists for decades.

The Science Monitoring And Reliable Telecommunications (SMART) Subsea Cables initiative seeks to revolutionize deep ocean observing by equipping transoceanic telecommunications cables with sensors that provide novel and persistent insights into the state of the ocean, at a modest incremental cost: monitoring climate change, including ocean heat content, circulation and sea level rise; providing early warning for earthquakes and tsunamis; and monitoring seismic activity for earth structure and related hazards.

The Joint Task Force

The SMART Subsea Cables initiative is led by a Joint Task Force (JTF) made up of three United Nations organizations: the International Telecommunication Union (ITU), the World Meteorological Organization (WMO), and the Intergovernmental Oceanographic Commission (IOC) of the United Nations Educational, Scientific and Cultural Organization (UNESCO). The JTF is responsible for charting a path for the implementation of SMART monitoring capabilities into new cable installations worldwide.

International Program Office

The SMART International Program Office (IPO) is the executive branch of the JTF and is responsible for carrying out its recommendations in pursuit of broad SMART adoption. In this role the IPO acts in oversight and managerial capacities as the unifying executive agency bridging the many relevant stakeholder communities pertinent to SMART implementation. 


NJFX AND EXA INFRASTRUCTURE FORGE STRATEGIC PARTNERSHIP TO BOLSTER TRANSATLANTIC CONNECTIVITY

NJFX and EXA Infrastructure Forge Strategic Partnership To Bolster Transatlantic Connectivity

WALL TOWNSHIP, NJ & LONDON, UK, 21 JANUARY 2024 – EXA Infrastructure, the largest dedicated digital infrastructure platform connecting Europe and North America, today announced a strategic partnership with NJFX, a leader in carrier-neutral colocation and subsea infrastructure. This collaboration marks a significant step in bolstering global network connectivity, with EXA establishing a new Point of Presence (PoP) at NJFX’s facility.

EXA Infrastructure, a London-based I Squared Capital portfolio company, operates a vast 142,000-kilometer fiber network spanning 34 countries and connecting 300 cities. With 13 Tier 3-equivalent data centers and several strategic sub-sea routes, including a low-latency transatlantic link, EXA’s network is a cornerstone of this partnership.

As part of this strategic presence at NJFX, EXA announced a partnership with Bulk for the Havfrue cable system. EXA will integrate Havfrue with its pan-European backbone network to provide direct connectivity to the Nordics, avoiding major conventional transatlantic traffic passages.

EXA Infrastructure, VP Network Investments, Steve Roberts said: “As we embark on this strategic partnership with NJFX, we’re not just connecting infrastructure; we’re forging a pathway for our customers to traverse the digital landscape faster and more efficiently than ever before. We are excited to be partnering with NJFX and this collaboration amplifies opportunities for our customers to access Europe with unprecedented speed on the EXA network. We are committed to providing cutting-edge solutions in today’s digital era that is defined by connectivity.”

NJFX is distinguished for its unique strategy in linking carrier-grade networks beyond major U.S. cities, accommodating 35 international and domestic operators and growing. The NJFX campus is also where the major cloud and network operators have their global backbones physically connecting to transatlantic cables to Europe and South America. The alliance with EXA amplifies NJFX’s dedication to offering customers unmatched options in network connectivity.

Felix Seda, General Manager at NJFX, said: “We are proud to have EXA Infrastructure as part of our growing ecosystem integrating their expansive network with our robust connectivity infrastructure. By establishing a presence at the NJFX colocation campus, EXA customers are now able to leverage low latency routes to major connectivity hubs avoiding legacy chokepoints.”

This alliance represents a significant milestone in the telecommunications industry, offering existing and prospective customers enhanced network options. With an emphasis on diversity, capacity, and growth scalability, NJFX and EXA Infrastructure are committed to driving forward the future of global connectivity.

###

About NJFX

Located in Wall, New Jersey, NJFX is the innovative leader in carrier-neutral colocation and subsea infrastructure, setting a new standard for interconnecting carrier-grade networks outside any major U.S. city. Our campus hosts over 35 global and U.S. operators, including multinational banks that rely on us for their “never down” network strategies. The NJFX campus is also where the major cloud operators have their global backbones physically connecting to transatlantic cables to Europe and South America. NJFX customers requiring transparency and true diversity can interconnect at a layer one level with their preferred network connectivity partners. For more information, visit

NJFX.net

Media contacts:
Emily Newman
[email protected]

About EXA Infrastructure

Headquartered in London, EXA Infrastructure is a portfolio company of I Squared Capital and the largest dedicated digital infrastructure platform connecting Europe and North America, owning 142,000 kilometres of fibre network across 34 countries. EXA’s network connects 300 cities and offers 13 Tier 3-equivalent data centres, with subsea routes that include five transatlantic cables, one of them the lowest-latency link between Europe and North America. For more information, see exainfra.net

Media contacts:

Alana Foster
EXA Infrastructure
[email protected]


NJFX Announces Thomas Schemly as New Enterprise Solutions Architect


“The team at NJFX has developed a state-of-the-art, Tier-3 carrier-grade data center located outside the NJ/NYC Metro Market. This facility surpasses current industry standards, promoting network transparency and collaboration among our ecosystem. I look forward to driving the enterprise architecture and continuing NJFX’s tradition of excellence.” – Thomas Schemly

WALL TOWNSHIP, New Jersey — NJFX, the innovative leader in carrier-neutral colocation and subsea infrastructure, announces the appointment of Thomas Schemly as Enterprise Solutions Architect.

Mr. Schemly’s expertise in network design across various sectors has been crucial to his success in strategic partnership development. His deep understanding of global network architecture has been instrumental in designing robust networks for the financial industry. Mr. Schemly’s career is marked by successful partnerships, managing major accounts, and spearheading sales strategies that drive enterprise and cloud solutions.

NJFX General Manager, Felix Seda said, “Over the last seven years, NJFX has provided me the opportunity to forge close relationships with our ecosystem of carriers and understand the nuances of their network architecture. Tom’s experience and insight into the needs of the enterprise community will bring a unique perspective and solidify NJFX’s value in creating an interconnection hub focused on network transparency.”

NJFX has created a new purpose-built standard for interconnecting carrier-grade networks outside of any major U.S. city. Today, 35 global and U.S.-based operators are present at NJFX, with multinational banks starting to deploy their core network nodes for a “never down” strategy. The NJFX campus is also where the major cloud operators have their global backbones physically connecting to transatlantic cables to Europe and South America. Enterprise customers looking for true diversity can interconnect at a layer one level with their preferred network connectivity partners.

“Tom’s addition to our team marks a significant stride in our unwavering commitment to the enterprise sector. His depth of knowledge and industry acumen is critical to maintaining the high standard of network security, transparency, and personal engagement that our C-level partners expect,” said Gil Santaliz, CEO of NJFX. “With real-time data transactions estimated at 2 billion per minute globally, Tom’s role is instrumental in ensuring that enterprises achieve the network diversity essential in today’s digital ecosystem.”

Reflecting on his new role, Schemly comments, “With over 25 years of experience in designing and implementing intricate networks for financial and other various sectors, I am committed to a diversity-first approach. The team at NJFX has developed a state-of-the-art, carrier-grade data center located outside the NJ/NYC Metro Market. This facility surpasses current industry standards, promoting network transparency and collaboration among our ecosystem. I look forward to driving the enterprise architecture and continuing NJFX’s tradition of excellence.”

###




Submarine Cables: Risks and Security Threats

In a context of growing international tensions, the creation of a European program modelled on the US and Japanese programs, one that aims to step up operations to deter attacks on this infrastructure and to develop construction and repair capabilities, has become very important.


Energy Industry Review: Written by Rona Rita David

99% of internet traffic runs through submarine cables. It is estimated that over USD 10,000 billion in financial transactions run through these “seabed highways” today. This is especially the case for the main global financial exchange system, SWIFT (Society for Worldwide Interbank Financial Telecommunications), from which many Russian banks have recently been banned. The security of these transactions is a political, economic, and social problem, and a major issue that has long been ignored. The extreme geographic concentration of the cables makes them particularly vulnerable. There are over 420 submarine cables in the world, totaling 1.3 million kilometres, over three times the distance from the Earth to the Moon. The record holder is the 39,000-kilometre SEA-ME-WE 3 cable, which links South-East Asia to Western Europe through the Red Sea.

Submarine internet cables are as crucially important as oil and gas pipelines. In the context of Russia’s invasion of Ukraine, the seabed is more than ever a battlefield that must be protected. Western armed forces are contemplating a nightmare scenario of a total interruption of the internet in Europe, as 99% of the global network runs through submarine cables.

Satellites account for only 1% of data exchanges. The reason is simple: they cost more than cables and are infinitely slower.

A hundred submarine cable breaks a year

These infrastructures are as important today as oil and gas pipelines. But are they as well protected? Modern submarine cables use fibre optics to transmit data at the speed of light. However, while cables are generally reinforced in the near vicinity of the shore, the average diameter of a subsea cable is not much larger than that of a garden hose.

For several years, the major powers have been fighting a “hybrid war”, half open, half secret, for control of these cables. As Europe focuses increasingly on threats to cybersecurity, investment in the security and resilience of the physical infrastructure that underpins its communications with the world does not seem to be a priority today.

Failure to act will only leave these systems vulnerable to espionage, expose data flows to interruption, and undermine the security of the continent. On average, there are over a hundred breaks of submarine cables every year, caused in general by fishing boats dragging their anchors. Intentional attacks are difficult to measure, but the movements of some ships, whose routes follow submarine telecommunication cables, have drawn attention since 2014.

The first attacks of the modern age date back to 2017, targeting the cables between the UK and the US and between France and the US. Although these attacks remain unknown to the general public, they are no less worrying and prove the capacity of external powers to separate Europe from the rest of the world. In 2007, Vietnamese fishermen cut a subsea cable to recover composite materials and try to resell them. Vietnam thereby lost almost 90% of its connectivity with the rest of the world for a period of three weeks.

Potential risks

Creating a European program to increase the EU’s capabilities to prevent attacks on this infrastructure and to repair the damage they could cause is more urgent than ever. Russian “fishing” or “oceanographic” vessels, which are generally intelligence collectors, increasingly ply the coasts of France and Ireland, where these “information highways” pass. Yantar, an “oceanographic” vessel carrying an AS-37 mini-submarine, was able to dive in August 2021 to a depth of 6,000 metres off the Irish coast, following the route of the Norse and AEConnect-1 cables, which link Europe to the United States. Russia, which cut the Ukrainian cables in 2014, would therefore have the capacity to repeat the operation for the whole of Europe.

A map of submarine cables around the world

TeleGeography, a US telecommunications consultancy, has created the Submarine Cable Map portal, an interactive map of all the submarine cables running around the world, with data about the companies that own them, such as Google, Facebook, Amazon, Verizon, and AT&T. The map shows that one key highway lies in the Atlantic Ocean, linking Europe and North America, while a great Pacific highway links the United States to Japan, China, and other Asian countries. From Miami, several cables connect Central and South America. In the case of Mexico, for example, most cables run from the east of the country, cross the Gulf of Mexico to Florida, and connect from there to Central and South America.

Even if we tend to believe that our smartphones, computers, and other devices are interconnected via space, almost 99% of all internet traffic is in fact carried by these global submarine lines.
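The scale figures quoted above are easy to sanity-check against route data of the kind TeleGeography publishes. Below is a minimal sketch, assuming a local GeoJSON export of cable paths (the filename cable-geo.json and the LineString/MultiLineString schema are assumptions, not a documented API), that sums the great-circle length of every cable segment and compares the total to the Earth-Moon distance.

    import json
    import math

    EARTH_RADIUS_KM = 6371.0
    EARTH_MOON_KM = 384400.0  # mean Earth-Moon distance

    def haversine_km(lon1, lat1, lon2, lat2):
        # Great-circle distance between two (lon, lat) points, in km.
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
        return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

    def path_km(coords):
        # Sum the segment lengths of one [[lon, lat], ...] path.
        return sum(haversine_km(*a, *b) for a, b in zip(coords, coords[1:]))

    with open("cable-geo.json") as f:  # assumed local export of cable routes
        features = json.load(f)["features"]

    total_km = 0.0
    for feature in features:
        geom = feature["geometry"]
        if geom["type"] == "LineString":
            total_km += path_km(geom["coordinates"])
        elif geom["type"] == "MultiLineString":
            total_km += sum(path_km(part) for part in geom["coordinates"])

    print(f"Total mapped cable length: {total_km:,.0f} km")
    print(f"Earth-Moon distances: {total_km / EARTH_MOON_KM:.1f}")

On the figures quoted in the article, 1.3 million kilometres divided by the roughly 384,400-kilometre Earth-Moon distance gives about 3.4, consistent with the “over three times” claim.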

Cutting submarine cables, an old and proven practice of war

Recent attacks on cables carrying voice and data traffic between North America and Europe suggest that an old practice of war is undergoing a revival. France and the United Kingdom had already experienced it at the hands of Germany during the First World War, when these infrastructures were part of the global telegraph cable network. The United States, likewise, cut cables in wartime to disrupt an enemy power’s ability to command and control distant forces.

The first such attacks took place in 1898, during the Spanish-American War. That year, in Manila Bay (Philippines), the USS Zafiro cut the cable connecting Manila to the Asian continent, isolating the Philippines from the rest of the world, as well as the cable connecting Manila to the Philippine city of Capiz. Other spectacular cable attacks took place in the Caribbean, leaving Spain in the dark during the conflict over Puerto Rico and Cuba, which contributed greatly to the final victory of the United States.

Russia interested in NATO’s subsea infrastructure

Russia appears to be realizing these concerns at the highest level. In 2015, the presence of the Russian vessel Yantar along the US coast, near the cables, did not fail to raise tensions between the two states. At the end of 2017, the situation repeated itself.

“We are now seeing Russian underwater activity in the vicinity of undersea cables that I don’t believe we have ever seen. Russia is clearly taking an interest in NATO and NATO nations’ undersea infrastructure,” said Admiral Andrew Lennon, commander of the organization’s submarine forces. It is like going back to the days of the Cold War, to the point that the think tank Policy Exchange has devoted an entire chapter of its “Russia Risk” report to the topic, recalling the annexation of Crimea in 2014, when the peninsula was isolated from the rest of Ukraine by the physical cutting of its communications.

“While the relative weakness of the Russian position makes a conventional conflict with NATO unlikely, fibre-optic cables can be a target for Russia. We should prepare for an increase in hybrid actions in the maritime domain, not only from Russia, but also from China and Iran,” underlines US Admiral James G. Stavridis, former NATO Supreme Allied Commander.

Three major security risks

The first risk factor is the growing volume of data flowing through cables, which encourages third countries to spy on or disrupt traffic.

The second risk factor is the increasing capital intensity of these facilities, which leads to the creation of international consortia involving up to dozens of owners. These owners are separate from the entities that manufacture the cable components and from those that lay the cables along the ocean floor. Shared ownership substantially reduces costs, but it also opens these consortia to state actors who could use their influence to disrupt data flows, or even interrupt them in a conflict scenario. At the other end of the spectrum, the GAFAMs (Google, Apple, Facebook, Amazon, and Microsoft) now have the financial and technical capacity to build their own cables. Thus, the Dunant cable, which links France to the United States, is entirely owned by Google. The Chinese giants have also embarked on a strategy of submarine conquest: witness the Peace cable, which connects China to Marseilles and is owned by the Hengtong company, considered by the Chinese government a model of “civil-military fusion”.

Another threat is espionage, which requires specially equipped submarines, or submersibles operated from ships, capable of intercepting, or even modifying, data passing through fibre-optic cables without damaging them. So far, only China, Russia, and the United States have such means.

The most vulnerable point of submarine cables, however, is where they reach land: the landing stations. Thus, the town of Lège-Cap-Ferret, where the landing station of the Franco-American cable “Amitié” is to be built, has recently become a veritable nest of spies, according to informed sources.

But the most worrying trend is that more and more cable operators are using remote management systems for their networks. Cable owners welcome the staff cost savings, but these systems are often poorly secured, which exposes submarine cables to cyber risks.

Solutions in case of multiple attacks

The US executive has recently examined the possible risks in the event of multiple attacks. In addition to expanding the SSGP grant program, it has encouraged the Maritime Administration to involve various civil society associations, such as the International Propeller Club, in programs designed to minimize these threats. The idea is to create a kind of “submarine cable militia” capable of responding quickly in a crisis.

The Propeller Club has more than 6,000 members and recently provided $3.5 billion in aid to the maritime industry in the fight against Covid-19. Similarly, the creation of a “submarine cable Airbus” capable of competing with the GAFAMs, whose market share could rise from 5% to 90% in six years, can become a reality only if Europe pays attention to this issue.

In a context of growing international tensions, the creation of a European program modelled on the US and Japanese ones, aimed at stepping up operations to deter attacks on these infrastructures and at developing high-stakes construction and repair capabilities, has become more important than ever.
