Six Things I Learned about Edge Computing in Five Years at Azion

06/06/2023

By Rogerio Mariano, Global Head of Interconnection, Edge & Submarine Cable at Azion

I looked around and realized that it’s been five years since I became Director of Network Planning at an edge computing startup with the mission to promote the hyperconnected economy by facilitating the construction and operation of modern applications around the world.

This company is Azion. From the start, my story with Azion has been marked by learning (something I continue to do daily), vision, planning, and progress, building POPs and reshaping interconnection. I am extremely fortunate to be surrounded by a team of exceptional people within the company. At Azion, we have learned that we have a real opportunity to help shape the future of edge infrastructure and play a modest but significant role in enabling applications that are, in fact, a matter of national technological sovereignty.

In the last 25 years, I experienced the transition from ATM, Token Ring, and FDDI to Ethernet (shoutout to anyone who has operated a Cisco WS-C8510 or a General DataComm box!); I witnessed the birth of the MPLS Forum; I loved working on the design, implementation, and operation of inner-core and outer-core backbones; I read books by Sam Halabi; I saw companies such as Nortel and Bay Networks disappear; I witnessed the evolution of photonic and submarine networks (have you ever heard of SDM?) and the maturing of the Cloud market. But believe me, the move to the Edge is one of the most relevant transitions I have seen in the technology sector in recent times.

As 2023 unfolds, I would like to share six takeaways from my nearly five years at Azion, a personal perspective ranging from the evolution of edge computing applications to their impact on the underlying digital infrastructure.

Takeaway #1: The Edge created a new interconnection model.

In recent years, we have seen a shift from traditional interconnection models to models that cater more to services and applications at the Edge. Generally speaking, the global market has become accustomed to four interconnection models:

  1. ON-NET (servers hosted inside an “eyeball” network, usually a broadband access network).
  2. Private Network Interconnect (PNI) connections, i.e., direct connections between private networks.
  3. Internet Exchange Points (IXPs).
  4. IP Transit, a service in which an Internet Service Provider (ISP) carries traffic through its network to its final destination.

However, with the arrival of the Edge (and edge computing), more and more content reaches the end user ON-NET and via PNIs, to the detriment of connections through IXPs. While IXP traffic will grow more slowly than the overall Internet, IXPs will remain an important part of the ecosystem for the following:

  1. Long-tail connections (for example, an ISP in the interior of Mato Grosso that connects to the traffic matrix of an IXP in São Paulo).
  2. Low traffic volumes.
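The reasoning above can be condensed into a toy decision rule. This is purely an illustrative sketch: the 1 Gbps threshold, the rule ordering, and the `pni_available` flag are hypothetical assumptions, not any operator's actual peering policy.

```python
# Illustrative sketch: choosing among the four interconnection models.
# Threshold and ordering are hypothetical, not a real peering policy.

def choose_interconnection(traffic_gbps: float, on_net: bool,
                           long_tail: bool, pni_available: bool = True) -> str:
    if on_net:
        return "ON-NET"      # servers already sit inside the eyeball network
    if long_tail or traffic_gbps < 1.0:
        return "IXP"         # long-tail connections and low traffic volumes
    if pni_available:
        return "PNI"         # direct private interconnect for heavy flows
    return "IP transit"      # fallback via an upstream ISP

print(choose_interconnection(0.2, on_net=False, long_tail=False))  # IXP
print(choose_interconnection(10.0, on_net=False, long_tail=False))  # PNI
```

In this toy model, heavy traffic gravitates toward ON-NET and PNI, while IXPs keep the long tail, which mirrors the shift described above.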

An important aspect to note is that we are currently witnessing this shift in the interconnection process and that the majority of “Cloud to Cloud” and “Cloud to Edge” traffic is through private networks, not through the public Internet. This generates a new interconnection structure and architecture, where we have:

Content owner -> Cloud on-ramp -> Cloud-Edge -> PNI -> Edge -> PNI -> ON-NET

With this new interconnection structure, most of these connections will be private and will bypass the public Internet. While the public Internet will continue to exist, the majority of the traffic will shift to private networks. This leaves us with some unanswered questions, such as “Is competition between IXPs and private networks always beneficial, or can it also lead to market fragmentation?” and “What will an IXP’s value proposition be if traffic remains on private networks?”

The answer is simple: IXPs need to move to the Edge and, in fact, they are already doing so.

Today, there are 35 IX.br points in Brazil, and several other private IXPs are being set up throughout the country. Each node of this edge infrastructure, whether an end-user device, a specific technical facility, a corporate datacenter, a colocation datacenter, or even a webscale datacenter, serves as a potential location for workloads. Fiber-optic interconnection and networks, along with FWA (5G), obviously play a critical role in connecting all nodes at high speed, thus allowing developers to optimize their application architectures.

From a datacenter point of view, as edge architecture matures, we are seeing the emergence of two main categories of colocation facilities that fill in the gaps between the Core and end users: aggregation points and proximity points. Aggregation points are large sites used to aggregate workloads and distribute data traffic flows, while proximity points provide a first point of trust that is close to end users.

Aggregation points tend to appear naturally in places where traffic is exchanged or where large amounts of data are stored. They facilitate connection to edge nodes, which are more specific to the application and selected on an ad hoc basis considering their proximity to end users and access to the network, both local access to end users and backhaul connectivity to the rest of an edge computing platform such as Azion.

Takeaway #2: Edge computing is not new, but applications continue to evolve (a lot!).

The need to store and process data close to where it is generated and consumed is not new. However, the real-time requirements of some applications, the exponential growth of the absolute volume of data being processed, and the cost associated with moving data across networks are shaping an architectural paradigm shift in the direction of more distributed data processing.

In this context, we are witnessing longstanding trends that are increasing the demand for edge computing and the requirements associated with the underlying digital infrastructure:

Applications are leveraging more sensory input and collecting data with very high resolution and sample rates, exponentially increasing data generation and the need for local storage and computing power.

In some cases, applications have become smarter, and AI (Artificial Intelligence) requires more processing power close to the location where the data is generated or consumed.

New applications such as IoT, virtual reality, and augmented reality, or new enabling technologies such as FWA (and the entire 5G structure), Wi-Fi 6, or even the new low-orbit satellite technologies, are constantly creating more use cases (for example, smart cities) that push technical requirements towards infrastructure that is close to the end user.

In certain use cases (for example, online gaming and telemedicine), processing power is progressively distributed across the connectivity networks, close to the location where the application is consumed rather than on the end user’s device itself. This reduces connectivity costs and improves the latency and performance of data-intensive applications.
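A back-of-the-envelope calculation shows why proximity matters for latency-sensitive use cases like these. Light in fiber travels at roughly 200,000 km/s (about two thirds of c), which sets a hard floor on round-trip time. The distances below are hypothetical examples, and real-world latency adds routing, queuing, and processing overhead on top of this floor.

```python
# Propagation-delay floor for fiber: ~200,000 km/s, i.e. 200 km per ms.
# Example distances are hypothetical; real latency is always higher.

FIBER_KM_PER_MS = 200.0

def rtt_floor_ms(distance_km: float) -> float:
    """Minimum round-trip time imposed by fiber propagation alone."""
    return 2 * distance_km / FIBER_KM_PER_MS

print(rtt_floor_ms(2500))  # distant cloud region: 25.0 ms
print(rtt_floor_ms(50))    # nearby edge node: 0.5 ms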

Takeaway #3: The definition of “edge” can vary in each use case.

In its broader sense, edge computing involves deploying computing power close to the end users, and the edge is the location where this processing power can be deployed.

The edge is not an abstraction! Consequently, the location of the edge and its purpose will always depend on the applications that use it.

Given that each application has a different architecture as well as a different set of requirements for the underlying digital infrastructure, the definition of “edge” is quite subjective, and it is unlikely to converge towards a standard definition. Discussions about the exact meaning of “Cloud” have been going on since the concept of “fog computing” appeared many years ago or even earlier. However, since then, a definition has emerged, as the technology was implemented by Cloud Providers in a relatively uniform manner.

Today, the most widely adopted cloud computing architectures have converged to include well-defined elements such as availability zones and access nodes.

The evolution of the Edge will likely be more diversified than that of the Cloud, as each use case has a different architecture, and this reduces the likelihood of the development of a uniform approach to workload positioning. As a result, there are those who believe that edge infrastructure will remain more ad hoc than cloud infrastructure is today.

In particular, the location of edge implementations will vary, with many applications being deployed on end devices, some at specific technical facilities such as towers, factories, or oil rigs, and others in datacenters or even cable landing stations (CLS).

Takeaway #4: Edge and cloud computing are converging and integrating.

When edge computing began to “succeed,” it was positioned as an alternative to cloud computing.

Now that edge computing has matured and been widely adopted, it is becoming increasingly clear that these two trends complement each other. The migration of applications that can be deployed in a centralized location to the public cloud is well underway. Cloud Providers are considering expanding the model towards more distributed applications, or creating a new model based on local zones and tailored to edge computing, but with properties similar to the existing model.

This complements a parallel movement to further distribute web-scale network nodes for various resources, such as cloud access and/or traffic aggregation/distribution, often referred to as “the tail,” into more and more carrier-neutral datacenters closer to end users. This movement is similar to what Google (with GGC), Meta (with MNA), and Netflix (with OCA) have done since 2013 with their Edge-CDN systems, since the delivery behavior of streaming, whether VoD (SVOD, TVOD, AVOD, PVOD…) or linear channels, is analogous to the edge.

In response to this trend of geographical distribution of computing power and network nodes, there is a continuous and significant push by Cloud Providers for more decentralized locations. This movement is known in the market as “Cloud Out.”

Recently, a complementary movement known as “Edge In” has also emerged, where many applications that were previously processed by end users (at customer facilities or on end devices) are migrating to third-party datacenters capable of offering greater availability and resilience, and in some cases, access to public cloud, but are still located close to end users.

These two movements are rapidly converging and meeting in the middle, building what some call “Core-Edge Architecture.” This creates a distributed layer of digital infrastructure that acts as a mesh that connects the core and the end users.

Takeaway #5: Edge and sustainability (ESG) have a lot in common.

As technology creates an increasing demand for capability in more locations, there is an opportunity to fundamentally rethink how datacenters fit into the rest of the environment.

Bringing computing/processing power closer to where the data is generated has the potential to improve sustainability. While large datacenters struggle to use their waste heat, at the edge there are more potential consumers for locally produced, highly distributed, high-quality heat.

Relatively small edge datacenters also offer facilities that are ideal for implementing large-scale energy storage solutions, which, in turn, will promote the development of additional renewable energy generation.

At the same time, data sovereignty requirements play a role in driving more distributed data storage and processing. Increasingly stringent data privacy and consumer protection regulations (LGPD, GDPR, etc.) require that data be stored and processed locally or within the borders of each country.
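A data-residency rule of this kind reduces to a simple check at workload-placement time. This is a toy sketch inspired by LGPD/GDPR-style requirements; the country list and policy are hypothetical assumptions, not legal guidance.

```python
# Toy data-residency check: users in residency-required countries
# must have their data stored in-country. The set below is a
# hypothetical example, not an actual regulatory list.

RESIDENCY_REQUIRED = {"BR", "DE"}

def placement_compliant(user_country: str, storage_country: str) -> bool:
    """True if storing this user's data in storage_country is allowed."""
    if user_country in RESIDENCY_REQUIRED:
        return storage_country == user_country
    return True

print(placement_compliant("BR", "BR"))  # True
print(placement_compliant("BR", "US"))  # False
```

In a distributed edge platform, a check like this is what pushes workloads toward in-country nodes rather than a distant central region.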

Unlike other regions, the geographic fragmentation of Latin America and Brazil inherently implies a much more distributed digital infrastructure, with more emphasis on national and regional implementations.

Takeaway #6: Edge requirements are dictating the next surge in digital infrastructure.

The rise of edge computing places a new set of requirements on global digital infrastructure. That infrastructure was originally developed to enable connectivity (both long-distance and last-mile) for businesses and consumers, and it subsequently evolved to concentrate large computing power in ever-growing datacenters, which have become the cornerstone of a globally interconnected digital economy.

How workload positioning is shaping edge infrastructure is critical for this new turning point, as different workloads are being deployed in vastly different locations at the network edge based on application architecture.

As I mentioned earlier, we are witnessing the rise of a distributed layer of digital infrastructure that acts as connective tissue between the core and end users, with different applications being implemented at various nodes along that spectrum, depending on latency, performance, reliability, size, and data sovereignty requirements. Because data is processed at the edge, infrastructure becomes more agile and business-oriented. This also makes the distribution of resources more efficient, simplifying and reducing the cost of managing the ICT infrastructure, while taking advantage of several other benefits inherent to this structure, including:

  1. Ultra-low latency.
  2. Extremely high resilience.
  3. Localized compliance.
  4. Distributed intelligence.
  5. An increased security perimeter.

Implementing a highly distributed infrastructure and adopting a replicable and sustainable model entails significant complexity, particularly in a challenging macroeconomic environment marked by inflationary pressures and the complexities of the global supply chain. However, edge computing networks will continue to advance and shape the new surge of digital infrastructure.

To conclude, I would like to recommend reading the Azion blog and the LF Edge blog, which can be valuable resources for those looking to learn more about edge computing.

https://www.azion.com/pt-br/blog

https://www.lfedge.org/
