Christian took us through a journey of recent technological evolution, from network programmability to the emergence of AI models that enable the automation of tasks that go beyond human capabilities.
One of the most powerful ideas from the talk was that today’s network is the first human-created system that can no longer be fully understood by humans themselves. From the variability of BGP to the mix of 4G, 5G, edge computing, submarine connectivity, network slices, and SDN, the complexity has reached a point where it becomes unmanageable without data-driven support tools.
This is where Machine Learning comes in — not as a trend, but as a necessity. Christian showed how data-based techniques can detect anomalies, predict patterns, and suggest actions where traditional approaches fall short. Among the most mature or promising applications he mentioned are intrusion and anomaly detection, traffic classification by application type, traffic engineering and congestion prediction, BGP policy optimization, adaptive video streaming, XR and VR, and endpoint congestion control.
It is worth noting that many of these tasks are not new — what is new is the approach: learning from data instead of guessing with static rules.
What I found especially valuable was his practical suggestion for how to start applying AI in network operations. “There is no single ‘killer app’ for AI in networks,” he pointed out. “Instead, there are many small operational pain points that can be addressed if we have well-collected data and a clear willingness to experiment.”
In the second part of his talk, Christian explained how artificial intelligence can be applied to networks. Without turning it into a theoretical lesson, he clearly described the three main families of Machine Learning techniques now used for network analysis and management. First, supervised techniques rely on data that has already been labeled. For example, we know certain records represent normal traffic while others correspond to suspicious events. Using these examples, the model “learns” to classify new, unseen events based on past patterns. This method works best when reliable, well-structured historical data is available — which, as he noted, is not always the case in the real world.
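The supervised approach he describes can be sketched in a few lines. This is a minimal illustration using scikit-learn with invented, synthetic flow features (packet rate, packet size, distinct destination ports); a real deployment would train on the operator's own labeled telemetry.

```python
# Supervised classification of network flows: learn from labeled examples,
# then classify new, unseen flows. All features and numbers are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Invented features per flow: [packets/s, mean packet size (bytes), distinct dest ports]
normal = np.column_stack([
    rng.normal(100, 20, 200),    # moderate packet rate
    rng.normal(800, 100, 200),   # large payloads
    rng.integers(1, 5, 200),     # few destination ports
])
suspicious = np.column_stack([
    rng.normal(1000, 200, 200),  # bursty packet rate
    rng.normal(80, 20, 200),     # tiny packets, scan-like
    rng.integers(50, 200, 200),  # many destination ports
])

X = np.vstack([normal, suspicious])
y = np.array([0] * 200 + [1] * 200)  # labels: 0 = normal, 1 = suspicious

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify a new flow based on the patterns learned from past examples
new_flow = [[950, 90, 120]]  # high rate, tiny packets, many ports
print(model.predict(new_flow)[0])  # 1 = flagged as suspicious
```

As the talk cautions, the sketch works because the synthetic labels are clean and well separated; real historical data is rarely this tidy.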
Second, there are unsupervised techniques, where the model does not receive any labels — it is simply provided with data and expected to identify patterns, clusters, or similar behaviors on its own. This can help reveal users with unusual consumption patterns, anomalous traffic flows, or application segments that require special handling within the network. When network flows repeat with certain characteristics, they can be grouped into classes (red, blue, green, in his example) and routed through different paths according to their profile. This is a way to optimize resources without the need for direct human intervention.
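The red/blue/green grouping can be sketched with k-means clustering. Everything here is illustrative: the two features and the cluster-to-path mapping are invented, and in practice the discovered clusters would be inspected before assigning paths.

```python
# Unsupervised grouping of flows: no labels, only similarity.
# Features and path names are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Invented per-flow features: [mean throughput (Mbps), latency sensitivity score]
flows = np.vstack([
    rng.normal([2, 0.9], 0.1, (50, 2)),   # e.g. interactive, latency-sensitive
    rng.normal([20, 0.3], 0.5, (50, 2)),  # e.g. ordinary web traffic
    rng.normal([80, 0.1], 2.0, (50, 2)),  # e.g. bulk transfers
])

# The model groups the flows into three clusters on its own
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(flows)

# Map each discovered cluster to a path, "red/blue/green" style
paths = {c: name for c, name in zip(range(3), ["path-red", "path-blue", "path-green"])}
new_flow = [[75.0, 0.12]]  # resembles a bulk transfer
print(paths[km.predict(new_flow)[0]])
```

Note that cluster indices are arbitrary: which group ends up "red" changes between runs, which is exactly why a human (or a policy layer) still decides what each cluster means.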
Finally, there is reinforcement learning, where the system learns through trial and error, receiving rewards or penalties based on the outcome of its decisions. Christian illustrated this with the classic example of a child who touches a hot surface and does not do it again. In networking, this could involve an agent selecting traffic routes, evaluating the results (congestion, latency, packet loss), and fine-tuning its strategy over time. What is interesting is that, when properly trained, this type of model can learn to “play” with the network in an optimal way without having received explicit instructions.
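The route-selecting agent can be sketched as a simple epsilon-greedy bandit, one of the most basic reinforcement-learning setups. The path names and latency numbers below are invented; the point is only the trial-and-error loop: try a path, observe the latency, and shift toward whatever pays off.

```python
# Reinforcement-style route selection as a multi-armed bandit.
# The agent does not know the latencies; it discovers them by trying.
import random

random.seed(7)
paths = ["path-A", "path-B", "path-C"]
true_latency_ms = {"path-A": 80, "path-B": 30, "path-C": 55}  # hidden from the agent

q = {p: 0.0 for p in paths}  # estimated reward per path
n = {p: 0 for p in paths}    # times each path was tried
epsilon = 0.1                # how often to keep exploring

for step in range(2000):
    # Explore occasionally; otherwise exploit the best-known path
    if random.random() < epsilon:
        p = random.choice(paths)
    else:
        p = max(paths, key=lambda x: q[x])
    # Reward = negative observed latency (lower latency is better)
    latency = random.gauss(true_latency_ms[p], 5)
    n[p] += 1
    q[p] += ((-latency) - q[p]) / n[p]  # incremental average of rewards

best = max(paths, key=lambda x: q[x])
print(best)  # the agent settles on the lowest-latency path
```

No one told the agent which path was fastest: the preference emerges from penalties (high latency) and rewards (low latency), which is the “child and the hot surface” dynamic in miniature.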
It All Starts with Data
One idea that Christian emphasized repeatedly — and which I fully agree with — is that without good data, there can be no real intelligence. This point is especially relevant for our region. Many networks operate without a clear policy for observability and the storage of historical data, which limits their future ability to apply intelligent models.
On the other hand, the techniques that Christian presented can be applied to a wide range of network scenarios, covering everything from reactive measures to predictive capabilities — including intrusion and anomaly detection, traffic classification by application or user type, QoE prediction for critical services like video, and real-time optimization of routing policies and resource allocation.
It is remarkable that these tools are already accessible — many are open source and backed by strong academic and technical communities. Still, they demand not only reliable data but also close cooperation between business experts and technical specialists who can put these tools to work.
In the final part of his presentation, Christian made it very clear that machine learning models have significant limitations, especially when applied to networks. For instance, he pointed out that every network is unique — a true “snowflake” — which means that models are not always transferable from one environment to another. He also noted that if the data are imperfect, fragmented, or lack a standardized format, model performance will vary over time and require continuous retraining. Most importantly, the results must be interpretable and reliable, which is not always the case with complex models such as deep neural networks. “It is not enough for the model to work; we also need to understand why it made that decision,” he said, highlighting the importance of transparency in operational environments.
The two final examples he shared were powerful because they connected theory and academic research with real-world impact.
First, he described a study in which his team was able to predict — up to seven seconds in advance — when a mobile connection would degrade and interrupt video playback. Network data, GPS location, and quality of experience (QoE) observations were enough to train a model that could now be valuable for operators or even streaming platforms.
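The actual features and model from that study are not detailed in the talk summary, but the underlying idea can be sketched as a classifier over per-window measurements. Everything below is an invented illustration: the features (signal strength, throughput, playback buffer) and thresholds are assumptions, not the study's real inputs.

```python
# Hedged sketch of ahead-of-time degradation prediction: a classifier
# flags, some seconds in advance, that playback is about to stall.
# All features and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Invented features per time window: [signal (dBm), throughput (Mbps), buffer (s)]
healthy = np.column_stack([
    rng.normal(-70, 5, 300), rng.normal(15, 3, 300), rng.normal(20, 4, 300),
])
about_to_stall = np.column_stack([
    rng.normal(-105, 5, 300), rng.normal(1.5, 0.5, 300), rng.normal(3, 1, 300),
])

X = np.vstack([healthy, about_to_stall])
y = np.array([0] * 300 + [1] * 300)  # 1 = degradation expected within the horizon

model = LogisticRegression(max_iter=1000).fit(X, y)

# A window with weak signal, low throughput, and a nearly empty buffer
print(model.predict([[-108, 1.0, 2.0]])[0])  # 1 = warn before the stall happens
```

The value of such a signal is that it arrives early enough for the player or the network to react, for example by lowering the video bitrate before the buffer empties.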
The second case, even more robust, showed how they managed to build a Network Digital Twin of an optical network to predict minor amplifier failures before they happened. This development was not only published and patented but has already been incorporated by an international network equipment manufacturer. It was the perfect example of every researcher’s dream: taking innovation from the lab into the real world.
His presentation made one thing very clear — and this strongly resonates with what we see in our region: artificial intelligence and machine learning are no longer futuristic ideas or passing trends; they are essential tools for managing networks that have grown so complex that, in many cases, they are simply beyond the reach of manual operations alone.
There are no magic solutions — it is about recognizing that with the right data, genuine technical collaboration, and a clear focus on real business problems, it is possible to build solutions that are measurable, scalable, and truly useful.
At LACNIC, we see this every day: Latin America has what it takes. We have technical talent, active universities, committed network operators, and a community that understands its context deeply. What we often lack is not talent but the bridges — the connections between academia and operations, between data and action, and between real-world problems and proofs of concept.