A Project to Improve Internet User Experience

Argentina’s University of Palermo is working on an initiative to optimize latency and therefore improve Internet user experience.

Its technical innovation led this project to be selected as one of the winners of the grants offered by LACNIC's FRIDA Program. The funds will be used to train and incorporate new researchers so as to expand the team's ability to explore different alternatives and development opportunities. "The grant is a great motivator for participants, as it expands the project's visibility and the impact of its results," noted Alejandro Popovsky, Dean of Engineering at the University of Palermo.

Why is optimizing latency important?

In general, the quality of an Internet service is associated with the bandwidth it offers and with the provider's infrastructure overbooking, which determines how likely a customer is to actually reach the nominal bandwidth most of the time.

However, the quality of many services does not depend on the available bandwidth, as these services do not require the transmission of large amounts of information, but rather depend on the rapid response to the user’s request. In this case, latency is the most important factor affecting the perceived quality of the service.

Web browsing is an example of this. Web pages are made up of dozens and sometimes hundreds of objects, such as graphics, each of which is downloaded separately. Every object is a transaction against the web server, which means that high latency is multiplied by the number of objects. This, in turn, results in very long download times for the page.
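As a back-of-the-envelope illustration of why latency dominates here (the numbers below are assumptions for illustration, not measurements from the project), compare the time spent transferring bytes with the time spent waiting on round trips when objects are fetched serially:

```python
# Rough illustration: per-object latency vs. raw transfer time.
# All numbers are assumed for illustration, not measurements.

objects = 100                  # objects on the page
rtt_s = 0.1                    # round-trip latency per request (100 ms)
object_size_bits = 50e3 * 8    # 50 KB per object
bandwidth_bps = 100e6          # 100 Mbit/s link

# Bandwidth-bound part: total bytes divided by link rate.
transfer_time = objects * object_size_bits / bandwidth_bps
# Latency-bound part: one round trip per object, fetched serially.
latency_time = objects * rtt_s

print(f"transfer: {transfer_time:.2f} s, latency: {latency_time:.2f} s")
```

With these assumptions the latency term (10 s) dwarfs the transfer term (0.4 s); real browsers mitigate this with parallel connections and pipelining, but the multiplication effect remains.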

What can you tell us about the University of Palermo's project to develop an algorithm to improve latency?

In transactional communications, latency has two main components: server latency and network latency. In turn, network latency includes a component caused by the finite speed at which wires can transport information, plus a second component due to the time that data packets spend in output router queues. The latter is known as queuing delay.
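These two components of network latency can be sketched numerically. The figures below are illustrative assumptions, chosen only to show that queuing delay can far exceed propagation delay:

```python
# Sketch of the two network-latency components described above.
# All values are assumptions for illustration.

distance_km = 3000
propagation_speed_km_s = 200_000   # roughly 2/3 of c in optical fiber

link_rate_bps = 10e6               # bottleneck link rate (10 Mbit/s)
queued_bytes = 250_000             # bytes sitting in the output buffer

# Fixed by physics: distance over signal speed.
propagation_delay = distance_km / propagation_speed_km_s
# Queuing delay: time to drain the bytes ahead of us in the buffer.
queuing_delay = queued_bytes * 8 / link_rate_bps

print(f"propagation: {propagation_delay * 1000:.1f} ms, "
      f"queuing: {queuing_delay * 1000:.1f} ms")
```

Under these assumptions a 3,000 km path contributes only 15 ms, while a quarter-megabyte backlog on a 10 Mbit/s link contributes 200 ms, consistent with the observation that queuing delay usually dominates.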

Queuing delay generally accounts for the greater part of network latency and is caused by the TCP congestion control algorithm, which traditionally only seeks to optimize two goals: decreasing packet losses and maximizing throughput.

Because it only considers the two goals above, the TCP congestion control algorithm neglects two other, also very important goals: minimizing latency and the fair distribution of available capacity among connections that share network resources. The problem caused by traditional congestion control algorithms is often called bufferbloat, as they flood network device output buffers with packets.
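A toy additive-increase/multiplicative-decrease (AIMD) loop, with made-up parameters, shows why a purely loss-driven sender produces bufferbloat: it keeps increasing its rate until the buffer overflows, so the queue repeatedly fills to its limit:

```python
# Toy AIMD sender at a bottleneck, illustrating bufferbloat.
# All parameters are assumptions; this is not any specific TCP variant.

link_rate = 100      # packets the bottleneck link drains per tick
buffer_cap = 400     # output buffer size in packets

queue = 0
cwnd = 100           # packets the sender pushes per tick
max_queue = 0
for tick in range(200):
    # Excess traffic accumulates in the output buffer.
    queue = max(queue + cwnd - link_rate, 0)
    if queue > buffer_cap:            # buffer overflows: loss detected
        queue = buffer_cap
        cwnd = max(cwnd // 2, 1)      # multiplicative decrease on loss
    else:
        cwnd += 2                     # additive increase: probe for bandwidth
    max_queue = max(max_queue, queue)

# The queue fills to the buffer limit before each loss event, so
# queuing delay stays high even though throughput is maximized.
print("peak queue:", max_queue)
```

Because loss is the only feedback signal, the sender cannot back off until the buffer is already full; larger buffers therefore mean more standing delay, not better performance.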

The project by the University of Palermo seeks to add these two goals — latency optimization and fair distribution — to the first two goals of reducing packet losses and maximizing throughput in order to better represent the quality needs of Internet users.

What is the innovation proposed in the algorithm you intend to develop?

Other existing congestion control algorithms also seek to optimize latency, yet their performance is generally very poor when they encounter a bottleneck on a network path shared with traditional connections. Nevertheless, these algorithms do have their uses, for example in application update servers, which can operate using idle network capacity; in that case, the priority is not how long the download takes, but that it does not affect the traffic of users of latency-sensitive services.

The congestion control algorithm developed by the University of Palermo is the first to be based on estimating the proportion of available capacity used by the connection. Most algorithms use feedback based on detecting packet losses; others rely on variations in round-trip times; still others on bit rate measurements. Ours, by contrast, estimates the proportion of capacity the connection is actually using.
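The article does not describe the algorithm's internals, so the following is only a loose sketch of the kind of signal involved, not the University of Palermo's method: a connection's share of an (assumed known) bottleneck capacity, estimated from its measured delivery rate.

```python
# Loose sketch of estimating a connection's share of bottleneck capacity.
# This is NOT the University of Palermo algorithm (its details are not
# given in the article); it only illustrates the general kind of signal.

def utilization_fraction(delivered_bytes, interval_s, capacity_bps):
    """Fraction of an assumed-known bottleneck capacity that this
    connection used over the measurement interval."""
    rate_bps = delivered_bytes * 8 / interval_s
    return min(rate_bps / capacity_bps, 1.0)

# Example: 1.25 MB delivered in 1 s over an assumed 20 Mbit/s bottleneck.
share = utilization_fraction(1_250_000, 1.0, 20e6)
print(f"estimated share: {share:.2f}")
```

In practice the bottleneck capacity itself must also be estimated (for instance from peak delivery rates), which is where much of the real algorithmic difficulty lies.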

The goal of our congestion control algorithm is to optimize latency and favor a fair distribution of capacity, but without being relegated when it shares the network path with poorly behaved connections that create bufferbloat.

Who would benefit from improved latency?

Latency optimization benefits not only users who adopt the new algorithm, but also users of transactional services whose servers run traditional algorithms and share the same network bottleneck. Once bufferbloat sets in at a network device's interfaces, it affects all users sharing that bottleneck equally.
