Internet Innovation: A History of Constant Adjustment and Tailored Solutions

05/06/2024

By Carlos Martinez Cagnazzo, LACNIC CTO/Technology Manager

An interesting thing about the Internet is that, throughout its history, innovation and evolution have followed a substantially different path from that of other technologies. On the Internet, solutions have been invented as they were needed, unlike in other design disciplines or technologies such as telephone networks, which were built following a ‘top-down’ paradigm in which specifications or general frameworks determine the functionalities, routes, or environments to be used.

In this sense, I would like to talk about the most famous computer network architecture: the ‘seven layers’ or Open Systems Interconnection (OSI) model. In the late 70s, people were already starting to realize that computers had to be interconnected. At that time, all that existed were proprietary mechanisms defined by the major manufacturers of micro- and minicomputers, such as IBM. Each manufacturer had its own standard for communication between computers.

Over time, the discussion began to focus on the sustainability of this model, which was not even functional for the manufacturers themselves. It became necessary to define a standard network structure, and this need led to the creation of the OSI model. The traditional manufacturers turned to the International Organization for Standardization (ISO) and began with a series of basic requirements, which they then expanded to define the various instances and parameters needed for the model to function (in fact, the resulting document was about 2,000 pages long).

The most interesting concept to come out of that model was the idea of ‘layers’: if data travels between two devices over a physical medium (an electrical connection), how do those signals effectively become a webpage? The answer is that they pass through a series of layers, each of which adds a different functionality. The first is the physical layer, which covers the physical equipment such as cables and connectors, enables data transfer, and is where the data is converted into a stream of bits. This is followed by the data link layer, responsible for transferring information within the same network and for error and flow control, and the network layer, responsible for splitting the data on the sender’s device and reassembling it on the recipient’s device when transmission occurs between two different networks. The model also includes the transport layer, the session layer, the presentation layer, and the application layer.
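To make the idea of layering more concrete, here is a minimal sketch in Python of how each layer wraps the data handed down by the layer above with its own header. The layer names, header fields, and addresses are invented for illustration; this is not a real protocol stack.

```python
# Illustrative sketch of layered encapsulation: each layer wraps the payload
# from the layer above with its own header. Names and fields are invented for
# illustration; this is not a real protocol implementation.

def application_layer(message: str) -> bytes:
    # The application produces the actual user data.
    return message.encode("utf-8")

def transport_layer(payload: bytes, port: int) -> bytes:
    # The transport layer adds information to reach the right process.
    return f"TRANSPORT(port={port})|".encode() + payload

def network_layer(segment: bytes, dst_addr: str) -> bytes:
    # The network layer adds addressing so the packet can cross networks.
    return f"NETWORK(dst={dst_addr})|".encode() + segment

def data_link_layer(packet: bytes, dst_mac: str) -> bytes:
    # The data link layer frames the packet for delivery on the local network.
    return f"LINK(mac={dst_mac})|".encode() + packet

# The physical layer would then transmit the resulting frame as a stream of bits.
frame = data_link_layer(
    network_layer(
        transport_layer(application_layer("GET /index.html"), port=80),
        dst_addr="192.0.2.10",
    ),
    dst_mac="00:00:5e:00:53:af",
)
print(frame)
```

On the receiving side, the process runs in reverse: each layer strips its own header and hands the remaining payload up to the layer above.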

While the OSI model is conceptually very relevant, its implementation faced practical challenges: it took a long time to write and define all the parameters, so once the stack was complete and all the definitions were ready, everyone had already started using the TCP/IP protocol.

People continued to work on the OSI model, even defining link and network layer protocols. However, despite the significant time and money invested in defining these protocols, and despite the fact that they work, they are not in use. The only OSI model protocol that retains some utility among users, and is often cited by purists as being used in telephone networks, is IS-IS, an Interior Gateway Protocol (IGP) that uses link-state information to make routing decisions.
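As a rough illustration of what ‘link-state’ means (a generic sketch, not IS-IS itself): in a link-state protocol, every router learns the full map of the network and then computes shortest paths over it, typically with Dijkstra’s algorithm. The topology and link costs below are invented.

```python
import heapq

# Toy link-state database: every router knows the full topology as a map of
# neighbor -> link cost. The topology and costs are invented for illustration;
# a real link-state protocol floods this information across the network.
topology = {
    "A": {"B": 10, "C": 5},
    "B": {"A": 10, "C": 2, "D": 1},
    "C": {"A": 5, "B": 2, "D": 9},
    "D": {"B": 1, "C": 9},
}

def shortest_paths(source: str) -> dict:
    """Dijkstra's algorithm: each router computes routes from the shared map."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > dist.get(node, float("inf")):
            continue
        for neighbor, link_cost in topology[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return dist

print(shortest_paths("A"))  # e.g. {'A': 0, 'B': 7, 'C': 5, 'D': 8}
```

In a real link-state protocol, the map itself is built by flooding link-state advertisements, and the computation is rerun whenever the topology changes.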

What do I mean by all this? That sometimes perfect can be the enemy of good. Is TCP/IP the best protocol out there? Definitely not — it has its problems and cons. But the truth is that it was available when people needed it. When the need for interconnection between computers first appeared, we can imagine that the solution was needed urgently and could not wait for a standard to be ready. If I must connect 20 ministries, multiple offices, or all the computers at a university, the urgency of my need will set the pace.

Another example that illustrates this point is that, when the IETF realized in the mid-90s that IPv4 was not sustainable, it organized a contest where people could submit ideas on how IPv4 should evolve. One of the proposals was to use the Connectionless Network Protocol (CLNP), an OSI protocol for carrying data and error indications at the network level. Functionally, it is similar to IP but with bigger addresses, so it could have solved the problem. However, despite some operational experience with the protocol, there were practically no implementations.
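To give a rough sense of what ‘bigger addresses’ means here: IPv4 addresses are 32 bits long, CLNP’s NSAP addresses can be up to 20 octets (160 bits), and IPv6, the design eventually chosen, uses 128 bits. A quick back-of-the-envelope comparison:

```python
# Back-of-the-envelope comparison of address space sizes.
# IPv4 addresses are 32 bits; CLNP uses NSAP addresses of up to 20 octets
# (160 bits); IPv6, the design eventually chosen, uses 128 bits.
ipv4_addresses = 2 ** 32
clnp_addresses = 2 ** (20 * 8)   # upper bound, assuming full 20-octet NSAPs
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:.3e}")   # ~4.3e+09
print(f"CLNP: {clnp_addresses:.3e}")   # ~1.5e+48
print(f"IPv6: {ipv6_addresses:.3e}")   # ~3.4e+38
```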

Furthermore, the CLNP issue was definitely influenced by the lack of an open standard. Open processes have been essential throughout the history of Internet innovation. Unlike closed and defined environments, open environments allow many more people to participate in decisions and are freely available for adoption, implementation, and updates. Within each industry, companies share open standards because they bring significant value to themselves and their customers. Standards are typically managed jointly by a group of stakeholders, with rules regarding the types of changes or updates users can make to ensure that the standard maintains interoperability and quality.

All of this is commonplace within the framework of the IETF, an organization that publishes technical documentation in the form of RFCs (Requests for Comments) defining the technical foundations of the Internet, such as addressing, routing, and transport. RFCs also recommend best operational practices and specify application protocols, and they often carry errata, a consequence of their open development process.

Returning to the main point of this article: for the OSI stack, an entire series of functionalities had been conceived, specified, and detailed in advance. However, back in 1969 when it all started, the person who sent the first data packet over the Internet didn’t need much more than a way to send that packet. Over time, as the need for interconnection grew and evolved, customized solutions were invented. These were recorded in a number of RFCs, including the one corresponding to the first Internet routing protocol, published in 1988: the result of a specific need to exchange routing information between gateways, which the authors later decided to document.

IP uses a ‘best effort’ approach. For example, when my computer sends a data packet, there is no guarantee that it will reach its destination. Basically, this works because the laws of statistics are in its favor: roughly speaking, for 100 packets to arrive, 110 must be sent, and the ones that get lost are simply sent again. What’s important is that the model works well and is efficient, which is one of the things that sets it apart from other protocols. It’s also worth noting that the OSI stack is a packet-based rather than a circuit-based model, but it includes a substantial control layer, which increases complexity for the user.
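A toy simulation of that idea, just to illustrate the arithmetic (the 10% loss rate is an invented figure, and real TCP recovery, with acknowledgements, timers, and congestion control, is far more sophisticated):

```python
import random

# Toy simulation of best-effort delivery: each packet is lost with some
# probability, and lost packets are simply sent again. The 10% loss rate is an
# invented figure for illustration only.
LOSS_RATE = 0.10
PACKETS_TO_DELIVER = 100

random.seed(1)
transmissions = 0
delivered = 0
while delivered < PACKETS_TO_DELIVER:
    transmissions += 1
    if random.random() > LOSS_RATE:   # the packet survives the network
        delivered += 1

print(f"Delivered {delivered} packets using {transmissions} transmissions")
# With a 10% loss rate, roughly 110 transmissions are needed on average.
```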

Unlike what happened with the OSI model, the key here was the window of opportunity: people need solutions precisely when they need them, and this urgency characterized the work of the Internet pioneers, who came up with solutions in response to specific requirements. Rather than following ‘top-down’ restrictions, the innovation that shaped the Internet was far more adaptive than standardized. Ultimately, this is what made the model much more agile and successful.

The views expressed by the authors of this blog are their own and do not necessarily reflect the views of LACNIC.
