Notes on “Rethinking the design of the Internet: The end to end arguments vs. the brave new world”

In this paper, Clark, D., the author of “The Design Philosophy of the DARPA Internet Protocols”, and Blumenthal, M. discuss new developments that put the original guiding principle of the internet’s design, the end-to-end arguments, at risk. They discuss various causes of this risk, especially the loss of trust among individual users and the conflicts between end users and third parties such as governments and ISPs.

In their discussion of the end-to-end arguments, one thing is apparent to me: the design allows the network to expand without having to tear down the whole structure and rebuild a new one. Consistent with Clark’s previous paper, the end-to-end arguments keep the amount of communication state held inside the network relatively small, while the grunt work is done at the fringes, in the end hosts.
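
To make “the grunt work is done at the fringes” concrete, here is a minimal sketch of the classic end-to-end reliability check. This is my own illustration in Python, not something from the paper: the two endpoints verify the transferred data themselves with an application-level checksum instead of assuming the network got it right.

```python
import hashlib


def sha256_of(data: bytes) -> str:
    """Application-level checksum computed at an end host."""
    return hashlib.sha256(data).hexdigest()


# Sender side: compute the checksum before handing the data to the network.
payload = b"file contents to transfer"
sent_digest = sha256_of(payload)

# ... the network in between may drop, reorder, or corrupt packets ...

# Receiver side: recompute the checksum and compare. Only the ends can
# tell whether the transfer as a whole actually succeeded.
received_payload = payload  # in reality, reassembled from received packets
if sha256_of(received_payload) != sent_digest:
    raise ValueError("end-to-end check failed; the transfer must be retried")
```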

Despite the fairly good design principles of the end-to-end arguments, the authors have observed trends that make reconsidering the original design worthwhile. They outline several factors, such as the untrustworthiness of users, more demanding applications, ISP service differentiation, the rise of third-party involvement, and less sophisticated users, all of which seem to lead to some disruption of the end-to-end design principle. The authors give examples of scenarios where the end-to-end principle may not seem sufficient, such as filtering spam and content caching. Today these are handled in servers, but those servers are still at the ends of the network, so the end-to-end principles apparently still hold. Although this architecture of intermediating servers may not look like the original end-to-end model, the services are still not performed at the network level.

It’s amusing to think that the thinking of the past several decades built this vast network and the design principles still hold. I’m not sure how long they will hold, though, since there is an impending shift toward connecting everyday things, adding many nodes that lack the processing power and storage of current devices. The internet of things may still be designed around the current intermediate servers, since that is the structure that has stood the test of time and is good enough. I’m not sure whether a time will come when the network performs much more than storing and forwarding packets. I have read that the current IPv6 standard requires routers to be able to perform IPsec functions at the router level. This would reduce the load on the individual devices connected to them, but it also seems to diverge from the original end-to-end arguments.
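
To illustrate what “just storing and forwarding packets” means, here is a minimal sketch of a deliberately dumb forwarding node. The names and packet format are hypothetical, my own illustration rather than anything from the paper: the node queues packets and passes them toward the next hop without ever interpreting the payload, which remains the business of the end hosts.

```python
from collections import deque


class ForwardingNode:
    """A 'dumb' network node: it stores packets and forwards them toward
    the next hop, never looking inside the payload."""

    def __init__(self, routes):
        self.routes = routes  # maps a destination address to the next-hop node
        self.queue = deque()

    def receive(self, packet):
        # packet is assumed to be a dict like {"dst": ..., "payload": ...}
        self.queue.append(packet)

    def forward_all(self):
        while self.queue:
            packet = self.queue.popleft()
            next_hop = self.routes.get(packet["dst"])
            if next_hop is not None:
                next_hop.receive(packet)  # payload passes through untouched


# Usage: an end host hands a packet to the node; interpreting the payload
# (encryption, spam filtering, caching) is left to the end hosts.
host_b = ForwardingNode(routes={})
router = ForwardingNode(routes={"hostB": host_b})
router.receive({"dst": "hostB", "payload": b"opaque bytes"})
router.forward_all()
```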

Trust among users was a foundation of the early internet. As in real life, trust cannot be handed to other individuals fully. The paper details how to deal with this presence of distrust, and the authors describe several scenarios around it. One particular example is the need to communicate between users where total mutual trust does not exist. In the real world, one can look up or verify the identity of the individual one is dealing with, though one can still never be entirely sure whom one is dealing with. A related scenario is the need for anonymity among users. Previous technologies do not allow the level of anonymity that the internet can provide. Telephone calls can be traced since they are built on a physically switched network, though pay phones allow somewhat anonymous calls. Similarly, letter writing can be anonymous, while others resort to wax stamps or dry seals to verify the authenticity of their letters. Today the issue of privacy is still being actively worked on.
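
The wax-seal analogy has a direct internet counterpart: end hosts can verify authenticity themselves with digital signatures, without involving the network at all, and the signer can remain pseudonymous. A minimal sketch, assuming the third-party `cryptography` package is available (my own illustration, not from the paper):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The sender generates a key pair and signs the letter; the signature
# plays the role of the wax seal.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

letter = b"An anonymous but verifiable message"
seal = private_key.sign(letter)

# The recipient, holding only the public key, checks the seal at their end.
try:
    public_key.verify(seal, letter)
    print("seal intact: whoever holds the private key wrote this")
except InvalidSignature:
    print("seal broken: the letter was altered or forged")
```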

Tied to the issues of privacy and anonymity is the presence of third parties whose interests differ from those of the end users, such as governments and ISPs. The authors mention a possible arms race between users and these third parties if the third parties start asserting claims to the users’ data. Much like citizens across the Americas, I would personally like to protect my privacy, though I would allow anonymous data to be collected about me to improve the service I receive. If the aim of governments is to protect their citizens from terrorist attacks and the like, and any method of intervening on users’ traffic can be thwarted by the users in an arms race, then it seems foolish for governments to try to intervene at all. The internet is still built on the end-to-end principle, where the interpretation and handling of data happens in the end hosts. Although governments and ISPs are powerful organizations that may have the resources to spy on users, as in the recent NSA controversies, they would seem to be wasting valuable resources developing these spying techniques. Despite this seemingly dead-end approach to spying, the CCTV cameras ever present in other countries, such as Korea and the UK, seem like a good idea to me. I would like to be observed in most public places where crime is a possibility. Maybe the collection of data on the internet can likewise proceed in areas that can be declared public. Crimes are possible in private areas too, and much of the internet is public; this may be why illegal and clandestine activity on the internet occurs on the underground or private internet. One example is Silk Road, an online black market where users trade illegal substances. This may be part of the reason governments want to intervene in users’ traffic.

In their conclusions, the authors mention that the original motto of the internet is forever gone. I agree: as the internet becomes a public right, more and more people who do not fit the definition of trustworthy are able to access it. They cannot be prevented from access, though it may be good to have a mechanism that denies access in response to “bad behavior”. Such blocking already happens today in the form of bans and blockages, although a banned user is not prevented from accessing the other resources of the internet.

The authors very much look forward to the further development of the internet, including non-technological advances such as in law and accounting. Even so, I concur with them that the end-to-end arguments should be preserved for now, although these concepts might become obsolete in the future. I’m all for preserving the original design principles, but they should be retired only when a newer approach still addresses the original concerns of innovation and flexibility.
