Death of the Internet

Aaron Sempf
11 min read · Jan 7, 2015

Evolution of Interconnectivity

Over the past five years, information created and shared online has increased nine-fold.

This year, 2015, the Internet will connect one trillion devices.

As the Internet expands, we must ensure that it remains a platform for choice, competition, and neutrality; a tool for social change and connectivity; and a driver of innovation and development across the globe.

But can the Internet of today support the growth of tomorrow?

From its meagre beginnings as an ARPA (now DARPA) and U.S. DoD project, with the first message sent over the ARPANET on 29 October 1969, to the emergence of commercial ISPs in the late 1980s which gave rise to its evolution, the Internet, connecting the globe through peered networks, has become an indispensable part of daily life for more than two billion people around the world.

The ARPANET died in 1990 — decommissioned on 28 February 1990 — to give way to its evolution. So too will the Internet die, to make way for yet another evolution.

Requiem of the ARPANET

Just as the Internet came into its own in the early 1990s, growing out of the emergence of ISPs to expand, evolve, and overtake the capabilities of the ARPANET, the Internet as we know it today is reaching the limits of its current form: energy consumption, communication speed, and data storage volume.

The Internet, also referred to as the Net, is made up of a multitude of services, from communications to virtual systems. Most notably, it is home to the World Wide Web (the Web), one of the largest information-sharing resources in the world.

There are many factors that make the Internet what it is today, some of which are social, some political. But all are based on the same technological infrastructure.

Setting the social and political aspects aside, for the Internet to survive the growth and expansion of the information systems and information-sharing models built on top of it, it must evolve to overcome a few fundamental issues and make way for the Net of tomorrow.

The Fundamentals

The Internet has three fundamentals, which form both its strength and its vulnerability.

1: Domain Name System
At its core, the Internet relies on the Domain Name System (DNS).
DNS translates alphanumeric domain names into the numerical Internet Protocol (IP) addresses required by computers to identify each other according to TCP/IP.

DNS in itself is its own network. If one DNS server cannot resolve a domain name, it asks another server, and so on, until the correct IP address is returned.
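
To make that resolution chain concrete, here is a toy Python sketch of the "ask the next server" behaviour. It is an illustration only: the names, addresses, and dictionary-based "servers" are invented, and a real resolver speaks the DNS wire protocol over UDP rather than reading from dictionaries.

```python
import socket

# The real-world one-liner: the operating system's resolver performs
# the DNS lookup and returns the IP address for a name.
print(socket.gethostbyname("example.org"))

# A toy model of the chained lookup described above: if one "server"
# cannot resolve a name, the query moves on to the next. The names and
# addresses below are invented purely for illustration.
SERVERS = [
    {"alpha.example": "203.0.113.10"},
    {"beta.example": "203.0.113.20"},
]

def resolve(name):
    for server in SERVERS:
        if name in server:        # this server knows the answer
            return server[name]
    raise LookupError("NXDOMAIN: " + name)

print(resolve("beta.example"))    # -> 203.0.113.20
```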

If DNS were broken, or worse, subverted, we could no longer trust domain names. Redirection and phishing would be easy and commonplace. Own the DNS and you own the Internet.

2: Large-scale infrastructure
Infrastructure is the basic physical hardware and organizational structure needed for the operation of the Internet.

Hardware includes everything from the cables that carry terabits of information every second, to the core routers, servers, cell towers, satellites and radios of the backbone.

The backbone is defined as the principal data routes between large, strategically interconnected networks, housing the Internet exchange points and network access points that interchange Internet traffic between countries, between continents, and across the oceans.

In 2008 India lost half its Internet capacity when two strands of fibre as thick as a thumb snapped. If an accident can make the Internet unusable for hundreds of millions, imagine what an intentional attack could do.

3: Routers and routing
The Internet’s self-healing mechanisms rely on the Border Gateway Protocol (BGP) for exchanging routing information between gateway hosts, each with its own router, in a network of autonomous systems.
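
As a rough sketch of the path-vector idea behind BGP, the Python below shows one autonomous system accepting route advertisements, preferring shorter AS paths, and rejecting any path that already contains its own AS number (BGP's loop prevention). The AS numbers, prefix, and single selection rule are simplified assumptions; real BGP applies a much longer decision process.

```python
# Toy path-vector routing: the table maps a prefix to the best-known
# AS path. The ASNs are from the private-use range and the prefix from
# the documentation range; all values are invented examples.

def receive_advertisement(my_asn, prefix, as_path, table):
    """Accept an advertised route unless it would create a loop."""
    if my_asn in as_path:
        return                      # our ASN is in the path: reject
    best = table.get(prefix)
    if best is None or len(as_path) < len(best):
        table[prefix] = as_path     # shorter AS path wins (simplified)

table = {}
receive_advertisement(64512, "198.51.100.0/24", [64513, 64514], table)
receive_advertisement(64512, "198.51.100.0/24", [64515], table)         # shorter: wins
receive_advertisement(64512, "198.51.100.0/24", [64516, 64512], table)  # loop: rejected
print(table)   # {'198.51.100.0/24': [64515]}
```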

Routing is the process of selecting the best paths through a network. It is performed using packet-switching technology, which directs packet forwarding along a path, hopping from node to node until the destination is reached.

The routing process usually directs forwarding on the basis of routing tables, which maintain a record of the routes to various network destinations. To keep every path available, a routing network must allow for continuous connections and reconfiguration around broken or blocked paths, using self-healing algorithms.

Thus, constructing routing tables, which are held in the router’s memory, is essential for efficient routing.
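
A minimal Python sketch of that table-building and self-healing behaviour, on an invented four-node topology: the routing table (destination to next hop) is computed from the network graph, and simply recomputed when a link breaks.

```python
from collections import deque

# Invented topology: each node maps to the set of its neighbours.
GRAPH = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C"},
}

def build_routing_table(source, graph):
    """Breadth-first search: for each destination, record the next hop."""
    table, visited = {}, {source}
    queue = deque([(source, None)])
    while queue:
        node, first_hop = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                hop = first_hop or neighbour   # the first step taken from the source
                table[neighbour] = hop
                queue.append((neighbour, hop))
    return table

print(build_routing_table("A", GRAPH))   # e.g. {'B': 'B', 'C': 'C', 'D': 'B'}

# Self-healing: the A-B link snaps, so the table is rebuilt around it.
GRAPH["A"].discard("B")
GRAPH["B"].discard("A")
print(build_routing_table("A", GRAPH))   # B and D are now reached via C
```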

According to a 2006 Cisco vulnerability report, “the most damaging attacks are caused by the deliberate misconfiguration of a trusted router.” Corrupted BGP not only stops the Internet from forwarding traffic; when access to the routers themselves is altered, it also interferes with the ability to get in and fix them.

Evolution

The current-generation Internet is seeing the growth of the interconnection of things and devices: the Internet of Things (IoT). These devices range from sensors and cameras of every type to vehicles and machines.

IoT is increasing the connectedness of people and things on a scale that once was unimaginable. Connected devices outnumber the world’s population by 1.5 to 1.

The extended interconnection of devices results in a continuous stream of data that opens up new insights, intelligence, and models.

The insights gained from this data in turn give rise to new services and applications that can complement the way things, users, groups, businesses, and others interact across the Internet.

Evolution of Communication

In most cases, IoT devices utilise Wireless Mesh Network (WMN) technology, and while the Internet is the world’s largest mesh network, much of it, from the ISP/carrier level and beyond, is still wired.

There are many reasons for the fundamentally wired nature of the Internet — the inability to keep up with the speed of changing technology at such a scale, and cost, to name a few — but one of the main reasons relates to another Internet fundamental, routing, and its requirement for static (unchanging) connections.

However, for the Internet to support the exponential growth of interconnectivity it must evolve, and to evolve it must change the very nature of communication between nodes to allow for simpler and faster communication with lower power consumption.

The mesh of ecosystems that is the Internet of Things is the first step towards the evolution of Interconnectivity.

As we know, the Internet is a mesh network built on a routing technique. However, there is another technique that may suit what we need, known as Flooding, which in recent years has seen advancements that make it a much more viable technique for large-scale, high-volume, efficient communication.

In Flooding, instead of using a specific route for sending a message from one node to another, the message is sent to all nodes in the network.

The Flooding technique is simple and highly reliable. There are no sophisticated routing techniques, since there is no routing. No routing means no network management, no need for self-discovery, no need for self-repair algorithms, and, because the message is the “payload”, no overhead for conveying routing tables or routing information.
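
A minimal Python sketch of that idea, on an invented topology: each node simply rebroadcasts any message it has not relayed before, and no node holds routes or tables of any kind. The neighbour map here stands in for radio range; a real wireless node just broadcasts, and whoever is in range hears it.

```python
# Invented topology: who can hear whom.
NEIGHBOURS = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def flood(source, msg_id, payload):
    # In a real network each node keeps its own cache of seen message
    # IDs; this single set plays that role for the whole simulation.
    seen = set()
    frontier = [source]
    while frontier:
        next_frontier = []
        for node in frontier:
            if node in seen:
                continue               # already relayed this message
            seen.add(node)
            print(node, "broadcasts", msg_id, ":", payload)
            next_frontier.extend(NEIGHBOURS[node])   # no routes: tell everyone
        frontier = next_frontier
    return seen

flood("A", "msg-1", "hello")           # every node relays the message once
```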

Signals arriving at each node through several propagation paths benefit from the inherent space diversity, maximizing the network’s robustness against obstructions and interference and its resistance to multipath fading, with practically no single point of failure.

Despite these benefits, flooding the network with repeated messages has its own challenges. For transmitting data, the main questions are how data-packet collisions (“broadcast storms”) are avoided, how the retransmission process propagates the message efficiently toward its destination, and how the process ends without an energy-wasting avalanche.

A synchronised-Flooding approach, combining time-division multiple access (TDMA) with high-accuracy synchronization, could potentially solve these challenges.

Nodes transmit only relevant information, and retransmissions occur simultaneously, so that the message propagates one hop in all directions at precisely the same time and avoids collisions, until the set maximum number of hops is reached and the message has flooded through the network.
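
A toy Python sketch of that slot-synchronised propagation, assuming an invented topology, a perfect shared time base, and a fixed hop limit: every node holding the message retransmits in the same slot, so the flood advances one hop per slot and stops after the maximum hop count.

```python
# Invented topology and hop limit for illustration.
NEIGHBOURS = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
    "D": ["B", "C", "E"], "E": ["D"],
}
MAX_HOPS = 3

def synchronised_flood(source, payload):
    have_msg = {source}          # nodes holding the message
    transmitting = {source}      # nodes scheduled to transmit this slot
    for slot in range(MAX_HOPS):
        reached = set()
        for node in transmitting:              # all holders retransmit in
            reached.update(NEIGHBOURS[node])   # this slot (simultaneously
                                               # in the real scheme)
        new = reached - have_msg
        print("slot", slot, ":", sorted(new), "receive", payload)
        have_msg |= new
        transmitting = new       # only new holders relay in the next slot
    return have_msg

synchronised_flood("A", "sensor-reading")   # reaches all nodes in 3 slots
```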

In the Flooding-based scheme, a signal obstruction or even a limited number of signal obstructions will most likely not affect the operation at all because of the numerous redundant paths.

Considering that the nodes in this network are both backbone communication nodes and IoT devices, a wireless mesh network is the most effective topology and communication technique for mass communication across interconnected devices; it allows devices to join and leave the network dynamically and to participate in receiving and propagating messages, without having to account for neighbouring devices or route information.

Evolution of Power and Processing

The synchronised-Flooding approach offers a simplified infrastructure requiring only nodes and gateways — nodes that act as the link between multiple networks — and the transmission of only relevant data.

In routing-based networks, even though the infrastructure requirements are not as simple, the total number of operating nodes at any moment (when the network is transmitting) is always lower than in Flooding-based networks; so, it would seem, routing consumes less energy.

On the other hand, Flooding-based messages are much more efficient, as they do not require the overhead associated with transmitting routing tables and commands, which increases with the number of nodes and hops.

No routing means that the controller is extremely simple, requiring minimal computing power and memory and thus low power consumption, low PCB real estate, and low cost.

Furthermore, the energy of the signals received from adjacent nodes adds up, so less power can be used for achieving the same range.

But with the increase in interconnected devices comes an increase in the data streaming through the Net, coupled with greater physical power requirements; this creates a need to process larger volumes of data faster, with reduced power consumption.

Fortunately, the technology required to handle this is not far away — perhaps arriving as early as 2016 — with projects such as HP’s The Machine built on the premise that current RAM, storage, and interconnect technology can’t keep up with modern Big Data processing requirements.

The Machine aims to completely rework current computer architecture, combining technologies that could solve both problems: memristors that could replace both RAM and long-term flash storage, and silicon photonics that could provide faster on- and off-motherboard buses.

According to HP, The Machine will reinvent the fundamental architecture of computers to enable a quantum leap in performance and efficiency, while lowering costs over the long term and improving security.

Technologies combining hyper-fast, super-dense storage with higher data-processing rates and lower power consumption would not only enable the processing of much larger data sets, but also handle the increased level of traffic that the mesh synchronised-Flooding approach introduces.

Evolution of Application

With the evolution of infrastructure and of the way devices and individuals communicate, so too will the information and application layers evolve.

The leap forward in the performance and efficiency of communicating and processing Big Data at the physical layer builds the foundation required for improved shared processing across the Internet at the application layer.

Local apps work together to securely share information and solve problems as a distributed mesh.

More advanced shared processing enables improved Machine Intelligence capabilities, which in their current state already power applications we are familiar with: popular Intelligent Assistants such as Siri, Google Now, and Cortana.

These Intelligent Assistants are still in their infancy. The aspects of these assistants that we most readily recognise are their interfaces and modes of distribution: how and where we interact with them.

The experience of Intelligent Assistants that speak our language and communicate like a person has come to be their defining factor. But they are overwhelmingly focused on natural-language interfaces.

When it comes to the scope of what they are, or will be, capable of achieving, their purpose is to assist, and to learn and share information based on the user they assist. What they grow into will depend on their ability to learn through implicit communication, and to share and process information across the Internet with other Intelligent Assistants, as a distributed system.

Implicit communication dominates. Assistants respond and react to our subtle contextual interactions, and to each other, within vast informational ecosystems.

The ability to learn and share our needs and intentions based on the context of where we are and what we’re doing, as well as to make inferences based on associations — the way we organize information or express interests — as a mesh system of specialised assistants, will reshape the way we interface and interact with the Internet and each other.

Every website, every service, every app, and everything across the Internet of Things embodies a collection of tasks that may be supported by intelligent assistants. In this environment, the metaphor of the personal assistant quickly fragments into systems much more akin to colonies of ants.

The Web will become an information model in which information is provided contextually, according to the time, place, and activity of the individual, and delivered as answers from assistants instead of a menu of links.

Evolution of Interface

As interconnectivity across the Internet increases and the landscape of the Web changes through the colonisation of intelligent ecosystems, the things and devices that make up the IoT, and through which we interface and interact, will continue to evolve as they have in recent years, but now with a direction such as the one being discussed here. Previously, that evolution was potentially undirected or aimless: technology evolving for profit, driven by marketeers, or advancing in academia for academia’s sake, without a larger vision.

Seeing what 2014 introduced in the way of smarter things and devices, and understanding the evolution of technology so far, we can assume that technology will continue along the same path: improved processing as devices become smaller, more personal, and wearable, all interconnected.

Already, the likes of Google and Sony are producing wearable devices that connect us to the Web of information in a real-world environment, using Augmented Reality (AR) to feed real-time information directly into the field of vision.

Art: Sean Hamilton Alexander

As wearable-technology companies continue to design, improve, and redesign the way we wear our devices and the way technology proliferates into everything we do, and as Intelligent Assistants become more integrated into our daily lives, their ‘interface’ has the potential to become so embedded that we won’t even recognise they are there: monitoring, learning, processing, and sharing.

Our view of the world will be augmented by the heads-up display of Intelligent Assistants, feeding us information that they share and receive from the mesh of IoT ecosystems around us.

Social and Political Road Blocks

While a lot of this can sound fanciful and sci-fi, it is where the pure technology is heading, based on the technologies and practices that are emerging today, or have established themselves and are still evolving.

The biggest roadblocks to achieving this are the social and political issues, which often come down to ethics and control. I’m not going to discuss these at this point, as I only want to explore where the tech is going and what is possible from a conceptual point of view.

With all the social and political roadblocks mixed in, we can assume this conceptual view of the evolution of connectivity is going to be difficult to achieve. But it’s necessary, to support the growth and to make sure the foundation of what we have achieved today is still here 10 years from now, albeit in a completely different landscape.

Regardless of the social or political restrictions placed upon them, people will always find a way to do what they want or achieve their own goals using whatever means are available.

But the main focus must be on the interconnectivity of everything, through faster, simpler communication and lower power consumption.

Questions, comments, discussion… Please feel free to comment on this article, email me, or pass it around. This article is meant as a conceptual discussion of where the Internet and related technologies are heading; agree or disagree, as long as it provokes thought.


Aaron Sempf

Distributed and Intelligent systems research & development | Principal Solutions Architect @ AWS. (opinions are my own)