Three reasons why the Internet ‘virtuous circle’ is broken

The core theory underpinning ‘net neutrality’ is the idea of a ‘virtuous circle’: a self-reinforcing loop in which more Internet users attract more applications, which in turn attract more users. Proponents argue that policy makers should protect and promote this ‘virtuous circle’, and that the way to do so is an ‘open Internet’ regulatory framework.
 
Here are three reasons why this theory and approach are flawed and need reconsidering.
 
1. ‘Innovation’ is not the only societal goal
 
This objection is perhaps best put by the excellent Installing Social Order blog:
 
“Wu and his fellow defenders of net neutrality hold innovation as their highest value. These Internet Darwinians believe that a neutral network is simply a means to innovative ends. Here, as elsewhere, we should worry about policies that champion innovation at the expense of other values, such as maintenance or justice.”
 
By focusing solely on short-term ‘innovation’ we may cause long-term ‘digital ecology’ problems. This is not a mere theory. The Internet’s expedient approach to architecture leads to a multitude of security, reliability, performance, sustainability and fairness crises.
 
There is a precedent for the ills that a regulatory fixation on a single variable can cause. The focus on ‘speed’ as the sole proxy for fitness-for-purpose has neglected other considerations, such as the performance or security isolation of applications. As a result, the infrastructure we have built for Internet access now needs costly duplicates for other key applications, like smart meters.
 
So even on its own terms, the objective of ‘innovation’ for just Web-like applications is folly, as it may pessimise the infrastructure for other important uses. For instance, declaring ‘fast lanes’ harmful to innovation because they take resources from traditional Internet uses denies any value to those alternative uses. Try telling a deaf person that reliable sign-language video is not a valid innovation!
 
There’s no reason to elevate classic Internet-type uses over other applications that may then be forced to use expensive substitute telecoms products, or simply never come into existence. The ‘virtuous circle’ suffers from an availability bias: it counts only a narrow range of familiar uses as legitimate progress.
 
2. Drives unfair and unjust resource rationing

Any network is a finite resource, and the computers attached to it are able to collectively saturate it. (With FTTH and 4G we’re at the point where individual users can saturate local backhaul resources.) Therefore there is scarcity, and any and all network use has a cost. Indeed, the mere opportunity to apply a load (even if no packets are sent) has a cost, in that there is a QoE risk to other users that otherwise wouldn’t exist.
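
To make the arithmetic concrete, here is a minimal back-of-envelope sketch. The link speeds and subscriber count are illustrative assumptions, not figures from this article; the point is only that a handful of full-rate access lines is enough to fill a typical shared backhaul link.

    # Back-of-envelope sketch: how few full-rate users saturate shared backhaul.
    # The speeds and subscriber count below are illustrative assumptions only.

    GBIT = 1_000_000_000  # bits per second

    access_rate = 1 * GBIT      # assumed FTTH access line rate
    backhaul_rate = 10 * GBIT   # assumed shared backhaul link rate
    subscribers = 500           # assumed subscribers sharing that backhaul

    # How many users transmitting at full line rate saturate the backhaul?
    users_to_saturate = backhaul_rate / access_rate
    print(f"Full-rate users needed to saturate the backhaul: {users_to_saturate:.0f}")

    # Implied contention ratio: how heavily the shared link is oversubscribed.
    contention = subscribers * access_rate / backhaul_rate
    print(f"Contention ratio: {contention:.0f}:1")

Under these assumed numbers, ten simultaneous full-rate users fill the shared link that five hundred subscribers depend on. The scarcity is structural, not hypothetical.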
 
The ‘open Internet’ model falsely presumes that those applying a load to the network are internalising their costs. If they aren’t, then promoting an ever-increasing number of performance ‘free riders’ must logically result in eventual technical and/or economic collapse. (There’s a related fallacy of zero costs of association, i.e. everyone being connected to everyone else has no routing, security, provisioning or other costs.)
 
The current Internet interconnection model encourages unconstrained ‘pollution’ of the shared resource, since the ‘cost of quality’ is not given a market price. Whilst Netflix may care greatly about the quality of experience of its own application, I bet you they aren’t measuring the QoE impact they have on other applications! They certainly aren’t paying for the resulting additional network idleness required to (partially) restore that QoE.
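
A toy queueing model makes this ‘cost of quality’ concrete. The sketch below uses the textbook M/M/1 mean-delay formula, an idealisation chosen purely for illustration (it is not a claim about any real network or any particular application’s traffic): every extra unit of load raises the delay seen by every flow sharing the link, and holding a delay target means deliberately keeping capacity idle.

    # Toy M/M/1 queue: mean time in system W = 1 / (mu - lambda), for lambda < mu.
    # An idealised illustration of how added load raises everyone's delay,
    # and how much idle capacity is needed to hold a delay target.

    def mean_delay(service_rate, arrival_rate):
        """Mean time in system (seconds) for an M/M/1 queue."""
        assert arrival_rate < service_rate, "queue is unstable at or above capacity"
        return 1.0 / (service_rate - arrival_rate)

    mu = 1000.0  # assumed service rate: packets per second the link can serve

    for utilisation in (0.5, 0.8, 0.9, 0.95, 0.99):
        lam = utilisation * mu
        print(f"load {utilisation:4.0%}: mean delay {mean_delay(mu, lam) * 1000:6.1f} ms")

    # To hold mean delay at, say, 5 ms, utilisation must stay below:
    target = 0.005  # seconds
    max_load = (mu - 1.0 / target) / mu
    print(f"max utilisation for a 5 ms target: {max_load:.0%} "
          f"(i.e. {1 - max_load:.0%} of capacity must sit idle)")

The idleness in that last line is precisely the resource that nobody applying load to the shared network is currently paying for.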
 
By mispricing resources, we are encouraging ‘beggar-thy-neighbour’ approaches to application design. Your ‘adaptive’ algorithm is my denial-of-service attack. The only possible result is rationing of performance with little regard to need or willingness to pay.
 
3. Perpetuates unsafe ‘frequentist’ design assumptions
 
An assumption of the ‘open Internet’ movement is that “more of the same leads to more of the same”. In other words, by preserving the past and current technical and economic structure of the Internet, future innovation will be as strong as past innovation.
 
This assumption fails on technical grounds. The types of demand we are putting on the future Internet (e.g. safety-critical ‘smart everything’ devices, virtual reality, distributed storage systems) are fundamentally different from those of the past. There is no reason to assume automatically that these needs will be met by the current technical and economic model.
 
Indeed, network supply is hitting the scaling limits of TCP/IP, since there is no Moore’s Law for networks. As we go to ever-higher link speeds, key technical design ratios are changing, and there is a “stochastic breakdown” due to more rapid state changes. The result is growing “non-stationarity” (a kind of “statistical noise”) that causes application failure.
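
One such ratio can be sketched with rough arithmetic. The round-trip time and packet size below are assumptions chosen only for illustration: with the round-trip time pinned by geography, the number of packets in flight, and hence the amount of queue state that can change within one end-to-end control-loop interval, grows in direct proportion to the link speed.

    # Rough arithmetic: how one design ratio shifts as link speeds climb.
    # The RTT and packet size are assumptions chosen only for illustration.

    RTT = 0.05              # assumed round-trip time: 50 ms, fixed by distance
    PACKET_BITS = 1500 * 8  # assumed 1500-byte packets

    for link_gbps in (0.1, 1, 10, 100):
        link_bps = link_gbps * 1e9
        packets_in_flight = link_bps * RTT / PACKET_BITS  # bandwidth-delay product
        pkt_time_us = PACKET_BITS / link_bps * 1e6        # serialisation time per packet
        print(f"{link_gbps:5.1f} Gbit/s: ~{packets_in_flight:10.0f} packets per RTT, "
              f"{pkt_time_us:6.2f} us per packet")

Feedback mechanisms that react once per round trip are thus steering a system whose state churns thousands of times faster than they can observe it.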
 
The bottom line? Extrapolating the past is unsafe. We are nowhere near the Internet’s “end state”, so fossilising the current design is positively insane. Given the foreseeable changes ahead, we need to encourage more experimentation and design diversity (both technical and commercial) to ensure the Internet’s longevity.
 
Time for a rethink on broadband policy and innovation
 
The three issues raised here are not the only reasons to be sceptical of the ‘open Internet’ movement. For instance, professional economists have cast doubt on the implied belief that preventing short-term profit maximisation results in maximum long-term citizen welfare. I am not an economist, so I shall not claim to have the expertise to pass judgement.
 
Clearly there is a kernel of truth to a ‘virtuous circle’ of innovation. The problem is that it is factually simplistic and intellectually substandard as a theoretical model. The objections described above are enough to tell us that the leap to an ‘open Internet’ policy is based on flimsy reasoning. So what should we do about it?
 
I believe we need to hit the “emergency stop” on broadband regulation to collectively address three tasks:

  1. We need a common framework for defining the possibility space for societal “success”, even if we have natural disagreement as to where “success” lies within it.
  2. We need a shared understanding of network resource economics, grounded in robust science.
  3. We need a rational model to relate (future) network demand and supply, informed by the hard constraints of physics and mathematics as well as the soft constraints of technology.

If we can get these right then the broadband policy debate can advance and address issues of greater substance.

Want more fresh thinking? Then sign up for my free mailing list.