Thanks for tuning into this post. To give a quick background on why I’m writing this, I’ve been researching several exciting venture capital sub-sectors over the past few months and have been detailing my findings through a series of blog posts (which you can find here). I’m currently in the process of learning about the blockchain space and thought the best place to start would be the history. In doing so, I realized that it would actually be informative to take a step back and start with the history of the modern internet as there will likely be parallels between the development of the centralized internet and “internet 2.0”. Therefore, this post is dedicated to understanding that history. I hope you enjoy it!
The Intergalactic Computer Network
The best place to start is with J.C.R. Licklider. Licklider was the first Director of the Information Processing Techniques Office (“IPTO”) at the United States Department of Defense’s Advanced Research Projects Agency (“ARPA”). Wow, that was a mouthful. He was also one of the first people to conceptualize the internet. Licklider described his vision for an “Intergalactic Network” of interconnected computers in his 1963 memo to fellow computer scientists, which essentially laid the groundwork for future scientists and practitioners.
A few years later, ARPA actually created the first iteration of the internet. In line with the level of creativity you’d expect from the Department of Defense, they called this network “ARPANET”. ARPANET was initially designed to connect 16 different universities and research centers and allow users to send packets of data using Interface Message Processors (similar to modern day routers). Surprisingly enough, this was actually intended to be a decentralized network. The idea was for it to be a peer-to-peer network that was robust enough to withstand the loss of a large portion of the connected nodes. Some say the network was developed so that it could survive a nuclear attack; however, that point has since been refuted by the Internet Society. Either way, it’s clear that the network was intended to have an aspect of survivability in the face of a natural disaster or war.
The foundation of the network was laid in 1969 when the first four nodes were connected. Those four sites were UCLA’s Network Measurement Center, the Stanford Research Institute (“SRI”), U.C. Santa Barbara and the University of Utah. The first message was sent across the ARPANET in October 1969 by Charles Kline, a UCLA student programmer. Comically, the first message was “LO” because Charles was trying to type “LOGIN” to the SRI computer and the system crashed after the first two letters. The first permanent ARPANET link was established a month later in November 1969. This early communication on the ARPANET was an incredibly important proof of concept; however, the network was still very limited in what it was able to do. It was limited to sending messages, sending files, and printing to a remote printer. While this limited functionality negatively impacted the early growth of the network, other research institutions and universities slowly came on board. By the end of 1971, the network grew from 4 nodes to 15 nodes. That number grew to 35 by the end of 1973 and 63 by the end of 1976.
TCP / IP
As ARPANET continued to grow throughout the mid-1970’s, researchers began to test a variety of protocols on the network (a protocol is a set of rules that governs how information is sent across the network). The most prominent protocol at that time was the Network Control Protocol (“NCP”), which was the first to establish reliable bi-directional links between nodes.
At around the same time, several other government and research institutions began to create their own unique networks. The problem was that none of these networks were able to communicate with each other; the protocols they used only allowed information to be sent within a single network. This need for inter-network communication led to the development of the Transmission Control Protocol (“TCP”) and the Internet Protocol (“IP”), designed by Vinton Cerf and Robert Kahn in the mid-1970’s. Let’s take two seconds to discuss these two protocols as they are incredibly important and still serve as the backbone of the modern internet. When you send a piece of information from one computer to another, TCP and IP are what make sure that information gets where it’s supposed to go. TCP takes the information from your computer’s applications, breaks it into a number of smaller data packets, gives those data packets headers (so that they can be reassembled in the right order when they reach the recipient’s computer), and hands those data packets to IP. IP then routes those data packets across the internet using the recipient’s IP address. Importantly, this TCP / IP protocol stack allowed networks to connect and seamlessly hand information off to each other. This made networks much more useful and fueled an increase in adoption.
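To make that segmentation-and-reassembly idea concrete, here’s a tiny toy sketch in Python. It is purely illustrative: real TCP headers carry far more (ports, checksums, acknowledgment numbers, flags), and the function names here are my own invention. All it shows is the core trick of numbering packets so they can arrive out of order and still be put back together.

```python
# Toy illustration of TCP-style segmentation and reassembly.
# Real TCP headers carry much more information; here we keep only a
# sequence number, which is the one field reassembly actually needs.
import random

def segment(data: bytes, size: int):
    """Split data into (sequence_number, chunk) packets."""
    return [(i, data[i:i + size]) for i in range(0, len(data), size)]

def reassemble(packets):
    """Reorder packets by sequence number and rejoin the payload."""
    return b"".join(chunk for _, chunk in sorted(packets))

message = b"Packets may arrive out of order; sequence numbers fix that."
packets = segment(message, 8)
random.shuffle(packets)  # simulate IP delivering packets out of order
assert reassemble(packets) == message
```

The shuffle stands in for IP, which makes no ordering guarantees; the sequence numbers in the headers are what let the receiving end reconstruct the original message anyway.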
Unfortunately, with that growth came new problems. Notably, it was difficult to monitor what information various users were accessing, which is a big issue if the network is connected to Department of Defense computers. To mitigate those concerns, the government split ARPANET into two separate networks in 1983: MILNET and ARPANET. MILNET was used for military purposes and ARPANET was mainly dedicated to academic research. Over time, other networks surpassed ARPANET in both size and popularity (we’ll discuss this in the next section). As a result, ARPANET eventually faded away and was officially decommissioned by the government in 1990 once the internet was effectively privatized.
As I mentioned above, ARPANET was the first iteration of the internet and laid an important foundation; however, there were a significant number of other government agencies and research institutions that began to develop their own unique networks to facilitate communication and data sharing. One such network that was crucial to the development of the internet and ultimately surpassed ARPANET’s capabilities was the National Science Foundation’s NSFNET. The NSFNET was initially created to connect researchers to the National Science Foundation’s five supercomputing centers. This happened in two stages. The first stage began in 1985 when the NSF funded, created, and connected its five supercomputing centers. These centers were the John von Neumann Center at Princeton, the San Diego Supercomputing Center at U.C. San Diego, the National Center for Supercomputing Applications at the University of Illinois (Urbana-Champaign campus), the Cornell Theory Center at Cornell University and the Pittsburgh Supercomputing Center, which was a joint effort between Carnegie Mellon and the University of Pittsburgh. The second stage of the project then connected the supercomputer network to various regional networks, which were subsequently connected to various university networks. This was all made possible by the invention of the TCP / IP protocol discussed above. Upon the completion of stage two in 1986, this network connected 200 different colleges and universities.
Following its initial development, the National Science Foundation tried to encourage adoption of the NSFNET by (i) offering grants to academic institutions that joined the network and (ii) encouraging regional networks to find commercial customers that were willing to pay for the service. As the number of computers in America exponentially increased, so did the NSFNET. Unfortunately, as the network scaled up, it also became increasingly congested, forcing the NSFNET to upgrade the backbone to 1.5 Mbits/s in 1987 and eventually to 45 Mbits/s in 1991 (which sounds like a lot but would be laughably insufficient by today’s standards!).
The growth of the NSFNET was crucially important in the internet’s development because it proved that you could scale a complex network of independently operated networks. As a result, the NSFNET essentially functioned as the bridge between the ARPANET era and the modern internet. In the next section we’ll talk about the shift from the “federally-funded backbone” to the current model of commercially operated networks.
Privatization of the Internet
As we mentioned, the next step was to transition from the government-run NSFNET and ARPANET to the commercial internet backbone that we use today. For readers who want a more detailed account, you can find a really good history written by Rajiv Shah and Jay Kesan from the University of Illinois here.
The transition from NSFNET to a commercial network was spurred by the NSF’s acceptable use policy. The policy prohibited the use of NSFNET for “purposes not in support of research and education”. This inspired the creation of for-profit spin-offs that provided internet connectivity to firms that were worried they’d violate the acceptable use policy. These businesses eventually interconnected their networks to create the Commercial Internet Exchange, a commercial alternative to NSFNET.
At around the same time, the NSF contracted out the management of its network to a few different firms (MERIT, MCI and IBM), leading to several debates over whether this created unfair competitive dynamics. As a result of those discussions, the government stepped in to decide the future of the network. Following several hearings and discussions, the government decided to amend the network’s acceptable use policy and open it up to commercial use, provided that this growth indirectly benefited research and education.
After opening up the network to commercial users, the government began to transition the ownership and management of the network to the private sector. Between 1989 and 1993, the government deliberated on what the appropriate course of action should be. In 1993, it announced a plan to award backbone services to commercial providers. The plan had three parts. First, there needed to be a “Routing Arbiter” (aka a network cop) to ensure consistent routing policies. Second, there needed to be a “very high-speed backbone service” that would replace the NSFNET as the internet’s backbone. Third, there needed to be multiple network access points (“NAPs”) rather than a central backbone (like the NSFNET) to ensure competition and sustainability. In 1994, the NAPs were awarded to a few private corporations, including Sprint (for New York), MFS (for D.C.), Ameritech (for Chicago), and Pacific Bell (for California). Over the next six to twelve months, the regional networks disconnected from the NSFNET and re-connected to the commercial providers’ network access points. On April 30, 1995, the NSFNET was decommissioned, ending the era of public network ownership.
So what happened from there? Once the internet was opened up for commercial use in the early 1990’s, it exploded. Early internet service providers like The World, America Online, CompuServe and The Source brought internet access to the masses through dial-up connections. When consumers began to demand faster connections, internet service providers realized they needed an upgrade. This led to broadband access via DSL (digital subscriber line). DSL continued to deliver the internet through phone lines but at much faster speeds. The rollout of DSL spurred massive growth in urban areas; however, it was still challenging to connect rural areas. To reach these more remote areas of the country, providers had to come up with something new, leading to the development of satellite internet, which ultimately connected the rest of the country.
This gives an overview of the development of the modern internet infrastructure; however, we’re still missing a crucial discussion about three pieces of technology that fueled the internet’s mass adoption and popularity in the 1990's.
The World Wide Web, Internet Browsers & Search Engines:
I know what you’re thinking: FINALLY! After reading about years of scientific research and lackluster networks, the internet finally gets interesting. I agree.
The internet was not able to gain popularity and mainstream adoption until the development of several tools that provided an attractive user interface and made it easily accessible. Those tools were the world wide web, internet browsers and search engines.
The first of these developments was the world wide web (“the web”). The web was created by Tim Berners-Lee in 1989. Berners-Lee combined a few existing technologies to make the internet much more functional for end users. Those technologies include (i) hypertext, which allowed users to simply click a link to navigate to a different area of the network, (ii) Hypertext Markup Language (HTML), which allowed web pages to display different pictures, colors and fonts, and (iii) the Uniform Resource Locator (“URL”), which is a system of unique identifiers that allows users to directly locate content on the network. By combining these elements, Berners-Lee made the internet much easier to navigate and provided the ability to access content that spanned a variety of formats (including text, pictures, audio and video).
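The URL deserves a quick illustration, because its whole job is packing a complete “address” into a single string. Here’s a minimal sketch using Python’s standard `urllib.parse` module (the URL itself is a made-up placeholder): it splits a URL into the pieces that tell your computer which protocol to speak, which server to contact, and which resource to fetch.

```python
# Breaking a URL into its components with Python's standard library.
from urllib.parse import urlparse

url = "https://example.com/history/internet.html#arpanet"
parts = urlparse(url)

print(parts.scheme)    # "https" -> which protocol to use
print(parts.netloc)    # "example.com" -> which server on the network
print(parts.path)      # "/history/internet.html" -> which resource on that server
print(parts.fragment)  # "arpanet" -> which section within the page
```

That one-string-says-everything property is what lets a hyperlink anywhere on the web point unambiguously at content hosted anywhere else.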
As a quick aside, I want to point out that this hopefully clarifies the difference between the internet and the world wide web. While they are commonly used interchangeably, the internet is simply a system of interconnected networks whereas the world wide web is a “global collection of documents and other resources, linked by hyperlinks and [URLs]”.
The second catalyzing technology was the web browser. I’m sure you know this already but web browsers are software applications that allow users to locate, access and display web pages. They take web pages written in various computer languages (HTML, CSS, etc.) and present them in a human readable format. The world wide web would not have been successful without web browsers to help users navigate it. Luckily, Tim Berners-Lee led the charge on that front as well. In 1991, Berners-Lee created the first web browser, called “WorldWideWeb”. That name is obviously very confusing so he later changed it to “Nexus” to avoid any confusion between the browser and the web itself. In 1992, the MidasWWW and Lynx browsers were released. One year after that, in 1993, the University of Illinois’ National Center for Supercomputing Applications created the Mosaic browser, which was the first browser that could display text and images together. While Mosaic is no longer around, it’s widely credited as the browser that popularized web browsing. One year following the release of Mosaic, several of the Mosaic team members (including Marc Andreessen of Andreessen Horowitz fame) spun out and created their own browser, which they called Netscape Navigator. Netscape became wildly popular and quickly dominated the browser market in the mid-90’s. What’s interesting to note is that none of the browsers that had gained popularity up to this point in history are still around today (which should be a word of caution to those that are investing in the latest and greatest blockchain applications…).
Everything changed with Microsoft’s release of IE 3 in 1996. While Microsoft had introduced a few prototypes up to this point, IE 3 is what put Internet Explorer on the map. By integrating the browser with the Windows operating system and offering a number of new features such as mail and multimedia applications, Microsoft drove rapid adoption of Internet Explorer. A few years later, in 1999, it became the leading web browser. Since then, Mozilla launched Firefox, Apple launched Safari and Google launched Chrome. While it is interesting to read about the browser wars that have occurred since 1991, the important takeaway is that the simplicity and accessibility of web browsers were a huge step in fueling mass adoption of the internet in the late 1990's.
The final piece of the puzzle was the search engine. There were a few “search engines” prior to the world wide web, including Archie (which indexed FTP file listings) and Gopher (which organized documents into menus), but these typically just allowed you to search existing indices for keywords. Also, they did not rank search results, as the internet was small enough at that time that users could manually go through all of the results. Following the introduction of the web, a significant number of search engines popped up, including Virtual Library, World Wide Web Wanderer, W3Catalog, Aliweb, JumpStation, InfoSeek, etc. None of these early engines differentiated themselves until the creation of WebCrawler in 1994, which allowed users to search for individual words on web pages. As you can imagine, that made the internet much easier to search. Soon after the release of WebCrawler, Yahoo! launched its web directory and Lycos was created by Michael Mauldin, each of which went on to gain some degree of popularity. I think we all know where this is headed though. A couple of years later, in 1996, Larry Page and Sergey Brin began working on BackRub, the predecessor to Google. In September 1997, they registered the Google.com domain name and, well, the rest is history.
Just to reiterate one more time, there are two key takeaways from this era in the internet’s history. The first is that the development of these three features made the internet easily accessible to the general public and offered an attractive user interface. This is what allowed it to gain traction outside of a small group of early evangelists. The second is that there were a significant number of early networks, browsers and search engines that became highly popular for a brief period of time but were ultimately surpassed and forgotten. Just something to think about in the current blockchain era.
Centralization of the Internet
This brings me to my last two questions about the modern internet: how did the internet become centralized and why does that matter?
We discussed this in greater detail in my previous post on cloud computing but in the late 1990’s and early 2000’s, a few companies sprung up that created massively powerful platforms. These businesses simplified the process of creating and viewing content on the internet. In doing so, they built up huge brands and benefited from economies of scale. This made it so difficult for others to challenge them that they effectively built monopolies. With that monopoly power, these businesses have been able to determine what content is allowed on the internet and how each user experiences the web, enabling them to have outsized influence over public sentiment. Additionally, because these companies control the flow of information, internet users are entirely reliant on them to protect their information.
Let’s walk through a couple of examples. First, Google created an incredibly popular search algorithm in the late 1990’s and early 2000’s, effectively monopolizing search. Today, a significant portion of the population uses Google on a daily basis to search the web, search maps, check their mail, store content on Google Drive, etc. From this information, Google can create detailed profiles of each user and sell highly targeted advertising against those profiles for a lot of money, significantly influencing the way that users experience the internet in the process. Let’s look at a second case: Facebook. Facebook has billions of users that house their photos, videos, statuses, etc. on Facebook’s servers. Facebook has control over who uses the site, what content floats to the top of each user’s news feed, what advertisements each user sees, and then, again, can monetize each user by selling advertisers targeted access to them. The amount of influence that Facebook has over public opinion is unbelievable. If Facebook slightly alters its algorithm and pushes liberal-leaning ads and content to the top of users’ news feeds, then it could have a large impact on public policy views, elections, etc. In addition to that, users trust Facebook to properly secure their data but have little understanding of how that works or whether it’s sufficient. Let’s look at one final case: Amazon. Amazon quietly operates a massive portion of the global internet infrastructure through its AWS product line. Amazon hosts a huge share of the world’s cloud infrastructure, so as the world increasingly moves to the cloud, more and more of the global internet will reside on Amazon’s servers. This means that if there is a problem with Amazon’s technology or a slip-up in security, it could simultaneously affect a large portion of the commercial internet.
These issues are what set the stage for our next topic: blockchain and the rise of decentralized networks.
That wraps up my post on the history of the modern internet. Hopefully you found it interesting! My next few posts will walk through the history of blockchain technology, how the technology works, and several exciting opportunities in the space. If you enjoyed this post and would like future posts sent directly to your email, please subscribe to my distribution list or reach out to me at email@example.com.
Also, if you have an interest in venture capital and want to read more VC-related content, please follow my publication “All Things Venture Capital” on Twitter. Please also reach out if you are interested in adding to the publication! My goal is to continue to add high quality content (articles, podcasts, videos, etc.) from aspiring and current venture capitalists that want to share their perspective. Thanks for reading!