A (not so) brief history of America’s internet problems


And why net neutrality is important

Considering the president’s announcement today, it seemed like a perfect time to publish this post.

This is a literature review I co-wrote with Vansh Jain, Saral Jalan, and Jacob Swanson for a class on effective communication for engineers at USC. It’s long, but goes into great detail about why and how America’s internet infrastructure is lacking. If you’d rather read it in pretty PDF form, you can do that here.

Abstract: America’s internet infrastructure has fallen behind the rest of the world’s. In terms of both speed and cost, broadband internet in the United States is lacking. This paper explores the social and economic importance of a strong internet infrastructure and reviews the technologies that can be utilized to improve the state of broadband in the United States. The history of America’s telecommunications networks, together with the current regulatory framework, has put the country in this detrimental position. By analyzing the technologies and methods by which certain countries and municipalities have successfully deployed high-speed broadband internet, we make recommendations about how the United States can seek to improve its infrastructure going forward.

I. INTRODUCTION & SIGNIFICANCE

AMERICA is home to the companies that created the infrastructure to power the internet. It is responsible for developing new technologies and increasing the speed of both wired and mobile networks. And yet America’s digital infrastructure has fallen behind that of other countries. In fact, America’s average internet speed ranks 31st in the world [1]. While it may seem that the strength of the digital infrastructure becomes negligible beyond a certain point, it does not. Faster internet speeds and broad internet coverage are crucial for a healthier America. But the existing political systems and regulations within the United States impede the development of better broadband infrastructure. In this literature review, we uncover reasons for America’s inferior infrastructure and identify factors that have made other countries successful in broadband development. We then make recommendations that can help the US create policies and systems that enable iterative growth rather than static adherence to the current American broadband infrastructure.

This paper will first discuss the significance of a fast and widespread broadband infrastructure. It will then describe the current state of the United States broadband infrastructure, pointing to the negative effects of a weak broadband system. Next, the paper will survey existing broadband technologies and examine the political systems and regulations, explaining why the current regulatory frameworks make it difficult to improve broadband infrastructure. After that, the paper will show what factors allowed other countries to advance their broadband systems. Finally, we will discuss a set of recommendations that could allow the US to develop policies for an advanced and future-proof broadband infrastructure.

A. Importance of the Internet

From gathering information to finding job opportunities, the internet has become a central part of people’s lives. But merely having internet coverage in a country is not sufficient; it is crucially important that coverage be both fast and widespread. The faster the internet connection, the more time a person is likely to spend on the internet, which correlates heavily with economic development; those with the slowest connection speeds spend the least time online [2]. Why is it important that people spend time on the internet? One of the main reasons is healthcare.

1) Health Care: Upon recognizing a medical problem, a person with an internet connection will often consult the world wide web, attempting to find reasons for their ailments. People navigate to forums and connect with others sharing the same illnesses, trying to figure out a plan of action. Harvard Medical School’s Dr. Ronen Rozenblum says that the increase of reliable medical information on the internet has resulted in patients becoming more engaged with their healthcare providers, leading to a more patient-centric healthcare system in the United States [3]. Those with an internet connection are able to research potential problems, leading to healthier individuals.

Studies also reveal that online healthcare is an effective way to mitigate mental health issues. “Online therapy will help serve the nearly 3 out of 4 people who have mental health problems but do not currently get any kind of help,” reveals Lawrence Shapiro, Ph.D., President of Talk to An Expert Inc. [4]. Online therapy is particularly important for those who cannot afford in-person care or who have physical disabilities. Often, when patients need immediate help, they can speak with their doctors through online video conferencing. Communicating medical reports through the internet streamlines long processes and saves both hospitals and patients time and money.

2) Individuals, Businesses, and Government: The effects of being on the internet extend not only to the healthcare industry but also to many other fields. Whether it be researching a political campaign or taking an inexpensive class from an online education startup such as Udacity, access to the internet increases people’s welfare by helping them make well-informed decisions, acquire jobs, and eventually give back to the United States. The key point is that a fast and reliable internet has to exist in a region for people to use it, which enables growth at both micro and macro levels.

Not only individuals but also small and medium-sized businesses use the internet to expand their market reach and grow. Blue Valley Meats, a business in Diller, Nebraska (population 287), grew its business by over 30% in under 5 years and doubled its number of employees by connecting with companies interested in buying its products [5]. Had a reliable connection not existed, Blue Valley Meats could have sold only to the 287 people in town. In another case, a rural community lost its local textile manufacturing base, which had employed the majority of the town’s residents, to outsourcing. The community then negotiated with an airline to establish a customer call center that would employ those displaced. But the agreement was terminated for one basic reason: the community lacked the requisite broadband infrastructure [5].

At a macro level, broadband service helps cities in crucial ways. After Chattanooga, Tennessee, upgraded its broadband infrastructure and began to offer the fastest speeds in the nation, the city government was able to “piggyback” off the infrastructure and use the faster internet to benefit the city [6]. The police force now monitors the city more effectively using live security cameras, increasing safety within Chattanooga [6]. Furthermore, with higher-resolution cameras, Chattanooga reduced the intensity of LED lighting in public spaces, saving energy. The fire department can download floor plans of buildings faster, accelerating response times in emergencies. In short, the use of broadband to transport video and signals has resulted in a safer Chattanooga.

From the examples above, the power of the internet is clear. By connecting to the internet, people have access to the rest of the world and are able to communicate with others spurring innovation. People are able to learn from the billions of resources available online, leading to personal and professional growth, all of which eventually equates to overall economic growth.

3) Economics: If the approximately 200 million people using the internet in the United States each wasted two fewer seconds a day online thanks to faster broadband speeds, the cumulative time saved would equate to approximately thirteen years per day. Internet connection makes a difference in American lives because faster broadband streamlines processes that would otherwise take much more time. In healthcare, transferring a 3 gigabyte CAT scan could take hours, but the same transfer could take just minutes with faster broadband, allowing doctors to optimize their time and serve more patients. Sixty-two percent of Americans use the internet in their workplace, and faster broadband speeds lead to streamlined processes and greater productivity [5]. Productivity is essential for profitability. In fact, Ericsson analysts report that doubling internet speed increases GDP by 0.3% [7]. Currently, the average offered internet speed in the United States is around 30 megabits per second, and the 2013 US GDP was 16.8 trillion dollars. Figure 1 shows the additional GDP, in billions of dollars, that could have been added in 2013 had internet speeds been different.

Fig. 1. Potential additional GDP in year 2013 at different internet speeds. Compiled with information from [7]

In addition to faster internet speeds, Deloitte analysts report that a 10% increase in internet coverage leads to a 1.3% increase in GDP, because of the additional opportunities created for individuals who did not previously have a broadband connection [8].
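These estimates can be checked with back-of-envelope arithmetic. In the sketch below, the population, speed, and GDP figures come from the text above; the assumption that Ericsson’s 0.3%-per-doubling estimate can be applied repeatedly (e.g. twice for a quadrupling of speed) is ours, for illustration only.

```python
import math

SECONDS_PER_YEAR = 365 * 24 * 3600

def collective_time_saved_years(users, seconds_saved_per_day):
    """Total person-time saved per day, expressed in years."""
    return users * seconds_saved_per_day / SECONDS_PER_YEAR

def gdp_gain_from_speed(gdp, base_speed_mbps, new_speed_mbps,
                        gain_per_doubling=0.003):
    """Ericsson's estimate: each doubling of average speed adds ~0.3% of GDP."""
    doublings = math.log2(new_speed_mbps / base_speed_mbps)
    return gdp * gain_per_doubling * doublings

# 200 million users each saving 2 seconds a day:
print(collective_time_saved_years(200e6, 2))        # ≈ 12.7 years per day

# Moving from the 30 Mbps US average to 120 Mbps (two doublings),
# against the $16.8 trillion 2013 GDP:
print(gdp_gain_from_speed(16.8e12, 30, 120) / 1e9)  # ≈ $100.8 billion
```

The first result confirms the “approximately thirteen years per day” claim; the second gives a feel for the scale of the GDP figures plotted in Figure 1.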

The internet affects health care, education, and the overall economy, and the reality is that people and industries alike depend more and more on it. Thus, it is imperative that the US broadband system become strong and fast. Unfortunately, the state of the United States’ broadband connection is weak and outdated.

B. Current State of US Broadband

The US has fallen behind in broadband speeds when compared to other countries. As mentioned earlier, America is ranked 31st in the world in terms of average internet download speeds. At a national average offered speed of 30 megabits per second, the US broadband connection falls behind countries like Estonia, Romania, and Uruguay [1]. In addition to offering slower speeds, US internet service is much more expensive than in other countries. Residents of New York and Los Angeles can get a 500 megabit per second connection for $300 per month, whereas in Seoul, South Korea, internet users can get 1 gigabit per second for just $30 a month [9]. South Korea offers twice the American speed at a tenth of the cost. Even at slower speeds, American internet is expensive compared to other countries’. Figure 2 compares internet speed and price across different countries [10].

Fig. 2. Comparison of internet speeds and price for different countries. Compiled with data from [10]

Susan Crawford, a former special assistant to President Barack Obama on science, technology, and innovation policy, states that although US internet is much more expensive than other countries’, many Americans still pay the exorbitant prices because they don’t have a choice [11]. However, a substantial number of Americans cannot afford the high prices and choose not to have broadband connections in their households.

Nearly 100 million Americans do not have broadband today [5]. Nineteen million Americans still lack access to fixed broadband service, but a large portion of the rest choose not to have broadband because it is simply too expensive [12]. If broadband prices were cheaper, many more Americans would be connected to the internet, which would benefit not only them but also the US economy. Unfortunately, many Americans cannot afford a broadband connection, which, according to Crawford, has created an internet-age class divide in the United States [11].

1) The Digital Divide: Susan Crawford argues that the third of Americans who lack broadband connections are at a clear disadvantage because they are “being left out” of the rest of the American economy, deepening inequality in the United States [11]. It is much easier to accumulate knowledge and land a job with the use of the internet, but many Americans do not have this capability. The Department of Commerce reports that 40% of households with incomes below $25,000 have internet connections, whereas 93% of households with incomes above $100,000 have broadband connections [11]. Although many factors contribute to household income, there is a correlation between wealth and internet connection. With so many Americans unable to gain access to the internet, Crawford argues that the rich are getting richer and the poor are getting poorer.

American internet is slow and expensive, and many people still don’t even have the option of a broadband connection. People may assume that one of the main reasons for poor coverage and slow speeds is inferior technology in the United States; it is not. America has the technology to build a sustainable and future-proof broadband infrastructure, but it has not been deployed.

II. LAST MILE TECHNOLOGIES

A. Introduction to Broadband Internet

The term “broadband internet” refers to all of the technologies which provide high-speed access to the internet. Broadband differentiates itself from the old method of connecting to the internet via telephone line, dubbed dial-up, by its higher data rate. Broadband speeds are typically measured in kilobits, megabits, and gigabits per second of download (i.e. from the internet to the consumer) and upload (i.e. from the consumer to the internet) capacity. One gigabit is equivalent to 1000 megabits or 1 million kilobits. [Note: internet speed measurements are often confused with the units traditionally used to measure computer storage, bytes. For reference, 1 gigabit is equivalent to 0.125 gigabytes (or 125 megabytes).] Broadband technologies usually use physical wires and cables to connect people and businesses to the world wide web, but recently mobile broadband, which utilizes wireless technologies, has become more established and important as well.
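The bit/byte distinction above trips up many readers, so here is a small sketch of the conversions in Python; the 3 gigabyte file size and 30 megabit link speed are illustrative values chosen by us, not measurements from the cited sources.

```python
# Network speeds use decimal (SI) prefixes and are measured in *bits*;
# file sizes are usually quoted in *bytes* (1 byte = 8 bits).
BITS_PER_BYTE = 8
MEGABIT = 1_000_000          # bits
GIGABIT = 1_000_000_000      # bits

def gigabits_to_gigabytes(gigabits):
    return gigabits / BITS_PER_BYTE

def download_seconds(size_gigabytes, link_megabits_per_sec):
    """Idealized transfer time, ignoring protocol overhead."""
    bits = size_gigabytes * 1e9 * BITS_PER_BYTE
    return bits / (link_megabits_per_sec * MEGABIT)

print(gigabits_to_gigabytes(1))     # 0.125 gigabytes, as noted above
print(download_seconds(3, 30))      # a 3 GB file over a 30 Mbps link: 800 s
```

At the 30 megabit per second US average, a 3 gigabyte file (the size of the CAT scan mentioned earlier) takes over thirteen minutes even under ideal conditions.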

In this section, we will explore the most important technologies being utilized to provide “last mile” service. The last mile is a phrase used to describe the final leg of the telecommunications network, the one that actually delivers connectivity to a retail customer (the word “mile” is used metaphorically). The last mile is typically the bottleneck in a communications network, and its capacity limits the bandwidth that can actually be delivered to the customer. This is because the topology of most retail communication networks has a tree-like structure, with a few “trunks” representing the main high-capacity lines, which branch out to serve the last mile “leaves”. These last mile leaves are thus the most numerous and expensive parts of the system, as well as the most difficult to upgrade to newer technology. All of the technologies discussed below describe the last mile of the internet communications system, the “trunk” of which primarily consists of fiber optic lines [13].
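The bottleneck structure described above can be modeled in a few lines of Python; the link speeds below are hypothetical, chosen only to show that the slowest hop on the trunk-to-leaf path caps what the customer receives.

```python
def delivered_bandwidth(path_megabits):
    """The effective speed over a trunk-to-leaf path is its slowest link."""
    return min(path_megabits)

# A 100 Gbps fiber trunk feeding a 1 Gbps regional branch that ends in
# a 25 Mbps copper last mile:
path = [100_000, 1_000, 25]
print(delivered_bandwidth(path))    # 25 — the last mile sets the ceiling
```

This is why upgrading the trunk alone does little for consumers: until the last mile itself is replaced, it remains the binding constraint.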

B. Dial-up

In order to discuss more recent advancements in internet infrastructure technology, it is important to note the first technologies that allowed mass-market consumers to connect to the internet. Dial-up internet access was developed in 1989 and uses the publicly-switched telephone network (PSTN) to establish a connection to an Internet Service Provider (ISP). Dial-up was a natural first step in connecting consumers to the internet because it utilized the existing telephone networks and thus required no additional infrastructure to be built out. In 2014, dial-up is no longer a reasonable way to connect to the internet, as its speeds are not fast enough for most modern websites and internet applications. For comparison with the technologies that follow, dial-up speeds are capped at 56 kilobits per second (0.056 megabits per second). Although dial-up is no longer state-of-the-art, it is still useful in some extraordinarily remote areas where broadband cannot reach. In fact, 3% of Americans still used dial-up as of 2013, down from 6% three years earlier [14]. Dial-up users cited price and availability as the main blockers to switching to a broadband connection [14].

C. DSL

Digital Subscriber Line, more commonly known as DSL, was the next major innovation in internet infrastructure technology after dial-up. Like dial-up, DSL transmits digital data over the local telephone network, backed by the PSTN. But unlike dial-up, DSL allows internet data to be transmitted at the same time as traditional telephone calls. This is possible because DSL operates on a higher frequency band than the voice data that also passes over the telephone lines. A DSL filter plugs into the plain-old telephone system (POTS) and is required in order to prevent interference between the voice data operating at lower frequencies and the higher-frequency internet data. DSL technology is important because it allows significantly faster data speeds to be pushed through the traditional phone line system. DSL speeds traditionally range from 256 kilobits per second (equivalent to 0.256 megabits per second) to 100 megabits per second, depending on a variety of factors, including the age of the copper wire being used and the technology used to send data through that wire. In a 2014 lab test, Bell Labs, the research division of Alcatel-Lucent, set a record by sending 10 gigabits per second (10,000 megabits per second) through a traditional copper (DSL) telephone line, and additionally showed that it can achieve a 1 gigabit per second symmetrical connection (i.e. equal upload and download speeds) [15]. These lab results are promising, but until the new technology can be commercially developed and deployed, DSL speeds will continue to be constrained by the older technology it was built on. This means that in both theory and practice, DSL speeds are slower than the speeds that can be realized by both cable and fiber optic technology [16]. Yet DSL is still widely utilized both in the United States and abroad.
A 2012 survey titled World Broadband Statistics found that “DSL continues to be the dominant technology for broadband access” with hundreds of millions of worldwide subscribers [17]. This makes sense because DSL is based on an established and ubiquitous technology, phone lines, but in order to improve and future-proof America’s internet infrastructure, DSL will probably not be the answer.

D. Cable

Cable internet access (colloquially known as cable internet) provides network edge (last mile) broadband access to consumers via the cable television telecommunications infrastructure. The way it is integrated into the cable TV infrastructure is analogous to the way DSL is integrated into the publicly-switched telephone network (PSTN), as discussed previously. Cable internet requires a cable modem at the customer’s premises as well as a cable modem termination system (which can handle many cable modems’ connections) at the cable operator’s facility. Connections between these two endpoints are made over coaxial (traditional copper) or hybrid copper/fiber cables. Coaxial cables were invented in 1880 and consist of an outer plastic sheath, a woven copper shield, an inner insulator, and a copper core at the center. Hybrid cables became more common with the rise of cable TV. Realistic downstream speeds for cable internet can reach 250 to 400 megabits per second in some markets [16] [18], but usual cable downstream speeds range from 5 to 50 megabits per second [16]. Upstream speeds usually range from 384 kilobits to 20 megabits per second [16]. Most cable modems comply with the Data Over Cable Service Interface Specification (DOCSIS), which allows cable companies to artificially limit upload and download speeds. Most modern cable modems use DOCSIS 3.0, released in 2006, which allows downstream speeds up to 160 megabits per second [19], sufficient for modern American standards but certainly not future-proof or up to international standards. DOCSIS 3.1, proposed in 2013, would allow download speeds of up to 10 gigabits per second and upload speeds of 1 gigabit per second [20]. The cable industry, led by its not-for-profit research and development consortium, CableLabs, hopes DOCSIS 3.1 will keep cable around longer, delaying the need to upgrade to fiber optics.
Cable is the most widely used last mile internet technology in the United States, with 80 percent of broadband-internet-connected Americans connecting via cable [21].

E. Fiber to the Home (FTTH)

Fiber-optic communication is the method of transmitting information via pulses of light through an optical fiber. An optical fiber is a flexible strand of glass or plastic that is about the width of a human hair and can transmit light from one end to the other. Optical fibers are usually bundled together in precise mechanical arrangements so that light transfers data correctly. Optical fiber permits much higher bandwidth data transfer than metal cables, with much less loss due to electromagnetic interference. Because of these advantages, fiber optics has replaced much of the copper cable in the developed world’s core telecommunications networks and helped spark the information revolution.

Fiber to the x (where x can be premises, building, home, curb, etc.) brings the speed of fiber optics to the last mile and is the current gold standard in internet infrastructure. Fiber to the home is far more robust than cable (copper) technology in terms of both speed and the distance over which data can travel. For example, a gigabit per second transmission can run over copper cable for only about 300 feet, while a transfer at that speed can easily run over optical fiber for tens of miles. Fiber can therefore offer much higher speeds than DSL or cable, which are severely limited. Fiber is also “future-proof”: the maximum speed realized in a lab test is 125 petabits per second (125 million gigabits per second) [22]. Usually, the speed of a consumer fiber connection is limited by the terminal node equipment and not the fiber itself, permitting speed gains to be realized by upgrading the equipment without having to replace the actual fiber installation.

In practice, good fiber connection speeds are usually a symmetrical 1 gigabit per second [23]. Examples of proper municipal fiber installations include cities such as Chattanooga, Lafayette, and Kansas City. Unfortunately, only 23% of Americans have access to FTTH according to the government’s National Broadband Map [24], and the growth in fiber availability has stalled.

F. Mobile Internet Access

Mobile broadband is the marketing term for wireless internet access through a portable modem (such as a USB dongle), smartphone, tablet, or other mobile device. This technology works by delivering a wireless internet connection to the end user’s device via mobile phone towers. A variety of standards and technologies have been developed to provide this mobile internet access, and they are grouped into “generations”, the newest being the fourth generation (4G). Some of the most popular mobile internet technologies include GSM (2G), CDMA (3G), HSPA+ (4G), and LTE (4G). Current third and fourth generation (3G and 4G) networks allow for speeds that rival DSL or cable. The best LTE networks, for instance, provide theoretical downstream speeds of up to 100 megabits per second. A Pew Research study concluded that about 58% of Americans had a smartphone and mobile data plan as of 2014, a number expected to rise further as smartphones become more prevalent [25].

III. INFRASTRUCTURE DEVELOPMENT

Thus far we have considered why fast internet is important and shown how broadband infrastructure can improve economic outlook at both the micro and macro scale. It is therefore in the national interest to maintain and improve broadband infrastructure, and to understand the cause-and-effect relationships that have, over time, produced worse infrastructure in some places, better infrastructure in others, and determined how that infrastructure was upgraded. Technology will continuously improve, so the discussion of broadband infrastructure is not so much about how to implement current technology as about developing processes by which ever-improving technology can be efficiently adopted. We have also identified the current technologies that can enable the broadband connections the nation needs.

Since national broadband speeds fall far short of what these technologies can deliver, it is evident that they have not been widely deployed, and that there is something wrong with the way the US builds and maintains broadband infrastructure. In this section, we will track possible reasons why the processes for infrastructure development have not allowed these new technologies to be deployed. To understand the current state of broadband services, it is useful to examine how the US arrived at its current implementation and the policies that have incentivized and shaped those services. To a large extent, this conversation revolves around the continuing relationship between traditional phone-line-based telecommunications companies and cable companies. Understanding how cable developed not only gives useful insight into previous communications policy patterns, but also reveals the underlying framework for modern broadband infrastructure.

A. Community Antenna Television

Cable technology was originally introduced in the 1950s as a complementary service for local broadcast stations. Because the United States was so geographically spread out and had a relatively low population density, it was very difficult for broadcast antennas to reach every American home, especially those in rural areas. Cable TV, or CATV (Community Antenna Television) as it was called at inception, essentially “retransmitted” local broadcast signals to homes and communities where those signals were unavailable [26]. The distribution power this technology afforded quickly became apparent; these up-and-coming cable companies saw an opportunity in retransmitting broadcast signals (perhaps from non-local regions) and bringing them to monthly-billed consumers in a multichannel format. Additionally, these cable companies began to realize that they could simply bypass the broadcast networks and offer their own content.

It quickly became apparent to the incumbent broadcast networks that cable technology represented a significant threat to their business model; users would begin to watch dedicated cable programming or retransmitted broadcasts from other regions instead of local stations. As the incumbent players, broadcast stations flexed their relationships with regulators and lawmakers, and in 1965 the FCC, or Federal Communications Commission, began to regulate cable. The FCC, as the de facto regulatory body for communications technology in the US, claimed “ancillary powers” over cable companies. The commission quickly implemented “must carry” rules requiring cable companies to carry local broadcast stations, regulated original cable content and required it to be transmitted with equal time to local broadcast stations, and enacted other regulations intended to protect the local stations [27]. Another regulation, intended at the time to protect local broadcast stations by keeping cable small, stipulated that telephone companies, which then enjoyed monopoly status in nearly every municipality, were barred from entering the cable business. This decision would become significant when broadband internet started to enter the conversation.

These ancillary powers were upheld by the Supreme Court in 1968 when the court ruled in favor of local broadcasters. The case settled a dispute between a Los Angeles broadcast station and the Southwestern Cable Company which was retransmitting the LA signal to San Diego.

The broadcast station felt its content was being unlawfully rebroadcast without license, and the court agreed that the FCC had the power to regulate and prevent Southwestern Cable from retransmitting [28]. This dynamic, in which the incumbent technology uses the regulatory status quo to suppress up-and-coming technologies, is a pattern that did not stop in 1968.

By the mid-1970s, the growing popularity of cable television and a change of position by the broadcast stations, which realized the benefits of cooperating with cable to reach a wider audience, shifted the regulatory dynamic. Legal and legislative battles began to be decided in favor of the cable industry. In 1974, the Supreme Court allowed cable to retransmit broadcast stations in return for a standard licensing fee, a decision supported by the broadcast stations [29]. This decision helped cable grow substantially. Additionally, in 1978, the government settled a dispute in which telephone companies sought to prevent cable companies from using telephone poles to distribute their services. The telephone companies were in a position to extract large sums of money from cable entrants, but the courts decided in favor of cable, and telephone companies were required to lease pole attachment space at reasonable rates [11].

B. A Natural Monopoly

By the 1980s cable was becoming increasingly popular. Cities and municipalities alike began to issue Requests for Proposals (RFPs) so that this multichannel technology could be thoroughly distributed to citizens. Perhaps due to previous experience with electrical and telephone utilities, regulators assumed cable television would, left to market forces, naturally result in regional monopolies [30]. The idea is that the barriers to entry for cable companies are so large, and involve so much infrastructure buildout and capital, that were multiple entrants to compete for regional customers, one victor would inevitably dominate the market and force out competitors. To preempt this behavior, regulators gave municipalities the authority to assign exclusive franchise agreements that would be price-controlled, the standard structure for a natural monopoly [30].

While nearly every municipality had some form of exclusive franchise agreement with a cable company, inconsistent national policy and the growing perception that cable was too strongly regulated prompted Congress to pass the Cable Communications Policy Act of 1984 [31]. The act sought to deregulate cable: lift the burden of price regulation and make the exclusive franchise agreements consistent. Furthermore, it reinforced the policy that telephone companies must remain separate from cable companies; however, instead of attempting to protect local broadcasters, which at this point were becoming largely irrelevant, this policy was aimed at protecting cable companies. If telephone providers were able to offer video service over their copper lines, they could conceivably own the only “wire” into a particular household, an unattractive prospect for increasingly powerful cable companies [11]. The separation also seemed natural because each communications technology was responsible for an entirely different service: telephone for voice and cable for television.

The concept was that, unburdened from price regulation, cable could thrive and, hopefully, provide better services for customers. Even though cable companies enjoyed exclusive franchise agreements, regulators hoped that competition from enough local broadcast stations, in addition to a potential “other” multichannel television service, would force cable companies to keep prices reasonable [26]. This turned out to be completely false. Local broadcast stations never stood a chance at competing with the products and services provided by cable, and “alternative multichannel services” simply did not exist at the time; consumer satellite television was still years away. Armed with exclusive franchise agreements and free from price controls, cable companies indeed thrived. Large expansions and heavy private investment promoted strong national growth. Unchecked, however, cable prices in the late 1980s rose at extremely unreasonable rates.

In 1992, in response to unchecked rate hikes and an extremely poor competitive landscape, Congress passed the Cable Television Consumer Protection and Competition Act of 1992 [32]. This act sought to increase cable competition by acknowledging that cable did not in fact form a natural monopoly. By enacting modest price controls and forcing municipalities to end exclusive franchise deals, the idea was to allow new entrants to compete for cable service and to allow telephone companies to join the game. This did not really happen: telephone companies were not particularly interested in laying cable to compete with established cable firms, and the legacy of the exclusive franchise agreements made it difficult for anyone to challenge the entrenched industry. Additionally, this policy gave cable companies the opportunity to consolidate. Prior to the early 1990s, thousands of cable companies operated a few systems each in a few municipalities; afterward, a handful of cable companies with systems in thousands of cities became dominant. The largest of these companies was Comcast [11].

C. Lessons from Cable and the Beginning of Broadband

This history of cable television is intended to demonstrate certain regulatory patterns that have occurred in the US as well as set the technological stage for the emergence of broadband internet in the late 1990s. The most significant pattern that has shaped modern broadband infrastructure is the relationship between incumbent communications technologies and regulators. It is clear that incumbent technology players threatened by a new method of communication will use the legislative and regulatory systems in place to benefit themselves. This pattern has so far been apparent twice: regulation protecting broadcast networks from the threat of cable at the outset of cable technology, and regulation protecting cable companies from the threat of video competition from telephone companies. Additionally, it is clear that when incumbent players act to suppress new technology, regulation can also be used to promote the new technology. In 1974, cable was helped by being allowed to rebroadcast local stations. In 1978, cable was helped by being given reasonable access to telephone poles.

What is interesting about these patterns is that they actually resulted in a very substantial US cable rollout. Unlike other nations where cable technology was simply handed over to telecom companies for management, the US enforced strict regulations that encouraged cable systems to launch as an entirely new industry alongside traditional telephone systems [33]. While there were significant problems with the way in which cable executed mass distribution (discussed below), it is important to note that US policies encouraging cable rollout ultimately resulted in a very substantial national cable infrastructure. This would become important when broadband internet access began gaining popularity in the mid-to-late 1990s. Cable, using the DOCSIS protocol, can deliver much faster broadband rates than the DSL provided by telephone companies. Because the US introduced an entirely new cable industry, it was already poised to flip the switch to fast broadband. Other countries, which did not have cable infrastructure, would be forced to upgrade to fiber in order to gain broadband access [33].

Although it was an achievement that the US implemented cable alongside telephone services, there were significant problems caused by the regulatory landscape shaped throughout the 1980s and 1990s. Bad assumptions and poor economic foresight led to conditions in the early 1990s wherein cable infrastructure was incredibly widespread, but prices and oversight were not appropriately managed. Regulators consistently either misunderstood the competitive landscape or enacted policies that had no effect. The 1984 Cable Act assumed that competition from antenna-based local broadcast stations and theoretical multichannel competitors would be sufficient to curb rising cable rates. The assumption that adequate competition would arise in an environment that significantly favors one player, whether due to technological prowess or significant barriers to entry, turned out to be flawed. Without government-regulated price controls, the effective monopoly enjoyed by cable companies allowed them to raise prices arbitrarily and indefinitely. When the government attempted to fix the issue in 1992, the enacted policies were not strong enough, and cable companies simply gained more ground and began consolidating.

D. Broadband Spreads

The idea behind the 1996 Telecommunications Act was simple enough. While cable and telecom had initially been kept very separate throughout the US due to differing products and platforms, broadband internet was not unique to either coaxial cable or telephone lines [34]. Cable, using the DOCSIS protocol, was able to offer high-speed broadband access, and telephone companies, using DSL, were able to offer a similar product. High prices and differing regulatory frameworks for cable (which since 1984 had been fairly deregulated) and DSL (which was subject to common-carrier regulations as an extension of a telephone product) gave the appearance of little competition and burdensome government regulation. The proposed solution was to remove the regulations preventing cable companies from competing with phone companies and phone companies from competing with cable companies [21].

The assumption was that if only cable companies could compete with telephone companies, cable would be forced to lower its prices and telephone companies would be able to offer sufficient broadband connections over DSL. New broadband suppliers would enter markets, and without exclusive franchise agreements, cable companies could compete for turf alongside DSL. Competition would abound, and the deregulated frameworks would allow competitive market forces, rather than burdensome regulations, to control prices [27]. This all turned out to be completely false. Just as the 1984 assumption that local broadcasters would provide adequate competition proved untrue, so did the assumption that DSL could adequately compete with cable [21]. Not only was DSL’s lesser technology unable to compete effectively with cable, but high barriers to entry in the now “non-franchised” markets prevented new cable entrants from competing with the entrenched cable companies. Simply ending exclusive franchises, without other measures, still gave the original company a competitive edge. But the problems didn’t end there.

In 1997, during what became known as the “Summer of Love,” the largest cable companies that had effectively bought out the entire US cable market simply divvied up territory. It was in none of their interests to compete for “turf” in individual markets, so the largest cable companies (TCI, Comcast, and a few others) simply assigned territory [11]. Furthermore, the largest telecom companies began merging in an effort to maintain territory and carve out the communications market. SBC merged with AT&T, while MCI merged with Verizon [11]. Instead of fostering competition, the 1996 Telecommunications Act simply allowed the largest players to increase market share and push out newcomers through hostile takeovers or other anticompetitive practices. Furthermore, just like the result of the 1984 cable deregulation, the lack of true competition caused prices to skyrocket.

The telecom mergers made something else clear. While cable broadband obviously offered significantly higher performance than DSL, it became advantageous for the telecoms simply to take over the wireless market and leave broadband to cable. After all, to truly compete with cable the telecoms would have to lay fiber, and it is much cheaper to upgrade cable than to dig up phone wire and replace it with fiber. So phone companies went wireless, and cable companies took over broadband. What we are left with is a highly clustered communications industry, with each component maintaining some degree of monopoly over territory and services. Verizon attempted to offer fiber service to compete with high-speed cable, but competitive pressures caused it to cease build-out, leaving only 14% of the country with access to Verizon’s fiber product, FiOS [11]. With no significant territorial competitors, cable companies such as Comcast, Time Warner Cable, and others have been able to raise rates without consequence. The United States has twice ceded the possibility of rate-regulating cable companies: in 1984 with cable television, and in 1996 with cable internet. It is this combination of circumstances — low competition and no rate regulation — that prevents US broadband from being competitively priced on a global scale.

E. Vertical Integration and Net Neutrality

One concept yet to be discussed is the relationship between content providers and content carriers. From the early 1960s through the 1990s, regulation treated the services provided by the original telecoms as common-carrier services [35]. This means that the telecom companies were divorced from the content they carried over telephone wires. In the same way that a water utility doesn’t own the water delivered to homes, telephone companies did not own the content their wires transmitted between locations. When cable television was introduced, it was not immediately obvious that cable companies fit into this category. After an initial business of merely retransmitting broadcast stations, the real cable business was developed by creating cable networks, or in other words television content [27]. In order to effectively deliver these services, cable companies argued that they should be exempt from common-carrier regulation, and the regulators complied.

What this meant, however, amid the fast-paced television market growth of the 1980s, was that cable companies owned a significant portion of the content, and as contract carriers, a new channel would have to negotiate terms with cable companies to receive airtime. It was, and remains, nearly impossible for a cable channel to succeed without carriage on Comcast and Time Warner, giving these two companies significant leverage in licensing deals. Additionally, since in many local markets the cable companies themselves own sports and other networks, competitors such as satellite or online services need to negotiate deals with the cable companies themselves in order to have competitive content [11].

In 1996, instead of acknowledging that cable companies were more similar to telecom common carriers than to broadcast stations which only create content, the FCC classified cable technology as an “Information Service” under Title I of the Telecommunications Act, exempting cable companies from common-carrier regulation. Simultaneously, DSL, which was merely an extension of telephone services used to offer broadband data connections, was classified as a “common carrier” (as it had always been) under Title II of the Telecommunications Act [34]. The problem with these uneven classifications was that, at this point, DSL and cable offered essentially the same data services. Why should there be regulatory differences? In 2005, instead of reclassifying cable service as a common carrier, as it likely should have, the FCC, with support from the Supreme Court, gave in to telecom demands and instead reclassified DSL as an “Information Service,” bringing it into alignment with cable-based broadband [36].

The impact of these classifications is significant. Cable companies that seek to vertically integrate (i.e., provide content services) are under no direct legal obligation to treat content from other entities equally with their own. This scenario has precedent. In 2009, Comcast purchased a controlling share of NBC Universal, with the latter corporation essentially becoming Comcast’s content arm [37]. In 2013, Comcast sought to extort larger fees from Netflix, its largest data user. Comcast’s argument was that Netflix uses a disproportionate amount of data service and that Comcast should be compensated for that. In an effort to extort these fees from Netflix and simultaneously promote its own NBC Universal services, Comcast significantly slowed down Netflix speeds in 2013. In 2014, when Netflix gave in to higher fees, Netflix speeds over Comcast’s network dramatically improved overnight [11].

This method of extorting higher fees for quality service is legal because of cable’s Information Service classification. The effort to prevent cable and other Internet Service Providers (ISPs) from participating in these practices is known colloquially as “net neutrality”. Because of a long history associated with cable television broadcast content, the modern internet is in danger of increasing costs for consumers [11]. The regulatory framework that allows cable companies to charge extremely high prices in an environment devoid of competition is the same regulatory framework that could potentially allow ISPs to control internet content, a concept foreign to telecommunications before cable became the incumbent telecommunications service. Although the United States has had difficulty with these regulatory frameworks, it is useful to consider models in the international community that have worked well.

IV. INTERNATIONAL SUCCESS

A. Successful Cases of Broadband Implementation

There are many factors that contribute to the successful implementation of broadband internet. While there are factors that are unique to certain geographical areas, the underlying theme of these success cases is government involvement in broadband roll out. Policy makers play an essential role in setting up the incentive structures that promote the kind of marketplace necessary for fast broadband. This section will go over a few examples of successful broadband implementation, both internationally and domestically.

B. Competition Structures

In order to understand why certain countries have successful broadband, it is important to first understand the different competition models in utility deployment. There are two forms of competition: facility-based competition and service-based competition.

1) Facility-Based Competition: According to Sujin Choi, a professor at the University of Texas at Austin, “Facilities-based competition is the competition between the vertically integrated platforms providing closely substitutable services entirely over their own infrastructure” [39]. Facility-based competition is, in other words, a competition between infrastructures: companies build their own broadband infrastructure and maintain a competitive edge based on its merits. “Under facilities-based competition, broadband providers can enjoy flexibility and create innovation by having full control in setting service features provided by infrastructure investments” [39]. This flexibility also allows entrant broadband providers to determine their own prices, services, and technologies [39]. Facility-based competition generally leads to better efficiency in the long run, more products to serve consumers, and an increase in broadband penetration [39]. Unfortunately, entrants to the market have to cover the huge cost of developing and deploying their own infrastructure. They are also at a disadvantage because incumbents usually operate at efficiencies that can only be obtained through scale. To make facility-based competition work, subsidies and other stimuli are required to lower the cost for entrants.

2) Service-Based Competition: “Service-based competition is competition of companies that rely partially or entirely on the facilities or the services of other operators” [39]. In this market, incumbents wholesale end-to-end products or lease unbundled local loops of broadband infrastructure [39]. Service-based competition is known to introduce competition faster than facility-based competition because of the low risk associated with it. Since entrants aren’t tied to a particular facility or technology, they can easily adapt their service to meet the needs of consumers [39]. Under service-based competition, one or two existing broadband providers lease or wholesale their services to market entrants for a flat fee (usually because the government mandates it). This allows entrants to learn and understand the broadband industry without taking a huge risk [39].

C. Hong Kong

Hong Kong was the first city to deploy sixty-megabit-per-second internet to its consumers. According to Chinlon Lin, the reason for Hong Kong’s successful broadband deployment is its “high-rise high-density building complex” [40]. Hong Kong has an average density of 6,544 persons per square kilometer. This high density makes fiber deployment to each additional area very cost effective and has created a facility-based competition model. There are currently four major broadband providers in Hong Kong, most of whom have made Fiber-to-the-Building the norm.

The significance of Hong Kong is that it demonstrates how urban areas are a natural hotspot for competition. Places like San Francisco and Manhattan have very high densities, which makes it cost effective to build new infrastructure there. Yet in these American cities the amount of competition is insufficient, because a facility-based competition market is unlikely to form without the correct government incentives. In order for dense American cities to have faster broadband, the first step would be to mandate that incumbents wholesale their services to entrant ISPs, essentially creating a service-based competition model. This could then be adjusted gradually toward a facility-based competition model as the market matures.

D. South Korea

South Korea has had one of the most successful broadband internet deployments. It currently has 84.8 percent penetration, an impressive number given its population and size [41]. According to a study of internet speeds, South Korea also has the highest average connection speed [42]. South Korea’s success in broadband can be attributed to heavy government involvement.

The drastic increase in broadband penetration from 1999 to 2001, shown in Figure 3, can be attributed to facility-based competition. During this time, under the Korea Information Infrastructure Act of 1993 and the Cyber Korea 21 Act of 1999, the Korean government offered low-cost loans to entrants of the broadband industry [39]. The National Broadband Plan guaranteed demand for ISPs, providing reassurance that investment in infrastructure could be repaid. These competitive loans drastically lowered

Fig. 3. Broadband penetration in percent, comparing US to South Korea [39].

the risk for entrants to build out their own broadband infrastructure. Companies like Thrunet and Hanaro quickly captured 56.2 and 33.3 percent of the market, respectively. In addition to competitive loans, the Korea Electric Power Corporation allowed companies to lease their facilities for last mile broadband deployment. While the facility lease was a contributing factor to Korea’s broadband success, it is agreed upon in the literature that the loans played a more important role [39].

By 2002, there were many redundancies in the network, and the government decided to switch to service-based competition, which required broadband providers to sell services to other entrants at a flat fee. Because the government no longer needed to lend to new ISP entrants, the savings allowed it to grant the Korea Telecom Corporation $929 million to deploy broadband into rural areas [39].

The lesson learned from studying Korea is that successful broadband requires government stimulus and oversight. Korea took an unconventional route: it started with facility-based competition and then moved to service-based competition once many redundancies in the network were built. There was a huge spike in penetration while Korea had facility-based competition, and switching to service-based competition allowed Korea to shift its focus from urban areas to rural areas, an important factor in broadband penetration.

E. Chattanooga, Tennessee

Chattanooga became a pioneer in broadband connectivity when it launched its own internet service. The city raised $330 million for the project through a federal grant and the American Recovery and Reinvestment Act [43]. The city used this money to deploy fiber optic internet throughout the city, investing in the infrastructure itself and directly competing with the existing broadband providers in the region. The public utility company, the Electric Power Board of Chattanooga, initially decided to deploy this infrastructure to get a better understanding of its electric grid. Eventually the project gained major support in city hall and was expedited by the American Recovery and Reinvestment Act investment. The city now provides gigabit speeds to its constituents for only one hundred dollars a month [44]. In comparison, Verizon FiOS advertises its fastest speed (75 megabits per second) at one hundred fifty-five dollars per month. One benefit of this facility-based competition model is that it lowers costs: municipalities aren’t working to increase profit margins and can pass on the true cost of the service to their constituents.
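To make the gap concrete, the plan figures quoted above can be reduced to a price-per-megabit comparison. This is a back-of-envelope sketch using only the speeds and prices cited in this section; the dictionary keys and function name are illustrative, not data from any provider’s API:

```python
# Back-of-envelope price-per-megabit comparison, using the plan figures
# quoted in the text (advertised downstream speed in Mbps, monthly price in USD).
plans = {
    "Chattanooga EPB": {"mbps": 1000, "usd_per_month": 100},
    "Verizon FiOS": {"mbps": 75, "usd_per_month": 155},
}

def usd_per_mbps(plan: dict) -> float:
    """Monthly cost divided by advertised downstream speed."""
    return plan["usd_per_month"] / plan["mbps"]

for name, plan in plans.items():
    print(f"{name}: ${usd_per_mbps(plan):.2f} per Mbps")
# Chattanooga EPB: $0.10 per Mbps
# Verizon FiOS: $2.07 per Mbps
```

By this crude measure, the advertised FiOS plan costs roughly twenty times more per megabit than Chattanooga’s municipal gigabit service.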

The lesson learned from Chattanooga is that publicly-owned broadband providers can be successful. Unfortunately, these investments are extremely large, and the city usually takes on most of the risk. Chattanooga benefited from the American Recovery and Reinvestment Act, but cities won’t have access to this fund in the future. The alternative is for a company to make the investment and take on the complete risk; from the status quo, we can see that companies are unlikely to take the risk of creating their own facilities. In most successful cases, broadband has been a joint venture between government and companies. Chattanooga demonstrates that a government can take the complete risk and successfully deploy high-speed broadband.

Google Fiber is an example of municipally-supported broadband. Google Fiber has partnered with cities to deploy optical fiber networks around the country (e.g., in Kansas City and Austin). Provo, Utah had a fiber optic deployment known as iProvo, but the city sold the existing network to Google Fiber; in return, Google Fiber invested in the network and expanded optical fiber throughout the city. This is an example of how cities can support entrants to a market and add to facility-based competition. A trend to note with Google Fiber is that once it enters a market, competitors consistently make their service both cheaper and faster [23].

While municipality-owned broadband may not be feasible in all situations, it should be an option for cities to consider. Currently, the incumbent internet service providers (ISPs) have proposed legislation that would stop municipalities from even having the option to create their own broadband [43]. Chattanooga was planning to expand its service to neighboring counties but was sued by the ISPs. There are countless benefits to municipally-owned or municipally-supported broadband networks, and cities should be allowed to decide for themselves whether they would like to own their own network. The FCC said in June 2014 that it would work with municipalities to create their own broadband networks if the new networks can improve internet speed and reduce costs [43].

V. RECOMMENDATIONS

First, it is important to maintain separation between “regulator” and “regulated”. The regulatory framework works in favor of incumbents because incumbent players direct significant lobbying efforts at legislators and maintain significant contact with regulatory organizations. Tom Wheeler, the current FCC chairman, is the former president of the National Cable & Telecommunications Association (NCTA), the lobbying organization responsible for forwarding the interests of cable companies. It is extremely suspicious that the current leader of the organization responsible for regulating cable companies was, in many ways, responsible for preventing that regulation from occurring [38]. If sufficient distance between regulatory bodies and the institutions they are charged with regulating can be established, perhaps the necessary regulatory frameworks can be put in place to curb the high costs and low competition rampant throughout the United States.

In order for America to increase its broadband speed and penetration, smart government oversight is required. Currently there is a lack of competition in the market, which is hurting American consumers. To boost competition, the United States needs to provide incentives and support to entrants in the broadband market. Like Korea, we could grant entrants access to existing infrastructure so they can learn the dynamics of the broadband industry. This would require mandatory unbundling of existing broadband networks so they can be leased to entrants. Eventually, America will need redundancies in the broadband network so that we can reap the benefits of facility-based competition. To get there, monetary incentives are required to lower the barrier to entry and the risk associated with developing infrastructure.

The primary takeaway from the regulatory frameworks surrounding the previous two iterations of telecommunications (original cable and telecom as well as modern broadband services) is that the results of certain regulations are fairly predictable. When there is a new technology, government regulation is needed to prevent incumbents from pushing out the new technology, as government did for cable with broadcasters and telephone poles. When that technology becomes widespread, it is important for it to be reclassified as a common carrier to prevent monopoly pricing. Furthermore, when considering a “carrier” service, it is important to maintain separation between content and the delivery of that content.


I don’t post often, but when I do, I want to tell you about it! Get notified when I write stuff about technology, business, and/or life. If you get sick of me, you can unsubscribe anytime. Enter your email address here: http://tinyletter.com/michaelrbock.

REFERENCES

[1] J. Aziz, “Why is American internet so slow?” March 2014. [Online]. Available: http://theweek.com/article/index/257404/why-is-american-internet-so-slow

[2] “Swiss Lead in Speed Comparing Global Internet Connections,” April 2011. [Online]. Available: http://www.nielsen.com/us/en/insights/news/2011/swiss-lead-in-speed-comparing-global-internet-connections.html

[3] R. Rozenblum and D. W. Bates, “Patient-centred healthcare, social media and the internet: the perfect storm?” February 2013. [Online]. Available: http://qualitysafety.bmj.com/content/early/2013/01/31/bmjqs-2012-001744.full#ref-3

[4] J. Kotenko, “The Doctor Will See You Now: How the Internet and Social Media are Changing Healthcare,” April 2013. [Online]. Available: http://www.digitaltrends.com/social-media/the-internet-and-healthcare/

[5] “Connecting America: The National Broadband Plan,” FCC, Tech. Rep., March 2010. [Online]. Available: http://transition.fcc.gov/national-broadband-plan/national-broadband-plan.pdf

[6] M. Williams, “Chattanooga Claims America’s Fastest Broadband Service,” April 2012. [Online]. Available: http://www.govtech.com/wireless/Chattanooga-Broadband.html

[7] “New study quantifies the impact of broadband speed on GDP,” September 2011. [Online]. Available: http://www.ericsson.com/news/1550083

[8] “Value of connectivity: Economic and social benefits of expanding internet access,” February 2014. [Online]. Available: https://fbcdn-dragon-a.akamaihd.net/hphotos-ak-ash3/t39.2365/851546_1398036020459876_1878998841_n.pdf

[9] J. O. Toole, “Americans pay more for slower internet,” October 2014. [Online]. Available: http://money.cnn.com/2014/10/31/technology/internet-speeds/

[10] T. Geoghegan, “Why is broadband more expensive in the US?” October 2013. [Online]. Available: http://www.bbc.com/news/magazine-24528383

[11] S. Crawford, Captive Audience. New Haven & London: Yale University Press, 2013.

[12] “FCC Broadband Report Finds Significant Progress in Broadband Deployment, But Important Gaps Remain,” August 2012. [Online]. Available: http://www.fcc.gov/reports/eighth-broadband-progress-report

[13] K. Finley, “Why Is Google Fiber the Country’s Only Super-Speed Internet?” Wired, January 2013. [Online]. Available: http://www.wired.com/2013/01/google-fiber-shaming-exercise/all/

[14] J. Brenner, “3% of Americans use dial-up at home,” Pew Research Fact Tank, August 2013. [Online]. Available: http://www.pewresearch.org/fact-tank/2013/08/21/3-of-americans-use-dial-up-at-home/

[15] “Alcatel-Lucent sets new world record broadband speed of 10 Gbps for transmission of data over traditional copper telephone lines,” July 2014. [Online]. Available: http://www.alcatel-lucent.com/press/2014/alcatel-lucent-sets-new-world-record-broadband-speed-10-gbps-transmission-data-over-traditional

[16] FCC Office of Engineering and Technology and Consumer and Governmental Affairs Bureau, “Measuring Broadband America — February 2013: A Report on Consumer Wireline Broadband Performance in the U.S.,” FCC, Tech. Rep., February 2013. [Online]. Available: http://www.fcc.gov/measuring-broadband-america/2013/February

[17] P. Topic, “World Broadband Statistics, Q2 2012,” Point Topic Ltd, Tech. Rep., October 2012.

[18] Internet Service Plans. [Online]. Available: http://www.timewarnercable.com/en/plans-packages/internet/internet-service-plans.html

[19] CableLabs, “Data-Over-Cable Service Interface Specifications DOCSIS 3.0, Operations Support System Interface Specification,” CableLabs, Specification, November 2012. [Online]. Available: http://www.cablelabs.com/wp-content/uploads/specdocs/CM-SP-OSSIv3.0-I20-121113.pdf

[20] ——, “Data-Over-Cable Service Interface Specifications DOCSIS 3.1, Physical Layer Specification,” CableLabs, Specification, October 2013. [Online]. Available: http://www.cablelabs.com/wp-content/uploads/specdocs/CM-SP-PHYv3.1-I01-131029.pdf

[21] G. Kimmelman, M. Cooper, and M. Herrera, “The failure of competition under the 1996 telecommunications act,” Federal Communications Law Journal, vol. 58, no. 3, pp. 511–518, 06 2006. [Online]. Available: http://search.proquest.com/docview/213150798?accountid=14749

[22] Alcatel-Lucent, “Bell Labs breaks optical transmission record, 100 Petabit per second kilometer barrier,” Phys.org, September 2009. [Online]. Available: http://phys.org/news173455192.html/

[23] Google Fiber Plans and Pricing. [Online]. Available: https://fiber.google.com/cities/kansascity/plans/#plan=residential

[24] A. Neville, “New Broadband Map Data Shows Progress, But Work Remains,” National Broadband Map, Blog Post, Aug 2013. [Online]. Available: http://www.broadbandmap.gov/blog/3075/new-broadband-map-data-shows-progress-but-work-remains/

[25] S. Fox and L. Rainie, “Part 1: How the internet has woven itself into American life,” Pew Research Internet Project, Tech. Rep., February 2014. [Online]. Available: http://www.pewinternet.org/2014/02/27/part-1-how-the-internet-has-woven-itself-into-american-life/

[26] J. D. Levy, “Competition and Regulation in Cable TV,” Maine Policy Review, vol. 1, pp. 131–143, 1991. [Online]. Available: http://digitalcommons.library.umaine.edu/mpr/vol1/iss1/16

[27] Federal Communications Commission, “Evolution of Cable Television,” March 2012. [Online]. Available: http://www.fcc.gov/encyclopedia/evolution-cable-television

[28] “United States v. Southwestern Cable Co. 392 U.S. 157,” 1968. [Online]. Available: https://supreme.justia.com/cases/federal/us/392/157/

[29] “Teleprompter Corp. v. Columbia Broadcasting,” 1974. [Online]. Available: https://supreme.justia.com/cases/federal/us/415/394/

[30] C. Bolick, “Cable television: An unnatural monopoly,” Mar. 1984. [Online]. Available: http://www.cato.org/pubs/pas/pa034.html

[31] “The Cable Communications Act of 1984,” 1984. [Online]. Available: http://www.publicaccess.org/cableact.html

[32] “Cable Television Consumer Protection and Competition Act of 1992,” 1992. [Online]. Available: http://transition.fcc.gov/Bureaus/OSEC/library/legislative histories/1439.pdf

[33] R. Bennett, “G7 Broadband Dynamics,” Nov. 2014. [Online]. Available: http://www.aei.org/publication/g7-broadband-dynamics-policy-affects-broadband-quality-powerhouse-nations/

[34] “Telecommunications Act of 1996,” 1996. [Online]. Available: http://transition.fcc.gov/telecom.html

[35] “Title 47 U.S. Code Part I — Common Carrier Regulation.” [Online]. Available: http://www.law.cornell.edu/uscode/text/47/chapter-5/subchapter-II/part-I

[36] “National Cable & Telecommunications Association v. Brand X Internet Services,” 2005. [Online]. Available: http://www.techlawjournal.com/courts2003/brandx/brandx scus.pdf

[37] “NBC Universal Transaction,” 2014. [Online]. Available: http://corporate.comcast.com/news-information/nbcuniversal-transaction

[38] S. Gustin, “Tom Wheeler, Former Lobbyist and Obama Fundraiser, Tapped to Lead FCC,” May 2013. [Online]. Available: http://business.time.com/2013/05/02/tom-wheeler-former-lobbyist-and-obama-fundraiser-tapped-to-lead-fcc/

[39] S. Choi, “Facilities to service based competition, not service to facilities based, for broadband penetration: A comparative study between the United States and South Korea,” Elsevier, Sep. 2011.

[40] C. Lin, Broadband Optical Access Networks and Fiber-to-the-Home: Systems Technologies and Deployment Strategies. Wiley, 2006.

[41] (2013) Highest Internet Penetration Rates. [Online]. Available: http://www.internetworldstats.com/top25.htm

[42] A. Wilkinson. (2014, Aug.) Internet Speeds: As the Gap Widens. [Online]. Available: http://www.webanalyticsworld.net/2014/08/internet-speed-gap-widens.html

[43] J. Cosco, “A City In Tennessee Has The Big Cable Companies Terrified,” Jul. 2014.

[44] (2012) Fiber Optics. [Online]. Available: https://epbfi.com/internet/