P2P File Sharing: A Guide to the History and Functionality of Digital Information

Introduction: The Misunderstood Adolescence of P2P File Sharing

P2P (peer-to-peer) networking technology is a means for sharing and distributing files on the Internet. As the “information superhighway” hatched and began to fully spread its wings, P2P emerged as an alternative to the standard “client-server” model used for data distribution. This newer model was quickly dressed in a gown of controversy after it was discovered to be an easy tool for infringing on copyrights.

Tainted From the Start

First generation P2P file sharing became a part of mainstream Internet culture around 1999, in the age of Napster. Unfortunately, Napster relied on a central server to index users and their content, as well as establish a connection between two peers on the network. As it turned out, a central server within a P2P network that constantly dealt in copyrighted content was a major liability. Napster was officially forced to shut down in 2001 due to copyright infringement.

P2P file sharing was just a child then. It has since suffered an awkward transition through adolescence into adulthood. P2P file sharing isn’t illegal. Much like a knife, P2P is only illegal when it’s used with malicious intent. More often than not, knives only cause harm to the absent-minded sous chef. Similarly, P2P networking has a nearly infinite spectrum of legal functionality. Its reputation, though, is tainted by years of abuse.

Understanding the Whole

Still, P2P file sharing is an important technology. It should be fully understood instead of pushed into the darker corners of the web. We want you to gain that complete understanding of P2P file sharing. In order for that to happen, we feel it’s necessary to build a foundation of information regarding the birth and evolution of the Internet as a whole. That’s what this guide will attempt to do. It’s an audacious venture, but we’ll try to keep it as contained as possible. When you come out the other end, you’ll be equipped with the knowledge of how our increasingly interconnected world works.

Part 1: Who Invented the Internet?

The InterWebs. The Information Superhighway. Cyberspace! The modern Internet has lived through several affectionate nicknames and will likely acquire dozens more in the future. As tides shift and technology grows, developments to both its name and structure are inevitable. The Internet’s relatively young age makes it the perfect candidate for monstrous modifications to the way that it lives and breathes. Like a child going through adolescence, many things are gained, and many more are left behind. So who is the proud parent of this fantastic contraption that we know and love? Who invented the Internet?

Western society continuously propagates the idea of the lone genius. Whether it makes storytelling easier or innovations more mystical, there is something about a lightbulb appearing above a single person’s head that satisfies us. We idealize these supposed lone geniuses as heroes of innovation, ambassadors of human creativity. The reality is often quite different. For instance, Edison worked with an entire team of inventors and merely put his name on the patents. And Jobs couldn’t have made Apple without Wozniak. Yet, the question of “who invented the internet” isn’t wrong. It’s just that there wasn’t only one “who”. As per usual, there were several “whos” involved.

Global Villagers

The concept of a globally connected world is not a new idea. However, we’re going to skip over those parts and get into the birth of the actual infrastructure and systems. Our story begins during a cultural hangover. It was a time when hippies started to migrate home and Nixon was licking his lips at the thought of the White House. Tensions between the U.S. and USSR were mounting and a race for technological dominance was afoot. It was at this time, in the late 1960s, that ARPA (Advanced Research Projects Agency) developed a working prototype of the Internet, called ARPANET.

So Who is This ARPA and What Does He Have To Do With The Net?

Originally funded by the United States Department of Defense, ARPANET was the brainchild of several scientists and engineers hoping to improve the way we communicated and shared data. It all began with the need to improve the pre-existing circuit-switching technology. Circuit-switching is a means of networking that requires a dedicated channel for communication between two nodes. The landline telephone system that we still use today is an example of a circuit-switching network.

In contrast, packet-switching networks transfer data in a series of “packets” that don’t require a dedicated line and are subsequently much more efficient. The Internet relies on packet-switching to enable the convergence of multiple devices communicating over a shared network.
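
To make that concrete, here’s a toy sketch (in Python, with made-up field names of our own) of the core idea: chop a message into small, numbered packets, let them travel independently and arrive out of order, and reassemble them at the other end. Real networks use IP packets with far richer headers.

```python
# A toy illustration of packet switching: a message is split into small,
# numbered packets that can travel independently (even out of order) and
# be reassembled at the destination. The field names are our own simplification.
import random

def to_packets(message: str, size: int = 8):
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"seq": i, "total": len(chunks), "payload": chunk}
            for i, chunk in enumerate(chunks)]

def reassemble(packets):
    ordered = sorted(packets, key=lambda p: p["seq"])  # put packets back in order
    return "".join(p["payload"] for p in ordered)

packets = to_packets("Packets share the wire instead of monopolizing a circuit.")
random.shuffle(packets)      # packets may take different routes and arrive out of order
print(reassemble(packets))   # the receiver still recovers the original message
```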

Packet-switching was originally met with resistance. At that time, the Bell System of telephone networking had been the reigning king for decades. It wasn’t until a British computer scientist named Donald Davies further developed the concept and gave it its official name that the fellas over at ARPANET realized what they needed to do. Conveniently, a U.S. scientist named Paul Baran independently developed the same concept around the same time, ushering in the era of packet-switching in the States.

Information Super-Roadway?

Packet-switching is the reason why the term “Net” makes sense. The technology relies on many interconnected nodes, and each link becomes available for new traffic as soon as a packet of data has finished crossing it. The resulting architecture of a packet-switching network resembles a spider’s web or fishing net. It’s also why the nickname “information superhighway” is misleading. A better phrasing might be “information super-roadways”, because the image of a single highway doesn’t fit the way the Internet actually works.

However, the image of a complex series of roadways that all somehow connect in a tangled web is much more accurate. The cars act like packets of information on this super-roadway. As soon as a car/packet leaves a space on the road, there’s room for another car/packet to take its place. Multiple packets can travel on the same road. And packets can determine the fastest route to take based on traffic and other factors.

From Prototypes to Protocols!

The foundation of this sticky web was laid and the advent of the modern Internet was approaching. The next step was to create protocols. When people ask “who invented the Internet”, the following gentlemen are some of the first names to come up. They are the inventors of the protocols, or sets of rules, that computer devices still use to communicate over the Internet. Most notably, they are Robert Kahn and Vinton Cerf.

Kahn and Cerf worked in unison to develop the Transmission Control Protocol (TCP) and the Internet Protocol (IP). Together, these protocols make up the Internet protocol suite. The World Wide Web, email, and file transferring all rely on TCP. IP is responsible for delivering information based on device addresses, somewhat like the Internet mailman. These identifiers, aptly named IP addresses, are required to send and receive information on the Internet. How else would you get your mail if you didn’t have an address?
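
Here’s a rough illustration of that division of labor using Python’s standard socket library. The host name below is just a stand-in and you’d need an active Internet connection to run it; the point is simply that IP handles the addresses while TCP provides the reliable conversation between two programs.

```python
# A minimal sketch of the TCP/IP division of labor: IP gets packets to the
# right machine (by address), while TCP turns those packets into a reliable,
# ordered byte stream between two programs. example.com is just a stand-in host.
import socket

with socket.create_connection(("example.com", 80), timeout=5) as conn:
    local_ip, local_port = conn.getsockname()[:2]
    remote_ip, remote_port = conn.getpeername()[:2]
    print(f"IP layer:  {local_ip} -> {remote_ip}")                 # addressing and routing
    print(f"TCP layer: port {local_port} -> port {remote_port}")   # reliable stream between programs
```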

It wasn’t until 1983 that ARPANET officially assumed TCP/IP as its protocol suite. By accepting TCP/IP as the standard, the ARPANET team shaped the way the Internet would work to this day. For the next several years, ARPANET would slowly make progress by building evenly distributed client-server relationships. They were building the “network of networks”.

Berners-Lee Spins a Web

The next name on the list of people who invented the Internet is Sir Tim Berners-Lee. To be fair, though, Berners-Lee didn’t take part in the ARPANET project. He technically didn’t help to invent the Internet. He did, however, invent the World Wide Web. Without it, the Internet would look vastly different than it does today.

The World Wide Web is an information space for data organization and distribution. Despite the Internet’s seemingly infinite possible uses, the World Wide Web is typically the first thing that comes to mind. After all, it wasn’t until websites were created that users could “surf the web”.

At its most fundamental, the World Wide Web is a means for organizing information online. It consists of four main elements: HTML, HTTP, URLs, and browsers. These four elements also rely on client-server architecture, Internet Protocol (IP) addresses, and the Domain Name System (DNS). While working for the European research organization CERN, Berners-Lee almost single-handedly created those first four technologies and subsequently reshaped the functionality of the Internet.

Hypertext: Everything Is Connected

We plan on diving deeper into the ways in which the World Wide Web works in our next post. However, the basic rundown is this: Hypertext is text formatted with the ability to link to other nodes, or web pages. HTML is the hypertext markup language that a web page is written in. HTTP is the protocol used to retrieve web pages within a browser. A URL is a unique identifier that acts as an address for a web page. And web browsers are used to access and read web pages.
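
If you want to see those pieces working together, here’s a minimal “browser in miniature” sketch using Python’s standard library. The URL is a placeholder, it needs network access to run, and the output is plain markup rather than a rendered page.

```python
# A browser in miniature: take a URL, speak HTTP to the server it names, and
# get back HTML. A real browser would go on to render the markup.
from urllib.request import urlopen

url = "http://example.com/"                  # URL: the page's address
with urlopen(url, timeout=5) as response:    # HTTP: the protocol used to fetch it
    html = response.read().decode("utf-8")   # HTML: the markup that describes the page

print(html[:200])                            # a browser would render this instead of printing it
```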

By 1991, the World Wide Web, along with the Hypertext Transfer Protocol, had been released to the public and was on its way to becoming the standard for Internet information architecture. At the time, a separate protocol called Gopher was steadily gaining traction. Its growth, however, was severely stunted by the decision to start charging a licensing fee. In contrast, Berners-Lee had convinced CERN to release the World Wide Web as a free and non-proprietary alternative to existing hypertext systems. This action laid the foundation for the free and open Internet that we’ve grown accustomed to.

Dot-Com Bubble Burst

By the mid-90s, the World Wide Web had taken flight and the network of networks was rapidly growing. It didn’t take long for companies to realize the potential for marketing, e-commerce, and the plethora of other business opportunities that the Internet provided. Soon, companies that solely existed on the Internet began to emerge. What followed was a surge of overpriced valuations for Internet businesses that couldn’t live up to expectations.

Of course, the inner-workings of any economic boom and bust cycle are fairly intricate. The general gist of the dot-com bust is as follows. “Dot-com” Internet companies focused solely on growth in order to build market share. These companies would operate at a loss with the hope of eventually charging their consumer base for their products. Unfortunately, a large portion of them were built on unsustainable business models. Most of them jumped the gun in terms of going public. March 10, 2000 marked the climax and burst of the dot-com bubble as investors raced to pull out of companies founded on speculative facades.

Most of the smaller companies immediately died out, while the tech giants received critical blows. eBay’s stock, for example, fell from $107 to $7 per share. Because eBay had a genuinely viable business model, it managed to survive and flourish later on.

Where Are We Now?

It took a few years of repair and an act of Congress to eventually regain stock market stability. To this day, thousands of tech businesses operate at a loss in favor of a “growth-centric” model. Sure, this type of model occurs in many areas outside of the tech industry. However, it’s much easier to pass off a company as economically sustainable when one can demonstrate growth through user acquisition and engagement in a digital environment.

We’re currently seeing another slow and steady deflation of a tech industry bubble. Granted, this doesn’t even come close to the burst that occurred at the start of the century. Still, the cycles of booms and busts continue to occur in almost every industry. The economy is a bit of a wild, amoebic animal that is oftentimes unpredictable. The information age ushered in by the rise of the Internet only seems to further complicate this fiscal beast.

Those who invented the Internet probably couldn’t have predicted how much of a commanding force it would be. Since its inception, it has transformed into an alternate dimension; a parallel universe to our earthly realm that people constantly migrate back and forth between. Some even spend more time within it than they do in the real world. To be fair, though, the two realms aren’t necessarily mutually exclusive.

Well, that just about wraps it up for this post. We hope you enjoyed learning about the people who invented the Internet and everything else that came with it. Stay tuned for our next post that will dive deeper into the Internet’s infrastructure and the mechanisms that allow it to work.

PART 2: How Does the Internet Work?

Like most complex forms of technology, the Internet is used daily by millions and yet rarely understood by most. At its most basic, it’s a network of machines that share information. At its most sinister, it becomes a deep-web of criminality and darkness. Of course, the Internet is neither inherently good nor evil, and understanding its inner-workings can only benefit the ways that you use it.

We’ve created this beginner’s guide to provide you with a sense of enlightenment, empowerment, and education in the face of an increasingly digital world. After all, true democracy can only thrive when the masses are properly educated about the technologies that make society tick. While this post probably won’t arm you with your next dinner party conversation topics, it will hopefully inspire you to learn more about the complexities of the modern world that most of us take for granted.

“It’s a series of tubes!” — Sen. Ted Stevens arguing against Net Neutrality

Arguments for or against Net Neutrality aside, Senator Stevens wasn’t 100% inaccurate with his grossly oversimplified description of the Internet. However, referring to an email as “an Internet” is a faux pas that we can’t let slide. Without being embarrassingly dumbed down, this guide will attempt to be as accurate as possible while remaining legible for the layman. We don’t intend to provide you with all of the information, just enough to spark your interest and get you started.

Here We Go!

Everything on the Internet is information. Everything that occurs on the Internet is simply a matter of sending or receiving this information. This includes your Facebook profile, the House of Cards season that you’re binge watching, and this blog post. The physical infrastructure of the Internet is responsible for how this information gets shared. It can be broken down into four main categories: Clients, Servers, Nodes, and Transmission Lines.

Clients

Essentially, Clients are the receivers of information, while Servers are the storers and senders. A Client can be anything from a computer to a smartphone to a Wi-Fi-connected refrigerator. Whatever you’re using to read this post is considered a Client because you’re on the receiving end of the information chain.

Servers

Servers store and send the information on the Internet. They are the keepers and senders of the data. Much like restaurant servers, computer servers are responsible for delivering the food (info) to your table, or rather, your tablet.

Nodes

Nodes are connection points along an information chain. Technically speaking, Servers and Clients can also be Nodes, since they are points within a communication network. However, for our purposes, we’re going to think of Nodes as routers, modems, or any devices that direct traffic and maintain a connection between the Servers and Clients. Think of them as the mailmen or post offices. Your grandma sends you a birthday card (she’s the server), the mailman directs the path of that card (he’s the router, or node), and you receive the card (you’re the client). Of course, the chain is much more complex than that, but this will help you start to wrap your mind around it.

Transmission Lines

Transmission lines are the highways that the information travels on. They are the fiber-optic cables, the satellites, and the telephone lines that compose the physical network. The rise of cloud computing and constant connectivity might make it seem like the Internet functions in some alternate dimension. After all, good technology should seem like magic to the user. Unfortunately, computers aren’t portals to another dimension where the Internet lives. You can feel free to think of them that way, though.

The truth of the matter is, the Internet wouldn’t exist without the physical network of cables and satellites that lays the foundation for its nebulous form. It isn’t really as mystical as most imagine. This is why metaphors like “the information superhighway” and “World Wide Web” are useful for depicting Internet infrastructure. It works like a system of roads that information travels along. In reality, the complexities of this system can be truly astounding.

Deeper Still

So how is it that all of these clients, servers, and nodes utilize transmission lines to connect to each other?

Well, first it’s important to remember what we mentioned earlier about IP addresses. These numerical identifiers allow devices to be located throughout a network. The most common form of IP address is a 32-bit number. Each device typically has an unregistered internal IP address, used for connecting to other devices on a LAN (local area network). However, it requires a separate registered address to gain access to the full Internet across a WAN (wide area network). This is where NAT (network address translation) comes in.
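
A quick, hedged example of what “a 32-bit number” and “internal vs. registered” look like in practice, using Python’s ipaddress module. The addresses themselves are just examples.

```python
# A quick look at what "a 32-bit number" means in practice, and at the
# difference between private (internal) and public (registered) addresses.
import ipaddress

home = ipaddress.ip_address("192.168.1.10")      # a typical internal LAN address
public = ipaddress.ip_address("93.184.216.34")   # a publicly routable address

print(int(home))           # the same address as a single 32-bit integer: 3232235786
print(home.is_private)     # True  -> usable only on the local network
print(public.is_private)   # False -> reachable across the wider Internet
```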

Planes, Trains, and Network Connections

The Internet is a WAN that connects many LANs together. Hence the phrase “network of networks”.

Think of the entire complex as a city. The roadways are collectively the Internet, while the cars are the packets of information that travel throughout the Internet. Each intersection is like a LAN. In order to travel from one intersection to the next, you need to utilize the public roadways, i.e. the WAN.

Virtual Private Networks (VPN)

Subways are another way to travel from intersection to intersection. A VPN is sort of like a private subway car. VPNs establish a secure and private connection between LANs.

Similar to proxies, VPNs increase privacy and help users sidestep potential restrictions. For example, a person in China could potentially use a VPN to access content that would otherwise be blocked. They can also be helpful in blurring a user’s identity, making it difficult for their ISP (Internet service provider) to track their activity. This is useful for a variety of legal and not-so-legal activities.

NATs, not Gnats

A NAT is used somewhat like a concierge at a hotel. In a hotel, there are hundreds of rooms, each with its own extension for dialing its respective phone. Typically, the hotel only has one main phone number that outside individuals have access to. These outside individuals call the concierge, and the concierge directs their call to the desired hotel room. Similarly, a home-based LAN will have one registered IP address for its router, and it requires a NAT to direct traffic from devices on the WAN (the wider Internet) to devices on the LAN (the home network).

NATs solve the problem of finite IP addresses. When the 32-bit IPv4 (version 4) address format was first created, no one anticipated the huge number of devices that would be connected to the Internet. Consequently, there are only about 4.3 billion registered IPv4 addresses available. IPv6 (version 6) was developed as a solution to IPv4 exhaustion and utilizes a 128-bit format. Currently, only a small fraction of the Internet uses IPv6, since the transition from version 4 requires a slow and steady hardware and software overhaul.

NATs are the other solution to the IPv4 conundrum. They allow multiple devices on a LAN to share one registered IP address to connect to a WAN. They are the bridge from your home devices to the outside world.
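
If it helps, here’s a toy model of the bookkeeping a NAT does. The names and numbers below are ours, and real NATs track far more state, but the idea of mapping one shared public address back to many private devices is the same.

```python
# A toy model of NAT bookkeeping: many private addresses on the LAN share one
# public address, and the NAT remembers which public port maps back to which
# internal device.
PUBLIC_IP = "203.0.113.7"   # the router's single registered address
nat_table = {}              # public port -> (private ip, private port)
next_port = 40000

def outbound(private_ip, private_port):
    """A LAN device opens a connection; the NAT rewrites the source address."""
    global next_port
    public_port = next_port
    next_port += 1
    nat_table[public_port] = (private_ip, private_port)
    return PUBLIC_IP, public_port   # what the outside world sees

def inbound(public_port):
    """A reply arrives at the public address; the NAT forwards it inward."""
    return nat_table[public_port]

print(outbound("192.168.1.10", 51000))   # ('203.0.113.7', 40000)
print(inbound(40000))                    # ('192.168.1.10', 51000)
```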

DNS (Domain Name System)

As we mentioned above, an IP address is required for a device to connect to the Internet. A domain name server has the job of translating domain names (the human-readable part of a URL) into IP addresses. Your web browser could technically reach a web page if you typed in the IP address of the server hosting it. Fortunately, DNS lets you type in an easier-to-read (and easier-to-remember) URL and locates the corresponding server for you.
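
That lookup is a single call in most programming languages. Here’s a minimal Python example; the hostname is a placeholder and the address you get back depends on the network you’re on.

```python
# DNS in one call: ask the resolver which IP address a human-readable name points to.
import socket

print(socket.gethostbyname("example.com"))  # e.g. '93.184.216.34'
```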

So to review: when you type a URL into your web browser, your home computer acts as a client and sends a request. The request then travels from the LAN into the WAN through the various nodes and transmission lines and is given proper directions by DNS. The path that the request takes to get to the server will usually be the one of least resistance. The true power of the Internet lies in the ability of information to travel in this dynamically efficient fashion.

PART 3: Who Owns the Internet?

It might seem obvious at this point that no single person owns the Internet. Although, as we learned in part two, the Internet relies on physical infrastructure in order to function as a network. In the world that we live in, there are practically no physical things that exist without someone to claim ownership over them. So it goes without saying that the Internet’s physical infrastructure must have an owner, right? And, by that logic, there must be some entity out there who owns the Internet. Or, at least part of it.

Well, it’s not exactly that clear-cut. Think about the ocean. Sure, nation states claim dominion over swaths of the world’s oceanic geography. They patrol and maintain these imagined borders via satellites and naval forces. Yet, there are still political grey areas, unclaimed territories, and limits to how these states can use or abuse “their” waterways.

The Internet is similar to the ocean. The authority of the infrastructural owners is completely undercut by the political thicket and fiscal restrictions associated with it. Yes, North Korea could decide to start destroying the ocean near their sovereign land. However, they’d be swiftly reprimanded by the world’s leading powers. If the CEO of Cox tried to shut down the company and all of the Internet it provides, he would be held financially and legally liable.

Power Dynamics

That being said, the power dynamics of infrastructure ownership bounce in both directions. The owners of fiber optics and satellites hold sway over the political complexes within a government, and vice versa. Anyway, we aren’t here to scrutinize the leveraging tactics of these individuals. We’re here to discover exactly who owns the Internet infrastructure and who has subsequent power over the physical aspects of the digital realm.

Thinking of the Internet as a single network containing multiple networks, it makes no sense to ask if one or even several people “own” it. That would be similar to asking “who owns the sky?”. While there is restricted airspace above most nation states, there isn’t a single entity that owns the entire sky. However, there are definitely individuals who own the mechanisms and hardware associated with us flying through said sky.

Backbone

The Internet has a backbone created by telecommunications companies. These companies are the “owners” of the backbone. The telecommunications companies that specifically deal with the Internet are upstream Internet Service Providers (ISPs). U.S. upstream ISPs include UUNET, Level 3, Verizon, AT&T, Sprint, Qwest, and IBM. You’re probably looking at that list and wondering why your ISP isn’t on there. That’s because smaller ISPs essentially work as middlemen between the upstream ISPs and the consumer. Some upstream ISPs, like AT&T and Verizon, also work directly with consumers.

Smaller ISPs are usually cable or DSL companies. They include the likes of Cox and Time Warner Cable. They provide the routers and cables that hook up to pre-existing landlines, fiber-optic lines, and satellite systems. There are antitrust restrictions in place to ensure that no single ISP holds a monopoly over the Internet’s backbone. Still, if prompted, your ISP could track, shut down, or restrict any of your Internet activity at a moment’s notice.

Rules of the Road

Aside from the physical skeleton that holds up the Internet, there are numerous rules and paradigms that shape the way of the web. These rules and paradigms must be developed and maintained. In a previous post we discussed the Hypertext Transfer Protocol (HTTP) that is responsible for allowing browsers to find and retrieve information. A growing share of sites now use the updated version (HTTPS), which includes a security feature that encrypts the information sent and received. The Internet didn’t just spawn this updated protocol out of nowhere. A group of engineers and scientists built and implemented HTTPS for the greater good of the network.

There are three main organizations that oversee the development and maintenance of the Web. They are The Internet Society (IS), The Internet Engineering Task Force (IETF), and The Internet Corporation for Assigned Names and Numbers (ICANN). The first two are non-profit, open membership groups, while ICANN is a private non-profit corporation whose level of control has been a source of controversy.

IS, IETF, and ICANN

The IS is somewhat like the human resources department for the Internet. They provide policies, educational approaches, and standard Internet practices. The IETF is composed of several smaller working groups, with oversight from the Internet Architecture Board (IAB). These groups deal with the nitty-gritty aspects of digital architecture and produced the specifications behind the updated HTTPS protocol.

ICANN dominates the realm of domain names. They are like the owners and manufacturers of the Internet’s Yellow Pages. Their status as a private corporation makes some people wary, considering how hugely important domain names are and how ICANN has near total control over them. By coordinating the Domain Name System, ICANN helps ensure that each domain name resolves to its corresponding IP address and that traffic ultimately gets routed correctly.

We All Have Some Stock

So, as you can see, there isn’t one person who owns the Internet. Nor is there one corporation or one governmental institution who owns the Internet. A large mechanism full of thousands of moving cogs makes up the physical backbone, the digital architecture, and the pseudo-governmental bodies that shape what we see as the modern Web. Technically, any person who owns a connected device has imaginary stock in the Internet. We’re all microscopic shareholders in this fantastic network of seemingly infinite pieces of information and content.

PART 4: What is P2P File Sharing?

A Foundation of Sharing

The beautiful thing about the Internet is that it is built on a foundation of sharing. Whether it’s sharing photos, or sharing stories, or even sharing opportunities to complete e-commerce purchases, the Internet is an infinite space of information sharing. The ways in which that information is shared have continued to develop since the moment the first email was sent. P2P file sharing is just one type of information-based online communication birthed from these developments.

Four Types of Sharing

There are essentially four main types of digital media file sharing: peer-to-peer networking, web-based hyperlinks, central media servers, and removable media. There is a fifth, less obvious type, which involves cloud-based storage services with automatic file syncing. Together, these are the main ways digital media gets moved from one device to another.

Removable media is the grandfather of file sharing. Prior to the Internet boom of the nineties, most people used removable floppy disks to transfer files back and forth. Today, widely used USB flash drives and external hard drives act as the grandchildren of the floppy.

In the mid-1980s, remote file access via FTP servers, among other technologies, finally came into play. This was the basis for the web-based hyperlinking and email services that are currently used on the World Wide Web. It wasn’t until the ’90s rolled around that MP3s were born. With them came the need for the Digital Millennium Copyright Act (DMCA), which would lay the foundation for the future of file-sharing-based copyright infringement controversies.

From Napster to Now

Just before the new millennium came to town, one major player in those future controversies rose amidst the flurry of technological development. Born in 1999, Napster was known as the first peer-to-peer file sharing network. Napster signed its death warrant by using a centralized server. A central server is an Achilles’ heel for any dubious online enterprise, and Napster fell subject to a legal attack. The father of P2P networks was shut down in 2001.

In the wake of Napster’s lawsuit and eventual death, several peer-to-peer file sharing technologies immediately started to spring up. Gnutella became the first completely decentralized file sharing network in 2000. On a decentralized network, all users are equipotent and equally privileged. There is no central point of failure; no Achilles’ heel.

Post-Centralized

After Gnutella’s successful startup and Napster’s final goodbyes, the LimeWire client and BitTorrent protocol decided to join the party. LimeWire was hugely popular for a decent amount of time. It died after a 2010 legal injunction. On the other hand, BitTorrent continues to be the most widespread peer-to-peer network protocol in use.

The BitTorrent protocol brought a new development to the way P2P file sharing worked. When you shared an MP3 with Napster, you were essentially downloading an entire song from a single peer’s computer. The BitTorrent protocol lets a peer download bits of a file from multiple other peers, making the sharing process even less centralized and more efficient.
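
Here’s a simplified sketch of that idea in Python. Real clients negotiate with trackers and many peers over the network; this toy version just shows pieces arriving in any order from “different peers” and being checked against known hashes before the file is stitched back together.

```python
# A toy version of the BitTorrent idea: a file is cut into pieces, each piece
# can come from a different peer, and the downloader verifies every piece
# against a known hash before reassembling the file.
import hashlib

PIECE_SIZE = 16
original = b"Pieces of one file arrive from many different peers at once."
pieces = [original[i:i + PIECE_SIZE] for i in range(0, len(original), PIECE_SIZE)]
piece_hashes = [hashlib.sha1(p).digest() for p in pieces]   # published in the torrent metadata

# Pretend each piece was downloaded from a different peer, in no particular order.
downloads = {2: pieces[2], 0: pieces[0], 3: pieces[3], 1: pieces[1]}

rebuilt = b""
for index in range(len(piece_hashes)):
    piece = downloads[index]
    assert hashlib.sha1(piece).digest() == piece_hashes[index], "corrupt piece, re-request it"
    rebuilt += piece

assert rebuilt == original
print("file reassembled from", len(pieces), "pieces")
```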

P2P File Sharing vs Client-Server Model

The traditional method for information distribution and consumption across the web is based on the client-server model. The server hosts and serves information to the client, who consumes and responds. A common form of file sharing on the client-server model utilizes the File Transfer Protocol (FTP), in which a client requests info and the server provides it. The World Wide Web is based on the client-server model. Your computer is a client that is receiving the information in this blog post from a server.

On the other hand, a P2P file sharing network essentially turns every participant into an equipotent client and server simultaneously. Most importantly, this means that P2P networks have no central server. They represent the truest form of digital democracy.

Sibling Models

The World Wide Web was supposed to be similarly open and democratic. As concerns for security increased, the open, democratic Web slowly developed into the client-server, broadcast-style Web that we’re familiar with today. Still, P2P file sharing and the World Wide Web are certainly not mutually exclusive entities. They exist in conjunction with each other. They are something like siblings in the Internet family.

P2P File Sharing Controversy

Napster was the first P2P network. It met its legal death only a few years after its inception, and it is still the most notorious legal controversy in P2P history. Since Napster’s demise, though, several other file sharing sites, networks, and programs have suffered intense legal scrutiny.

P2P file sharing continues to receive backlash from trade groups and government bodies concerned with privacy, security, and copyright infringement. In 2005, the US Federal Trade Commission warned that P2P networks could serve as havens for identity theft and copyright infringement. Of course, the popularity of P2P networking only increased.

In 2009, Congress took up the Informed P2P User Act in an effort to prevent users from accidentally disclosing or “serving” their personal documents and information across a P2P network. This push helped to diminish inadvertent identity theft issues, but it did nothing to halt the sharing of copyrighted material. More legal proceedings were still to come.

What Happened to LimeWire?

In 2010, a U.S. federal court forced LimeWire to disable users’ ability to search, download, upload, trade, or distribute any files within its software. This essentially rendered the program useless. The company stopped distributing its software and the LimeWire name fell off the map.

Torrent Controversies

Torrents are a vital part of P2P networking under the BitTorrent protocol. A torrent file is basically a roadmap containing metadata about the content that it represents. Torrents enable peers on a network to share information by providing their computers with directions. These directions tell the computer where the desired content exists. Using a BitTorrent client program, the user’s computer downloads bits of the content from several other host peers.
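
To give a feel for that “roadmap”, here’s roughly what the metadata inside a torrent file looks like, written out as plain Python data. The values are illustrative, not a real torrent, and actual .torrent files store this structure in a compact encoding called bencoding.

```python
# The "roadmap" nature of a torrent file, shown as plain Python data.
torrent_metadata = {
    "announce": "http://tracker.example.org:6969/announce",  # where to find other peers
    "info": {
        "name": "holiday-photos.zip",      # what the content is called
        "length": 734_003_200,             # total size in bytes
        "piece length": 262_144,           # the file is split into 256 KiB pieces
        "pieces": b"<20-byte SHA-1 hash of each piece, concatenated>",
    },
}

# The torrent never contains the content itself -- only enough information for
# a BitTorrent client to find peers and verify what it downloads from them.
print(torrent_metadata["info"]["name"], torrent_metadata["announce"])
```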

The most popular way to find and download a torrent file is through a torrent indexing website. The Pirate Bay (TPB) is among the most notorious of these indexing sites. Established in 2003, The Pirate Bay works by allowing users to upload, search, and download torrent files. It has since become the poster child for P2P networking and copyright infringement, and a political jumping-off point for both anti-piracy and anti-copyright interest groups.

Beaten But Not Broken

Several years passed, filled with domain seizures, lawsuits, and court injunctions. A criminal prosecution eventually ensued. The Pirate Bay trial concluded in 2009 with the three founders and their business partner found guilty. This did not stop the site from surviving, and it currently thrives under new administration.

The highly publicized trial created a hotbed of debate over civil liberties, copyright laws, and the concept of intellectual property as a whole. Despite its legality, P2P file sharing will always be associated with piracy. The Pirate Bay wasn’t the first P2P service to receive legal attention, but the 2009 trial and subsequent media attention crystallized a negative outlook on P2P file sharing as a whole.

Beyond File Sharing

P2P dynamics go way beyond the sharing of information and content across the web. For instance, the P2P Foundation is a not-for-profit organization that advocates for the expansion of P2P systems within society. They cite examples of P2P networking on their homepage which include Creative Commons, Open Source Software, Wikipedia, and Free Culture. Many of these communities don’t make use of truly decentralized and democratized P2P distribution. Yet, they do utilize equipotent peer contribution.

Another notable instance of P2P networking occurs in the realm of blockchain economies. A blockchain is essentially a public ledger that records transactions and shares them across a P2P network. Blockchains enable cryptocurrencies to prevent double-spending without the need for a central authority. This allows blockchain economies to exist in a true peer-to-peer fashion.
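
Here’s a bare-bones sketch of that ledger idea: each block records some transactions plus the hash of the block before it, so rewriting history breaks the chain. Real blockchains layer consensus, signatures, and proof-of-work on top of this; the transactions below are placeholders.

```python
# A minimal chain of blocks: tampering with any past block is detectable
# because every block stores the hash of its predecessor.
import hashlib, json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "prev_hash": "0" * 64, "transactions": ["genesis"]}]

def add_block(transactions):
    chain.append({"index": len(chain),
                  "prev_hash": block_hash(chain[-1]),   # the link that makes it a chain
                  "transactions": transactions})

def is_valid(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

add_block(["alice pays bob 5"])
add_block(["bob pays carol 2"])
print(is_valid(chain))                              # True
chain[1]["transactions"] = ["alice pays bob 500"]   # try to rewrite history...
print(is_valid(chain))                              # False: the tampering is detectable
```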

Bitcoin is a type of P2P currency that relies on a blockchain. Since its inception in 2009, Bitcoin has grown to be synonymous with cryptocurrencies and P2P economies. Because Bitcoin payments can be made without revealing a user’s real-world identity, they’re commonly used to fund illegal activities on the dark web. In fact, if you go to The Pirate Bay’s homepage, you’ll see a link to their Bitcoin wallet where users can make donations. Despite its use in dubious digital activities, Bitcoin is legal and is considered a successful experiment in decentralized digital banking.

Final Notes On P2P

P2P file sharing helped necessitate the birth of SVOD (subscription video on demand) and other digital content-on-demand services, which aim to make content immediately available through legal purchasing options. In 2011, Sandvine’s Global Internet Phenomena report stated that Netflix traffic had surpassed BitTorrent traffic that year. The ways in which this traffic is measured, though, can be somewhat problematic.

As torrents and P2P networks continue to thrive, more and more content creators and copyright holders are focusing their attention on these distribution methods in one way or another. For instance, Radiohead’s Thom Yorke released his 2014 solo album Tomorrow’s Modern Boxes via BitTorrent. On the opposite end of the spectrum, Sony Pictures had apparently considered uploading “fake” torrents to popular indexing sites. This would have potentially dissuaded piracy and provided a cheap marketing outlet.

Similarly, the now defunct MediaDefender, a for-hire fighter of copyright infringement, would flood torrent sites with fake files in an attempt to overload servers and obtain user information. These efforts reveal the deleterious effect that torrenting has on studio profits. Regardless of your stance on copyright laws, you can’t deny the prevalence and pervasiveness of P2P file sharing in the world of media production and consumption. And despite the best efforts from trade groups and politicians, P2P file sharing isn’t going away anytime soon.

PART 5: Cloud Storage vs Home Media Servers

The final leg of our file sharing dissertation involves two oft-overlooked forms of information distribution: cloud storage services and home media servers. Both of these technologies enjoy widespread use, yet many people neglect their capacity for file sharing. Cloud services and media server software are typically seen as solutions to file storage and remote access problems. What most people overlook is their collective ability to facilitate collaboration and content sharing.

What is Cloud Computing?

It’s important to distinguish between enterprise and individual consumers when dissecting cloud services. The former is where most of the money comes from, while the latter receives more public attention. This is likely because enterprise-level cloud services aren’t necessarily “sexy”. On the other hand, the consumer cloud is fraught with concerns over privacy and security. Even Hollywood has made light of these concerns. We’ll go over the different levels of cloud computing to give you a fuller picture.

SaaS, PaaS, IaaS — That’s a Lot of ‘aaS’

Cloud computing continues to evolve as businesses discover new ways to utilize the technology. Most cloud products are divided into three classes. They are platform as a service (PaaS), software as a service (SaaS), and infrastructure as a service (IaaS).

PaaS products enable developers to create applications that are optimized for specific environments. IaaS platforms allow companies to maximize their computing power and digital storage space, all within a virtual environment. SaaS products offer file hosting and remote access solutions. While PaaS and IaaS are typically utilized by developers and enterprises, SaaS products have more widespread consumer-based functionality. For example, Dropbox, Box, and Google Drive are all SaaS applications.

So when we distinguish between the consumer and enterprise cloud, we’re essentially talking about SaaS vs. PaaS and IaaS. The lines between them aren’t always clear-cut, though. For the most part, an individual consumer will use SaaS, a developer (either alone or with a company) will use PaaS, and larger-scale enterprises will use IaaS.

At The Consumer Level

The bulk of SaaS products are file hosting services. These applications store your files on a remote server. They then allow you to access those files through a variety of web, desktop, and mobile app interfaces.

These services mainly focus on solving an individual’s problems of remote file access. However, they have the added benefit of fostering collaboration and information sharing. Google Docs’ auto-syncing feature is a good example of this, as it blends aspects of instant communication, word processing, and basic file sharing.

Similarly, Dropbox allows you to share an entire folder with other collaborators. Every new upload is auto-updated within the collective folder. The end result is an attempt to facilitate remote cooperation.

Problems With the Cloud

The major downside to these types of cloud-based file sharing solutions is their reliance on a middleman, client-server model. The user uploads their content onto a remote server via a client application, and then a second user downloads said content from the same application. Unlike P2P file sharing, this scenario gives the central server a disproportionate amount of power. The users (nodes) are then at a disadvantage with regard to control over their files.

Another thing to consider is the amount of time it takes to curate and upload specific documents to these “clouds”. Sure, certain “auto-sync” features of Google Docs and similar services allow for a proper amount of spontaneity and are essentially instant. However, SaaS file sharing in the traditional sense is often slow, slightly inconvenient, and sometimes insecure.

How Does a Home Media Server Work?

A home media server is somewhat like a private or personal cloud service. You can imagine it as a tool utilized for transforming one of your devices into a private server, while your other devices become clients. A common example of home media server technology is a software application that grants remote access to files. The software transforms your computer into a server, and enables your other devices to access its files via a unified user-interface, or client application.
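
For a feel of the underlying idea, here’s a deliberately minimal sketch using Python’s standard library: serve a folder on your computer over HTTP so other devices on your network can browse it. The folder path is hypothetical, and real products like younity add indexing, authentication, streaming, and off-LAN access on top of something like this.

```python
# A bare-bones "home media server": share one folder over HTTP so other
# devices on the LAN can browse and download its files. A sketch of the
# general idea only, not how any particular product is implemented.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

MEDIA_DIR = "/path/to/your/media"   # hypothetical folder to share
handler = partial(SimpleHTTPRequestHandler, directory=MEDIA_DIR)

# Other devices on the LAN can now visit http://<this-computer's-IP>:8000/
HTTPServer(("0.0.0.0", 8000), handler).serve_forever()
```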

younity is home media server software paired with a mobile application. The younity app works by scanning your computer’s hard drive and then “serving” the files to your mobile phone or tablet. You can then access, stream, share, or download all of your files while on-the-go. There is no syncing or uploading required, and no file-type restrictions or storage limitations. If it’s on your computer, you’ll be able to access it with younity.

Relay Servers to The Rescue

Remember our previous discussion of NATs? They’re the current solution to the problem of finite IPv4 addresses. Well, the problem with NATs is that they make remote access difficult. They often block direct connections between a remote device on a WAN and a home media server on a LAN. That’s where relay servers come to the rescue.

A relay server is sort of like a temporary middleman that helps usher your digital traffic from your device on a WAN to your home device on a LAN. Relay servers communicate with the NAT to access devices on the LAN. Your remote device can then access the relay server. This is referred to as NAT traversal.
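
Here’s a stripped-down sketch of what a relay does. Both sides dial out to the relay’s public address (outbound connections pass through NATs easily), and the relay simply copies bytes between the two sockets. Real relay and NAT-traversal systems add authentication and session management; this is only the core idea, with made-up ports.

```python
# A toy relay: accept a connection from the home server and one from the
# remote device, then shuttle bytes between them in both directions.
import socket, threading

def pipe(src, dst):
    """Copy data from one connection to the other until it closes."""
    while data := src.recv(4096):
        dst.sendall(data)

listener = socket.socket()
listener.bind(("0.0.0.0", 9000))      # the relay's public address and port
listener.listen(2)

home_server, _ = listener.accept()    # the media server at home dials in first
remote_device, _ = listener.accept()  # the phone on the road dials in second

threading.Thread(target=pipe, args=(home_server, remote_device), daemon=True).start()
pipe(remote_device, home_server)
```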

younity relies on relay servers to establish these bridges whenever a user is off of their computer’s LAN. This allows younity users to access their files at home, no matter where they are in the world. While the use of relay servers will typically create a slower connection between your devices, they’re a necessary piece of the remote access puzzle.

File Sharing With younity

younity turns your computer into a server and extends you the ability to share your files across a P2P network. You could think of the younity mobile app as a remote control for P2P file sharing from your computer to someone’s phone or tablet. When you share with younity, you are serving files from your computer to another user’s mobile device. Unlike cloud services, there’s no middleman involved.

younity provides a private way to share photos, videos, music, and documents that are stored on your computer from your mobile device. It’s easy-to-use, multifaceted, and more private and secure than the average cloud-based service.

Click here to learn more about younity.
