To block or not to block, that is the question

In the run-up to the May general election, British voters are being asked to consider proposals to block some internet content. At present, anybody in the UK who has broadband installed in their home must decide whether they want a filter that impedes access to certain types of content, notably pornography and unauthorized downloads. According to Ofcom, of the four main internet access providers, only one has been asked by more than a third of its customers to install a blocker; the others put the figure at between 4 and 9 percent.

Undeterred, British Prime Minister David Cameron is now raising the specter of an independent regulator with the power to compel internet service providers (ISPs) to block sites that fail to include effective age verification. The proposed measures may be limited to Britain, but they raise the question of whether certain content should be filtered at all, and whether it is viable even to try.

This is a much more complicated debate than it might at first seem, touching on freedom of expression and parents’ responsibility for their children’s upbringing, as well as purely technical questions. From a technological point of view, the answer to the question of whether efficient filters can be developed is simple: no. Countries such as Australia that have already introduced content filters have produced reports showing that, despite technological improvements, filters remain unreliable at discerning whether content should be blocked.

The long and the short of it is that censoring pages that publish pornographic content online is simply not viable: any list of potential sites would be very, very, very long, and would be impossible to keep up to date, given the dynamism that characterizes the pornography industry. Previous efforts have failed to block pages that should be blocked, while often blocking others that shouldn’t be. Furthermore, not all pornography is accessed via websites: it can also be downloaded from P2P networks, forums, or other kinds of groups.

Filters are not able to recognize context properly, so sites that discuss pornography, or that contain photographs of a medical or artistic nature, often get incorrectly blocked too. Studies designed to check the efficiency of content filters show that the most restrictive can block around 91 percent of pornographic content, but they pay a hefty price for it: they also restrict some 23 percent of non-pornographic sites.
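To see why that 23 percent matters so much, it helps to run the numbers. The 91 percent and 23 percent figures come from the studies cited above; the assumption that roughly 5 percent of sites are pornographic is purely hypothetical, for illustration. A minimal sketch:

```python
# Back-of-the-envelope look at the trade-off in the study figures above.
# tpr and fpr are taken from the article; the prevalence value is a
# hypothetical assumption, not a measured figure.
tpr = 0.91         # fraction of pornographic sites correctly blocked
fpr = 0.23         # fraction of non-pornographic sites wrongly blocked
prevalence = 0.05  # assumed share of sites that are pornographic

blocked = prevalence * tpr + (1 - prevalence) * fpr
wrongly_blocked_share = ((1 - prevalence) * fpr) / blocked

print(f"{blocked:.1%} of all sites get blocked")
print(f"{wrongly_blocked_share:.1%} of blocked sites are non-pornographic")
```

Under that assumption, more than four out of five blocked sites would be ones that should never have been blocked — the over-blocking, not the filtering, dominates.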

Then there is what we might call the technological generation gap. Can we really trust that these filters will hold when the very people most interested in breaching them are usually more technologically skilled than their parents? Can a filter designed to be used by inexpert adults resist the efforts of a hormonally driven adolescent? And what is to stop that same adolescent from using a VPN and accessing content via a proxy server outside the UK, an option that is increasingly popular given the government’s efforts to block downloading sites?

Beyond the technological aspects, there are other important issues we need to discuss before we start blocking content. Filters give parents the illusion of security, effectively relieving them of their responsibilities. The best approach is to teach children about pornography, robbing it of the appeal of the forbidden. Simply preventing them from seeing it at home only means that they will try to access it somewhere else, and if they are unprepared, then its impact is potentially more harmful.

At the end of the day, several generations of us have been exposed to pornography of one kind or another, and the majority of us do not have any kind of unhealthy attachment to it. So why should it be any different for the current generation? To put it another way, why should the ineptitude of some parents mean that everybody else has to have their content blocked? Any parent who believes that they should “protect” their child from pornography can already install a filter system.

Developing sophisticated filter systems is simply a way of imposing further social control, something that many governments would like to extend to other content, not just pornography. This is seen by some as the thin end of the wedge, a first step toward providing governments with a laboratory for testing out ways of filtering other content that they might consider dangerous at some point in the future. Two more conceptual steps, and at some point you’ll find your theoretically democratic country building its own Golden Shield Project, a.k.a. the Great Firewall of China…

The filter debate is very much of our times, and fits in with many voters’ demands that their governments protect them. At present, the governments of Canada, Denmark, the United States, Finland, Italy, Norway, New Zealand, the United Kingdom, and Sweden have all set up systems to prevent their populations from accessing child pornography and other content. For the moment, such policies have not proved very effective. Marginalizing content through filters is simply an attempt to sweep under the carpet a problem that isn’t going to disappear any time soon, rather than tackling its production and consumption; yet many of us still find this solution appealing.

Drawing the line between parental responsibility and the government’s duty to protect the population is at the center of this debate. What Britain’s Conservatives, like many other governments, seem to be saying to the electorate is that they want to protect them at all costs (and of course the children, we must protect the children!) from anything bad that might be out there on the internet. This is not only impossible, but downright dangerous.

(In Spanish, here)
