The future of internet and technology policy is all about CATS. Yes, cats — the primary driving force for the past, present, and future of awesomeness on the internet — but also CATS, the acronym I’ve coined to talk about tech policy going forward. “CATS” represents the next 2–10 years of complex tech policy debates: Competition, Algorithms, Tracking, and Security. These issues don’t include everything in policy, of course. But as a mnemonic meme, using “CATS” captures key emerging issues where the public conversation is still on a growth curve, where political battle lines haven’t been fully formed, and where we haven’t yet figured out answers or even all of the questions to ask.
Competition policy. A fundamental assumption about the internet is that today’s big companies won’t be the same as tomorrow’s, because the internet is inherently disruptive and self-correcting. That assumption can no longer be taken for granted. Today’s big five companies — Amazon, Apple, Facebook, Google, and Microsoft — will, in all likelihood, still hold their industry-leading positions in a decade. Farhad Manjoo in the New York Times calls them “a new superclass of American corporate might.” And governments have been paying attention. The European Commission recently levied a $2.7 billion fine on Google over its combination of search dominance and embedded Google Shopping results. This is only the beginning, in Europe (where the Commission has two other open investigations into Google, regarding Android and AdSense) and almost certainly in other regions as well. But the typical antitrust toolkit of fines and breakups feels ill-suited to this phenomenon. Even substantial fines barely register against the earnings of such wealthy corporations. And breakups would often produce bad outcomes — users value the integrated services, which are certainly successful in the market. New, creative remedies need to be discussed and developed to protect the important characteristics of openness online, like interoperability through open APIs and open standards. These conversations are slowly beginning, but they have a long, long way to go.
Algorithmic or automated decision-making. Machine-driven decision-making processes are taking over many systems formerly presided over by human deciders. Many of these are powered by forms of artificial intelligence: learning and prediction technology trained on externally supplied data sets. They often bring incredible improvements in efficiency and efficacy. But they also often recreate, or even amplify, the problems of bias and discrimination that the earlier human deciders struggled with, resulting in poor and sometimes illegal outcomes. The right outcome here seems easy: We need a degree of control over machine-based decisions to correct problems when and where they arise. But the seas to that shore run between a Scylla and Charybdis: on one side, technology industry resistance to outside influence on proprietary business methods and data; on the other, policymaker fear of systems they cannot understand. This leads to occasional calls for prophylactic regulation of “algorithms,” a term often used as if it were synonymous with “magic.” Getting the policy balance right against such a backdrop will be challenging, to say the least.
Tracking and privacy choices. The deep tension between privacy and business models built on tracking and behavioral advertising remains as potent as ever. Fatalism runs rampant, whether among those who believe privacy is dead or those who will not stop until all tracking stops (never mind that much of the technical activity that looks like tracking serves quality-of-service or other generally non-privacy-invasive purposes). Neither of these views ought to shape the future of the internet. Today’s lucrative business models will not go away tomorrow, and that’s okay; for many users, even if presented with meaningful alternatives, today’s experience would be their choice. But those who prefer a different experience too often lack that choice. In an ideal future, all relevant businesses would offer options: products and services that do not depend on privacy-invasive tracking. We will need less hyperbole, more constructive conversation, new thinking and innovation in sustainable business models, and effective political pressure if we are to have any hope of getting to that future.
Security. Tons of tech policy people are working on security, alongside countless engineers. But we’re not fixing it. We’re not investing enough in security defense, not by a long shot. Some of the policy problems here are well known — for example, that core security research risks legal jeopardy in many countries around the world, and that backdoors in encryption weaken security for everyone. Meanwhile, it’s a basic truism of engineering that it’s harder to build a secure system than to find a vulnerability in one — much harder. Yet far more energy and resources seem to be invested in attacks, in ways for governments to exploit inherent insecurity for intelligence and law enforcement objectives, than in developing the strategies, policy ideas, and investments that would get us to a more secure internet ecosystem overall. This topic is already a dominant one in the policy landscape, and it’s still growing. I’m hopeful that over the next decade it will be grounded in a more technically literate, defense-oriented mindset.
UPDATE: I think there’s an even better ‘S’ I could have used — Speech. Some of the most challenging issues we’re discussing today concern the use of internet platforms for speech that is illegal in many countries, and that even where legal is considered harmful by the vast majority. This category includes hate speech, harassment, terrorism, and more. But it’s hard to figure out the best solution from a policy and regulatory perspective, particularly where the approach taken by policymakers is to assign legal liability to intermediaries who facilitate illegal speech without knowing or intending to do so. Governments around the world are actively examining speech issues today, and although there is broad consensus that these problems are significant and need to be addressed, there is no consensus on the right approach to solving them.
One thing that cuts across the CATS issues is that for all of them, if we don’t raise the technology knowledge and understanding of government officials and policy communities, we won’t get to the right outcomes. This goal is what drove me from my Computer Science Ph.D. to law school and a career in public policy. The need I felt 15 years ago seems stronger now than ever. In other words: it’s time to get to work!