Dealing with Digital Waste

A blueprint for managing the data waste stream generated by the digital economy.

Center for Long-Term Cybersecurity
CLTC Bulletin
May 5, 2021


“Digital Waste” by Kerim Safa. The horizontal and vertical lines merging on the dotted surface at the center of the illustration spell out the statement: “THIS IMAGE ADDS 1380512 BYTES TO DIGITAL WASTE.”

By Steven Weber, Ann Cleaveland, Sekhar Sarukkai, and Sundar Sarukkai

(Note: This piece is a longer version of an article published on April 8, 2021 by Noema Magazine, a publication of the Berggruen Institute.)

Signals that 2021 will be an inflection point for the digital economy: Multiple competition policy cases in the US, in Europe, and now in China targeting the largest and most powerful firms, from Google to Alibaba. The precipitous decline in reputation and trust among major social media firms. The story of “surveillance capitalism” and its catastrophic, almost apocalyptic view of modern technology bound to capital accumulation, leaving human values behind. The monetization of data through social vectors of hate, lies, and disinformation.

It’s hard to see how these issues get put back into a box or managed with only incremental changes on the part of users, firms, or governments.

We have lived through several such inflection points in the past 30 years, and they have mainly been centered on vectors of innovation and finance. The advent of the World Wide Web (1995) was an innovation inflection; the dot-com bust that followed (2000) was a finance inflection. The iPhone and App Store were technology and business model inflections, as were advances in machine learning and AI products around 2016.

Today’s inflection point will involve both innovation and finance, but it is fundamentally about something different: negative externalities. In simpler language, “digital waste” and the harms associated with it have accumulated to a breaking point, much like the valuations of dot-com firms with questionable or no business models did in 1999.

By digital waste, we mean data, whether raw or processed — the intangible aspect of the digital economy waste stream. (Not included here, though still important, are other forms of waste from the digital economy, like carbon emissions from data centers or pollution from the manufacturing and poor disposal of electronic devices.) For example, it’s fun and probably helpful to others to post a video showing how you use some tools in your home workshop to adjust the security cameras on the outside of your house. But that same data stream can easily reveal where you live, when you are likely to be home and where the gaps are in your security system — as well as enabling inferences about your neighbors’ homes. This is a form of digital waste, and it’s now a widespread consequence of our ‘normal’ activities.

Waste sounds like a big problem, and it is. But it also points to solutions. Seeing today’s disequilibria through the conceptual lens of waste, it’s possible to chart a reasoned, efficient, humane, and plausible path forward. If reducing the harms from digital waste is the overarching objective, there are models we can draw from in other waste management settings to help societies work intentionally and efficiently with a focused approach, rather than haphazardly and in a manner where priorities are buffeted by emotion and pure political maneuvers.

In this article we build out this analogy to make the case for constructing a systematic blueprint for reducing harms that people experience from digital waste.

If reducing the harms from digital waste is the overarching objective, there are models we can draw from in other waste management settings

Ecosystems and Waste

Every ecosystem has byproducts. A subset of those byproducts is labelled as waste, which typically means that whoever is doing the labelling experiences that byproduct either as useless or (more commonly) as costly or dangerous in some respect. Complex ecosystems usually have multiple ways of managing waste in everyday life. One strategy is toleration: for example, societies simply tolerate some level of waste in their drinking water (measured against an EPA threshold of acceptable parts per million). Other strategies are more active. We convert some kinds of waste into inert substances (solidifying paint before disposing of it). We transform some waste into raw material input for a different process or output (for example, using old tires to make gym floors). Humans have also for a very long time pushed waste into some other ecosystem where it is thought to do less damage because the carrying capacity is greater (for example, dumping sewage into the ocean).¹ And we sometimes try to store waste in isolated containers that separate it from any ecosystem at all, for thousands of years if need be (as in the plan to bury nuclear waste inside Nevada’s Yucca Mountain).

From a photo by Jason Leung on Unsplash

But there comes a moment in the evolution of almost any socio-technical system when a significant proportion of people realize that these strategies are not enough, and that the waste stream cannot simply be managed, but must be reduced. The late 20th century saw that transition as we realized the harms of waste carbon released into the atmosphere, and of pharmaceuticals disposed of in sinks and toilets (which later showed up in fish stocks and other products). One of the most striking visualizations of this shift in mindset came in a 2007 Royal Dutch Shell advertisement that said “Don’t throw anything away, there is no away.” Regardless of the sincerity of the ad, it was a powerful message about the limits of ecosystem waste processing that much of the world has now begun to take to heart, at least in relation to carbon dioxide.

Ecosystem mechanisms for processing digital waste are overwhelmed

In 2021, societies around the world have hit that moment for the digital economy, with the widespread recognition that ecosystem mechanisms for processing digital waste are overwhelmed. That recognition can be seen in at least four fundamental shifts in beliefs.

First, we used to believe that internet disinformation would be processed by the “wisdom of the crowd,” by digital reputation systems that attach to the speaker, or by rational debate among competing viewpoints on open forums, as John Stuart Mill would have it. That idealism felt great to the early internet gurus of the 1990s, but it was already in trouble by the 2000s, as what would later be called ‘filter bubbles’ seemed to segregate people with different beliefs from each other.² In 2021, these mechanisms have simply broken down. A casual glance at the vicious invective posted on Twitter shows just what an ecosystem overwhelmed by waste looks like.

Second, we used to think that the jump in economic inequality associated with the digital economy would start to reverse as benefits spread more widely, from “trickle-down” economics associated with productivity jumps, to the now banal (and not entirely accurate) stories of mobile phones enabling farmers in emerging economies to get better pricing for their grains. The plutocratic equation by which the largest firms in the world use massive lobbying spend to shape politics so that they can continue to accumulate digital (as well as conventional) political power and wealth shows that this ecosystem mechanism is overwhelmed.

Third, we used to believe in the transformative, democratizing power of the TCP/IP protocol. In 1993, John Gilmore famously said that “the Net interprets censorship as damage and routes around it.” The rise of sustainable digital authoritarianism in China and elsewhere shows that this mechanism has reached its limits.

And finally, we used to believe in an oversimplified formula of governance and countervailing power that relied on making things that were previously hidden, transparent — that digital transparency would create accountability, which would lead in turn to changed practices among the powerful. This was never a strong argument, because holding a powerful actor to account generally takes power, not simply information. Slog through the many pages of a major platform’s privacy policy, or try to use their ad transparency data, to actually hold someone accountable, if you doubt that this mechanism is overwhelmed.

These four observations point to “system-level” overwhelms and failures. For most everyday users of digital products, there is also a personal sense of overwhelm and failure that is quite tangible and closer to our individual lived experience. That’s as simple as the sloppy mistakes many of us make every day when it comes to basic digital hygiene. Have you re-used passwords on multiple sites and accounts, or written them on notes stuck to the bottom of your desk? Do you really believe that a two-minute dark web search would not reveal your social security number, your mother’s maiden name, and the town where you were born and went to high school?

In an argument that focuses on system-level failures of waste management, it’s important to keep in mind that taking personal responsibility for digital waste is a necessary part of any solution. But individual action is always embedded in ecosystem dynamics, and it has become clear that digital wastes are poisoning their supporting ecosystem by overwhelming natural and engineered systems of waste processing and management. Closed ecosystems that hit this inflection point generally start to die off; it’s almost a natural law of ecology.

Open ecosystems respond differently, and in the 20th-century analogies to carbon and other forms of environmental pollution, a pattern of sorts has emerged. The emergence of compelling scientific evidence and explanation of damage is an important foundation, but this initial moment of widespread recognition typically engenders three other important ingredients. There are the icons of damage that capture widespread attention, such as the Cuyahoga River catching fire, or the expanding hole in the ozone layer. There are the symbols that mobilize and ground social discourse around the problem, such as the famous “blue marble” photo of the earth seen from space, or the dying polar bear. And there are charismatic messengers, people like Stewart Brand or Greta Thunberg, who make the argument for change in a compelling and emotionally resonant fashion. When it works, these ingredients combine to launch a decades-long process of social and political awakening, which leads gradually to action.

Can the digital waste problem wait that long? We think not. Like many things with digital, the rate of change is much faster than in other domains, and it seems likely that we have already hit the inflection point on the ecological S curve, where the carrying capacity of the environment is overwhelmed. Even within large firms that might suffer in the short term from a new way of handling digital waste, few in 2021 would argue for waiting another decade to see how it all evolves on its own.

Do Something

The sense of urgency to “do something” about digital waste is palpable in Washington, Brussels, Palo Alto, and just about every other center of power. News outlets and journals are filled with calls for action, from national privacy legislation and breach disclosure mandates, to awareness raising, to antitrust proceedings. Some are reasoned arguments and some are simply emotional appeals to “throw out the bums” who run the waste-producing companies.

Whatever you make of these individual appeals, no coherent strategy for harm reduction exists that goes beyond a “whack-a-mole” approach. The platform firms are largely in a reactive mode — sometimes appearing to try to move in a responsible direction, other times doing just enough to take the acute pressure off. They are certainly making it up as they go along, whether in content moderation, data privacy, or cybersecurity. There’s no better example than the multiple de-platformings of Donald Trump in the wake of the Capitol Hill insurrection on January 6. Whatever your position on the decision to ban Trump from social media, it’s hard to argue that it was made in accordance with a broader coherent theory of harm reduction that would consistently apply to the harms that continue to propagate on the platforms, much less the new ones that will surely emerge. Such actions are more like the clean-up effort at a single Superfund site, or the response to the Exxon Valdez oil spill — an important step to take at the moment of crisis, but not one that is embedded in a larger theory of change. These measures are probably better than nothing, and there are times when whacking moles leads to overall better outcomes. More often, though, mole-whacking postpones larger action and mainly serves the medium-term interests of the polluters by stabilizing, for a time, their social license to operate.

What’s needed instead is a whole-of-ecosystem strategy that can reduce harm over time, be sustainable in both politics and economics, and be understood broadly by stakeholders across the spectrum. It’s just as important that this strategy be efficient, ensuring we get the most harm-reduction bang for whatever regulatory and restriction buck we spend. This is critical because we cannot forget that the digital economy, more than just producing waste, is also a source of the greatest economic growth and promise for human advancement that we have right now.

The point of waste management is not to shut down the otherwise valuable systems that produce waste; it is about reducing the waste efficiently so that we can amplify the upside. Cars have fuel economy standards so that they can go farther with less waste, not so that they can stay in park. Shutting down technology is rarely, if ever, a realistic option. But we can and should aspire to do better. What does doing better look like?

The point of waste management is not to shut down the otherwise valuable systems that produce waste; it is about reducing the waste efficiently so that we can amplify the upside.

Experiments in the industrial and petroleum sectors over the last several decades have taught us a lot about how to do somewhat better. Each step is partial, but as a package they have made a positive difference. We’ve seen some value from regulatory interventions, such as requirements for packaging take-backs. We’ve seen life-cycle analyses of products and processes that provide greater visibility into the full range of negative externalities that attach to a decision — for example, it’s now easier to understand where recycling actually reduces environmental damage and where it may not do so, or why electric cars are not quite so environmentally friendly over time as proponents would have it.³ We’ve seen various tax interventions that change the balance of incentives for actions that range from installing renewable power generation to reducing chemical emissions. And of course we’ve seen a variety of efforts at market design, most of which aim to put a price on externalities in order to shift incentives and behaviors of a wide variety of stakeholders who might produce or be harmed by waste.

Doing better obviously does not mean we’ve done enough or fully solved for the harms of climate change or industrial pollution. We haven’t. But these interventions have given us a fighting chance, and certainly a better chance than we would have had with no interventions at all, or with ad hoc interventions. (The powerful polluting industries naturally fought back against many of these interventions, but imagine how much more successful their opposition would have likely been against a whack-a-mole approach in the absence of any kind of efficiency logic.)

All of these logics have some applicability in the digital space as well. In the thought experiment that motivates our argument, we’ve chosen to focus on market design, because we think it is the best bet for achieving the broadest applicability and the deepest impact. It may also, counterintuitively, be the easiest to put into practice. After all, the reigning ideology of the digital sector has been to push back against traditional regulation, seeing it as exacting unacceptable costs to innovation potential. But digital ideology cannot easily reject the idea of efficient market design, since it is precisely that logic that powered so much of its rise.

Efficient Markets for Waste Management, Digital Edition

Consider the example of continuous behavior-based authentication (CBBA). The simple logic of digital authentication is that to prove to a bank or a hospital that you are who you say you are, you need to show some combination of three things: something you have, something you know, and something you are. The most common version of something you know is a password, and we all know just how sloppy and risky password-based authentication has become. Two-factor authentication means combining, for example, a password (something you know) with a one-time code sent to your mobile phone (something you have) — and it is both safer and more burdensome (yet still not even close to foolproof).

CBBA is a set of technologies and processes that authenticates you not once but continuously, on the basis of something you are. For example, everyone has slightly different cadences when they type on a keyboard, slightly different eye motion responses to a flash of light on a screen. Each of us has a slightly different gait when we walk across the room; even the sound of a deep breath, if measured precisely enough, can be an authenticating feature. Imagine combining continuous measurement of a number of these uniquely differentiating characteristics of a person into a single probability score that updates regularly, and using that probability score to determine if I have the right to withdraw money from the bank, access a medical record, or even to vote in an election. The beauty of CBBA is that it runs entirely in the background of the user’s experience: I never get a password prompt, never have to worry about where my phone is, and never have to offer up an image of my face or the last four digits of my social security number (a truly silly way of doing things, since most of our social security numbers have been stolen and are cheaply available on the dark web for years now).
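To make the fusion idea concrete, here is a minimal sketch in Python of how a running CBBA confidence score might be maintained. Everything here is a hypothetical illustration: the signal names, weights, decay constant, and threshold are invented, and a real system would calibrate them empirically with far richer models.

```python
# Illustrative sketch only: a running CBBA confidence score that blends
# behavioral signals and drifts back toward "unknown" when no signals arrive.
# Signal names, weights, and constants are invented for this example.
import time

SIGNAL_WEIGHTS = {
    "typing_cadence": 0.40,  # each signal reports a match score in [0, 1]
    "gait": 0.35,            # versus the user's enrolled profile
    "eye_motion": 0.25,
}

class CBBAScore:
    def __init__(self, half_life_seconds: float = 300.0):
        self.confidence = 0.5           # start agnostic: could be anyone
        self.half_life = half_life_seconds
        self.last_update = time.time()

    def _decay(self) -> None:
        # Without fresh measurements, confidence decays toward 0.5.
        elapsed = time.time() - self.last_update
        factor = 0.5 ** (elapsed / self.half_life)
        self.confidence = 0.5 + (self.confidence - 0.5) * factor
        self.last_update = time.time()

    def update(self, signal: str, match_score: float) -> float:
        """Blend one new behavioral measurement into the running score."""
        self._decay()
        weight = SIGNAL_WEIGHTS[signal]
        self.confidence = (1 - weight) * self.confidence + weight * match_score
        return self.confidence

# Usage: a bank withdrawal might demand a per-function threshold, say 0.80
# here; higher-stakes actions would demand more evidence first.
score = CBBAScore()
score.update("typing_cadence", 0.97)
score.update("gait", 0.92)
if score.update("eye_motion", 0.95) >= 0.80:
    print("authenticated: allow the transaction")
```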

CBBA has a lot of upsides beyond convenience. It would drastically reduce opportunities for fraud, make conventional phishing attacks nearly obsolete, and reduce the value of stolen passwords to essentially zero. It would significantly re-balance the cybersecurity landscape by taking away many of the easy routes of attack for bad actors, and it would reduce the costs that legitimate actors today have to bear. Think of a world without password recovery mechanisms and help lines devoted to that process. There’s a lot to like.

But CBBA also has risks and harms that are significant enough to matter. Data about my location, keystroke dynamics, voice patterns, and gait — and how the combination of these and other factors at any given moment compare to what they were in the past — are legitimate inputs to CBBA, but they also carry the risk of harm to me. They become waste as soon as they have been used for authentication. And like DNA records, it’s a long-term waste problem. I can change my password if it’s stolen, but I can’t change my gait or my voice patterns, at least not easily.

The problem with this long-term waste is more than just the possibility of impersonation or blackmail, since sophisticated CBBA systems can reduce those risks internally. Rather, the issue is what those data can be used to infer about my health, my mental state, or other characteristics that I did not intend to expose. This is particularly problematic when CBBA data combines with other data that is sloshing around in the waste stream. The obvious example: if you can see how my voice patterns and typing cadence have changed, and you combine that with data about my sleep patterns and what I buy at the grocery store, you will almost certainly be able to determine that I am suffering from an episode of severe depression, which is private information that could potentially be used in harmful ways.

So how do we reduce the waste stream to a manageable level, consistent with the goal of maintaining a sustainable ecosystem that permits the growth of CBBA? Here are four simple moves that would contribute at a conceptual level. First, stop collecting CBBA data at the moment the system reaches the threshold level of confidence needed for a particular authentication function; in many cases that threshold will be more like 95% than 99.999%. Second, delete old data that you cannot prove is enhancing the system’s efficiency and accuracy. Third, limit the number of parties who are collecting CBBA data from the outset; a small number of sign-on service providers is probably safer. Fourth, develop business models around authentication services that support these changes in practice. Enterprises typically pay for authentication services, but most end users right now do not, at least not directly, and maybe they should.⁴
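As a rough sketch of what the first two moves might look like in code, building on the hypothetical CBBAScore above (the threshold, the pruning rule, and every name here are assumptions, not a description of any deployed system):

```python
# Sketch of harm-reduction moves one and two. Assumes a score object with an
# update(signal, match_score) method, like the hypothetical CBBAScore above.

def authenticate(signal_stream, score, threshold: float = 0.95):
    """Move 1: consume behavioral signals only until the confidence
    threshold for this particular function is reached, then stop."""
    confidence, collected = 0.0, []
    for signal, match_score in signal_stream:
        confidence = score.update(signal, match_score)
        collected.append((signal, match_score))
        if confidence >= threshold:
            break               # stop collecting: anything more is waste
    return confidence >= threshold, collected

def prune_history(history, accuracy_gain, min_gain: float = 0.001):
    """Move 2: delete stored records unless they can be shown to improve
    the system's accuracy by at least some minimum amount."""
    return [record for record in history if accuracy_gain(record) >= min_gain]
```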

How do we convert those conceptual harm reduction moves into actual practice? The question leads directly to a thought experiment in market design that takes its cues from cap-and-trade systems for carbon reduction.

What’s essential is a pricing mechanism that incentivizes actors in the market to compete on reducing harms associated with data waste. The “cap” component is straightforward: government would establish the cap as an overall ceiling for harms associated with digital waste, and the cap would decline by a set amount — say 1% — on an annual basis. This incentivizes innovation in the aggregate. The “trade” component is how the system allocates the burden of harm reduction to wherever it can be done most efficiently at a given moment. For example, the government would provide “credits” to algorithms that use the least data, and allow the credits to be traded and thus sold to companies whose algorithms require more data. This incentivizes innovation on an individual basis, and sets up competitive pressure with rewards for the most efficient ways of reducing harms.
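To show how the two components interact, here is a toy simulation with invented firms, numbers, and an arbitrary unit of harm; the equal-split allocation rule is one simple choice among many possible designs.

```python
# Toy cap-and-trade mechanics: an aggregate harm cap declining 1% per year,
# with under-emitters holding sellable credits and over-emitters buying them.
# Firms, emissions, and the unit of harm are all invented for illustration.

def annual_cap(initial_cap: float, year: int, decline: float = 0.01) -> float:
    """Overall ceiling on digital-waste harm, shrinking by a set rate."""
    return initial_cap * (1 - decline) ** year

def settle(emissions: dict, cap: float) -> dict:
    """Split the cap equally; positive balances are credits to sell,
    negative balances must be covered by buying credits."""
    allocation = cap / len(emissions)
    return {firm: allocation - used for firm, used in emissions.items()}

# Hypothetical harm emissions, in whatever unit of account is agreed upon.
emissions = {"frugal_co": 80.0, "average_co": 100.0, "data_hungry_co": 140.0}

cap = annual_cap(initial_cap=300.0, year=1)     # 297.0 after one year
for firm, balance in settle(emissions, cap).items():
    side = "sells" if balance > 0 else "buys"
    print(f"{firm} {side} {abs(balance):.1f} credits")
```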

It’s straightforward to imagine further price penalties for emitting digital waste. Consider an analogy to a Tobin tax, which was originally designed as a small amount of friction added to spot-market currency conversions, meant to disincentivize short-term arbitrage on currency rates. A similar small tax could be placed on every new data input that a provider of CBBA wanted to add to its model. If the new data input were meaningfully beneficial in terms of improving the model’s performance, it would justify paying the tax; but if the data did not improve performance sufficiently, the tax would disincentivize collection of the data in the first place. This tax could also increase gradually over time on a predetermined schedule, in order to boost investment and innovation in data harm reduction.
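Here is a sketch of the decision this tax is meant to induce, under two loud assumptions: a made-up tax schedule, and a made-up dollar value for each point of model improvement.

```python
# Tobin-style friction on data collection: a new input is only worth adding
# when its marginal value to the model exceeds the (rising) per-input tax.
# The schedule and the valuation are assumptions for illustration.

def tax_at(year: int, base_tax: float = 1.0, growth: float = 0.10) -> float:
    """Per-input tax, rising on a predetermined schedule."""
    return base_tax * (1 + growth) ** year

def should_collect(performance_gain: float, value_per_point: float,
                   year: int) -> bool:
    """Collect the new input only when its marginal value beats the tax."""
    return performance_gain * value_per_point > tax_at(year)

# A 0.2-point accuracy gain worth $10 per point beats a $1.10 tax in year 1,
print(should_collect(0.20, 10.0, year=1))   # True
# but a marginal 0.05-point gain does not, so that data never gets collected.
print(should_collect(0.05, 10.0, year=1))   # False
```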

Solving for the Unit of Account Problem

There is one conceptual roadblock that stands in the way of designing this market for harm reduction. It’s the simple problem of defining a unit of account for digital harm. In carbon cap and trade, the unit of account is straightforward: tons of CO2 emitted into the environment. It’s straightforward because you can measure it easily, and because it is directly connected to harms. It’s not comprehensive and perfect, of course, as other greenhouse gas emissions such as methane and nitrous oxide are also problematic. But it’s a good enough start.

What is the analogous unit of account for harms of data waste? We can say with certainty what it is not. A direct analogy for tons of carbon would conveniently be something like a terabyte of data, but that’s not the right answer, since potential harms vary greatly depending on what is in that terabyte. Any given terabyte of data will vary on measures such as range (how far the data has been shared or copied); period of decay (how long it takes for the data’s usefulness to expire); and “reactivity” (what other data it could combine with). On the other hand, one ton of carbon is essentially the same as any other ton of carbon, both now and five years from now.
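One way to see the point is a toy harm-scoring function in which two datasets of identical size diverge wildly once range, decay, and reactivity are taken into account. The scoring function and the numbers are illustrative assumptions, not a proposal.

```python
# Why "a terabyte" fails as the unit of account: identical sizes, very
# different harm profiles. The scoring function and values are invented.
from dataclasses import dataclass

@dataclass
class DataBatch:
    terabytes: float
    range_copies: int     # how widely the data has been shared or copied
    decay_years: float    # how long before the data's usefulness expires
    reactivity: float     # in [0, 1]: harm unlocked by combining with other data

def harm_score(batch: DataBatch) -> float:
    """Illustrative: harm scales with spread, longevity, and reactivity,
    not with raw size."""
    return (batch.terabytes * batch.range_copies
            * batch.decay_years * (1 + batch.reactivity))

cat_videos = DataBatch(1.0, range_copies=2, decay_years=0.5, reactivity=0.1)
gait_records = DataBatch(1.0, range_copies=50, decay_years=30.0, reactivity=0.9)
print(harm_score(cat_videos))     # 1.1: same size, low harm
print(harm_score(gait_records))   # 2850.0: same size, vastly greater harm
```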

It might be possible to gain some insight into the expected negative value of different possible coercive harms associated with data waste, for example by asking people what they would pay to reduce their exposure to those harms. This would be the equivalent of pricing according to a measure of revealed preferences, or at least as revealed as preferences can be in an opinion survey instrument. You would not want to take those numbers individually as representing real prices, but you might be able to take a comparison among them as representing a decent measure of relative harm. For example, I might be willing to pay X dollars to avoid being manipulated into buying something I didn’t really want or need, but 3X dollars to avoid being manipulated into a political act that is at odds with my personal convictions. But this process also has serious drawbacks, probably the most important of which is that many harms come from complex interactions among data that are not easily understood by the average consumer. In that sense, it would be like asking someone who is not a climate scientist to discover on her own the central role of carbon in climate change, which would be unfair even to ask.
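As a small sketch of that relative-harm calculation, with invented survey numbers (only the ratios carry meaning):

```python
# Turn stated willingness-to-pay into relative harm weights. The dollar
# figures are invented; only the comparison among them is meaningful.
stated_wtp = {
    "manipulated_purchase": 20.0,        # dollars to avoid this harm
    "manipulated_political_act": 60.0,   # the "3X" case from the text
    "exposed_health_inference": 45.0,
}

baseline = stated_wtp["manipulated_purchase"]
relative_harm = {harm: wtp / baseline for harm, wtp in stated_wtp.items()}
print(relative_harm)
# {'manipulated_purchase': 1.0, 'manipulated_political_act': 3.0,
#  'exposed_health_inference': 2.25}
```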

We think there is a better way to go about deciding the most sensible unit of account, and it depends not on a theoretical finding or assertion, but rather on an institutionalized process. Imagine a public-private partnership in the form of a commission, probably mandated by Congress, that is given the charge to come up with a proposed experimental unit of account and a formula that takes account of the best scientific knowledge to date. The key part of the thought experiment lies in how that commission is staffed and how it votes. So imagine a staffing equation and a voting formula that incentivize the appointment of reasonable people who, while naturally “tilted” toward the interests of their constituencies, would also represent the common good.

This can be worked out. For example, consider a formula where one-third of the commission is appointed by Congress; one-third by NGOs, think tanks, and universities; and one-third by an industry association representing the platform firms. The commission must put forward a best-effort consensus on the unit of account by the end of a year’s work. If the members fail to agree, we have a rule as part of the original plan that penalizes all parties, in rough proportion to how much they benefit from not dealing with waste. The unit of account that the commission agrees upon is treated as an experimental one and is used for some defined period of time — say, three years — while a second commission of new members is appointed (according to a similar formula) to evaluate and refine it.

There has to be a credible threat that if a consensus on an experiment fails to emerge, the corporations benefiting from unmanaged data waste receive the harshest penalties. Is that credible? We think so, because society seems very much ready at present to make such a commitment vis-à-vis the platform firms. So the firms’ choice looks like this: participate constructively in a process to find a reasonable unit of account that will cost them, but improve the outcome for society; or block that process and suffer a much larger cost that will be more vindictive than efficient. It might just work. It certainly is worth trying.

Photo by Daniel Olah on Unsplash

Definition of Victory

To reduce the harms of digital waste is a modest goal, not a call for revolution. It won’t satisfy the most radical critics of the attention economy, surveillance capitalism, and others who have come to the conclusion that the exchange of personal data for services with indirect payment through advertising is a fundamentally flawed, deeply menacing, or purely undemocratic way to run a business or an economy. A carbon cap-and-trade system doesn’t satisfy the harshest critics of the fossil fuel industry or those who simply want to take vengeance on the oil companies, either. The point of carbon cap-and-trade with other complementary policies is to set the world on a safer climate path, not to satisfy critics of fossil fuels per se.

For those who accept sustainability as a legitimate goal, there is an observable way to know if it’s working. We’ll know, going back to the CBBA example, when we see a competition between providers like Google, Apple, Facebook, Amazon, and new entrants to provide CBBA single sign-on capabilities broadly. The most important signal will be that the competition between them is largely about how they reduce negative externalities, waste, and the potential for harm, and not just about making it easy, convenient, and cheap for users. That is not what the competition would be about in the absence of our proposed intervention, because the incentives right now barely point toward waste and harm reduction at all. Our experiment would change the balance of incentives, and that’s a modest but important definition of victory.

This argument has a number of other limitations. It’s a distinctively Anglo-Saxon approach, based in an economic frame that sees waste through the conceptual lens of negative externalities, efficiency, and the like. To assume that the coercion of individuals through inferences made out of data waste is an aggregate harm may be a culturally distinctive notion, not a universal one. Societies and cultures around the world certainly have different mindsets and practices around waste generally. We see that as a rich source of additional ideas and an opportunity for future work to go farther and imagine additional mechanisms, not a reason to drop this thought experiment as too limited or biased in and of itself.

We recognize as well that market design exercises are not free of politics. Our view is that market design can be partially insulated, more so than other means of intervention. The public-private commission with voting rules aligned around the incentives we name is one way to provide some insulation, and there may be others to discover.

Another objection is that fixed assets and sunk costs — inertia, in simple terms — will overwhelm the incentive structure that we’ve tried to design. It’s an important point that has quite a lot of resonance in the climate-energy system, where some of the world’s most massive fixed assets sit and amortize over 40 or 50 years. And there is already a great deal of digital waste present in the environment (just as there is accumulated carbon in the atmosphere), and some of that digital waste may have quite a long half-life. But the digital economy is for the most part much less stuck in fixed assets. Data centers are expensive, but not like an offshore oil platform, and the amortization period is estimated to be more like 10 years. Servers typically amortize in less than five years. The infrastructure of digital waste production isn’t a long-term sunk cost, and there are more degrees of freedom in the world of bits than in the world of molecules.

We recognize that the concept of harm reduction in digital waste will feel abstract and distant to most individual users of the internet and digital services. Think of it as an important step toward something more evocative and personal. The medium-term goal should be to create a lifestyle package for reducing digital waste that includes our harm reduction mechanism but goes far beyond it, much as societies have done for climate lifestyles over the last 20 years. In Oakland, California, it has become a bit of a lifestyle to practice “extreme recycling,” to compost your food waste, use solar energy, travel with an electric car or bike, and ask Amazon to reduce its packaging. It’s easy to make fun of this as a luxury lifestyle that only the upper-middle class can afford, and it is true that such practices do not address the structural inequalities that have led to the proliferation of oil refineries and childhood asthma clinics in Richmond or Martinez less than 20 miles away. But the price of each of those luxuries is coming down over time, and thanks to tireless organizing by the environmental justice community, pressure to redress harm to local communities that bear more than their share of waste is becoming more prevalent.

When you add the element of intergenerational equity to the equation, there is no inherent reason why entrepreneurs won’t seek over time to put together a similar lifestyle package for digital harm reduction. You don’t knowingly poison the groundwater that your children will have to drink. The digital lifestyle will ultimately be about protecting the internet for your grandchildren to enjoy, and digital waste reduction is a good place to start.

About the Authors

  • Steven Weber is a professor at the School of Information and the Department of Political Science at UC Berkeley, and a 2019–20 Berggruen Institute fellow.
  • Ann Cleaveland is the executive director of the UC Berkeley Center for Long-Term Cybersecurity.
  • Sekhar Sarukkai is an entrepreneur and a lecturer at the School of Information at UC Berkeley. He was recently a fellow at McAfee.
  • Sundar Sarukkai is a visiting faculty member at the Centre for Society and Policy at the Indian Institute of Science in Bangalore.

Footnotes

  1. Sometimes waste is pushed into geographic areas where residents have lower incomes, whether that be neighborhoods or countries. Arguments about whether this is driven by a carrying capacity or efficiency logic, or simply by exploiting the power of the strong against the weak, are core to the environmental justice movement. An iconic example of this debate was the furor over Larry Summers’ 1991 ‘under pollution’ memo at the World Bank, where he was serving as Chief Economist. See https://www.nytimes.com/1992/02/07/business/furor-on-memo-at-world-bank.html
  2. John Perry Barlow’s 1996 “A Declaration of the Independence of Cyberspace” was the iconic statement of this position.
  3. See the Wall Street Journal, https://www.wsj.com/graphics/are-electric-cars-really-better-for-the-environment/
  4. It is admittedly hard to move people from “free” to “paid,” but it’s not impossible to do so when the real costs of externalities are made visible. The fact that many people today pay for identity theft protection from firms like LifeLock is a proof point. And the shift from free to paid is not always voluntary, as is the case with many taxes. When you buy new tires or a new mattress in California, you now pay a mandated recycling fee, for an ecosystem service that used to be free.
