Imagine a Tech Covenant

Center for Long-Term Cybersecurity
CLTC Bulletin
June 26, 2020

By Arik Ben-Zvi and Steven Weber

Arik Ben-Zvi is President, CEO and founder of Breakwater Strategy (BWS). Steven Weber is the Faculty Director of the Center for Long-Term Cybersecurity.

Today is June 23, 2020, and the Nasdaq Composite Index just hit an all-time high. Many of the largest U.S. tech companies also have market capitalizations at or near all-time highs. The five largest companies in the United States (by market cap) are Apple, Microsoft, Amazon, Alphabet, and Facebook. The spring of 2020 has been a period of multiple crises for American society, but it has been a very good period for the business side of the tech sector.

The contrast is what matters — because it is not at all a good time for tech’s social license to operate. Scratch the surface and you have a sector that is riddled with fundamental challenges — security, privacy, social mission, diversity, data rights, misinformation, business models, anti-trust — the list could go on. Tech is right now uniquely strong and important, and equally vulnerable.

CLTC is committed to understanding and solving for long-term security challenges, with the aim of amplifying the upside of the digital revolution. A prerequisite to progress on many of the issues we work on is a tech sector that can mark out a broadly stable and mutually beneficial relationship with the society it is supposed to serve. Right now, that relationship is deeply challenged, in ways we discuss in the following essay (and in more ways than we can take on here).

Tech reflects the broader crises of the moment, but is also undergoing a crisis of its own. And a crisis is in fact a terrible thing to waste. So we have tried here to mark out an unconventional pathway to a better future, consistent with the insights CLTC developed in its scenario work, and adjacent to (but necessary for) progress on the digital security issues we care about most. Asking tech to do more for society — and to do it safely, securely, and with the interests of users, customers, and citizens at the forefront — is not the obvious next step. But it’s a step we think is worth exploring. That is the purpose of this essay.

Imagine a Tech Covenant

The events of the past months have opened a window of opportunity — and demonstrated the acute need — to establish a new covenant between the technology sector and the society that it is meant to serve.

In the pandemic, products and services provided by leading technology companies have been central to preserving our way of life. Ten years’ worth of digital transformation has taken hold in something like 10 weeks, from companies learning to do remote work, to e-commerce becoming nearly all of commerce, to streaming video taking over entertainment for millions of consumers, to educators figuring out distance learning, to patients and doctors finally embracing telemedicine, to grandparents being present in their grandchildren’s lives through videoconference.

Yet technology could be doing so much more and doing so much better.

Technology could be solving the disease contact-tracing challenge at scale. Technology could be carrying out more of the most dangerous, monotonous, repetitive but absolutely essential work society needs done, during lockdown and beyond. Technology could be improving human decision making and accelerating invention and discovery across society.

These transformative opportunities and others like them are being lost not because they are technologically infeasible (though they are certainly challenging), but primarily because society does not trust tech companies to undertake these projects in a way that truly benefits the public interest.

That lack of trust has not prevented Americans from weaving technology products ever more deeply into our lives during the pandemic. But it’s a huge error to believe that what happened during a crisis and out of necessity signals that the underlying breach of trust is repaired. It isn’t. Appreciation for what tech has done to make quarantine endurable is real, but that hasn’t solved for people’s privacy concerns, for their sense of exploitation, for their anxieties about how automation and robotics will impact their jobs, or for the impact tech is having on our discourse and politics. All of these issues, and more, have not gone away.

And as a direct result, at a time when we should all want to see our best-resourced and most talented tech companies ambitiously focused on delivering breakthroughs that address the current crises and the longer-term deficiencies these crises have exposed, these companies instead face antitrust investigations, internal and external protests, and wave after wave of bad media coverage and policymaker criticism. And these are likely just the tips of the icebergs that await as the pandemic subsides.

In this essay, we lay out in broad strokes what a different path forward could look like for a more positive relationship between tech and society. We seek to answer the two most important questions on that path: what actions could tech take to re-earn some of the trust that the industry has squandered? And in turn, what license is society prepared to give tech if it were to change its ways and actually re-earn that trust?

In laying out this vision, we are fully aware that for many critics of the tech industry, the time for this kind of bridge-building between tech and society has passed. Their feeling is that it’s time for governments and regulators to step in decisively and bring the behemoth firms to heel.

We understand and have sympathy for the motivations that ground that argument — to a degree. Adversarial regulatory action by governments is certainly one legitimate theory of a particular kind of change. AT&T, Microsoft, and others were rendered less powerful, to some extent and for a certain period of time, by this mechanism in the second half of the 20th century.

So regulation can achieve some goals, up to a point. But it takes a long time. It is also incredibly expensive; the politics (national and global) are extraordinarily complex; and the results often are less advantageous to the public interest than expected.

Most importantly, regulatory intervention is much more effective at turning noxious things off than it is at turning desirable things on.

We are taking a different approach here because we think that the technology industry needs to play a bigger and more positive role in society, not a smaller and (only) less negative one. Our alternative theory of change intentionally puts the referees — government and regulators — in the background. We foreground instead the primary players on the field — tech firms and the customers and citizens who make up the society tech is supposed to serve. If they can re-engineer the relationship between themselves, with the regulators staying in the shadows, the possibilities to amplify the upside and enable a tech sector that can change the world for the better are significantly greater.

The thought experiment that motivates our vision is something like this: imagine you came to the judgment that practical antitrust and competition policy actions would risk greatly delaying (at best) or simply undermining (at worst) the potential positives that the technology sector can bring to society, and you wanted to preserve and even multiply the upside as best you could. Is there another way to get to a new and better relationship between technology and society that is more empowering of both, as well as friendlier to the innovation drive that the technology industry so badly wants to protect, and that society so badly needs to benefit from?

It is against this backdrop — appreciation for all that technology is doing, disappointment at how lack of trust is stunting technology’s potential, and a sincere belief that it’s not too late to seek a newer (tech-driven) world — that we are calling for a new covenant between the technology sector and society.

*******

A covenant is not a legal contract. It does not spell out precise reciprocal responsibilities under as many contingencies as possible, as contracts try to do. Instead, a covenant is an oath-bound relationship that is broad, flexible, and adaptive. It represents a deeply held set of reciprocal obligations that are permanent, while acknowledging that the circumstances in which these obligations are to be fulfilled are ever-changing.

The parties to a new tech covenant need not include every company that fashions itself a tech company. This is about those firms that the broad American public looks to as the indisputable leaders of today’s technology-driven transformations. Think Alphabet (Google), Amazon, Apple, Facebook, Microsoft, Twitter and, now, Zoom. Many other companies and other stakeholders (including investors) will have much to contribute to the dialogue, but ultimately it is a small handful of dominant companies that will define the relationship American society has with technology. Where they lead, the rest of the sector will have to follow.

Why are we talking about American companies and American society rather than the world? Tech operates on a global playing field, but the cleavage between technology and society still plays out differently on national stages. To be meaningful, a covenant needs to reflect deeply the distinctive culture, norms, market practices, regulations, and politics that it encompasses, and most of these still fall along national lines. We can speak to these realities inside the U.S. much more confidently than we can for other parts of the world, so we won’t presume to speak for others. We welcome other countries articulating what their own versions might be. There’s no presumption here that what is right for the U.S. is right for anyone else. Some forms of harmonization might later be needed; a meaningful level of distinctiveness and difference is still possible even in a global marketplace.

Under this new covenant, tech companies will be called to do what they have never really done before — articulate the obligations to society that they are prepared to fulfill that go beyond compliance with law and protection of corporate reputation. Some of these obligations will be costly to adopt, challenging to live with and contrary to the ingrained habits not just of tech companies but of for-profit corporations generally. That is intentional, because it is costly signals that change minds.

Technology companies became profitable at unprecedented levels by mobilizing some of the best talent of a generation and vast amounts of financial capital for what have sometimes turned out to be dubious purposes. Alongside the many valuable innovations these companies brought forward, there are just as many that contributed to declines in human decency, individual happiness, and the possibility of civil discourse — often simply in service of driving more people to click on more ads.

To move past that, technology companies need now to surprise society by taking equally shocking actions that are about social good. They need to show a willingness to sacrifice things nobody expected they’d sacrifice. Tech companies need to recognize and then act on the recognition that trust is never re-earned on the cheap.

If technology companies are willing to go that far, then in return society must be prepared to offer tech something of roughly equal value: a uniquely expansive license that goes beyond “license to operate” existing businesses. Rather, tech companies would be given the chance to experiment at the edge of the envelope in terms of privacy, safety, competition law, and other social and legal third rails. While this cannot be carte blanche for tech companies to do as they please, it must be sufficiently broad to allow for risk-taking and failure, secure in the knowledge that they won’t be targeted for sanction or public abuse so long as they are keeping faith with their reciprocal obligations.

*******

There’s plenty of blame to go around for how we got to where we are in June 2020. And there’s plenty to support the belief, if you have already made up your mind this way, that the tech companies are this generation’s ‘giant vampire squids’ who simply don’t care about anything other than making money (and accumulating power for the purpose of preventing anyone from slowing them down). But neither blame nor that belief is the point of this essay.

The point is to start from a more hopeful place and assume for the moment three simple things: that it is just possible to let go of blame for the past if there’s a brighter future in view; that there are people and forces within the tech sector that really do care about creating social value; and that customers and citizens can be more than simply passive and captive “users” of tech products if given a chance. If these things are true — in whole or in part — then we can and should try to fundamentally reset the relationship between tech and society with core shared values placed right at the center. A covenant is an expression of those values on both sides, and a forum for long-shadow-of-the-future bargaining.

The firms should start by acting on five specific principles.

1. Double Down on Productivity

Repairing the economic damage wrought by the pandemic goes way beyond reopening stores and restaurants. Governments and companies now carry unprecedented debts. Record unemployment and lost confidence mean consumers cannot and will not spend us back to growth.

A sustainable recovery has to be driven by productivity gains — it’s what enables people and societies to improve their standard of living. As Paul Krugman once said, “Productivity isn’t everything, but in the long run it is almost everything.” If productivity stagnates, we will face higher taxes, loan defaults, even more pressure on labor to give up gains to capital, and (most likely) higher levels of carbon emissions.

And the bad news is that even before the crisis, our productivity track record was weak. Tech has created amazing new hardware and software products, but the productivity effects have been lower and slower than almost anyone expected. There are many reasons for this, not all of them subject to decisions by the leading tech companies.

But given that these companies now sit at the commanding heights of the economy, they must accept an outsized role in turning things around.

If tech firms want their social license preserved or even expanded, then let them now make clear, measurable commitments to growing the parts of their business that have the highest potential to boost productivity.

This means less time and brain power spent optimizing algorithms to drive social media engagement.

It means more effort spent on optimizing supply chains to reduce waste and deadweight inefficiencies. It means orienting AI/ML product development toward things like robotic process automation, which takes costs out of backend business service functions. It means aiming for predictive material science as an example of the acceleration of innovation in the building blocks that enable many other economic sectors to boost their productivity.

It means less gathering up, analyzing, and selling of consumer data to advertisers. It means favoring B2B over D2C. And it means focusing on sectors where productivity has been particularly stagnant — most visibly, in the 18% of U.S. GDP that is healthcare.

There’s plenty of money to be made in productivity, but that isn’t the point. This is about tech firms stepping up to a profound responsibility. These firms have market caps that exceed the GDP of many nations. They must unleash that wealth on the problems that most vex our nation’s (and the global) economy. Doing so will create more wealth with fewer raw inputs and less (carbon-intensive) energy.

These commitments must be concrete and measurable. Using existing metrics or bespoke ones, tech firms should report how their innovations are driving productivity gains over time. Margins and short-term profits might decline, but those (relative) losses are nothing compared to the potential gain of securing greater trust from society. It is exactly the kind of costly signal that trust requires.

2. Pay a Fair Share of Tax

Tech firms are infamous for financial engineering to reduce their tax burden to extraordinarily low levels.

It’s mostly legal. It is also deeply immoral and entirely indefensible. Arbitraging the fact that most countries’ tax regimes are still designed principally to assess physical atoms rather than digital bits isn’t clever. It is rogue. It is among the most visible signs that a company is indifferent to the foundations of the society in which it operates.

Tech firms, depending on the day, make up more than half of the top ten S&P 500 firms by market capitalization. Facebook, Apple, and Google should not pay a substantially lower effective tax rate than Procter & Gamble or Ford Motor Company.

Putting the onus on the IRS and other government tax writers to fix this problem is too easy, particularly when the firms’ own lobbyists work to thwart changes that would increase their taxes. Instead, as part of this broad reset in their relationship with society, tech firms must embrace the duty to pay their fair share of taxes.

There are any number of ways to get there. One way would be to create a tax benchmark scheme for tech against the S&P 500 as a whole. There would be enough variation in a group that size to allow for reasonable policy leverage — say, one but not two standard deviations away from the mean (after all, we do want tax policy to be able to favor some kinds of business activities and investments over others, perhaps rewarding carbon reduction and capital investment, and maybe even job creation).
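To make the benchmark idea concrete, here is a minimal sketch of how such a check might be computed. Everything in it is a hypothetical illustration: the Python framing, the stand-in tax rates, and the one-standard-deviation threshold are assumptions for the sake of example, not real data or a proposed standard.

```python
from statistics import mean, stdev

# Hypothetical effective tax rates for a stand-in "S&P 500" universe.
# Real benchmarking would use audited, multi-year figures.
sp500_effective_rates = [0.21, 0.18, 0.24, 0.19, 0.22, 0.17, 0.25, 0.20]


def within_benchmark(firm_rate, universe, k=1.0):
    """Return True if a firm's effective tax rate is no more than k standard
    deviations below the mean of the benchmark universe (one possible reading
    of 'one but not two standard deviations away from the mean')."""
    mu = mean(universe)
    sigma = stdev(universe)
    return firm_rate >= mu - k * sigma


# Example: a hypothetical tech firm reporting a 12% effective rate falls
# below the one-standard-deviation band (mean ~20.8%, sigma ~2.8%).
print(within_benchmark(0.12, sp500_effective_rates))  # -> False
```

The arithmetic is the easy part; the real work in any such scheme would be agreeing on which audited figures feed the calculation and over what time window.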

A supplemental point might be to consider the analogy to AMT (alternative minimum tax) in personal taxes, which adjusts for some of the quirks of the tax code in order to set a lower bound on fairness. Why not an AMT equivalent for firms that aims to do the same?

The key point here is that the tech firms need to own this problem. That it sounds mildly crazy to imagine a group of firms offering to pay higher taxes is exactly the point — it’s an iconic costly signal that can change reputations and generate higher levels of trust. As is always true with taxes, there are devils in the details that would need to be carefully worked through (the two most important might be ensuring that a heavier tax burden doesn’t asymmetrically benefit the largest tech firms at the expense of medium and small tech firms, and managing cross-border issues, since firms operate on a global scale). But the firms are at least as capable as any single government or consortium of governments of designing and evolving solutions that meet a reasonable definition of fairness and legitimacy.

Tech firms want to be trusted. As a society, we should want to trust them. If they do the unthinkable and step forward to pay their fair share in taxes, it would take us a long way toward that goal.

3. Compete to Build a Healthy Digital Ecosystem

Of all the concerns society has with the tech sector, none feel more urgent in June 2020 than tech’s relationship with speech, misinformation, civil discourse and democracy. In a global pandemic in which access to accurate information is a matter of life and death, and with a fraught national election rapidly approaching, these concerns grow more acute by the day.

With pressure mounting for change, the tech sector has done almost exactly as a cynic might expect.

First, they insist that they welcome regulation on these topics and publicly urge governments to make the hard decisions and promulgate rules that, presumably, they would then follow. Second, they deploy armies of lobbyists to Washington, Brussels, and elsewhere to quietly thwart those hard decisions and the enactment of those very rules. Finally, they insist that in the absence of regulation, they cannot take meaningful action, as doing so would position them as “arbiters of truth.”

Whatever merit the latter argument may have, it is no longer a legitimate excuse for inaction. Society interprets tech firms’ passivity on these issues as self-serving, and it should — because that stance conveniently reinforces the business models and growing profits of the firms that try to protect it.

Firms as large, wealthy, and influential as the leading tech firms have a special obligation to provide collective goods without being forced to do so by governments or anyone else. They are big enough to be responsible for the health of the ecosystem they have built and on which their business models rely.

To be sure, it is far too simplistic to just tell tech companies to “fix” the problems of misinformation, the breakdown of civil discourse and free speech. Tech leaders are not wrong when they say these are vexing issues.

The point is that they owe it to society to bring the same spirit of innovation, experimentation and ruthless competitiveness that enabled them to succeed in the first place to these issues.

That spirit is utterly lacking right now. It is startling to see firms that were built by iconoclastic founders who made their fortunes exploiting the inefficiencies of incumbents, effectively paralyzed in the face of their industry’s most glaring vulnerability. And turning to government to take the initiative? That’s exactly the opposite of the mindset that created these firms in the first place.

So, let the firms start “moving fast and trying things” when it comes to addressing these challenging problems. Let’s see Twitter beta-test an alternative version of its platform that applies substantially more rigorous tech- and human-based moderation to weed out the violent, hate-filled content that pollutes the service today. Let’s see Facebook try a transparent, opt-in experiment with a small group of users to see how they react to a version of the app that aggressively tries to screen out false or misleading content. Let’s see YouTube launch a recommendation-free version that actively avoids sending people down rabbit holes that can radicalize them.

If Mr. Zuckerberg and others are right that nobody wants them to be arbiters of truth, then these experiments would validate that thesis. At least we would be able to move forward knowing that this was true and not just a very convenient excuse. And if, just if, it turns out that people are actually happy to trade a bit of convenience and a sense of unbridled free expression in exchange for a sense of safety from misleading content and vitriol — then we, as a society, can use that data to drive toward better, workable balances between “truth,” freedom of speech and expression, and civil discourse.

In the end, if different platforms land on different solutions, that is fine.

Regulators can then react to what happens on those platforms and what users decide to accept for what purposes — they might, for example, provide greater liability protections to platforms that assert a higher degree of control over content than to those that take no responsibility at all (almost the opposite of today’s dysfunctional stance).

To be sure, this is an expensive proposition. Building these experiments and scaling the most successful might cost billions of dollars. That’s just fine. In the midst of an economic crisis, Facebook effortlessly dropped nearly half a billion dollars to buy a company that creates GIFs (and collects data). It is not too much to ask that they spend real money to search out ways that their products can exist in harmony with democracy, rather than in tension with it.

Society, citizens, and users have a reciprocal obligation — which is to know where they are in the digital environment and sort their expectations, activities, and speech acts in accordance with the particular reality of the platforms on which they operate for any given purpose. Most people know to expect something different when it comes to speech, truth, and discourse from The New York Times, Fox News, The National Enquirer, Bleacher Report, The New York Post, and The Daily Mail. We have different expectations for a PG film and an XXX film. We treat FDA-approved pharmaceuticals differently than untested “wellness treatments,” and we accept different levels of risk and responsibility as well. Customers are also citizens and people, and they aren’t quite so passive and blind as tech firms sometimes assume.

Tech firms will say they need political cover to act this way, and that’s fair. But they shouldn’t wait for it to come to them. They should use their market power to force the government’s hand, and earn that political cover by leading rather than free-riding on this crucial ingredient of ecosystem health.

4. Ask Permission and Forgiveness

Tech firms have to grapple with the fact that they are no longer a band of insurgent companies with the right (and perhaps duty) to innovate recklessly. They are now among the biggest and most consequential companies in the world. Their innovation paradigm must mature accordingly.

When Clay Christensen published The Innovator’s Dilemma in 1997, the tech sector as we now know it was still in an early stage of development. In that moment it made sense for the firms to key on a small part of his argument to establish a doctrine of “permissive innovation” — the idea that tech firms should be allowed to do anything they wanted to do unless and until someone could prove beyond any reasonable doubt that what they were doing was positively dangerous.

This was almost 180 degrees reversed from what in the pharma sector and others is called “the precautionary principle,” which means you don’t get to push innovations out into the market unless and until you can prove beyond a reasonable doubt that they are safe.

This reversal of the burden of proof was monumental. It enabled tech companies to do things that would otherwise have been impossible. If tech had been caught inside the precautionary principle, we’d probably still be living in a world where the internet operated more like the old Bell telephone network — a landscape defined by overly powerful incumbents backed up by impenetrable regulatory thickets. We’d almost certainly be living without WiFi networks, without cloud computing, and without open source software code.

But permissive innovation went too far. It gave us business models that had no path to profitability, that were predicated on arbitraging regulatory systems, or that profited from the worst sorts of human behavior.

Society wouldn’t benefit from dragging technology firms all the way back to a precautionary model. But the permissive model isn’t working either.

So now the burden falls on tech firms to articulate and put into practice a middle ground model. Call it responsible innovation.

A responsible innovation doctrine would be based on a commitment by tech firms to deeply embed consideration of the potential consequences of their innovations throughout the product development lifecycle. In other words, when considering a new product or a new feature for an existing product, the firms would ask not only “how will the market react to this innovation?” but also “what impacts — good or bad — could this have on society?”

That process would require the companies to ask a series of probing questions and to have the right set of inputs and experts on hand to answer them. These would include the impacts their innovations might have on civil discourse, the mental health of customers, user privacy, data security, income inequality, worker rights, social justice, race relations, human rights broadly defined, and the environment. This would all be part of the early design process, not a ritualistic ‘checklist’ that comes later when momentum to bring something to market is building.

Innovations that have a high likelihood of causing material harm in one or more of these areas probably shouldn’t be launched. Those that have the potential to do real good should be fast-tracked even if they aren’t necessarily surefire moneymakers. And those where the picture is more ambiguous might be launched with built-in checks and balances and a willingness to be transparent about the trade-offs involved, so that regulators and other stakeholders can hold the company accountable.

For this paradigm to work, it must be woven into each company’s “operating system” — a basic way of doing business that is understood and observed from the entry level, to the C-suite, to the board, to major investors.

A responsible innovation mindset and practice is a touchstone for appropriate levels of trust — not too much, but also not too little. This is what’s needed to have tech firms run serious experiments with disease surveillance and contact tracing that society will accept as legitimate. This is what’s needed if a government wants to seriously evaluate the ability of tech firms to solve the massive problem of ensuring free, fair, and accountable voting over the internet. It’s what’s needed if a city decides it’s fed up with petty corruption of local officials and wants to use an algorithm to prioritize which potholes to fill and streets to repave. And it’s what’s needed if a city wants to deploy technology in a meaningful way — beyond body cameras — to accelerate the oversight and reform of policing, with the same vigor that motivates predictive policing and sentencing algorithms.

It’s true that a permissive innovation mindset might get you to some of those things more quickly and in a more straightforward way, if you are a determined technology product manager simply looking to solve a “pain point.” But that’s not the pathway to a sustainable license to operate, and tech firms are too powerful to play the game that way anymore.

The ball is now in tech’s court to define the parameters of responsible innovation in a manner that is acceptable to society, not the other way around. There’s no precise formula that would work in all sectors and in all geographies — what counts as responsible will look somewhat different in parts of the EU than it will in the U.S., for example, when it comes to personal information and privacy. Global firms despise that kind of uneven playing field, but the reality is that they are stuck with it and should be looking for ways to make it a feature, not a bug (as other sectors have in the past, for example in the “California effect,” where high standards are made de facto general by firms, not governments).

When firms get to define their own set points for responsible innovation, customers and employees get to migrate to products and firms that hit the balance that they want to see. Tech firms can make this set point very concrete for their employees by setting up their evaluation and compensation systems accordingly — a product team might be evaluated at year end not only by market growth and profits, but also by impact on a set of core social measures (e.g., did the product contribute to civil rights, public health, waste or carbon reduction, or more informed and civil dialogue?). Boards can reinforce this by deploying their governance and oversight responsibilities to evaluate product roadmaps for long-term social value as well as near-term profitability measures. And audit firms might issue independent third-party assessments that rank tech firms along some meaningful dimensions of responsible innovation, to inform the market about their long-term value proposition.

Responsible innovation means asking permission and forgiveness in balanced measure. It isn’t quite so disruptive; it isn’t exponential; it doesn’t naturally “blitzscale.” But it’s precisely what the tech sector needs to define, practice, and hold itself to account for. We’ll know the balance is more or less right when governments and regulators can stay in the background because firms that break the norm are shunned by other tech firms, and the top tier of engineers, ML specialists, and other in-demand employees self-select to work for firms that are committed to this compromise.

5. Enact the Long-View

For decades, Americans have lamented the tendency among publicly traded corporations to focus on quarterly results and short-term profits rather than long-term value creation — so much so that calling on companies to push against that tendency has become nearly cliché. But for the tech sector, long-term thinking is so fundamental that it cannot be treated as just a hollow slogan.

The modern digital tech sector was born out of a desire to transform the way people understand and move through the world and relate to each other, by using bits and data to solve problems that have been recalcitrant and persistent in the world of atoms and molecules for decades or sometimes centuries. That kind of crazy ambition can be easily satirized, and often is. It can be criticized in a more serious way, as hammers looking for nails, or as simplistic tech “solutionism.” But the audacity of tech is actually worth celebrating. And it also means that tech ultimately is either transformative, or it really isn’t much of anything special at all.

Choose your starting date — it could be the birth of the silicon integrated circuit around 1960 or the release of the Apple Macintosh in 1984, but to say digital technology is still in its infancy isn’t right anymore. If a decade from now the biggest problems tech has solved revolve around faster food and product delivery, nicer-looking front-end interfaces for what are still traditional services, and more precise ad targeting and better recommendations for escapist entertainment, society (and the employees of tech firms most of all) would be justified in wondering whether a generation of wealth and talent had been largely wasted. And if tech’s most profound achievements are to enable ever-more socially divisive discourse or panopticon-style surveillance that governments use to control their populations, it might be worse than wasted.

Tech can and should aim much higher. This isn’t a question of self-regulation; it’s a question of self-reorganization around longer-term goals. There’s plenty of inspiration to be found back in the early days of the Homebrew Computer Club, Xerox PARC, SRI, and the like, but what matters now, in what is de facto middle age for the sector, is concrete, visible, and sustained action. You don’t have to look hard to find issues of deep importance where the world’s leading tech companies are positioned to make a meaningful difference: improving public health infrastructure to allow for faster and more effective mitigation of the spread of infectious disease; securing the integrity of elections and other core public processes; promoting public security in ways that are more effective and less violent and intrusive; and combatting elements of structural racism that lie behind bias in education, finance, healthcare, real estate, and elsewhere.

These are all long-term recalcitrant issues that are painfully visible right now, which means you don’t need science fiction authors or scenario thinkers to articulate why they matter. Most tech firms could simply ask their employees what they think should be the long-term agenda for their firms, and the answers would be a pretty compelling corporate mission.

The products and services that would make progress on these problems can be profitable eventually. They probably won’t be profitable quickly. That’s a feature that can elicit trust, because it’s an opportunity for the costly signals of long-term investment.

And so publicly traded tech companies need right now to do what their peers have always struggled with — convincing their investors to be patient. But tech companies are exactly in the right place to do this. First, many have in place well-oiled machines in their core businesses that can deliver excellent revenues at fantastic margins for as far as the eye can see. Senior leaders at companies like Google, Facebook, Apple and Amazon can afford to let those profitable business units run, while their focus and incremental investments go elsewhere.

Second, the largest and most influential tech firms have stock structures that secure the position of their founders against many of the incentives and whims of the investment community. They secured that license for a number of reasons, but long-term problem solving has to be among the most important. It’s easy to criticize the lack of accountability to financial markets that many tech firms have effectively gained for themselves, but to the extent that they have one important upside it is in the freedom they give founders to put long-term results ahead of quarterly profits.

Finally, most tech firms have the infrastructure and culture of research institutions deep in their world views, or DNA if you prefer. There is no need to create wholly new business units, recruit entirely new types of talent, or revamp entrenched mindsets and behaviors. Indeed, all that’s really required is a reordering of priorities and incentives, to tap into the audacious energy of discovery and empowerment that is right there just below the surface.

For the big firms to really do this — and to convince the public that they are really doing this — will require a different kind of transparency. That doesn’t mean publishing all aspects of a company’s research agenda. Trade secrets can remain secret and not all data or intellectual property has to be or should be shared. But if companies want society to trust that they are investing in and working toward transformative solutions, then the public needs a good degree of visibility into what those efforts mean.

Toward that end, tech companies should publish their own forward-looking research vision and roadmap statements, which describe the types of technologies and products they imagine being able to deliver in a 10-, 15- or even 20-year time horizon. They should speak candidly about the potential risks to those projects — including technical bottlenecks and possible social reactions that they may provoke. They should situate these commitments in a clearly articulated point of view about why these projects matter over others; about global vs. national (and other, including local) priorities; and about how the world will be a better place when these projects succeed.

If that sounds like just a PR exercise right now, it’s precisely because of the trust deficit that the firms need to overcome — and this is the right way to do it. Skepticism will be gradually whittled away if and when companies commit to regular substantive updates and report-outs on their concrete progress. Stealth mode isn’t a viable option anymore. If a major project gets shelved and resources redeployed, the companies should explain why and where the resources are headed, so that others can learn from that experience and adjust their own investments accordingly. If a project is making progress, the companies should be upfront about that as well, so that other stakeholders including NGOs, academics and policymakers can start to process the implications of emerging innovations and think ahead to the frameworks (market, regulatory, behavioral, and otherwise) that might be required to make those new products achieve the greatest possible benefits for society.

That type of commitment to long-term thinking and acting will require new management habits and most importantly new governance structures. Boards of directors will be a crucial part of this equation (which they frequently have not been in many of the largest tech firms because the founders and primary owners have chosen to create Potemkin village boards made up of their friends, unquestioning supporters, and people like them). Tech CEOs should instead create and welcome the oversight of a “real” board of directors that advises, consults, and ultimately governs the long-term trajectory of the firm.

The word accountability gets thrown around a lot by tech firms talking about how their products can improve the ways citizens relate to other sources of power — other than the tech firms themselves, that is. A meaningful dose of the same accountability medicine ought to be something that tech leaders take on for themselves, and their boards of directors are the most meaningful, visible, and functional places to start.

Right now, the time and attention of tech company leadership is understandably spoken for: running their existing businesses, keeping up with the competition, and handling day after day of bad press and political attacks, all in the shadow of government lawsuits and antitrust investigations. Some leaders feel that they simply don’t have the bandwidth to think far into the future. But they simply must. It is the best and probably most important way to establish the trust that their businesses and their users, customers, and citizens need.

If the firms get this and the other principles we’ve articulated here right, then they can justifiably ask society to reciprocate in ways that give them the space and license to work towards, experiment with, and deliver the types of innovation we most need. What does that look like?

What Society Must Be Prepared To Do

What if tech said “yes?” What if we woke up tomorrow to headline announcements from Google, Amazon, Apple, Facebook, Microsoft and other leading tech companies committing themselves to shifting their priorities toward investment in productivity-enhancing innovations; paying their full, fair share of taxes in the U.S.; rolling out a wave of new, competitive products and services whose core value proposition was that they might improve the basic health of the digital ecosystem; articulating and promising to abide by a “responsible innovation” paradigm in which all new products and services would be vetted closely for societal impact before being commercialized; and laying out a research and development agenda for the next decade focused primarily on ideas that would undermine structural racism, and benefit public health, democracy, financial inclusion, and social cohesion.

What would we be prepared to do differently than today in terms of our collective relationship with tech companies?

The answer can’t simply be “nothing.” A covenant is an oath between two parties, not just a promise from one to another. The tech covenant we envision asks companies to make choices and commitments that are, in the short term, antithetical to the conventional practices of for-profit corporations, and they would likely feel the pain in market capitalization as well as in employee compensation tied to that metric. Costly signals work to change reputations precisely because they are costly, but the short-term pain can’t be borne by one side alone if the goal is to shift the relationship to a better place. And it really does not matter who was “at fault” in some primordial sense for where we are right now; it only matters how we move forward to a different and better place.

Which is a way of saying that society cannot be passive. “Wait and see” may sound and feel good to people who are angry and disappointed, but it doesn’t actually advance the cause if the objective is a substantial reset of the relationship. This isn’t just about changing reputations and establishing trust. The innovations that the tech companies would be committing to bring forward over time would all be dependent on a regulatory and market framework that can make those products successful. Users, customers, citizens, and the society that confers to firms a license to operate have to send their own signals in return.

If we want tech to act differently, we need to be prepared to reciprocate the firms’ reset efforts.

Specifically, we as a society need to send costly signals of our own.

Like most businesses, tech companies are sensitive to the signals they pick up — not only from the marketplace, but also from the media, social media conversation, policy debates and expert dialogues. And right now, the signals we as a society are sending to tech companies overall spell the words: You. Can’t. Win.

How can that be when most of us spend most of our days using tech products and either directly or indirectly spend a significant proportion of our disposable incomes with them as well? The answer is that much of this is now done grudgingly. The signals we are sending during the COVID-19 crisis (more so even than before, though it’s been this way for a while) say that while we need the tech companies and let them into our lives in intimate ways, we really wish we didn’t.

When tech companies lean toward more aggressive content moderation in the service of combatting misinformation, they are accused of censorship and aiming to become the arbiters of truth. When they lean back and preference free speech, they are accused of enabling extremism and undermining the Enlightenment. When they talk about their capacity to build efficient and accurate contact tracing, they get hit with accusations of “surveillance capitalism” and enabling “1984.” When they pull back from that kind of ambition, they are mocked for giving us the tools for 280-character screeds instead of flying cars.

The point is not that anyone should ever feel sorry for tech companies. Firms that are among the wealthiest and most powerful organizations on the planet are always going to be scrutinized intensively, as well they should be.

What matters here are the signals we send about what society can offer in return as part of a covenant that manifests a different relationship. It starts with recognizing that if what we do is beat up on the tech companies no matter which direction they turn, then the wisest course of action any of them can take is to not move at all. And that leads us back to the dilemma we started this essay with — a world that needs tech to do more even as we struggle to trust tech to do more.

But if tech companies show a willingness to change how the sector relates to social needs — if they embrace some version of the principles outlined here — then society needs to send signals that it in turn is willing to change its relationship to tech.

We need to send different signals as consumers. If one of the tech companies overcomes the fear of first-mover disadvantage and visibly embraces this agenda to take some far-reaching business risks that we can see and measure, then that company should be rewarded with greater customer affinity as quickly as possible. Users need to demonstrate by their actions that they will pay a premium price for products from companies that act to create long-term value in the service of building trust.

We need to send different signals as the owners and arbiters of corporate reputation. Right now, there is just too much public delight in criticizing every tech company mis-statement and misstep, and (often) personalizing it as ardent antipathy toward specific founders or executives. This kind of demonization is really just the flip side of the hero worship that was more common around tech visionaries a decade or more ago — but it’s just as one-sided and just as dysfunctional. We need to ask ourselves: is the vitriol all about them; or is it partly about ourselves as well, a reflection of our own weakness in having granted too much license to tech in the past; or simple envy and jealousy; or resentment spilling over from other kinds of disillusionment with elites and power imbalances in contemporary life? It’s simply not right (and more importantly, not helpful) to blame tech for most everything that is wrong with contemporary American life.

What matters is what we do about reputation going forward. Tech firms own the power position, so most of the burden for reputation change does fall on their shoulders. But some of it is shared. If tech companies visibly change, then we must be prepared to look past mistakes that happen along the way because mistakes are part of being human. We certainly don’t need to love or revere tech billionaires, but we should commit to try to understand their point of view just as we want them to understand ours.

A big part of that right now would include asking our mainstream media outlets, as well as many academics, to be less reflexively vitriolic in their posture toward tech (and rewarding them for doing so). Everyone would be better served by being a little bit more honest about the self-interested aspect of newspapers’ “techlash” clickbait, and the reputation-enhancing effect for academics competing over who can be most viciously eloquent in critiquing technology.

We need to send different political signals. Right now, a generation of rising political talents on both the right and left are making their names as “tech slayers.” That is surrendering to short-term political incentives in precisely the same way we accuse firms of surrendering to short-term profit incentives. Boxing ourselves into an unbending adversarial stance is counter-productive if we want tech companies to change their ways. Citizens need to demand and reward political leadership that doesn’t treat tech companies as enemies, but rather looks at them for what they are — vital national assets that need to be re-oriented toward acting as partners in collaborating on the most essential collective goals. Voters and activists will need to stop giving space to the feel-good, anti-tech bomb throwers and rally behind more constructive voices making more subtle and balanced arguments. It’s a big ask, but so are many of the things we are asking from the firms.

Most importantly, we need to send different signals about our hopes for the future. Americans are living through an understandably pessimistic moment. Survey data clearly shows that this country has never in recent history been so fearful of the future. That pervasive lack of optimism has the perverse effect of lowering everyone’s standards of behavior. If we assume the future is going to be worse than the present, we enact an expectation of zero-sum relationships that quickly becomes reality. And in that zero-sum mindset, there’s no incentive for people and firms that might actually offer inspiration. They justifiably figure that nobody wants to hear it, and if they try to put some inspiration out there as a trial balloon, they are more likely to be satirized and scorned than to be listened to.

*******

Cynicism is a bad starting place from which to launch a new covenant. But we think a new covenant is precisely what is needed right now, and the first step is simply to imagine it, which is what this essay tries to do. The basic bargain ends up being pretty simple. Tech companies drive toward a radically better future through the spirit and actions of disruption, transformation, experimentation, and continuous improvement; they demonstrate concrete commitments by taking short-term costly actions for long-term shared values and benefits. Society reciprocates by demonstrating that we are ready to believe again, in a more mature and responsible way, which means backing up those beliefs with votes, dollars, and social license to experiment.

Both sides recognize that we really are in a position to try to invent new ways to manage or maybe even solve problems that have been intractable for decades and centuries. But just because something can be accomplished doesn’t mean it will be. We can discover better trade-offs through debate and bargaining, but not without some shared foundation of joint commitments that grounds those debates and holds all sides accountable to the long-term upside.

If you think it’s too late for a new covenant, ask yourself what a decent alternative is, and what the world will look like if we don’t change direction. If you think the U.S. can’t lead by constructing a new covenant here at home that would be a template for others to follow, ask yourself if it’s better to wait for other countries, societies, or regions to set the terms and have the U.S. react instead.

And if you think a new covenant might just be a realistic possibility to achieve, then ask yourself right now what you can do to start down that road. We hope this essay offers some initial thoughts about how to do so.

