The Cambridge Analytica scandal was the latest hit in a long series of controversies involving leading Silicon Valley tech companies. Facebook, Google, Apple, Twitter, and Uber, to name a handful of the marquee players, keep popping up as targets of our great digital ambivalence. Many of us love aspects of their products and services while also hating the price we pay for using them: attachment, dependence, vulnerability, lock-in, and a sense of being exploited and of that exploitation being validated.

Mixed emotions exist because big tech companies aren’t fundamentally like big tobacco companies, despite how popular the metaphor has become. Yes, tech addiction is real. But tech companies aren’t pushing toxic, single-use products. Sure, they’ve helped eviscerate privacy, amplify prejudice, exploit psychological weaknesses, incentivize harassment, intensify distraction, and exacerbate political tensions. But we haven’t collectively kicked these companies to the curb yet, because proprietary digital tools also enhance personal, social, economic, and civic well-being, and the free and open source movements haven’t gotten the attention they deserve.

And so, despite all the benefits of technology, tech companies have scandalized away the public’s trust.

Trustworthiness goes hand in hand with ethics, and if trust is to be regained, lots of work needs to be done to close gaping ethics gaps. Here, I’ll talk about three areas that need improvement: ethics in design, ethics training and procedures at work, and ethics education and policy at school. Throughout the conversation, I’ll explain why it’s so damned hard for tech companies to live up to high ethical expectations, and I’ll offer suggestions for improving upon the status quo.

But first I’m going to confront the elephant in the room. Should realists believe that ethics can actually matter — instead of resigning themselves to the conviction that tech companies will continue to play us for suckers until there are draconian shifts in policy?


Isn’t Ethics Just Ineffective Moralizing?

Perhaps the most frightening thought is that there’s little point in even identifying the core ethical questions and issues that the major tech companies should be discussing among themselves in order to take more responsibility for the powerful ways they shape our individual and collective lives. Frank Pasquale, author of The Black Box Society: The Secret Algorithms That Control Money and Information and one of the sharpest thinkers I know, hauntingly told me that the days of being optimistic about self-regulation are over, even though the prospects aren’t good for bold governmental approaches, in either the United States or Europe, to protect citizens from being manipulated, deceived, and exploited by tech companies.

“I don’t think tech companies can have these discussions until a regulatory framework forces them to do so. They were warned about the perils of lax application of their own guidelines, and they have ignored or marginalized their critics,” Pasquale said.

Facebook systematically overvalues AI, engineering, and automation, and devalues compliance, legal expertise, and ethics.

“In the U.S.,” he continued, “politicians need to empower the Federal Trade Commission (now a Potemkin privacy protector) and beef up the staffs of state attorneys general. (The Massachusetts attorney general is, for example, aggressively investigating just this issue). In Europe, data protection authorities need to be enforcing privacy laws with added vigor and to ensure that the general data protection regulation is not strangled in the cradle by crabbed interpretations of its provisions for algorithmic transparency, such as the right to explanation.”

Pasquale makes a good point. If someone like Sandy Parakilas, “the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012,” wasn’t given the authority he deserved after warning “senior executives at the company that its lax approach to data protection risked a major breach,” why should we believe that firms are now ready to listen to ethically concerned employees and grant them power to be agents of change?

Sometimes, it seems that the major tech companies are drunk with power: they believe they’re too big to fail and so disruptive as to be beyond discipline by the mechanisms of the old world order, and they turn to high-minded ideals like “ethics” and “corporate social responsibility” only cynically, when running interference plays from public relations manuals. Apologies and promises of change can be good crisis management strategies, filled with obfuscation and worse, rather than sincere expressions of aspiring to be worthy of the public’s trust.

And let’s face it, tech companies are in a structural bind, because they simultaneously serve many masters who can have competing priorities: shareholders, regulators, and consumers. Indeed, while “conscientious capitalism” sounds nice, anyone who takes political economy seriously knows we should be wary of civics being conflated with keeping markets going and companies appealing to ethics as an end-run strategy to avoid robust regulation.

But what if there is reason — even if just a sliver of practical optimism — to be more hopeful? What if the responses to the Cambridge Analytica scandal have already set in motion a reckoning throughout the tech world that’s moving history to a tipping point? What would it take for tech companies to do some real soul searching and embrace Spider-Man’s maxim that great responsibility comes with great power?


Why Design Is Critical to Ethics

If companies really want to do better, first and foremost they need to recognize that it’s time for an ethical revolution in design.

Across tech companies large and small, design issues are wearing away the public’s trust. It’s not just the impenetrable boilerplate terms of service contracts that are keeping us in states of manufactured ignorance. Design choices of all kinds influence how we perceive risks and rewards. Like advertising, design strategies program our preferences and desires, affecting which technologies we want to use and own and how we’re inclined to interact with ourselves and others through their technological affordances.

Nobody is doing more to make this revolution happen than Woodrow Hartzog, a frequent collaborator of mine and author of Privacy’s Blueprint: The Battle to Control the Design of New Technologies. “Design isn’t just an ethical issue because it is everywhere and it is power,” he told me. “Design is a major ethical issue because that power can and always is used for political ends. By definition, it allocates power between platforms and users. And it is never neutral. Every decision to make something searchable, to include certain things in a drop-down menu, to include a padlock icon to give the sense of safety, to nudge people for permission for certain data practices furthers an agenda to make certain realities of disclosure come true. Usually, the agenda is disclosure.”

The agenda of weaponizing design choices so that the machinery is optimized to extract maximum personal information from tech users was put on full display by the recently leaked memo from Facebook VP Andrew “Boz” Bosworth. In this controversial document — which, at the time it was written, was meant to be a provocation — Bosworth made rather shocking remarks about why Facebook should aspire to connect more and more people. Distressingly, he didn’t treat even the platform’s potential to be instrumental in people losing their lives to bullies and terrorists as a deterrent.

“After reading this memo,” Hartzog said, “it’s a little easier to see how every aspect of the design of Facebook is bent towards its mission to get you to never stop sharing and to feel good about it in the process. And the reason design is now such an important ethical issue,” Hartzog elaborated, “is that law and policy have thus far had little to say about it. Lawmakers focus on data processing but too often ignore rules for the design of digital technologies. We can do better across the board, and it starts with being more critical about the way our tools are built.”

In other words, design disasters occur so frequently because there’s been a perfect storm of neglect running from policymakers through CEOs alike. To improve the situation, Hartzog argues that a new blueprint for privacy and related values needs to become widely adopted. His ideals are sound, but unfortunately, tech companies won’t accept them until they stop hiding behind the myth of heavy-handed ethics. Tech companies like to sound liberal by suggesting that paternalism and democracy are incompatible. The story goes that since people have different values — including varied privacy preferences and conceptions of acceptable speech — being sensitive to diversity requires avoiding promoting strong standards that some will embrace and others will see as infringements upon their liberty.

This is bullshit. It’s like celebrity athletes saying they want to be in the limelight but don’t want to be considered role models. The very moment tech companies create objects that distribute power by influencing large-scale behavior, they should take responsibility for what they’re unleashing upon the world.

Responsibility has many dimensions. But as far as Hartzog is concerned — and the “values in design” literature supports this contention — the three key ideals that tech companies should be prioritizing are: promoting genuine trust (through greater transparency and less manipulation), respecting obscurity (the ability for people to be more selective when sharing personal information in public and semipublic spaces), and treating dignity as sacrosanct (by fostering genuine autonomy and not treating illusions of user control as the real deal). At the very least, embracing these goals means that companies will have to come up with better answers to two fundamental questions: What signals do their design choices send to users about how their products should be perceived and used? What socially significant consequences follow from their design choices lowering transaction costs and making it easier or harder to do things, such as communicate and be observed?


Ethics Education at Work

It might sound like a cliché, but it’s impossible to talk about comprehensive ethical reform in the tech industry without discussing education.

To some extent, it’s understandable that tech companies have ended up in positions where they’re seen as ethically deficient and deserving of slots reserved for the greedy in Dante’s fourth circle of hell. For starters, disasters that create resentment and fear are more easily remembered than routine exhibitions of everyday virtue, which largely go unreported and thus unnoticed. Similarly, all the good that scientists, engineers, and managers do at tech companies — including fighting for civil rights and privacy — readily gets overshadowed by the negative stereotypes fostered by their more myopic and less scrupulous colleagues.

Nevertheless, tech companies need to do a better job of infusing ethical instruction into their corporate training. The problems are so nuanced, complicated, and amenable to “death by a thousand cuts” dynamics that silver-bullet solutions — like hiring chief ethics officers — won’t accomplish much.

Irina Raicu, director of the Internet Ethics Program at Santa Clara University’s Markkula Center for Applied Ethics, perfectly captures what the main goal of this practice should be. “Such training would not inoculate technologists against making unethical decisions — nothing can do that, and in some situations, we may well reach no consensus on what the ethical action is,” Raicu argues. “Such training, however, would prepare them to make more thoughtful decisions when confronted, say, with ethical dilemmas that involve conflicts between competing goods.”

One of Raicu’s key insights is that tech companies can’t infuse robust ethical sensitivities and help employees cultivate mature ethical judgment with sporadic activities. Since “ethical decision-making is like a muscle that needs to be exercised lest it atrophy,” it needs to be integrated into the fabric of day-to-day activities. In other words, ethics here and there is far too limited a commitment — much in the same way that religious practitioners question the commitment of folks who attend houses of worship only for popular festivals.

After the emotional contagion scandal, Facebook gave the public a window into its research review process. As it turns out, the company reached out to me for input while drafting the document. While I found the authors to be quite receptive to critical dialog, questions have been raised about how much the document really reveals. But that’s the thing. There are hard questions to ask about every tech company’s ethics policy — about what they make public as well as what they treat as corporate secrets.

And this brings us back to an issue I raised earlier on. Since tech companies serve many masters, there often are structural tensions between professional ethics, personal ethics, and social ethics. Privacy professionals, for example, don’t even have their own distinctive code of ethics. Many privacy professionals are lawyers, and it’s a matter of great debate whether adhering to their responsibilities as lawyers would conflict with swearing allegiance to a new set of ideals that place greater emphasis on advocating for the public good.

So, even if more tech employees had hearts of gold and the resolve of saints, they wouldn’t be able to accomplish much unless they were empowered to challenge and change the norms that normalize the banality of institutional evil.


Ethics Education at School

When data scientists and engineers walk in the door on the very first day of their very first adult jobs, they enter the workplace with their own views about what it means to be a professional and what professionalism requires of them. To prepare for this moment, universities can’t pass the buck and delegate the responsibility for ethics training to corporations. They need to do their fair share of ethics education, which, at a minimum, means placing a high value on imparting codes of conduct, case studies, moral reasoning skills, policy briefs, and project-based assignments that highlight ethically salient issues.

And design ethics! For all the reasons Hartzog emphasizes, there’s got to be lots of design ethics. Students who are creating the online, virtual reality, augmented reality, AI-driven, and machine learning environments in which we’re all going to think, learn, work, and socialize need to have a clear sense of the immense power they wield. Experience designers aren’t just responsible for making intuitive products; they also bear some of the burden for shaping what gets experienced.

The thing is, it’s not enough to throw more resources into ethics classes and related, ongoing endeavors, like integrating ethics across the curriculum. To be sure, more of these activities would help, especially if they involve philosophers. Admittedly, as a philosophy professor, I’m biased. I believe that conceptual tools, like carefully crafted and smartly interpreted thought experiments, can help people think in novel and creative ways, much like the most challenging fiction, film, and fine art do. There’s much more mileage to extract from applying the trolley problem to self-driving car policy, for example, even though critics rightly point out that the thought experiment can divert attention from pressing problems and other valuable insights if it’s interpreted too reductively.

When I talked about the value of philosophy with Robin Zebrowski, chair of the Cognitive Science Program at Beloit College and an affiliate of its Philosophy, Psychology, and Computer Science Departments, she took the argument further. Philosophers shouldn’t just be taken more seriously in academic settings, Zebrowski argued, they should also get a more prominent seat at the corporate table.

“Governments have begun to recognize the work philosophers do around critically examining algorithms, autonomous vehicles, drone warfare, artificial intelligence, and even social problems that technology companies are trying to solve in their own ways,” Zebrowski said. “Philosophers are invited to consult with the United Nations; the United Nations Educational, Scientific, and Cultural Organization; the Department of Defense — all because of the unique expertise philosophers have in relation to these questions. So why aren’t technology companies hiring them and paying them for this specialized knowledge that promises to offer a competitive edge in a cutthroat industry?”

Special pleading for philosophy aside, the reason universities shouldn’t be content with merely improving ethics curricula is that it’s hard for them to be good ethical role models in the information age. Unless they’re introspective and committed to good governance, they’ll create an alarming disconnect between what they preach and what they practice.

Simply put, universities need to reflect and act, continually yet carefully, on the dangers of infusing business intelligence into their own technological practices. After all, universities are embracing big data–fueled surveillance systems and predictive analytics to improve how they pursue highly prioritized goals: recruiting and admitting students; networking with successful alumni; improving retention; and helping students study better, learn more, and select the right classes at the right time.

While these are all laudable objectives, abstractly speaking, the devil, as always, is in the details. Universities have the digital tools for putting students in lockstep with all the customers and employees who are imprisoned in the “iron cage of the quantified self regime that aims to track all of our data to maximize and optimize all of our behavior.”

Mitch Daniels, president of Purdue University, recently cautioned that if universities don’t properly administer their technological systems, their profiling and nudging could end up doing serious harm to the very students they are charged with protecting. Since schools are acquiring massive amounts of personal data that aggregate to form rich portraits of student habits, they have the potential to comply with all state and federal education laws while still creating excessively controlling environments — possibly even string-pulling ones, like China’s all-encompassing, government-run social credit system. “Many of us,” Daniels writes, “will have to stop and ask whether our good intentions are carrying us past boundaries where privacy and individual autonomy should still prevail.”

In other words, if universities want to educate tomorrow’s leaders while also leading by example, they need to become more committed to the ideal of “information justice.” Jeffrey Alan Johnson, director of institutional effectiveness, planning, and accreditation support at Utah Valley University, has long been making the case that, among other things, information justice requires that college students be given fair processes to challenge and change personal information that a school believes is useful, but which, in reality, is inaccurate or inappropriate for them to possess or act upon. Johnson also contends that universities should be doing a better job of “tapping into the expertise of their faculty members who deal with technology ethics and basic social science and research methods who can talk about data as a social construct.”

I ran these ideas past Pasquale, and he found them all worthwhile to pursue. “My main problem is with efforts to substitute ethical reflection for legal obligations,” he said.

“In a sound legal system,” he continued, “Facebook would be facing massive fines for repeatedly violating the trust of its users and would be subject to prudential regulation (in the same way many banks are) to assure it keeps its promises in the future. But there are many areas where our sense of right and wrong is emerging or where duties are more moral in form than legal. That’s where ethical education is essential — to cultivate judgment and articulacy about values in realms where the obsession with the algorithmic leads to binary thinking (whatever is legal is fine to do) or hacker ‘ethics’ (rules are made to be broken).”

Tech companies will have to take ethics much more seriously than they currently do if they want to be genuinely trustworthy. The public and politicians alike are fed up with repetitive mea culpa sound bites that are scripted to resonate as contrite but lack the substance of committed leadership.

The reckoning seems to be finally here, and so promises of reform that appear bold but only tinker at the edges of institutional myopia, greed, and arrogance will be taken for what they are: lies that can no longer be tolerated.