How Do We Solve AdTech’s Democracy Problem?

Start by valuing quality of privacy design over quantity of eyeballs in product development

Lauren Kaufman
Jul 19, 2020 · 6 min read
The growing list of brands boycotting Facebook over its failure to remove misinformation and hate speech from the platform. LUMAscape courtesy of Terence Kawaja.

By now, news outlets and tech pundits alike have weighed in on the expected impact of #StopHateForProfit. The campaign, launched in June by a coalition of nonprofits including Color of Change, the NAACP, the ADL, Sleeping Giants, Free Press, and Common Sense Media, encourages advertisers to halt spending on Facebook until the social media giant addresses key problems with its community standards and algorithms around hate speech and misinformation. Facebook has historically been reluctant to enact any policies seen as limiting freedom of speech, citing democratic values, though past pressure has slowly led to incremental improvements in content moderation and advertising rules.

While the list of brands pledging to boycott continues to grow, the direct harm to Facebook’s bottom line and audience reach remains low. Despite the strain on PR and sales teams scrambling to contain the fallout, Politico’s senior media writer Jack Shafer argues the damage to Facebook will be minimal:

“The small and medium businesses that feed Facebook with advertising aren’t about to join the boycott. They can’t afford to because Facebook makes them so much money. And neither are users. About 70 percent of American adults use Facebook. The service has become so integral to American lives that asking people to abandon it for a month would be like asking them to give up their phone or refrigerator.”

Shafer may just be right. As of this writing, Facebook’s stock price is at an all-time high.

Facebook is not alone in wrestling with what democratic values mean in the context of Internet media. Twitter, Reddit, YouTube, and other popular platforms have all had their fair share of skirmishes in the media over free speech dating back years. However, buttressed by the protections afforded by Section 230, these platforms were also often praised for their defense of free speech. That was before Russia used their platforms to influence American elections. Before President Donald J. Trump weaponized Twitter to rattle off racist, sexist, and false missives to rile up his base. Before Myanmar’s military used Facebook to incite genocide. Yes, we can also point to the positive impact social media has on collective organizing, building communities, and supporting local businesses or artists. Yet, in embracing the good outcomes, must we accept the bad?

How we got here

I’m not the first to suggest social media companies and other AdTech players have a democracy problem. Tech journalist Kara Swisher and American scholar Shoshana Zuboff were early critics of how these companies’ business models incentivize amplifying the most outrageous, most divisive content in order to gather users’ online behavioral data at scale — the “scale” part being critical to success in driving the dual commodity model wherein consumers use a company’s application in exchange for their data being sold to advertisers at a tidy margin.

What critics discuss less frequently is the business-to-business (B2B) technology services industry built around online publishers. Many, though not all, of these companies build products deliberately designed to reinforce the founding value of the AdTech incumbents: quantity over quality. In emphasizing the quantity of data collected, this industry often walks a fine line when it comes to making responsible decisions about user privacy and surveillance.

In turn, the absence of data-ethics design (as opposed to compliance-oriented safeguards) as an integral step in product development has led to unintended threats to democracy. Cultural norms around privacy in a given society often reflect that society’s belief in a right to non-discrimination. The privacy erosion of the digital age, coupled with a monetary incentive to capitalize on our psychological instinct to engage with content that arouses strong emotions, and with lax enforcement of user policies, directly contributes to products that become havens for those who stir hate and peddle misinformation.

Luckily, we can reverse some of the damage. We can do so not just through brand boycotts, which are crucial to forcing policy change, but also through changes in how the AdTech industry builds products with privacy in mind.

How we improve moving forward

In a consumer-oriented sense, there are two major mechanisms by which product design could evolve at scale to combat hate speech, data surveillance, and the spread of misinformation: Change the business model or design humanely.

Several tech executives have recognized the consumer (and shareholder) value in changing the business model for content and application platforms. For example, Twitter recently indicated via a job posting that it is developing its first subscription-based product, likely due to financial pressures. Another example comes from former Google Ads executive Sridhar Ramaswamy, who left the company to build Neeva, a subscription-based search engine.

This movement from building products that are “free” in exchange for data to operating on a “freemium” or pure subscription model indicates a cultural appetite for high-quality products without privacy infringement. Subscription revenue provides greater quarter-over-quarter stability and shifts product-development incentives toward achieving scale by building great products for consumers rather than products that capture the maximum amount of data. This in turn encourages “humane” product design.

The humane technology movement was born of a 2013 presentation by former Google design ethicist Tristan Harris illustrating how damaging the effects of monetized attention are to social interaction. The presentation went viral internally but spurred little action. In 2018, Harris, troubled by the ease with which Russia had meddled in the 2016 US general election using the tools his industry had built, began evangelizing his principles for human-centered design via the Center for Humane Technology. The Center, which focuses on changing how “technologists think about their work and how they build products,” has developed a design guide for building products that provide value to consumers rather than exploit human nature to extract data.

This design guide, if adopted by product managers and UX designers as a standard for product development, could meaningfully reduce hate speech. Humane design and other proposals, such as former Google lawyer and US deputy Chief Technology Officer Nicole Wong’s suggestion that we impose a “slow food movement” for the Internet, offer potential solutions for consumer-facing applications and services. However, given the vast ecosystem of B2B companies integrating with and powering much of the offending platforms’ business models, these B2C fixes address only part of the puzzle.

Relevant B2B software companies, often classified as “middleware,” must also fix poor privacy-design hygiene in product development in order to break the outrage capitalization that AdTech business models encourage. Responsible B2B product teams follow Privacy by Design principles: privacy is treated as the default setting and embedded into systems from the first design review, not bolted on for compliance. Some teams even include product design ethicists, employed to integrate ethics-based decision-making into the product development process, as independent reviewers of new product proposals. If we wish to encourage broader use of these practices, all that’s needed is for clients to require them as part of any vendor search.
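To make that concrete, here is a minimal sketch (in Python) of what “privacy as the default” might look like at the data-ingestion layer of a hypothetical B2B analytics product. Everything in it is illustrative rather than any real vendor’s API: the field names, the retention window, and the `ingest_event` helper are assumptions. The idea is that events are stripped to an allowlist of fields, raw identifiers are pseudonymized with a keyed hash, and every record carries an expiry date.

```python
import hashlib
import hmac
from datetime import datetime, timedelta, timezone

# Hypothetical allowlist: collect only fields with a stated product purpose
# (data minimization). Everything else is dropped at ingestion.
ALLOWED_FIELDS = {"page_url", "event_type", "locale"}

# Default retention period; shorter windows reduce surveillance risk.
RETENTION = timedelta(days=30)

# In a real system this secret would live in a key-management service and
# be rotated; it is hard-coded here only to keep the sketch self-contained.
PSEUDONYM_KEY = b"example-rotating-secret"


def pseudonymize(user_id: str) -> str:
    """Replace a raw identifier with a keyed hash (pseudonymization).

    Note this is not anonymization: anyone holding the key can re-link
    known IDs, so the key itself must be protected and rotated.
    """
    return hmac.new(PSEUDONYM_KEY, user_id.encode(), hashlib.sha256).hexdigest()


def ingest_event(raw_event: dict) -> dict:
    """Apply privacy-by-default rules before an event is ever stored."""
    event = {k: v for k, v in raw_event.items() if k in ALLOWED_FIELDS}
    if "user_id" in raw_event:
        event["user_pseudonym"] = pseudonymize(raw_event["user_id"])
    event["expires_at"] = (datetime.now(timezone.utc) + RETENTION).isoformat()
    return event


if __name__ == "__main__":
    # Fields outside the allowlist (e.g., precise location) never persist.
    print(ingest_event({
        "user_id": "u-12345",
        "page_url": "https://example.com/article",
        "event_type": "page_view",
        "precise_location": "40.7128,-74.0060",  # dropped at ingestion
    }))
```

The specific mechanism matters less than the principle: minimization, pseudonymization, and retention limits are enforced in code before data is stored, rather than left to a compliance checklist after the fact.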

These economic pressures carry huge weight in the world of B2B product development, and they will also bring about change, albeit potentially more slowly than we’d like, to consumer-facing platforms. Pronouncements that the Facebook ad boycott has very little impact are missing something: people vote with their feet and, online, with their eyeballs. If safer alternatives emerge and incumbent product teams fail to take swift action, eventually, we’ll stop watching.

All opinions are my own and do not reflect those of an employer or other institution with which I am connected.
