The hidden costs of technologists’ addiction to their own products and what can be done about it

Danil Mikhailov
Published in Wellcome Data
Nov 15, 2019 · 8 min read

Following scandals like Facebook’s links with Cambridge Analytica, it is simply no longer legitimate for the makers of technology to think that worrying about negative consequences of their product or tool is not their job. Or that the best strategy is to fix issues after the product is launched. To change behaviour, we need to confront the role addiction to technology plays in the decision making of the engineers and designers themselves.

The world of technology is about to undergo another revolution. The famous Facebook motto “move fast and break things” will be challenged. When two players in the industry as different as Brad Smith, President of Microsoft, and Catherine Miller, Director of Policy at DotEveryone, make effectively the same argument, you know something is afoot.

The reason for the change is the growing awareness across the rest of society that some of the biggest tech companies, in a hyper-competitive race to dominate the market, have been releasing products they have not sufficiently tested for possible negative consequences. We now know that tools, apps and games can be so addictive that by the time anyone realises there is an unintended negative effect on the user, the damage has already been done and millions of users have been affected.

A case in point is Snapchat's Snapstreak feature, launched in 2016. The app displays a flame emoji next to a friend's profile to indicate that you have "snapped" that friend in the last 24 hours, along with a number showing how many days the streak of daily exchanges has lasted. Because a streak requires collaboration from both parties, failure by either one to respond within the time limit breaks the streak for both. The engineers and designers in the Snapchat product team came up with Snapstreak to increase engagement in the app, and they were brilliantly successful in meeting that narrow goal. What they did not anticipate was the stress children were put under trying to keep their streaks going. Soon after Snapstreak was released, news reports started to circulate of bullying and confrontation among schoolchildren over long streaks being broken.

[Image: an artistic impression of a human plugged into the internet. Credit: Wellcome]

It’s not just Snapchat

Research across multiple different platforms suggests that people are often locked in to engaging with technology platforms through social pressure. They feel the need to respond to their social circle’s posts, even when they are being cyber-bullied, in order to maintain their status within their social group.

The tech industry has been too slow to acknowledge these problems. Leaders of the web 2.0 wave of technology development like Sean Parker and Chamath Palihapitiya, both former Facebook senior executives, have only recently, and belatedly, admitted the addictiveness and negative social impacts of the products they helped develop. The penny is starting to drop as the tech bros of the Noughties grow up, have their own families and ask themselves some tough questions about what effect their technology could have on their own children.

Our industry's behaviour is partly a response to an external environment of high-stakes competition: to get your app, tool or product to be the "one" that users pick, to generate the advertising revenue on which many apps and software tools rely, to secure the next round of funding and, eventually, to achieve platform dominance. It is an environment that, until recent moves first in Europe and now possibly in the United States, has been lacking in clear rules, robust regulation and meaningful penalties. However, I would argue that there is an additional, hidden part of the problem…

Creators of the code are just as addicted to their creation

Let me unpack this a bit, and show how addiction can impair our judgement of what product can and should be released.

How "real-time" the information exchange is when using a technology, and how useful the interaction is in the short term, exert pressure on the user to act in a way that does not take account of potentially negative future consequences. The sociologist Langdon Winner calls this effect technological somnambulism: users are sleepwalking into unintended consequences through their compulsive use of technology. It is one way to explain the addictiveness of the leading tech platforms like Snapchat, Facebook, Twitter, Uber and Amazon, all of which aim for an immediate and easy satisfaction of our needs and desires.

Having led a number of tech teams over the years, I’ve noticed the same sleep-walker effect among engineers and designers. Just as the user-facing interface of the technology creates a pressure on the user for immediate action, to download, click, like, buy, so there is a mirror image pressure on the engineer to commit the code, release the feature, make the change, sometimes without considering the consequences. After all, they too are staring at an interface optimised to trigger action: GitHub for code commits, Slack for information exchange, Trello for planning Agile sprints. And they too get addicted.

The technological somnambulism effect is reinforced by the social pressure exerted on engineers by the network of friends and colleagues they are trying to impress and compete with. Indeed, within the tech world a key contributing cause is the socio-economic system engineers inhabit. The first and foremost culprits are the venture capital firms that provide promising start-ups with seed capital and then use a starvation diet of subsequent funding rounds as a lever of control, prompting an obsession with getting to scale as soon as possible.

One final, important point needs to be made. I use the word "addiction" purposefully, to underline that, as with other types of addiction, the individual affected is not fully in control or able to resist. We are beginning to accept that this is happening to the users of tech. But expert knowledge about technology does not make its creators immune to the same effects. Because of this, simply blaming technologists for making bad moral choices, taking the pay packet and not caring about the longer-term effects of their products, is too simplistic. As Catherine Miller argues, technologists are themselves recognising the negative long-term effects of what they are creating and want to change. Decision makers, founders and leaders of tech teams just need to find a way to help them do this.

Step one…

How are other types of addictive behaviours approached? The principles often revolve around getting expert help, creating spaces where the individual does not have the means to carry out their addictive behaviour (think rehab) and encouraging acknowledgement that there is a problem in the first place (“my name is Danil and I am a tech addict”).

In my team, Wellcome Data Labs, we introduced all three approaches without even realising we were following this well-trodden path of tackling addiction, until we looked back on what we did.

Our equivalent of talking to an expert was introducing two social scientists into our tech product teams, who could observe, give an alternative point of view and facilitate discussions.

We created a space free of the triggers of addiction by introducing breathing spaces in our agile cycles, which we called "firebreaks": for one week each quarter, all the product teams in the building (including those from our sister Digital team) get together to work on projects outside their products. We also enforced a "no out-of-hours working" policy to clamp down on an increasing pattern of team members staying late, glued to their screens.

Finally, our equivalent of acknowledging the problem was to work together as a team to agree a set of principles and a framework for applying them. We introduced monthly ethical reviews to create opportunities for the longer-term implications of product development to be discussed jointly by the technologists and their social scientist colleagues. We also put together a wider Tech Ethics Advisory Group to get input from people with a broader range of experiences and skill sets: scientists, lawyers, comms professionals, and experts in diversity and inclusion, policy and more.

Overall, this is beginning to make a positive difference to our team, but it has certainly not been all plain sailing. Here are a few things we learned.

1. Creating truly interdisciplinary teams is hard

An early positive outcome of introducing social science thinking was that the engineers and data scientists started to expose and discuss some of the assumptions they were making. For example, should we optimise this algorithm to avoid false negatives or false positives? How risky is this potential outcome? That kind of transparency proved very positive, not only in identifying some potential unintended consequences of our products and thinking about how to avoid them, but also in generating buy-in for what our team was doing from the rest of the non-tech organisation. At the moment such conversations mostly happen within specific meetings, like ethical reviews. Our next objective is to make this thinking the norm in the day-to-day workings of the team.
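To make that false negative versus false positive question concrete, here is a minimal, hypothetical Python sketch. It is not code from any of our products: the scores, labels and threshold values are invented purely to show how moving a classifier's decision threshold shifts errors from one type to the other.

```python
# Hypothetical illustration: how a decision threshold trades
# false positives against false negatives in a binary classifier.
# The scores and labels below are invented for the example.

scores = [0.95, 0.80, 0.70, 0.55, 0.40, 0.30, 0.20, 0.10]  # model confidence
labels = [1,    1,    0,    1,    0,    1,    0,    0]      # ground truth

def error_counts(threshold):
    """Count false positives and false negatives at a given threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

for threshold in (0.25, 0.50, 0.75):
    fp, fn = error_counts(threshold)
    print(f"threshold={threshold:.2f}  false positives={fp}  false negatives={fn}")

# A lower threshold flags more items, cutting false negatives at the cost of
# more false positives; a higher threshold does the reverse. Which error
# matters more is a judgement about the product's context and the people it
# affects, not just a technical tuning decision.
```

The value of having social scientists in the room is precisely in that last comment: deciding which kind of error is more acceptable is a question about impact on people, not only about model performance.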

It should be said that a lot of effort and energy had to go in from all members of the team — both the engineers and the social scientists — to create enough of a shared language and approach to make this possible. This process is far from complete and requires constant attention and reinforcement.

2. Balance productivity and analysis

Of the issues we encountered, the one raised most often by engineers was the fear that it takes longer to get a good product out while considering unintended negative consequences. This can have the reverse effect to the one we intend: in a hyper-competitive environment, delaying release or reducing functionality because something might go wrong can mean that another, less responsible app, tool or product taking ethical short-cuts wins the race, and therefore more rather than fewer people end up being negatively impacted.

This is a fair and reasonable concern. Often, the modus operandi in big corporations that have started to wake up to the need for a more responsible approach is to carry out ethical reviews, but as a separate exercise from the actual product development. While one team develops a nifty new use for a product, a completely separate group of people audits that use. Precisely because the reviewers are external to the product team, they will find it difficult to time their intervention effectively. Because this external review typically happens after the new product feature has been substantially completed, it can lead to painful redesign, significant delays and wasted time and effort.

We sought to address this by embedding one of the social scientists directly into the product team to help minimise the disruption and delay. Our embedded, paired review approach makes it easier to time the interventions well and get the balance right between productivity and analysis of unintended consequences, but it is certainly not a solved problem. Some impact on the speed of deploying products is inevitable as more ceremonies, like ethical reviews, are introduced, even when they are integrated into the product life cycle.

In the end, the calculation of what level of delay is reasonable has to be context-specific: what is the risk profile of the product (how many people is it likely to impact, how sensitive is its domain, how transparent is its decision making), and therefore how much time do we need to spend up front on understanding potential unintended consequences so we can adjust for them?

3. Keep going!

We are still learning, but even from what we know so far I can say with some certainty that it is possible to change the behaviour of tech teams and help them spend more time thinking about the longer term, although it does not come cost-free.

Our trial with embedding social science thinking has been successful enough for us to have decided to continue it for another year in 2020, so keep following our posts to learn more about how it progresses.

Danil Mikhailov is an anthropologist and technologist, Executive Director of data.org and a trustee at 360Giving. Formerly Head of Wellcome Data Labs, he champions ethical tech and data science for social impact.