How do you solve a problem like technology? A systems approach to digital regulation
Digital technologies deserve democratic standards and accountability. At Doteveryone, we’ve been thinking about whether digital regulation is possible or desirable — both in the context of the coming UK Digital Charter, and the global status quo. This is the first of a series of posts exploring our work-in-progress thinking.
The technology industry has come of age. Even ten years ago, the web and digital technologies were seen as youthful “disruptors” — naughty children that could challenge legal and social norms if a big market valuation was on the cards. But those days are over.
For most of us in the West, digital technologies are approaching ubiquity, and their social, cultural, and economic consequences are changing how we think, how we live, our sense of self, and the nature of citizenship. As this has happened, global technology platforms have not only entered the space between the market and the state, they have also changed the nature of the state. This change in the civic landscape isn’t something most people have knowingly chosen, and it’s not something most of us can vote for or rebel against, so society needs to start thinking differently about ways to empower itself, and what useful corporate and technological accountability really looks like.
Tech commentator Scott Galloway makes the case for how the cost-benefit analysis stacks up in the US:
The US benefits, hugely, from the Four. [Google, Apple, Facebook, Amazon] There are very real costs worth discussing … [but] There isn’t a nation that would not endure the navel-gazing to infect their countries with the innovation, wealth creation, and competition the Four bring.
He puts European resistance to GAFA down to a reduced share of the bounty. But outside the US, where business and politics are less closely coupled, that is not the only source of discontent. Capitalism isn’t the same as democracy. Civil liberties and democratic freedoms can’t be casually disrupted in the same way as juicers or hotel rooms, and the nature of citizenship shouldn’t be changed when we update an operating system. And even after Brexit, short-term economic returns won’t have the same value as our civil rights.
In Brussels, Margrethe Vestager continues to seek real accountability, and has issued the most meaningful fine yet to Google. In London, Uber have (for the time being) lost their licence to operate, not because they’re a tech platform, but because they won’t do safety compliance. Whatever you think about Uber’s general approach to risk, getting passengers safely from A to B is the one thing a taxi company can’t disrupt.
Government and traditional regulatory bodies will of course play an important role in creating better accountability, but they can’t do it on their own: we are also dependent on civil society, users of technology, and technology workers to play a part too.
Why do we need accountability?
Justice, voting, freedom of expression, national defence, hate crimes, surveillance and basic service provision are areas that are usually regulated, in the broadest sense, by some combination of the state and civil society, working together.
In 2017, they have all become increasingly influenced by corporate technology. Technology’s impact is no longer restricted to the “science and technology” or “digital and media” domains; a conversation about regulation is no longer just about an open or a closed Internet, but about shared principles that ensure technology can act in the service of humanity.
The following is a random sample of ways market-driven technology has affected democratic society in the last month or so; many of them fall outside of existing regulatory frameworks:
- Facial recognition is working its way through the hype cycle. Use of facial recognition by the Met Police at Notting Hill Carnival resulted in 35 false-positive identifications; Stanford researchers claimed an algorithm could “guess” people’s sexual orientation (and make assumptions about biological determinism) from facial-recognition data; and Apple is gamifying facial recognition with a new kind of emoji and building it into the security settings for the iPhone X, sparking speculation that the police, abusive domestic partners, and criminals could easily unlock your phone against your will. (The Economist has a good summary of the moral and ethical considerations around facial recognition.)
- In August, 116 AI experts co-signed a letter to the UN seeking to ban Lethal Autonomous Weapons, but Britain has already opted out of supporting the ban.
- Ad sales and ad targeting continue to act outside of the rules of both the law and common decency: Facebook has been selling ads to Russian democracy hackers; Google and Facebook have allowed hate speech and anti-semitism as an “interest” for ad targeting; and a rape threat has been repurposed as a Facebook “popular post”.
- There has been the first death in a Tesla self-driving car.
- The entanglement of the free market and free speech continues in both the UK and the US. Matthew Prince, CEO of Cloudflare, which stopped hosting a white supremacist group’s website in the wake of the Charlottesville terror attacks, eloquently explained the double-bind of corporate responsibility in an open letter, concluding: “Without a clear framework as a guide for content regulation, a small number of companies will largely determine what can and cannot be online.”
- Meanwhile, in the US, the EFF has condemned this kind of ad hoc censorship by service providers, while the UK government is pressing for zero tolerance from Facebook and Google on extremist content. In summary, tech companies are looking to governments while governments are looking to the tech companies to sort this out — so it’s a fair guess that no one knows what to do next.
This list could go on. And it shows that, as the UK government starts to think about data ethics and a Digital Charter, technology accountability isn’t something that can be solved with a single regulatory body; it requires a systemic approach to change.
A systems approach to change
The idea of a single regulatory technology body misunderstands how people and technology work — it assumes there is a static “thing” or a series of static “things” (such as end-to-end encryption) that can be named, stopped and solved. It assumes we can predict how change will happen, and that nothing is connected or ambiguous. It is also authoritarian, and eminently breakable: it’s a challenge to technology and technologists to take another form and try again.
A systems approach is more flexible and less risky. It might be a difficult political sell (it’s not pithy, like “strong and stable”), but it’s more likely to work in the long term because it allows different elements of the system to emerge at different times and move at different speeds: it recognises that power doesn’t just flow one way; parts of the system can evolve to compensate for weakness elsewhere; and it allows for learning. And a systems approach to digital accountability should be powered by humanity and human values, not just economic or technical ambition.
The system sketched out above encompasses the state, civil society, and business working in the service of shared global values. This might seem ambitious and optimistic, but it becomes less so if those shared values are imperatives: a code of conduct like the Geneva Convention doesn’t assume everyone is united in the service of doing good, and the Hippocratic Oath doesn’t exist to banish risk. Instead they create a universal, actionable basis of acceptability. (At Doteveryone, we’re going to bring together civil society with the technology community to understand what this would mean for the UK. Email email@example.com if you’d like to be involved.)
Most importantly, the system does not have to be perfect to begin to act. Society doesn’t have to wait for a digitally transformed government or expect technology companies to be unionised to create useful change. Instead, we can act now: government and technology companies can invest in public understanding; regulators can start to get to grips with the breadth of the challenge ahead; communities of interest can start to organise; consumers can exercise choice.
The components we’ve identified so far are ambitious when taken together, but they are made of many small beginnings:
- Global shared values, such as a Geneva Convention
- A digitally capable government, passing actionable, joined-up legislation
- Modernised regulators
- Accessible ombudspeople and consumer groups that people can easily find and use to seek effective redress
- Campaigns and activism to raise awareness and give a voice to society
- Public understanding, for greater individual protection and empowerment
- Workers, shareholders, and investors sharing individual and collective responsibility
- Consumer choice as a driver of better brand values and corporate responsibility (or, the role of shame in ethical business practices)
Digital understanding is a vital corrective and enabler: after all, knowing how to use a device isn’t the same as knowing how it affects your democratic rights. Doteveryone is experimenting with ways to bring this to life, such as “Public Digital Health” campaigns, but we also know that understanding is not the same as doing.
We’re also interested in exploring the power of consumer choice. At a time when PR giant Bell Pottinger has “put itself up for sale”, corporate responsibility is no longer a side game, it’s a building block for a trusted brand. Doteveryone is exploring how a trust mark (like fair trade for technology) can combine with understanding to raise consumer awareness, and whether it can influence the dominant market patterns.
And for the industry, it is easy to defer responsibility to others: to CEOs, shareholders and boards. But every investor, designer, engineer, product manager and UAT tester makes choices every day. Those individual choices are not entirely dictated by a business plan or a product road map: they are formed by our values and experience too. Better decisions every day will change the way products and services are created, shipped and marketed. There’s an opportunity for everyone to take part. This autumn, we’re starting to build a community of practice around our Trustworthy Tech mark, and will be publishing our findings. (Again, get in touch if that sounds interesting.)
We’ll explore what each component of this system could do in more depth in subsequent posts, but the point of sketching out the system is to show that we don’t have to wait for everything to be perfect or wait for any magic bullet to make a start. There is enormous possibility in the digital world; society needs the opportunity to make the most of it.