Marietje Schaake
11 min read · Nov 6, 2019

What principles not to disrupt: on AI and regulation

Speech delivered at Stanford University’s Fall Conference of the Institute for Human-Centered AI, Regulating Big Tech, October 28, 2019

(The video of this speech can be found here)

Last week I participated in an Intelligence Squared debate on the proposition: ‘Europe has declared war on American tech companies’. I am happy to report that, by the end of the night, the audience was convinced this is not the case. Had the debate been held a few days later, the proposition might have been: ‘US Congress has declared war on tech companies’. In that light, merely regulating big tech seems like an olive branch.

Clearly, in Europe and the United States, questions of governance, of how to safeguard the rule of law, the public interest, and the protection of individual rights amidst technological change and geopolitical shifts, are at the top of the agenda. The question is: how to implement?

(Photo: Eric Schmidt and Marietje Schaake during Q&A)

Starting with principles we protect for good reason is a more productive approach than suggesting that technologies are so exceptional they can only be regulated by entirely new systems or models. Firstly, we don’t have time to build a global governance system from scratch, especially in these polarized times; more importantly, there is too much of value in our human rights frameworks and other fundamental principles to simply discard them.

An often-used argument for why governments should refrain from regulating technology or the internet is that regulation would ‘stifle innovation’. But this zero-sum dichotomy is a caricature.

In fact, that argument implies innovation is more important than democracy or the rule of law, the very foundations of our quality of life. I believe some of the most serious challenges to open societies and the open internet today stem not from over-regulating but from under-regulating technologies.

Now, the idea that technology companies are categorically against regulation is paradoxical, because they have directly and significantly benefitted from regulation, such as Section 230’s intermediary liability exemptions. And companies themselves are increasingly governing very impactful parts of our economies, societies, and democracies. Terms of use are a stronger indicator than legal articles of what content hundreds of millions of people experience online.

Google processes 63,000 searches a second; Verizon and Mastercard verify your identity and payments online; Uber knows our every move; Microsoft builds the Defense Department’s cloud; and Facebook decides who can and cannot be trusted as a news source.

That is a lot of power in the hands of a few actors. Not only is it nearly impossible for newcomers to catch up in terms of data volumes; private companies are also increasingly taking over crucial parts of the role of governments, but without an explicit mandate, without democratic legitimacy, and without accountability proportionate to the powers they assume.

Principally, we need a deeper debate about which tasks need to stay in the hands of government and out of the hands of the market: currency, defensive and offensive capabilities, critical infrastructure, personal data, identity, including our genes?

When the internet was designed and shared, many hoped and hinted that access as such would harbor and spread democracy; others thought the internet would be technically ungovernable. Let’s look at what we learned from the promise of the open internet, and where we are in practice. Remember the famous words of John Perry Barlow in his A Declaration of the Independence of Cyberspace? “We are forming our own Social Contract. This governance will arise according to the conditions of our world, not yours. Our world is different. We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity. Your legal concepts of property, expression, identity, movement, and context do not apply to us.”

I often recall Barlow’s words as a reality check when I hear evangelists of artificial intelligence (or blockchain) suggesting there is no time to lose, or that in a G2 world the ‘race to AI dominance’ will determine geopolitical relations for decades to come. On the significance of AI, I do not disagree, but the question is not merely who dominates, but on the basis of which values and principles. A race for AI power must not become an excuse for a race to the bottom, where innovation, efficiency, or competition trump safeguarding the public interest, fair competition, human rights, and democratic principles.

If AI benefits disproportionately from an undemocratic and centrally governed model such as the one we see in China, where data can be massively hoovered up without much restriction and where human rights are not respected, and if AI will in turn make that undemocratic government more powerful, why do we have such high expectations of what AI will bring us, especially without rules, checks, and balances? And if AI is not inherently an accelerator of top-down control, we need to look at governance and regulation even more ambitiously. If we want to preserve democracy, we need to democratize the way we use and govern tech itself.

It is ironic that the same companies warning against the dominance of Chinese standards are in fact sending data to Beijing themselves. Zuckerberg warned lawmakers on Capitol Hill about China as the alternative to Facebook’s Libra, which is interesting as such, but also because the company has data-sharing partnerships with four Chinese companies, including Huawei.

You cannot imagine how often I have heard from tech lobbyists: ‘Do not regulate us, because otherwise China will use our laws as legitimation for theirs’. We can safely conclude that argument has not led to successful outcomes for democracy so far. Democracies’ inaction on regulation has not stopped Chinese leaders from instrumentalizing tech, mirroring communist values and political models. In fact, the asymmetry in governance grows ever larger when democratic countries refrain from ensuring a values- and rules-based framework, one that creates benchmarks to preserve principles such as free expression, access to information, non-discrimination, fair competition, and the presumption of innocence, and when we do not develop a vision for our relations with developing economies and for trade around AI and data flows.

We see China using technology as an extension of its governance model, a model that is increasingly global, while the US mainly lets the technology, and thus the business models, speak for themselves. Except when it comes to national security, which always seems to be the exception where Americans do see a significant role for government. European privacy laws, by contrast, should be seen as protection from government intrusion and company behavior alike.

Since WWII, the rules-based order has been a key priority for the West, from trade to human rights, from development to the norms of war and peace. For norms to have meaning, they need to be enforceable and violators held accountable. We need more guarantees than stated good intentions. And I am not even sure such explicit intentions are still in place. Is ‘Don’t be evil’ still Google’s motto?

Led by Silicon Valley culture and success, the US went in the libertarian direction, and certainly did not seek to ensure a rules-based order online, or an internet whole and at peace.

We now know that this hands-off approach:

- Did not break monopolies, but created new ones

- Empowered not only individuals, but also companies and dictators

- Disrupted journalism and electoral processes

- Did not prevent the Balkanization of the internet

- And certainly did not nudge China into following our example

And I have not even mentioned the inequality, discrimination, job displacement, and environmental damage that artificial intelligence puts on steroids. I am glad Joy Buolamwini is here to talk about some of the discriminatory features of AI and the biases in the data used to train algorithms.

Because digitization often means privatization, it also means the outsourcing of governance to tech companies, technologies, and algorithms built for profit, efficiency, competitive advantage, time spent online, and ads sold, and certainly not designed to safeguard or strengthen democracy.

The shift to private and opaque governance through technological standards is one of the most significant consequences of AI that we need to shed more light on. Lawrence Lessig’s work Code is Law is as relevant as ever.

But the reality, a very inconvenient truth, is that the full impact of the massive use of tech platforms and AI is largely unknown. Academics, regulators, law enforcement, lawmakers, judges, journalists, and citizens alike face an information deficit compared to companies, even though that impact, the good and the bad, is public. And companies may look at the data with different lenses and goals.

Many AI engineers will admit that after endless iterations, no person knows where the head and the tail of an algorithm are. They are excited that outcomes are not predictable. But we can only know what the unintended outcomes are when we know what was intended in the first place: when there is transparency about training data, and documentation of intended outcomes and of variations of algorithms. On top of that, regulators and auditors, as well as other public servants, will need mandates and capacity for meaningful access to data and information.

Some people may believe Cambridge Analytica abused Facebook, but it simply used the platform the way it saw possible, without restrictions on data collection, microtargeting, data sharing, or political ads. The same goes for other disinformation campaigns.

In assessing all the opportunities and potential harms that AI offers, we must explicitly look at both the use and abuse, the intended and the unintended.

The Cambridge Analytica scandal shows anecdotally how huge the accountability gap is, and we see the same with each data breach or cyberattack. Too often no one faces meaningful consequences. Without transparency there is no accountability, and there is a real risk of disenfranchising citizens who see powerless public authorities in the face of powerful companies.

Now, trade secrets or other intellectual property protections cannot be a perpetual shield against meaningful access to information and oversight. It is a fairly cynical cycle: companies claim politicians do not know anything about technology and therefore propose bad laws, when in fact the most important information is carefully guarded.

If trade secrets stand between us and scrutiny, then we have to change that.

Another argument I often hear is that it is too early to regulate artificial intelligence, while many agree we were too late to regulate platform companies, micro-targeting, political ads, data protection, and privacy online. Perhaps there is never a perfect time, but I prefer that we be proactive rather than wait until we are confined to reacting to the further harms of AI.

It is popular to develop ethical frameworks or guidelines to mitigate harms. And it is hard to be against ethics. That may explain why there are now 128 frameworks for AI ethics in Europe alone. But if everything is ethics, nothing is. And the question is: who designs and oversees the ethics standards? Who decides what an ethically competent leader is, and what happens in case of a breach? In other words, how is it more than window dressing or a distraction? Consider, for example, the following principles:

- AI development should promote fairness and justice, protect the rights and interests of stakeholders, and promote equality of opportunity

- AI should promote green development and meet the requirements of environmental friendliness and resource conservation

- AI systems should continuously improve transparency, explainability, reliability, and controllability, and gradually achieve auditability, supervisability, traceability, and trustworthiness

- AI developers, users, and other interested parties should possess a strong sense of social responsibility and self-discipline, and strictly abide by laws, regulations, ethics, morals, standards, and norms

- Encourage exchanges and cooperation across disciplines, domains, regions, and borders

- Respect the natural laws of AI development; while promoting the innovative and orderly development of AI, search for and resolve risks that might arise

- AI should begin from the objective of enhancing the common well-being of humanity; it should conform to human values, ethics, and morality, promote human-machine harmony, and serve the progress of human civilization

Source: Principles for AI governance and “responsible AI,” produced by the National New Generation Artificial Intelligence Governance Expert Committee of China, June 17, 2019.

(Thanks to Lorand Laskai and Graham Webster of New America for the translation)

These ethics standards have not quite resolved the differences between the US and China. I believe we must focus on the rule of law over ethics, and on empowering the institutions we have to perform the tasks of regulating antitrust, the handling of personal data, net neutrality, media law, consumer rights, safety, technical standards, etc.

We do not regulate the internet as such, or against technology companies, but for principles.

It is unrealistic to assume trust in AI, especially after so much of it has been lost by tech companies and failed self-regulation efforts. Companies cannot have it both ways: on the one hand, big promises of micro-targeting to advertisers, and on the other, very modest expectations of machine learning in the public debate. One of the things that continues to puzzle me is how a company like YouTube or Facebook can turn over billions because of the ever more precise ways it handles information, yet comes no further than ‘we are sorry for the mistakes made, and we have a lot to learn’ about how its platform was at the heart of a series of scandals. And just because Facebook is the most visibly targeted now does not mean it is the only one.

This naiveté stands in no proportion to the power tech companies have, and with great power should come great responsibility. Or at least modesty. Some of the outcomes of pattern recognition or machine learning are reason for such serious concern that a pause is justified. Not everything that is possible has to be released into the wild as part of the ‘race for dominance’. We need to answer the question: how much risk are we willing to take?

Here too, we can take a cue from existing rules. In Europe we have the precautionary principle, which is applied to GMOs, medicine, and other innovations where the potential impact is huge and the societal risks are unclear. Recently, two years after a successfully gene-edited cow was announced in the US, it was discovered that bacterial DNA had also been added in the editing process, including genes conferring antibiotic resistance.

At the very least, there should be systematic impact assessments, and parallel learning processes in the public interest, when AI is developed.

If data cannot be anonymized, or is very easily re-identified, we should limit its use until that problem is convincingly solved. If facial recognition systems are irreconcilable with the right to privacy, then there is legitimate ground to ban their use, not only by governments but also by companies. We know how easily technologies proliferate.

The EU has adopted a few regulations, causing some to call it a super-regulator. It didn’t always feel that way when I saw how the sausage was made… But it is good to treat internet users as citizens and not as products or consumers, and the GDPR will hopefully lead to higher-quality datasets for AI, as well as data protection. Net neutrality and cybersecurity laws are steps in the right direction; I was not as happy about the Copyright Directive. And without more growth in Europe, it will be difficult to actually set standards. This is where the EU has to step up.

Meanwhile, the US is catching up on regulation: San Francisco banned facial recognition, Uber and Lyft drivers are not independent contractors, California has a privacy bill, and the Zuckerberg hearing looked much like a grilling. Is fear of similarly hard questions the reason Google executives did not testify before the Senate? Whatever the reason, hearings cannot be a substitute for regulation, even if lawmakers deserve answers.

There is clear momentum now to catch up: to fill the regulatory gaps for platforms and other digital services, and to anticipate the broader use of artificial intelligence. The question is not whether there will be regulation, but who sets the rules.

I hope that between the US and the EU, and with partners like Japan and hopefully India, we can build a democratic governance model for technology and AI. Tech companies cannot stay on the fence when it comes to taking a position on values. I am convinced that a rules-based order serves the public interest, individual rights and liberties, but also our common interests and the core principles of democracy.

Marietje Schaake is International Director of Policy at Stanford’s Cyber Policy Center, International Policy Fellow at the Institute for Human-Centered AI, and President of the CyberPeace Institute.