The Form Playbook Interview: Dorothy Chou on AI policy and regulatory strategy
Every month we interview a leader in venture, start-ups and regulation for The Form Playbook, our newsletter supporting founders building in markets where policy matters.
This month we caught up with Dorothy Chou. With a career in policy development and public affairs at DeepMind, Uber, Dropbox and Google, and as an investor on Atomico’s Angel Programme, Dorothy has unique insight into the regulatory challenges and opportunities facing start-ups. We discussed:
- The role policy plays in companies’ success
- The impact of forthcoming AI regulation in the UK and EU, and
- How smart founders can execute more effectively by getting on top of regulatory exposure early
Dorothy’s Advice for Founders:
- All companies inventing new ways of doing things eventually have to (re)negotiate a social contract with their users. Since public institutions like regulators are a proxy for what users want, they can’t and shouldn’t be ignored.
- The ongoing discussion about AI regulation should be seen as an opportunity to clarify the outcomes society wants AI to achieve. The more clarity the law provides, the more stability and predictability for start-ups. The best-case scenario results in clear targets and outcomes, while the worst case leaves founders guessing at how to comply and spending to hedge against uncertainty. Companies like FTX have shown how engaging early can help avoid the costly default situation that arises when there are no rules.
- The more start-ups have worked through the rules and institutions they want to operate within, the more freedom they will ultimately have to focus on the work they do best. It’s best to consider whether you might need to do this before you’re forced to learn the hard way — which is almost always more costly.
FORM: You’ve worked in a range of policy roles at Google, Dropbox, Uber and DeepMind. How has the role that regulation plays in their success, and the focus of your work, varied?
DOROTHY CHOU: I see policy roles at companies that have a large impact on people’s lives as the practice and craft of earning and retaining public trust. All companies inventing new ways of doing things eventually have to (re)negotiate a social contract with their users. In exchange for providing email services, search results, cloud storage, a ride at the press of a button, and more, these companies require users’ trust and a permanent shift in user behavior to grow and scale. The political process matters here because the (democratic) institutions that govern society are proxies for what users want — they can’t and shouldn’t be ignored.
So policy work on a good day becomes a function of understanding what public expectations for responsible behavior look like, building the sector norms it takes to earn public trust, and then eventually working alongside policymakers and interest groups to draft regulation that ensures those norms scale. You become an advocate for the public interest to companies, and an advocate for companies to the public, creating mutually agreeable frameworks to advance society — it’s a relational and translational role.
At Google I tried to do this by starting the Transparency Report, which documents how laws and company guidelines affect the content people see online as well as their privacy and security. At Uber, where I led consumer protection and self-driving policy, I advocated for safety standards that took historical legacies of racism into account (many of the policy proposals being pushed didn’t reflect over-policing in communities of color) and for openness about safety incidents, which resulted in a public Safety Report. It’s important to push the sector to compete on the values most important to the public.
DeepMind is different in that we produce research rather than products, but that’s also where the opportunity is greatest. How do we learn what the public’s dreams & fears are before products with AI are deployed and earn their trust to solve intelligence to advance science & benefit humanity? What are the ways we can partner with governments, interest groups, and more to define & secure the public interest with AI? It’s an incredibly creative challenge!
A huge proportion of startups use some form of AI. How should they be thinking about upcoming regulatory changes in the UK and EU?
It’s crucial to see the regulatory discussion as an opportunity to clarify the outcomes society wants AI to achieve. The more clarity the law provides, the more stability and predictability for the startup; the best-case scenario results in clear targets and outcomes, while the worst case leaves founders guessing at how to comply (which is the default when there are no rules; not all startups see that upfront, though FTX certainly has with crypto).
Often conversations about tech regulation get bogged down in process rather than focusing on outcomes, and then we end up with solutions like pop-ups with information about cookies — well-intentioned but limited in practical impact. Time is better spent creating space for companies and policymakers to identify what their users and constituents want, and building the mechanisms to achieve it: maximizing creative avenues to secure public benefit from AI, ensuring the equitable distribution of those benefits, mitigating risk, and holding parties accountable should things go wrong.
This moves the conversation away from broad fear-based regulation toward specific, tangible outcomes that we collectively want to achieve and can begin with normative practices across the sector. And ideally, the regulation should be future-proof. Values change over time, so laws should build in provisions for regular review and revision. What was socially acceptable 10 years ago is different from today, so preparing for and factoring that in can only be a good thing.
Many early-stage start-ups know that government and regulation will be factors in their growth, but don’t know where to start. As an angel investor with experience here, what’s one piece of advice for building a regulatory strategy at this stage?
In general, the more startups have worked through the rules and institutions they want to operate within, the more freedom they will ultimately have to focus on the work they do best. At Uber, we eventually ended up passing regulations in every market where we operated; it’s best to consider whether you might need to do this before you’re forced to learn the hard way — which is almost always more costly.
Start by figuring out what your users affirmatively want, then consider how markets, norms, and laws play into that. Take the EU Digital Services Act, for example: it’s creating so much room for companies focused on content regulation to run. Policy is a tool — entire markets are created through a few small amendments in procurement contracts. It’s simply another way of partnering with the public to achieve societal goals through the private sector. We’ve seen how norms in information security like responsible disclosure were adopted by companies and governments alike for consumer benefit. There are a lot of creative ways good governance can be constructed.
What’s one tech issue or sector where the UK could and should lead, but isn’t yet? What change would you like to see?
Women’s health, where I’m focusing my angel investing at the moment, is a great example of the opportunity & challenge of using private sector capital for public sector innovation.
The best business model we know of for startups working on women’s health (not just related to hormonal/reproductive health but also services like post-pregnancy loss and eating disorders) is B2B sales into HR teams at large corporations. But that makes women’s health a luxury good that’s only available to people working at those companies. What standards do startups have to meet in order for the NHS to validate them for patients more broadly? How can the procurement process be streamlined to make it easy for startups to navigate so everyone can benefit? Those are the questions I’d love to see the UK lead on.
Which forthcoming tech advancement do you think will fundamentally change the world in your lifetime?
I’ll leave the future predictions to technologists; my hope is that we build the institutions required to support their fair & equitable development & distribution. Participation, equity, & appropriate avenues for recourse have proven to be just as hard — if not harder — problems to solve than inventing the technology itself.
This interview was published in The Form Playbook, our newsletter helping founders understand, navigate and build where regulation is a driver of success.