bnh.ai Lawyer Andrew Burt on AI’s Biggest Barriers

Join Infinia ML’s Ongoing Conversation about AI

James Kotecki
Jan 5 · 9 min read

Episode Highlights from Machine Meets World

This week’s guest is Andrew Burt, Managing Partner of the AI-focused law firm bnh.ai. Andrew’s interview is full of great insights like these:

“The biggest barrier to the adoption of AI and machine learning is not actually technical. The actual technology is fairly commoditized. The biggest barriers are risk-related and they’re policy-related and they’re law-related.”

“AI is great, but if you want to be serious about responsible AI, you need to be ready to respond when something actually goes wrong.”

“Even without new regulations on AI, there are a whole host of laws and ways that AI can create legal liability right now.”

“It’s good in some senses to move fast and break things — you can innovate, you can get there faster — but with AI it’s very, very dangerous.”

Watch the show above. You can also hear Machine Meets World as a podcast, join the email list, and contact the show.

Photo by Macu ic on Unsplash

Audio + Transcript

Andrew Burt:
The biggest barrier to the adoption of AI and machine learning is not actually technical. The biggest barriers are risk-related and they’re policy-related and they’re law-related. If you think about these issues at the end, it’s just going to be too late.

James Kotecki:
This is Machine Meets World, Infinia ML’s ongoing conversation about artificial intelligence. I’m James Kotecki, and my guest today is Andrew Burt, managing partner at bnh.ai, a boutique law firm focused on AI. Andrew, welcome to the show.

Andrew Burt:
Thanks so much, it’s really great to be here.

James Kotecki:
This whole concept of an AI-focused law firm, what do people think you do?

Andrew Burt:
The heart of our law firm, the heart of bnh.ai, is the thesis that the only way to get AI right is by closely co-mingling expertise from law and policy with expertise from the world of data science. That’s really why we formed the law firm. There are two universes: there’s one universe that’s worried about ethics and law and policy and all the things that could go wrong, and then there’s the other universe that’s actually building and deploying models, and they have an extremely hard time communicating. I’ve spent years doing this, working in the regulated data science and data governance space. The biggest barrier to the adoption of AI and machine learning is not actually technical. The actual technology is fairly commoditized. The biggest barriers are risk-related and they’re policy-related and they’re law-related. If you think about these issues at the end, on the back end, it’s just going to be too late.

James Kotecki:
What are some of the issues or the problems that you’re actually tackling as a firm?

Andrew Burt:
I would break it into two categories. One is before the bad thing happens. We do work operationalizing AI ethical principles: organizations will build AI ethical principles and then not understand how to operationalize them; they won’t know how to map them to laws and regulations, and they won’t know how to map them to the actual technology and the packages they might want to use to deploy some of these models. We also do AI liability assessments. But a lot of the work that we do happens when something goes wrong and organizations need to figure out, number one, what their liabilities are from a legal and a risk standpoint, and then, number two, what to do about it and how to actually get it fixed.

James Kotecki:
So what are some common mistakes that you see business leaders make when it comes to this stuff?

Andrew Burt:
We frequently do tabletop exercises with clients just to get them to see: here’s the bigger picture, here’s where liability could arise when you’re investing all this money into AI. So I would plug this emerging field of AI incident response. AI is great, but if you want to be serious about responsible AI, you need to be ready to respond when something actually goes wrong. That includes the tabletop stuff that I mentioned, but also response plans, documentation, having a clear sense of what you’re going to do if the model starts discriminating against people, or what you’re going to do if your model is extracted or hacked or connected with a data breach. Frequently in practice, people are just so excited about the benefits of the technology that they’re not prepared for the risks, and so we very frequently counsel clients on how to prepare and what that actually ends up meaning.

James Kotecki:
What kind of laws are actually being applied here? Because I imagine it’s not a situation where super tech-savvy lawmakers wrote laws specific to AI or machine learning and you’re just applying those. I imagine it’s a much murkier area, where people are trying to apply existing laws that were maybe written for the telephone or the telegraph, and you have to fight through that.

Andrew Burt:
Yeah, so it’s both. Even without new regulations on AI, there are a whole host of laws and ways that AI can create legal liability right now. In the credit and the employment and the housing contexts, for example, laws governing automated decision-making systems have been around for decades, so we actually have some concrete legal precedent to draw from. The second point is that there are a whole host of new laws and regulations on their way. California passed a law called the California Privacy Rights Act, which is going to amend the CCPA; it’s effectively CCPA 2.0. It has specific provisions that govern AI, including specific explainability requirements for automated decision-making systems that are deployed at scale. There are a whole host of new laws coming down the pike.

James Kotecki:
What are some of the major themes we’re going to be looking at in lawmaking around AI in the next 10 years?

Andrew Burt:
One of the issues that all parties agree on, maybe the only issue, is that digital technologies are not being regulated effectively. It goes from antitrust, and the recent action against Facebook, to privacy laws. I think there are going to be new, pretty wide-scale privacy laws enacted at the federal level in the US, and provisions of them are going to impact AI directly. The number one thing to watch for, I think, is that some form of impact assessment is going to be required: if you’re using AI in a way that either touches a lot of consumers or could cause a lot of harm if it failed, you’re going to have to demonstrate how you’ve thought through the risks, how you’ve documented them, and how you’ve mitigated them. Some people call these algorithmic impact assessments. In some shape or form, those are coming. And then I would also highlight explainability requirements. I mentioned the CPRA in California, which was just passed via referendum, and that has specific provisions on explainability.

James Kotecki:
And are lawmakers able to define AI, ML, and algorithms tightly enough that people can’t skirt these rules by defining their technology as something else, against the spirit of what’s intended?

Andrew Burt:
No, absolutely not. I think everybody talks about AI, and nobody really knows what it is, but everybody agrees that it’s coming. I would predict that we will see terms like “the use of automated decision-making systems.” What counts as AI is basically just going to expand. Where you can see a gap that doesn’t make a whole lot of sense, you might actually need help from a boutique law firm such as ours. Anytime someone talks about AI and is hyping it, I say, “How is this new? How is this different from a rules-based system that might have existed even in the 1980s, or just a really, really complex piece of software that also makes decisions?” So all of those other forms of, we’ll just call it broad AI or loose AI, I think are going to get swept up into the regulation that’s coming.

James Kotecki:
So when people aren’t involved in a decision, that’s when these laws might come into play?

Andrew Burt:
The devil’s going to be in the details. Some very broad-brush definitions just say, “artificial intelligence is anything that simulates the intelligence of a human.” And it’s like, well, what exactly does that mean? Doesn’t software simulate the intelligence of a human? What about TurboTax? What is that? Is that AI? Regulators, and I’ve seen this all over the world, get tripped up by the definition because it actually is unclear. It’s very hard to put a precise point on it: this exact thing is AI, this exact thing is not AI. I think we’re going to end up with pretty loose definitions. It’ll be clear when it needs to be clear, but there are going to be all sorts of edge cases as well.

James Kotecki:
Is there an emerging sense, through law, regulation, or precedent, of how I, as a business leader in AI, need to approach what I’m creating? Do you have any intuitive guidelines or rules of thumb?

Andrew Burt:
All the things you should worry about when you’re adopting AI, we break into several buckets: privacy, security, fairness, accountability, negligence. Some of the most intense risks come just from the involvement of third parties. In some senses, the risks aren’t actually that new. AI itself has been around for a while; analytics, even complex analytics, have also been around for a while. What’s new about AI is that the volume and the scale are completely different, and the pace of adoption is different too. We see a complexity that’s very, very difficult to completely understand and then adapt to. It’s good in some senses to move fast and break things — you can innovate, you can get there faster — but with AI it’s very, very dangerous. So we counsel our clients to balance utility and speed and innovation on the one hand against the responsible use of these technologies on the other. That’s not just something abstract; in our world, it really means minimizing liability. You’re trying not to hurt people, and you’re trying not to get sued.

James Kotecki:
Is that how you would define AI ethics as well? Does the term AI ethics have much meaning to you as an abstract concept or do you really just think about it in terms of the practical legal implications?

Andrew Burt:
It’s very easy to think that AI ethics is just some term that gets bandied about without a whole lot of meaning. Frankly, I think it’s easy to be skeptical in the world of AI ethics. From my perspective, from our broader law firm’s perspective, we think about ethical AI in very simple terms: it’s trying to maximize the benefit while minimizing the harm. It’s not black or white; all technologies end up being dual-use. It’s very hard to deploy any meaningful technology that’s not going to cause at least some harm. The standard is not whether your decision-making system is perfect, because that would be unreasonable. The standard is how much work has gone into minimizing any externalities or bad side effects. We’re big believers in the technology, and we’re big believers in its ability to be a force for good, and so part of that means thinking really, really practically about how the technology is being used, what it’s replacing, and how we can, again, practically maximize its ability to help people.

James Kotecki:
Andrew Burt, managing partner at bnh.ai, thanks so much for joining us today on Machine Meets World.

Andrew Burt:
Yeah, thank you for watching.

James Kotecki:
Thanks so much for being part of the conversation, and don’t forget you can email the show at mmw@infiniaml.com. You can also like this, share this, rate this; you know what to do. I’m James Kotecki, and that is what happens when Machine Meets World.

