The Ethics of Everybody Else: New video posted
I had heard about Wrangle for a while — a data science conference where folks come to talk about the hardest problems they’ve faced and how they’ve found their way around them. It also has a rancher-rustler theme, though you can’t see the cowboy boots I wore in the newly posted video of my talk.
But here’s how I kicked off my 20-minute talk, called “The Ethics of Everybody Else”:
You can get to the video by going to http://wrangleconf.com/. You’ll have to register but that takes 27 seconds and you don’t have to check the box for Cloudera newsletters. If you get curious about the whole conference, you can also check out Cyndy Willis-Chun’s blog post, Facing bias, ethical obligation, and your audience.
If videos aren’t your thing, here are some other ways to get the content:
- The slides are public on SlideShare — if you go there you may ask, “Wait, how did Tyler get through 60 slides in 20 minutes?” I hope the answer is “with panache”.
- The presentation builds on a paper published as part of an ethics workshop at ACL earlier this year: Goal-oriented design for ethical machine learning and NLP. Computer science papers are short, so it’s only 5 pages of text.
- I also made a graphical/poster version of the paper
- And here are some further thoughts on the ethics workshop (5 min read)
A lot of this work takes as an example facial recognition projects that have deeply problematic ethical issues. But there are many other ethical concerns product designers and engineers grapple with in AI, including model explainability, fairness, and data privacy. Given integrate.ai’s focus on combining multiple data sets to boost model performance, I’m extremely happy that our recently announced advisory board features Helen Nissenbaum, who has written the book on privacy. Well, the books, plural. To read more about privacy as a byproduct of the norms that govern our behavior in different social contexts, see her book Privacy in Context or, more recently and rebelliously, Obfuscation.
And here are some other references to check out for thinking about machine learning and ethics:
- Kate Crawford’s piece in The New York Times: Artificial Intelligence’s White Guy Problem
- Jennifer Eberhardt and Rebecca Hetey (and team)’s data-driven work on police stops, handcuffings, and arrests
- Matthias Spielkamp’s MIT Technology Review article on bias in court sentencing algorithms
- Joanna Bryson’s post on three kinds of biases in AI following work she and colleagues published in Science earlier this year
- And for much more on problematic image processing (scientific racism), check out Blaise Agüera y Arcas, Margaret Mitchell and Alexander Todorov’s blog post
Tyler Schnoebelen (@TSchnoebelen) is principal product manager at integrate.ai. Prior to joining integrate, Tyler ran product management at Machine Zone and before that, founded an NLP company, Idibon. He holds a PhD in linguistics from Stanford and a BA in English from Yale. Tyler’s insights on language have been featured in places like the New York Times, the Boston Globe, Time, The Atlantic, NPR, and CNN. He’s also a tiny character in a movie about emoji and a novel about fairies.