What Google’s AI Principles don’t tell us
Today, Google CEO Sundar Pichai published a blog post setting out seven principles for how Google will use and develop AI, as well as a list of things it won’t use these technologies for.
In a perfect world, Google would have had principles for AI in place much earlier in the game, rather than establishing them after staff backlash over the company’s involvement in Project Maven.
While I think they’re a pretty solid set of principles, I came away with the sense that a lot went unsaid. Here are a few places where reading between the lines throws up more questions than answers:
Strategically vague language
There are a lot of strategically used qualifiers in Pichai’s blog post. I’d love to know exactly what he means by the following:
- Unfair bias (is that even a thing? Isn’t bias inherently unfair?)
- Unjust impacts
- Appropriate cases
- Appropriately cautious
- Overall harm
Moving from principles to practice
I love a good set of principles. I also love a good plan for how principles will be put into action. In his post, Pichai said “These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.”
While this is good to hear, I’ll remain skeptical of their impact until I see a concrete plan for putting them into action. What will employees and leaders be expected to do on a day-to-day basis? Will Google implement some sort of AI impact assessment (AIIA) for the technologies it’s developing? What will the consequences be for non-adherence to the principles? And if Google does these things, will we ever hear about it?
Google has a lot of power and, in the end, can do what it wants
Near the end of the post, Pichai starts a sentence with “While this is how we’re choosing to approach AI”. This makes it very clear that, at the end of the day, Google doesn’t have to adhere to any principles. I hope that, in time, these will become so embedded in the way Google operates that they’ll be hard to alter without significant backlash or consequences.
Google is a key player in AI, which in turn has the ability to change how society functions, and that’s not a power that should be taken lightly.