Why user friendliness and applicability are not the same thing
Cassie Kozyrkov moves onstage with the ease of a veteran presenter. Chief Decision Scientist at Google Cloud, she has already given her lecture “Decision Intelligence / ML ++” numerous times. At Data Natives 2018 in Berlin, her call to engage playfully with machine learning (ML) algorithms is an event in itself, with one key message:
You don’t have to know how a microwave works. You just want to use it to cook. ML is just like that.
There is a murmur, eyebrows lift, then frowns relax over Cassie’s melodic speech. Her talk and her person are incredibly likeable, but they have me wondering: is her enthusiasm for the opportunities that ML offers laypeople actually genuine? And is her message, coming from Google, not somewhat incomplete?
To code or not to code
The Data Natives conference attracts professionals who deal with data at the code level, mostly developers and data scientists. As designers, my colleague and I were a minority. Although the call to “just play around with ML” is highly appealing to non-coding Muggles, I want to argue that the matter is not quite as simple as Cassie puts it.
As she explains in her talk, and we may all agree, the building-microwaves-from-scratch focus of public ML education does not prepare interested parties to navigate ML application. With its new ML service offering on Cloud, Google now seeks to provide all the “ingredients” to “cook” with, alongside the “microwave” it offers. The selling point: users don’t have to bother with the application’s inner workings but can just get started.
Democratizing machine learning
Using the Google Cloud ML Engine, everyone can “make algorithms learn from example”, Cassie further explains, which is, no doubt, easier than writing code for a specific purpose. However, Cassie’s analogy illustrates the risks of democratizing ML application just as well: for microwaves, instructions have been written that advise the public, for example, not to put things in that do not belong there (metal, living creatures, etc.). With the new offering that Cassie markets, people who are not very familiar with AI-based technologies on a theoretical level are enabled to apply ML to their liking. Which is great, but:
Offering this service without clear instructions on what not to put into the system, and on how to interpret and classify the results, seems reductive rather than empowering.
More hints and doubts
Pointing out, as Cassie does, that human decision-making is highly obscure and that therefore no one is to be trusted more or less than Google’s AI sounds like a false equivalence to me. Her choice of words, too, encouraging users to seize the opportunity to “tinker” and “fiddle with” the ML algorithms on Google Cloud, seems indicative of a lack of foresight.
There is evidence that playful engagement with and open-sourcing of new technologies can be a means to crowd-sourced innovation and to finding application niches. Still, it is reductive (and, in light of how people deal with microwaves, unwise, I may add) to simply create a new offering and point to its general user-friendliness rather than invest effort in the service’s theoretical grounding: in this case, for example, by defining and excluding use cases and by creating supporting educational material for the application of Google Cloud’s ML Engine.
What should we take from Cassie Kozyrkov’s talk? I suggest this: making technology accessible to people does not suffice. Those who take decisions and who build complex services and products for which AI or ML are building blocks need to be familiar with both the application and the applicability of the relevant technology.
Therefore, designers’ abilities are called for more than ever. I hold that they are needed to set application boundaries conceptually and write proper manuals for ML.
Kudos for inspiration and for bringing me to #dn18 go to my colleague and co-author Thomas Otto