The other side of Gmail’s ‘Smart Reply’ and ‘Smart Compose’ features
Before folks conclude Gmail’s new ‘Smart Reply’ and ‘Smart Compose’ features are two more perks of their free email accounts, they should consider a likely driver of Google’s “generosity.” By using either feature, Gmail users join a team of unpaid volunteers shouldering the load of labeling text strings for Google’s natural language processing (NLP) efforts.
The choices Gmail users make as they play with these toys are saved in their personal data profiles. The end result? The data diet NLP neural networks need to thrive gets a massive infusion of new calories. Great for Google, perhaps even great for Gmail users, but at their core these features continue the practice of mining personal data for reasons beyond simply benefiting Gmail users.
A meaningful percentage of Gmail users don’t seem to like the features (my wife just told me she, too, hates them). The New Yorker published a story on a related topic today. Rachel Syme’s “Gmail Smart Replies and the Ever-Growing Pressure to E-mail Like a Machine” illustrates why UX design for a data-labeling exercise like this one is critically important, yet rarely planned and delivered successfully. Safe to say it might not be working well here.
Smart folks like Ms. Syme (and my wife) find the features irritating and far from helpful. By restricting the choices users can make to complete their sentences or to quickly reply to emails, Google all but guarantees that some portion of the test sample (roughly 1.4 billion Gmail users worldwide) will drop out of the exercise because they are bored or outright dissatisfied.
It would have helped if Google had explained why canned choices are the only practical approach to this kind of work: given the severe limitations of today’s NLP tech, prescribed scripts composed of syntactically exact-match questions and answers are the best it can do. But Google has chosen not to educate users on this point.
Offering Gmail users an opportunity to tack prescribed text strings onto their otherwise free-form (think conversational) emails expands the dictionary of prescribed (think trusted) pattern-matching text strings Google’s NLP solutions can use. Not a bad return on a freebie.
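To make the labeling dynamic concrete, here is a minimal sketch of the general idea, not Google’s actual pipeline: every time a user clicks one of the canned suggestions, that click can be recorded as a supervised (input, target) training pair. All names here are hypothetical.

```python
# Hypothetical sketch: a smart-reply click doubles as a labeled training example.
from dataclasses import dataclass


@dataclass
class LabeledExample:
    email_text: str    # the incoming message (model input)
    chosen_reply: str  # the suggestion the user clicked (model target)


def record_choice(email_text: str, suggestions: list[str],
                  clicked_index: int) -> LabeledExample:
    """Turn a user's click on a canned suggestion into a training pair."""
    return LabeledExample(email_text, suggestions[clicked_index])


example = record_choice(
    "Are we still on for lunch tomorrow?",
    ["Yes, see you then!", "Sorry, I can't make it.", "Let me check."],
    0,  # the user clicked the first suggestion
)
print(example.chosen_reply)  # "Yes, see you then!"
```

The point of the sketch is that no human annotator is needed: by choosing among prescribed strings, the user supplies the label for free.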
Come to think of it, almost the whole process of introducing these new features has been presented to users as a kind of techno Trojan horse. Cool toys to play with, but never mind what we will do with the historical record of the choices you made to juxtapose our phrases with your own text, or to answer an email with one of our quick replies. We own the data unless you claim it by jumping through lots and lots of hoops.
More than anything else, perhaps it is Google’s masking of the other side of these features (the self-serving side) that people like Ms. Syme find most irritating. Given the number of very smart folks, the “Googlers,” at work on features like these, it’s surprising and even disappointing that they didn’t decide to come clean with users. Too bad.