Apps, privacy and sensationalism

Enrique Dans

A couple of months ago we saw the appearance of FaceApp, a Russian app that transforms photographs of faces to make them look younger or older, and can even change their sex, generating speculation as to whether the photographs were being used for some sinister purpose. We are now hearing similar concerns about a Chinese app called Zao that uses deepfake technology to superimpose users' faces onto those of actors in scenes from well-known movies or television series, among them Leonardo DiCaprio, Marilyn Monroe and Sheldon Cooper. The app is the creation of chat and dating app Momo, Inc., a company listed on the NASDAQ.

These apps are designed to be fun to use and go viral quickly, topping the app charts. It's very unlikely that anybody bothers to read their terms and conditions, and even if they do, the fact that the companies are based in Russia or China doesn't inspire much confidence anyway.

In the case of FaceApp, there doesn’t seem to have been much potential for misuse: the app didn’t garner much information and there was no way to know whose photograph was being used. While it was possible that photographs were being collected to train facial recognition algorithms, it seemed unlikely that the information could be used for identity theft or other nefarious purposes.

Zao raises some interesting questions about security and privacy. Firstly, it is from China, a country seen in the West as having little respect for either. That said, it has already been restricted by WeChat for security reasons, although that does not in itself mean any such risks exist; what it does mean is that more and more platforms are beginning to worry about abusive clauses in the apps they distribute.

Secondly, what generated concern was a clause in Zao's terms of service under which users expressly assigned the rights to their content free of charge, irrevocably and permanently, with those rights transferable and sublicensable. These types of clauses are common in all applications that do something with the content users upload: all social networks, for example, have that clause or something similar in their terms of service; otherwise they could not allow us to publish that content, because we could immediately demand rights over its publication from the company. But neither Facebook nor Instagram will market our photos simply because we have signed those terms of service; nor, as some seem to believe, could we prevent it by stating in our profile that we oppose it.

In the wake of the barrage of negative reviews in app stores, Zao has updated its terms of service: it now expressly specifies that it will not use the photographs or videos of its users for purposes other than improving the application, or for uses previously and expressly agreed by users, and adds that if users delete the content they uploaded, the application will also delete it from its servers.

In practice, these types of clauses are a relative problem. Could a company, for example, use a user's content in its advertising, or sell it to a third party? Possibly, but the bad press that a complaint over such uses would generate is usually a major disincentive.

Does that mean we should ignore such clauses? No: the chances of misuse begin with clauses that nobody reads and end with what happens if our information is stolen. An extreme case occurred recently at BioStar 2, a company that manages access to warehouses and office buildings: a group of researchers was able to access a database of twenty-seven million records that included biometric information, and even to add data, for example new fingerprints or a photograph, to somebody's file, which obviously invites misuse by criminals.

These types of problems are much more common with companies to which we have provided a lot of data than with a silly app we simply upload a video or an image to, which in many cases isn't even accompanied by any additional identifying data. Without more data, a photograph can be used, as I said earlier, to train an algorithm, but for little more. Again, that changes if we have given the app permission, through those terms of service that nobody reads, to access our data on other apps, for example.

Are apps like Zao or FaceApp trying to steal our identity? I don't think so, despite the media uproar: uploading a photo to an app, without any reliable verification attached to it, is unlikely to represent a security risk. If we have logged into the app with a more complete profile, such as our Facebook or Google account, there may be some risk; it can't be ruled out, but it doesn't merit the media brouhaha. We're talking here about a few developers with a fun idea that has gone viral and who have enjoyed a moment of success that might make them some money. Should we find out more about them, where they are based, their terms of service and the permissions they request when we install their app? Yes, we should, and we should do so for all the apps we install.

The job of the media is to encourage people to think about what they are downloading and putting on their phones, rather than crying wolf every time a viral app comes along. Zao offers us the chance to understand the terms of service of apps with a social component, where the information we provide is published by ourselves, and above all, to verify that deepfakes have advanced to such a level that “seeing is believing” is now relative.


(In Spanish, here)
