Who Owns YOU?

Avi Bar-Zeev
Nov 21

The Road to Individual Autonomy

You’re walking down the street. You notice random teenagers gawking at you and then shying away. And not just a few. What’s going on?

You ask a savvy friend for help. They eventually figure out what happened. When people wearing AR glasses from Trawler (the leading social network) add the “BodyNet” plugin, your clothing (as far as you can tell, just yours) suddenly becomes transparent to them. It’s like x-ray vision.

Nude. You. What?!

You don’t even have an account with Trawler, let alone your own AR glasses to look at yourself. You both search the company’s website for how to stop this.

Unfortunately, you learn you’ll need to open an account, agree to their terms (including no lawsuits against them) and give them permission to upload all your photos, including selfies. Then maybe they’ll help find and block this abuse with their opt-out privacy controls.

Right now, you don’t exist.

This isn’t a scene from some new teen comedy gone wrong. It’s one possible effect of augmented reality, better known as “reality” after AR glasses and contact lenses become ubiquitous. Then:

A week later, the forced nudity stops and you can safely leave your house again. Your friend shares this “trawl” from Trawler’s SVP of Devices:

New Trawl from Rogier “Dodgier” McKinna @dodgier — 19h
Contrary to media reports, only a small number of users were affected by the BodyNet applet. It used our APIs to track faces and selectively “augment” the clothing of users with imagined pixels, using its own AI algorithms, not ours. No Trawler accounts were hacked and no user data was ever stolen.

Trawler always puts people first.

You feel better (and also worse) on reading that. But just when you think it’s over, your friend shows you a new personalized ad coming through their Trawler headset. The ad features a photo-realistic avatar of you driving a new car outside. It’s a car your friend likes but can’t afford. In the passenger seat sits the life-like avatar of someone you’re pretty sure your friend has a crush on.

What?

The crush-atar flirts with your avatar. Your avatar smiles back and puts on some Trawler glasses. Both drive off together, having successfully provoked vehicle-envy.

You swear to your friend that you had nothing to do with this. And you’re at least partly right. You did, after all, accept the company’s terms of service.

You may think these are unlikely science-fiction stories, or edge cases we can easily address. This kind of thing was all too real for me. In 2008, I created an account with Facebook only after scammers used my name and photo to lure a thousand Microsoft co-workers into friending them in a likely phishing scheme. I didn’t know until someone asked me about my supposed account. The only thing that worked to stop it was my claim of copyright on the photo.

As for the “forced nude” technology, we already see cases where AI can take a clothed picture of a woman and virtually remove her clothes. Face-ID works, but not equally well for everyone. Tracking people IRL is not far behind.

One app tried making such illicit nudes last year and was rightly shunned. A new company now provides similar nude fakes through a Telegram bot, with watermarks advertising its “free” service. The watermarks can be removed for about $1.50 in rubles each time.

This is just the latest abuse of computer vision and machine learning (CVML) technology in a pattern called “deep fakes.”

Digital hallucination is put to better use in apps like Photoshop when you want to remove people from photos: the computer must guess the missing pixels from context.
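
To make that concrete, here’s a minimal sketch of context-based fill using OpenCV’s inpainting function. The file names and coordinates are placeholders, and real tools like Photoshop use far more sophisticated, learned models; the point is the same, though: the output is plausible, not real.

```python
# Minimal sketch: erase a region from a photo and let the algorithm
# "hallucinate" replacement pixels from the surroundings.
# Assumes OpenCV (pip install opencv-python); file names are placeholders.
import cv2
import numpy as np

photo = cv2.imread("street_scene.jpg")
mask = np.zeros(photo.shape[:2], dtype=np.uint8)

# Mark the region to erase (say, a bystander) as white in the mask.
cv2.rectangle(mask, (120, 80), (220, 360), color=255, thickness=-1)

# Telea's method fills the masked pixels by propagating nearby colors
# and gradients inward -- a guess from context, not ground truth.
filled = cv2.inpaint(photo, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
cv2.imwrite("street_scene_filled.jpg", filled)
```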

Does it matter that the algorithm is only imagining your nude bits after digesting thousands of nude photos of other women?

It might be very little comfort for a victim to know the image is not strictly real. Someone is likely ogling you. And as the photo is shared more widely, the damage compounds. Such harassment can lead to loss of income and social status, depression, even suicide.

So can we stop this kind of abuse through legislation and prosecution? Here’s an analogy:

Imagine if everyone put our money and valuables out on the street and then expected law enforcement to deter or prosecute the resulting crimes.

Putting our pictures on websites is far more common and reasonable than putting our valuables on the street. But the protections are even weaker. Most people wouldn’t think of copying and editing your photo as any kind of theft.

Now, identity is not the same as tangible property. You don’t normally lose the ‘original’ when your identity is ‘stolen’ or abused — it’s more like making a print of a unique oil painting. And there are legitimate reasons to share this data freely, without expecting any harm. But in some ways identity is less replaceable than property and it’s harder to undo any damage. Perpetrators can more easily disappear in jurisdictions we can’t reach.

Right now, we treat taking and sharing photos as an individual freedom. In this perspective, our cameras simply collect any light reaching their sensors and produce a digital copy of whatever they see. As photographers, curators of stories, we have an implicit copyright over our compositions, even if we don’t ‘own’ everything in the photo.

However, if we photograph a landmark building, the building’s owner suddenly has some rights. The building’s appearance is likely copyrighted, and that could trump our rights to photograph and share. But no photos today carry this kind of information, and it can be hard to discover.

One unfortunate truth seems to be that we don’t yet place the same value on individual safety and comfort — and the loss thereof — as we do for, say, a famous building or Disney characters. “Crimes against Intellectual Property” have a clearer monetary loss, apparently, or at least more lobbying power. Actual humans seem to be more disposable.

Permissioned Experiences

It wasn’t always this way. In a more polite age, we might be taught to ask someone’s permission before doing something they might not appreciate.

“Can I take your picture?” is a reasonable question. “Can I share it?”

“I’d prefer not,” is a reasonable response.

Of course, that politeness has traditionally failed to help those lacking privilege. There is an important story, adjacent to this one, in which many in our society have, for centuries, been treated as disposable. Protections that put property rights above human rights remain ingrained in our institutions and discourse to this day.

This may be one of the reasons Google Glass failed in its first incarnation: it gave the wearer new social power over their peers, including the ability to take and view pictures covertly. It wasn’t the first camera in public, and we tend to ignore all of the other random surveillance devices around us. But Glass was right there, in our faces, being rather rude.

And we spanked it.

It wouldn’t be surprising if other internet companies followed Google’s approach, even after that public drubbing. It’s less about what’s right and more about what’s easy. We’ve accepted the lie that the status quo is the only way.

Ted Nelson, inventor of hypertext and an early influence on the internet’s other founders, favored a model where any piece of information is stored only once, globally, and everyone else just links to that original. That method might permit the information-owner to see how and where their content is used. Nelson favors sharing over copyright.

But that’s not what happened. The internet became a giant copy machine.

Once we make blind copies, all bets are off. We can modify, strip ownership, and share modifications widely, without the originator’s knowledge or consent. DRM (Digital Rights Management) arose from the lack of a more inspired approach, providing owners a crude way to encrypt their content such that viewing it requires valid security keys, controlled by them. Device-makers were pushed to enforce this with technologies like HDCP (High-Bandwidth Digital Copy Protection), which encrypts the data in transit and theoretically prevents easy copying, or even a screen-cap.
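
As a rough illustration of DRM’s core mechanic (a thought experiment, not any real scheme), the sketch below encrypts content so that only a player holding a key the owner controls can render it. Real systems layer license servers, per-device keys, and hardware enforcement like HDCP on top of this basic idea.

```python
# Toy sketch of DRM's core mechanic: content ships encrypted, and only
# a player holding a valid key can render it. A thought experiment, not
# a real scheme (those add license servers, per-device keys, and
# hardware enforcement like HDCP).
from cryptography.fernet import Fernet

owner_key = Fernet.generate_key()  # held and controlled by the content owner

def package_content(plaintext: bytes) -> bytes:
    """Encrypt content for distribution; blind copies are useless without the key."""
    return Fernet(owner_key).encrypt(plaintext)

def play_content(package: bytes, key: bytes) -> bytes:
    """A 'licensed player' can decrypt only with a valid key."""
    return Fernet(key).decrypt(package)  # raises InvalidToken otherwise

package = package_content(b"feature film bytes ...")
print(play_content(package, owner_key))  # a licensed device plays it fine
```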

But DRM too often fails us. It can strip customers of their rights to keep using content they paid for, and it makes devices intentionally defective. It also fails to protect the content owners, because DRM’s restrictions themselves drive some customers to piracy.

Turns out, consumers will pay for a reasonably priced content delivery service that doesn’t suck.

So let’s begin to re-imagine how we capture and convey people’s likenesses, protecting individual rights to decide how and when our likeness is used or modified. We can draw on those previous ideas, but we are not beholden to them. We have new tools to use as well.

Using Faces

In 2010, Germany’s stronger privacy laws forced Google to blur the faces of anyone who didn’t want to be featured in the company’s mapping products. About 3% of the country opted out, which is a significant number of exceptions for any tech company to process.

Following Google’s notorious “accidental wifi-data slurping” scandal, the company didn’t have any valid purpose for keeping random bystanders’ faces in its maps. If we wanted to “find Jim,” we’d do it another way. So Google turned to automatic face-blurring across all maps, for everyone, globally.
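
For a sense of how simple the automatic version can be, here is a minimal face-blurring sketch using OpenCV’s stock face detector. Google’s production pipeline is obviously far more robust, and the file names here are placeholders.

```python
# Minimal sketch of automatic face blurring, in the spirit of Street
# View's pipeline (the real one is far more robust). Assumes OpenCV;
# file names are placeholders.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("map_capture.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect every face, then replace each region with a heavy blur.
for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5):
    face = image[y:y + h, x:x + w]
    image[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 30)

cv2.imwrite("map_capture_blurred.jpg", image)
```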

In many countries, and especially in the US, the law generally treats public photography as freedom of speech, and is more concerned with infringements of the photographer’s rights than of the subject’s privacy.

A photographer or videographer will still obtain consent from subjects for commercial purposes, if only because those subjects might later demand compensation. Movies shot on location will send production assistants out to obtain releases from anyone wandering into the shot, or, more likely, try to prevent it.

Not surprisingly, actors have been on the leading edge of the debate of who owns their likenesses. In 1990, Crispin Glover sued the makers of “Back to the Future II” because he said they used his likeness (his old face mold applied to a cheaper actor) without compensation. He acknowledged it was totally fine for the filmmakers to re-cast his role, as they did with Marty McFly’s girlfriend. Glover’s concern was their apparent use of his likeness against his will.

This is coming up again more recently in the use of “holograms” of deceased celebrities. For the first time, I believe, a deceased celebrity has been depicted as a hologram without the consent of their estate. We’ll see how that plays out.

TV news crews in the USA have no such requirement to get permission, nor do amateur photographers out in public. However, we’ve also seen TV stations pay for exclusive rights to amateur video of newsworthy events. Stations will generally credit other stations when replaying their footage.

The rules vary in private spaces too, largely governed by the posted policies of the property owner; and of course, by country.

While Google automatically blurred human faces, Black Lives Matter protesters in 2020 turned to various photo-blurring tools to selectively protect fellow protesters from retaliation. In those very same protests, we had a compelling reason to keep public officials, like police, well identified even if they tried to hide, to address any individual or systemic abuses.

Following that line of thought, what if we could automate this kind of selective privacy for people captured in any photographs or video and only obscure those who want and deserve protection?
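
As a sketch of that automation, selective privacy only needs a consent check inside the detection loop. The example below uses the open-source face_recognition library for matching and assumes a hypothetical opted_out_encodings registry of face encodings from people who requested protection; building and securing such a registry is, of course, the hard part, and the subject of the proposal below.

```python
# Sketch of selective privacy: blur only the people who asked for it.
# Uses the open-source face_recognition library; `opted_out_encodings`
# is a hypothetical consent registry of face encodings from people who
# requested protection. Building that registry safely is the hard part.
import cv2
import face_recognition

opted_out_encodings = []  # hypothetical registry, populated elsewhere

image = face_recognition.load_image_file("protest_photo.jpg")  # RGB array
locations = face_recognition.face_locations(image)
encodings = face_recognition.face_encodings(image, locations)

for (top, right, bottom, left), encoding in zip(locations, encodings):
    # Blur this face only if it matches someone who opted out.
    if any(face_recognition.compare_faces(opted_out_encodings, encoding)):
        face = image[top:bottom, left:right]
        image[top:bottom, left:right] = cv2.GaussianBlur(face, (51, 51), 30)

cv2.imwrite("protest_photo_protected.jpg",
            cv2.cvtColor(image, cv2.COLOR_RGB2BGR))
```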

A Proposal in Brief

In Part II, I’ll go over a theoretical implementation and UX for smartphones, with some alternatives considered. Concrete examples are generally easier to digest and critique, and there’s a lot to critique here, because giving individuals more rights to control their own identities and likenesses probably requires us to trade away some related freedom and convenience.

The hope is that we’d see no significant friction in taking pictures of the people we care most about: friends, family, officials, and even celebrities, in any location, public or private. Strangers, on the other hand, would gain a level of privacy-by-default and a bias-for-politeness that we seem to have forgotten.

Fortunately, anything we might solve for common photos and video today should apply equally well to deep-fake videos and future 3D avatars, which can alter not only our appearance but also our apparent behavior. We will collectively need to solve this problem one way or another: safely, up front, or unfortunately after many more people are harmed.

Can we rely on each platform policing itself? Already we find so many cases of unequal enforcement of anti-harassment policies, censorship, labeling of misinformation, and so on. There seem to be no fixed standards, only shifting sands. And addressing harassment after the damage is done is not solving anything.

Conclusion

Better privacy begins at the moment of capture for the most sensitive information, ideally adding a level of politeness and respect we should all come to expect in a civil and democratic society.

Ask for permission. Protect privacy, where needed. Honor our agreements. Put people first. If we only ever try to regulate the dissemination of personal info after the fact and by punishment alone, we make our work so much harder.

Each of us should fundamentally have the right to control what happens to our bodies, our likenesses, and our chosen presentations, both in person and in digital realms. No company owns, or can possibly hope to own, these facets of who we are.

This is on us, reclaiming the rights to ourselves.

Thanks to Daniel Cardozo and Steven Webster

Avi Bar-Zeev

Design and Technology Leader (fmr. HoloLens, Apple, Google Earth, Second Life, Disney VR)
