Ethics in a connected world

Jeremy Scott · Loud&Clear · Aug 15, 2017

Photo by Ricardo Gomez Angel on Unsplash

Imagine you woke up tomorrow in the year 2050.

Your wearable tech gives you a rundown of how you slept and for how long, along with your vital medical information. You hear a thud at the front step… and open the door to see a delivery drone flying away, a package left sitting for you.

You didn’t order it. It’s a free sample for a product the seller thought you’d like to buy, based on your browsing history.

Walk down the street and every store records your movement, tracking your eyes to see what you like and instantly forming personalised advertisements that will hit you here, at home, on your phone and on your wearable devices. Traffic data is fed to you through smart glasses, personalising your route to work.

Your food is ordered for you online after your fridge detects you've run out of eggs.

You hop into the car to go to work, but you ride in the backseat: self-driving technology takes you all the way there. The car automatically adjusts itself to your seating preferences, the temperature, and the entertainment on the screens and speakers around you.

Sounds cool; a bit strange, perhaps. Maybe terrifying to some. But each one of these futuristic scenarios is happening now — soon they’ll just become more commonplace. And they all share a theme:

The mining and use of personal data for commercial gain.

The more we browse online, the more we buy and the more we freely give up, the more we're going to see ourselves categorised and judged by the data we create: how we interact with people on our social feeds, and the millions of connections we make with people, devices and websites. That information is forming the bedrock for new products and services over the next 30 years.

And I’m not sure we’re considering what the ramifications of that will be.

There’s an episode of the show “Black Mirror” in which jobs, social lives, even where you live, are dictated by a type of social ranking. We’re hardly there yet, but is it so far-fetched to imagine a future in which you can only shop in certain stores, or live in certain places, based on all that data we’re handing over?

Hell, if you have a high enough Klout score, you can sit in better airline lounges.

We’re just at the beginning of a generation of services basing themselves on data, and right now we’re focusing on the technology to gather that information instead of on the ethics that define the rules around it.

This may not seem plausible now. But think 30 years ahead, when and if our lives revolve around data more intimately: people may want to hire “data coaches” to improve the metrics of their lives. After all, why not? Especially if those improved metrics gave them access to higher social status, or even job offers.

That, of course, leads to another risk. By hiring data coaches to game these institutions into accepting us, we would end up creating a mass of “average” humans who are forced to fit in. Anyone who dared to rebel against that type of system might be punished with less access to finance, fewer jobs, or even lower social status. Even medical care could be at risk.

It’s ironic that data meant to parse even the tiniest differences between us would end up forcing us to conform to more social norms.

Right now, we collect data with the assumption that it will be contained and controlled for the strict purposes we design, whether it’s a website, an app or something else. But we don’t give much consideration to how the collection of that data could affect its owners’ lives in the future.

When asking for data, if we were forced to consider what that data could be used for in five, 10 or 20 years’ time, would it change how much information we collect?

Or the type of data itself?

This is especially true as we start to see the rise of predictive data. In the past 10 years, gathering data has mostly been about drilling into databases to find insights about behaviour that already happened. Now that’s completely changing, as we use those same databases to predict behaviour that hasn’t happened yet.
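To make that shift concrete, here’s a minimal sketch in Python, using pandas and scikit-learn with invented column names and simulated data. The first step is descriptive: it summarises what customers already did. The second is predictive: it fits a model and scores a purchase that hasn’t happened yet.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical purchase log: each row is one customer's past behaviour.
rng = np.random.default_rng(0)
customers = pd.DataFrame({
    "visits_last_month": rng.integers(0, 30, size=500),
    "avg_basket_value": rng.uniform(5, 200, size=500).round(2),
})
# Simulated outcome, for illustration only: did the customer buy again?
customers["bought_again"] = (
    customers["visits_last_month"] * 3
    + customers["avg_basket_value"] * 0.2
    + rng.normal(0, 10, size=500)
) > 40

# Descriptive analytics: summarise behaviour that already happened.
print(customers.groupby("bought_again")["avg_basket_value"].mean())

# Predictive analytics: score behaviour that hasn't happened yet.
model = LogisticRegression(max_iter=1000)
features = customers[["visits_last_month", "avg_basket_value"]]
model.fit(features, customers["bought_again"])

new_customer = pd.DataFrame(
    {"visits_last_month": [12], "avg_basket_value": [80.0]}
)
# The output is a guess about the future, not a record of the past.
print(model.predict_proba(new_customer)[:, 1])
```

The difference matters ethically: the first output describes what a person did, while the second makes a claim about what they will do, and it’s that second kind of output that institutions could start acting on.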

If we’re part of a world that bases your financial capability, your job, your friendships and your opportunities on data, is predictive data an opportunity or a threat?

We’re not considering that question nearly as much as we’re obsessing over the latest programming languages that allow such models to be created.

There’s another part to this argument. As the growth of automation erases jobs, a universal basic income becomes an inevitability. In that scenario, what we contribute to society becomes worth more than what we earn, and if all our data is collected, aggregated and shared, that social contribution can be measured and judged.

In that scenario, your only saving grace is your genetic ceiling. If you have the genetic capability to do more, to contribute more to society, then your health becomes an intrinsic part of how you fit in. Without that advantage? You can imagine the accusations: “the data speaks for itself; you aren’t contributing”.

Do we want to help build that type of culture?

The rise of connected devices and predictive analytics should have us thinking at least about the types of data we ask for, how we store it, how we use it, and what the ramifications might be — even if those ramifications are decades away.

Our hungry pursuit of data could lead either to a world that’s efficiently personalised or to one that turns against us. Let’s spend less time focusing on the “how” and more on the “why”, and then rethink all of our data-collection decisions as a result.
