The Future of Humanity Has a Chance Today, Maybe Not Tomorrow.

Frederik Pohl (the sci-fi author) once said: “A good science fiction story should be able to predict not the automobile but the traffic jam.” This quote leaves us with a lot to think about, especially these days, when everybody seems to be pointing at automobiles while ignoring the massive traffic jam ahead of us.

Traffic jams usually start in one of two ways: (1) an accident, construction or something unexpected congests the road, or (2) the system simply can’t handle the massive number of automobiles on it. The one we’re heading into? We’ve got both.

So where’s the traffic jam, and how could our current privacy standards block our last exit to avoid it?

A true story

A couple of years ago I lived in Shanghai (China), and I had a personal experience I will never forget. The Chinese government, in order to track everyone, requires you to notify the police when you change your address. In my first apartment the landlord did the paperwork for me, but that wasn’t the case when I moved to another one. My new landlord didn’t do it.

A week after I moved, on a Sunday morning, somebody started kicking my apartment’s door, so I ran to see what was going on… There they were: the police. They were yelling at me in Chinese, which I didn’t understand, so after a couple of minutes I called a Chinese friend to translate for me. Long story short, they took my passport and I had to go to the police station, expecting the worst.

After eight or ten hours of waiting I had my passport back. It was a strange, scary experience, but what shocked me the most was: how the hell did they know where I lived in the first place? I had been in that apartment for just a week, yet they had already figured out I was no longer at my registered address, located me and come looking for me.

Did they use facial recognition to track where I was?

Did they locate me using the data WeChat had (and still has) on me? If so, did they check my buying behavior, who I talked with, and what I said and when?

You betcha.

You can see in this video how facial recognition works in China:

https://www.youtube.com/watch?v=Fq1SEqNT-7c

Or in this one, how the Chinese authorities can find anybody in seven minutes. I can just imagine myself on one of those big screens, walking down the street:

https://www.youtube.com/watch?v=pNf4-d6fDoY

Some questions arise when you start thinking about this experience, but the two most important are: (1) What is the future going to look like if we go down this path? and (2) What will happen when the rest of the world starts copying these surveillance practices and we completely lose our privacy once and for all?

Well, that’s already happening. The US government has access to information from every US-based company (that’s most of the digital products we use in the western world), which means it can access information from all around the world, just as Edward Snowden revealed. (In fact, Amazon’s AWS hosts half the Internet on its servers, including services like Netflix, Spotify and even CIA computing power.)

Maybe the only difference between China and the US is that the Chinese actually tell you what they’re doing.

An existential problem

We’re about to discover the most extreme inequality we’ve ever imagined, all thanks to a single root cause: lack of privacy and ownerless data, which are leading us to three fundamental problems.

Let’s tackle them one at a time: the disease, the immediate consequence and the existential crisis.

1) Attention Based Business Models: The Disease

In the race towards profits, companies have forgotten that they have a social responsibility. Too often they excuse their bad behavior by saying (in private, though) that “a company has to make money.” Which is true. However, there’s a point where a company has so much influence and power that it acquires a social responsibility. Especially when billions of people use its products.

It turns out that big tech corporations (or should we call them media companies?) would agree with this on the record, and some of them actually do some social good, but they have an ingrained, malignant business model that spreads like cancer: advertising.

Advertising is a disease — a disease that thrives in the attention business.

Mobile apps, phones and social networks are designed to suck up our attention. There’s a race to the bottom for attention, and they’ll do whatever is necessary to steal it from you.

Some might say that they can change, but they won’t. It doesn’t matter what they say or how they say it. They are public companies, and the promise they made to investors doesn’t match the one they made to us. So in order to keep investors happy they have to keep their promise, which means earning more advertising dollars through massive manipulation. (I don’t say persuasion, because that word applies when the outcome is positive.)

This is nothing new, but now they have more powerful tools that don’t just grab our attention but change the way we think without our even knowing it. Regardless of what they say, they are not in the business of making a better version of ourselves. They are in the business of sucking up our attention at any cost, which leads to addiction, stress, anxiety, filter bubbles and damaged social relationships.

By the way, this is not just about fancy Silicon Valley tech companies. It also applies to big corporations selling your data to the highest bidder, for example cable companies like AT&T or Verizon, which sell your data by controlling your internet connection. (And that’s why the FCC voted to repeal net neutrality and let big cable companies control the Internet.)

2) The Trillion Dollar Job Market Take Over: The Immediate Consequence

With the rise of AI we’re heading towards a scenario that might not need us as workers, which in some cases is great, as long as we figure out a universal basic income or something else that would take care of the people displaced by robots. The debate on this is getting more urgent, and a lot of people are talking about it. Elon Musk has said we urgently need to work on a universal basic income. Bill Gates has suggested that we should collect taxes from robots if they displace humans. There are several suggestions, but none of them attacks the root of the problem.

There are two opposing sides in the argument about the future of work: a positive one and a negative one. The latter is easy to guess: ‘they are going to take over our jobs’ — which will happen eventually. The positive one is ‘new job opportunities will come up’ — true as well. That’s a no-brainer. The questions to ask are: What’s going to be the immediate impact? How will this change affect people’s lives?

Yuval Harari in a TED interview said:

“If you look at the trajectory of the new industrial revolution, when machines replaced humans in one type of work, the solution usually came from low-skill work in new lines of business. So you didn’t need any more agricultural workers, so people moved to working in low-skill industrial jobs. And when this was taken over by more and more machines, people moved to low-skill service jobs.
“Now, when people say there will be new jobs in the future, that humans can do better than AI, that humans can do better than robots, they usually think about high-skill jobs, like software engineers designing virtual worlds. I don’t see how an unemployed cashier from Wal-Mart reinvents herself at 50 as a designer of virtual worlds, and certainly I don’t see how the millions of unemployed Bangladeshi textile workers will be able to do that.
“If they are going to do it we need to start teaching the Bangladeshis today how to be software designers, and we’re not doing it. So what will they do in 20 years?”

That doesn’t make developing a universal basic income an easy task. We are in a global economy, so universal basic income can’t be a national solution: technology won’t just take over American or European jobs, it will take over jobs in every part of the world. So how do you build a universal basic income system?

Let me put this another way:

How can we build a universal basic income system globally while most governments are worried about short-term results on a national scale? Focusing on the long term at a global scale doesn’t make a politician look good.

The truth is a bitter pill to swallow, but we’re going to face one of the biggest losses of jobs of all time. And no one is paying attention.

If you look at the top 10 companies by market capitalization, the list looks very different than it did 10 years ago. In fact, the top six are tech companies (American and Chinese ones). All of them have invested billions of dollars in AI, most have also invested a big chunk of money in virtual and augmented reality, and they are all focused on providing services for companies (Google with G Suite, Facebook with Workplace…).

Consider Google Duplex, an AI assistant that talks like a human, and for the first time you can’t tell it’s a robot. In fact, at Google I/O 2018 they did a live demonstration in which the AI system held a couple of appointment-booking conversations, and the person on the other end of the line didn’t even notice.

Now consider these two questions:

  1. How many jobs do you think a system like this could take over? Call centers, personal assistants, telemarketers, and on down the list.
  2. How do you think they have been able to perfect this system? With the data we’re giving away to them for free (and have been giving away for a really long time).

This technology will change the world, and I’m sure in the long-term, for the better. But the consequences are disastrous for us in the short-term.

You’re supposed to be profiting from your data (if you choose to), not them!

And the beauty of the deal is that they’re stealing your privacy and data for cheap. What do you get in return? “Free” products and a great shot at being replaced at your job by a robot.

It turns out Google Duplex is just a tiny example. So let’s put it this way:

What Facebook, Google or any big tech company wants is to maximize revenue, right? Put yourself in the shoes of a big investor holding a big chunk of shares in any of these companies. From a business point of view, if circumstances offered the chance to lead one of the biggest business changes in the world, one that could make billions of dollars in revenue, would you go for it? From a business point of view that’s a once-in-a-lifetime opportunity. From an ethical point of view, though, it’s a disaster. But Wall Street has never shown any sort of empathy or care for human beings, has it?

3) Liberal Democracy Is Doomed: The Biggest Existential Crisis

Our thoughts in the open
When I studied marketing, a new field was starting to emerge: neuromarketing. I loved the idea of studying the brain in order to create products that better satisfy our needs and wants. However, as always, there’s a fine line between marketing and manipulation. Even though neuromarketing is still in its infancy, most marketers sell it as the “buying button of the brain.” And that’s BS, for now. But that’s going to change when we upload our thoughts to the cloud. I don’t know about you, but given marketers’ track record of exploiting every single medium, I don’t trust most of them, especially when we are at our most vulnerable. That means when they can read our thoughts.

As I’ve pointed out when introducing this project, there are a few companies (including Facebook) working on a brain interface. It’s a matter of time before we replace our phones and start using something completely different that connects our thoughts to the cloud.

(Read WaitButWhy’s article about Neuralink.)

Liberal democracy has to reinvent itself.
Again, Yuval Noah Harari nailed it in another interview, here are three quotes from that debate that we all need to think about:

“Once you have an external algorithm that understands you better than you understand yourself, liberal democracy as we have known it for the last century or so, is doomed. It will have to adapt to the new conditions. It will have to reinvent itself in a radical new form or it will collapse.”
“If there’s an algorithm out there that understands your feelings better than your own mother and can press your emotional buttons better than your mother, and you won’t even understand that this is happening, then liberal democracy will become an emotional puppet show.
“We have these slogans of ‘listen to your heart’, ‘follow your heart’… but, what happens if your heart is a foreign agent? A double agent serving somebody else.”

So, what happens when our thoughts are in the cloud, a cloud that governments and big companies have access to, and AI algorithms are so advanced that they can control our emotional reactions through the information we provide from a fancy brain interface?

What these three arguments have in common is that the less privacy and the more openness with our data, the better they do. It’s no coincidence that the two most powerful countries in the world want to kill privacy: China and the US.

This is way bigger than the loss of millions of jobs; that’s the medium-term consequence. In the long run we’re dooming democracy as we know it. All because we don’t protect our privacy and digital data.

AI’s fuel is data. The more data, the better and faster it improves. How do they extract more data? By selling the idea of “openness” and getting us to give up our privacy in exchange for “free” and “customized” products.

In the end, technology is neutral: you can use it to oppress or to liberate. The problem is the greed of the people who control technology. And now greed has gotten greedier.

The Future of Humanity Has a Chance Today, Maybe Not Tomorrow

There is good news and bad (and really bad) news. The bad news is that they already have enough data on us. The really bad news is that the value of data increases over time: today, for example, an AI algorithm can predict aspects of your mental health, so what will they be able to analyze tomorrow with the data they already have on us? The good news, though, is that we can change the future. As the saying often attributed to Abraham Lincoln goes, “The best way to predict your future is to create it.”

It is up to us, the people, to make change happen. It’s always been that way.

We need to start reclaiming our privacy and establish full recognition of data as personal property.

The future of humanity depends on how we deal with privacy today.

We can’t stop technology, and I’m not trying to ignite a Luddite-like movement. But there is a problem here, and if we don’t come up with a solution, fast, there will be huge economic gaps between classes. And not just economic: privacy inequality is even worse.

If you’ve watched Black Mirror’s episode on social credit, you know what it looks like. But you know what? The Chinese government is already forcing the use of a social credit system, where if your score isn’t high enough you can’t even leave the country: they block your access to airplanes. If that’s not enough, they’re also testing emotional surveillance. As the South China Morning Post pointed out: “Government-backed surveillance projects are deploying brain-reading technology to detect changes in emotional states in employees on the production line, the military and at the helm of high-speed trains.”


I’ve lived in China for a while, and what surprised me the most is that, when it comes to technology, the West ends up copying what China does. And there’s a big chance that many countries and tech companies will adopt these measures.

So, what do we do?

The beauty of this deal is that no one is held responsible, because nobody is paying attention.

The truth is we’re deep into it now; we’ve been drinking the Kool-Aid for a long time.

This is happening. We need to deal with it NOW. As I said at the beginning of this article, we’re heading into the worst traffic jam we’ve ever imagined and there’s only one exit left — we can’t miss this one.

If we don’t do this for ourselves, let’s at least do it for future generations. This will be our legacy. We have such an important influence on the way they will perceive the world that we need to give them the best cards to play for a great future. We can’t sacrifice long-term importance for short-term gains.

In the end, we can teach them to embrace openness, or to stand up and defend their privacy. “Just like we did,” we will say. Won’t we?

Let’s make change happen.

Join the PrivateID Movement and build with us the kind of world you want to live in.