Humanity, Corporate AI and privacy…

I invite the reader along for another mental exercise: to entertain the notion that the collective intelligence of citizens can build a future superior to what can be built by our outdated procedures of governance, our predatory economics, and a confused approach to capitalism that is evolving our networking tools into something else entirely.

Many people right now proclaim we find ourselves in a nation that is “more divided than ever”. Perhaps they’re right. I’ll be the first to admit that on the surface, it certainly seems that way. I do, however, think it’s worth the reminder that it’s a matter of perception.

The internet, social media especially, is a massive repository with so much beauty, brilliance and expression. Naturally it’s also rife with unsavory characters, bizarre beliefs, lifestyles, opinions, violent and hate-filled content, and unproductive, non-factual arguments. In many instances, when internet users view such content they can find themselves responding in a similar fashion, especially if stressed by their own environment. Perhaps we see a friend who re-posted a bogus article after reading only the title. With equal frustration at the world, we bring down the humiliation-hammer… hard, fast and public. If certain environmental stressors weren’t bearing down on us, might we have responded differently?

I think we can admit it’s obvious some communities have been dealt a better hand than others. While we are all doing what we can to carve out a little bit of happiness in this life, we each face different obstacles and dramatic variance in the growth opportunities provided by our individual environments. How many people right now feel lost, flailing in an environment they can’t seem to surmount, who are otherwise, at their core, wonderful individuals, or have the potential to be?

Are these the key elements that make up the sum of one’s character? An awful snapshot of you online certainly suggests so to those who don’t know you. Is this an accurate sample of the population to draw from? I feel people are seeing a mask and assuming that’s who a person is. I challenge this notion. I feel this kind of “anecdotal” assessment of humanity is often treated like an open-and-shut case.

More alarming is that corporations and governments draw similar conclusions about us from the same type of data-mask, and it plays a role in how they treat us.

“We need to start organizing societies to prepare them for the post privacy era...”

There can be no doubt Dr. Michal Kosinski is exceptionally knowledgeable on such topics. I also believe him to be a talented and superbly intelligent data scientist and psychometric analyst. Additionally, he seems to be quite pleasant and kind, and the information provided in this three-part video is very informative.

However, someone like Dr. Kosinski should never advocate the end of privacy as something we need to throw our hands up and go along with. I know the trajectory towards a zero-privacy world is desired by the data-hungry corporations and governments who have fallen in love with it. While it seems to be a permanent path humanity is stuck on, it’s not.

This man has the power to influence other data scientists and individuals in both related and non-related fields with his impressive work and respectable resume. These scientists’ influence can help shape the way industries and government agencies interpret this data, which could lead to more responsible usage of it. While outlining the benefits of these algorithms, he also points out the flaws and dangers of Big-Data conclusions, and the threat of data markets and how they have changed the whole game, then proceeds to tell us that the sooner we realize this:

“The sooner we can focus on making societies more tolerant and open-minded. Educating the voter to make them more difficult to manipulate, and basically organizing the societies and the people for the post privacy era.”

I have a few questions about this statement:

Who is the “we” that is going to focus on “making us more tolerant and open-minded”? Tolerant and open-minded to what, specifically? These are broad terms.

Is he saying we should be open-minded to the idea that no-privacy is good for humanity?

Should we be tolerant of the fact that corporations and governments will continue this invasive trend? Redefining their tactics to be more subtle and effective at capturing data people don’t know is being captured?

Perhaps he means more tolerant and open-minded of each other? If this is what he meant, I’d love to hear him expand further on how he thinks this will unfold while we are all casually strolling towards a “post-privacy era”.

He also says we need to focus on educating voters so they are “more difficult to manipulate”. Again, who is the “we” that will be focused on “educating” us? Data scientists themselves? Perhaps he means the companies and government agencies who actually create and shape the environments that capture our digital footprints for the purpose of concentrating information, wealth and power? Are Facebook and Google going to educate us on all the details of how they sell this data and who they sell it to so we might be better informed as voters? I imagine being honest and transparent about this would hurt the business model and the shareholders would not be happy.

So then, who will provide this “education” and what information will it contain? Will this information be conveyed in our news feeds? Orchestrated by the same algorithms adapting and evolving their own processes that “extend beyond the comprehension of its designer”? Will everyone receive the same “education”? Or will it be custom-tuned based on the algorithm’s conclusion about our intelligence? Will the data collected also include whether or not we each read this “educational” material? Or do we need to sign up for Kosinski’s mailing list, where he and his colleagues can educate hundreds of millions of voters personally? Won’t big corporations get mad at him for messing with the business model by teaching us the ways we are manipulated?

Or perhaps he spoke a catch-22 without thinking through the logistics? I’ve done that. Though I feel that would be an important thing to do before declaring to the public that we should give up on privacy.

Why is he not vividly expressing the need for privacy in society? Especially while in such a unique position to advocate for it?

He doesn’t seem to have an understanding of how essential privacy is to the human organism. If he understood its importance, he would know why it needs to be preserved.

An alternative path away from this whole mess is quite easy to comprehend if you aren’t looking at it from within the modern academic and corporate environments that ingrain this linear perspective.

In future writings I will explain how privacy is crucial to the integrity of the human future and how easy it will be to preserve it without losing any of our modern conveniences of connectedness OR the genuine benefits of Big-Data. In fact, this new approach can outperform our current social and economic industries.

When I watch Kosinski say this, I can’t help but see a man who has given up hope on something priceless. I see a man who has proclaimed that, despite all his knowledge, he has absolutely no idea how to address the privacy violation habits expanding within our misguided agencies and confused industries.

Why should we give up privacy? So a number of big corporations can show improvements to their bottom line ensuring shareholders are happier than they were last quarter? I’m not sure that’s sustainable.

Do venture capitalists deserve that synthetic warm and fuzzy feeling from their ROI more than the next generation of children deserve to experience what privacy even is, and why it is crucial to their life-long development?

I feel people are entirely different at their core in a way Big-Data and search results can’t compute. While I’m quite cognizant of the habitual pitfalls of humanity, I honestly see more evidence that the majority of people are good, intelligent, level-headed, logical, caring, and very capable thinkers who don’t need algorithms to tell them what they like. I also have an understanding of how web/app interfaces and environments are designed.

You are not your digital footprints, and your Facebook profile does not represent who you are as a human.

While any one of us can get lost and/or lose sight of who we are in various chapters of life, or when personal circumstances take a turn for the worse, I do feel people are a product of their environments, for better or for worse.

This naturally includes the human Dr. Michal Kosinski and the millions of other human beings whose personal data he and his colleagues have analyzed to draw conclusions from. Not only is he analyzing people’s behavior while they are engaged in certain types of environments that by their very design cultivate addictive behaviors, but the data is incomplete…

What of the elements of humanity that are not captured by an algorithm? Elements of humanity whose very essence can only exist in the presence of individual privacy.

Just because we have more data than ever on human behavior doesn’t mean it’s the whole story. It’s certainly not enough to warrant suggesting the end of privacy as a responsible course of action.

Every citizen is like a nerve in the human organism, capable of relaying feedback that is crucial to its homeostasis. We have had to rely on governance that is forced to ignore much of this feedback, or that was never in a position to receive it due to logistical constraints. Naturally, issues compound and valuable citizen input goes unheard. This direct result of the system’s limited capacity leads us to today: a governing system with too few brains and too few perspectives to address the complexities of our modern society, especially its incoming future.

Even if there were enough perspective-diversity, the current political environment is too constipated with wealth, power and corporate interests to be efficient. This includes our local governing systems.

This bottle-necking and backlog of unaddressed issues cultivates environments that have dramatic impacts on the people living in them, especially local issues with no digital representation. The historic echoes and modern symptoms of these impacts are then captured by Big-Data collectors as behavioral “facts” about you to create an assessment of who you are as a person. This includes your children, your parents, your grandparents, your friends. The line we thought was drawn in the sand will only get blurrier moving forward. While such data is predominantly used for advertising, why do people assume that will remain its primary use?

Despite all that, the efficiency we need in policy making is possible to achieve but our governing system needs upgrades, maintenance and most importantly, a new perspective-rich outlook. Not just new faces for the same roles.

Facebook recently announced that they have staff “whose role it is to help politicians and governments make good use of Facebook”.

In the same article, a strategist (responsible for “Brexit”) tells us:

“You can say to Facebook, I would like to make sure that I can micro-target fishermen in certain parts of the UK so that they are specifically hearing that if you vote to leave that you will be able to change the way that the regulations are set for the fishing industry. Now I can do the exact same thing for people who live in the Midlands, who are struggling because the factory has shut down. So I may send a specific message through Facebook to them that nobody else sees.”

Another brags:

“This is where you can do the interesting stuff,” said the campaigner, who wished to remain anonymous. “You are not just advertising to them once; if they click, you know a specific person has shown an interest. You can feed that back and know who and where they are, down to the postcode. Then you can change the messaging to suit what you need politically.”

I thought representatives were supposed to represent the will of the people? Not hire campaign companies to purchase our personal data from Facebook to manipulate us and create custom-tuned propaganda to get an emotion-based response out of you. Often using psychometrics to manufacture stories that play with your hopes and dreams of the future by pretending to give a shit about the things you love and care about. All of this happily facilitated by Facebook’s business model. Each time there is public outcry they will “fix” the problem and subsequently find a new, more effective way to let us be manipulated for their gain.

Will this kind of manipulation and data collection become more invasive when we enter the age of telepathy?

“Today at F8, Facebook revealed it has a team of 60 engineers working on building a brain-computer interface that will let you type with just your mind without invasive implants. The team plans to use optical imaging to scan your brain a hundred times per second to detect you speaking silently in your head, and translate it into text.”

Telepathy would be a huge advancement for humanity, and if the tech interface is first-party secure and open source, I might be first in line, as I am a huge proponent of these kinds of advances. They make me terribly excited.

However, if corporations like Facebook own the exclusive keys to the tech, you are literally turning all your thoughts over to Facebook (or Google or Amazon etc...): a publicly traded company that is financially incentivized to share your literal thoughts and personal information with other corporations willing to mislead you, even on topics of great importance like an election. They also keep the content of your drafts, even when you delete them. In the same way you used to type out an angry comment or email and, before you posted it, a change of heart led you to erase it: well, that data is stored by your favorite tech giant, and it doesn’t account for what triggered your change of heart. It is added to your data-mask by their algorithms to draw more conclusions about who you are and sell to others who do the same. What will they do with your thoughts?

Perhaps the new advertisements will come to you in the form of thoughts? All of a sudden, for some reason, you’re craving menu items from a nearby restaurant you’ve never been to. Is it you craving it? Or is it an advertisement disguised as a thought? It can be hard to tell, just like those old sponsored news stories we used to get tricked by back in 2014.

Or maybe, right before you are about to cast your vote during an election, a strange unforeseen feeling comes over you. A thought slips into your mind and makes you feel justified in switching your vote last minute. After 30 minutes, that feeling begins to wane. Wait… was that my own thought, or a political psychometric advertisement disguised as a thought?

Stop and think about this seriously for a minute. Does anybody else see the perfect storm that is brewing?

In the same way Facebook is trying to predict suicidal behavior in their users, might they try to predict criminality in the future? Perhaps turning your thought-stream over to authorities because the AI’s statistics, which you’ll never get to see, have concluded you’re likely to commit a crime, prompting a Minority Report response at your office because you were having a bad day and merely thought about punching your CTO in the face. This might inspire new changes to the corporate business model to take advantage of this new paradigm…

Especially years in the future, when their shareholders constantly demand a better bottom line quarter after quarter. Would this data include any thought you have while logged into Facebook? What about your teenager’s thoughts while he or she happens to be standing near Amazon’s Echo 7? Or, for that matter, any IoT device that was force-upgraded against your will, bathing your thoughts with the latest malware known as “Windows 14”… What might they be willing to do to extract more profit?

“Act now, apologize never…”

There’s nothing wrong with the tech, but there is so much wrong with the environment from which it is being birthed. That environment will play a major role in how it is used in society. Or used against us. Remember, the name of the game is subtlety.

I believe that for some time now, many of our technological tools have consistently outpaced our social and societal development. I feel most people do not have a grasp of the true nature of many new emerging technologies, or of the scope of their benefits versus their potential ramifications. It won’t be slowing down either.

I believe the only way we can maintain a healthy society is by reclaiming our personal data as we move forward. We must solidify individual privacy without losing our convenience and connectedness. And that can be done.

To bridge this gap in a way that doesn’t further inflame the situation, but soothes it instead, we need the right environment for citizens to conceptualize and implement solutions.

Solutions abound in the minds of citizens, but they lack the co-operative tools to implement them.

In order to cultivate improved environments for social interaction, civic engagement, human rights/privacy protection, economic prosperity and more, we need the right tools for the job… Those tools are here now and we get to use them differently, all while integrating new developments and breakthroughs to keep pace.

My intention is to help upgrade the capabilities, opportunities and tools available to every citizen so we might get a better picture of this human condition.

I plan to help assemble tools that will be orchestrated by the people to steer their own future. I feel if done right, humans can carve a new path if they begin to distrust the one they are currently on.

Taking a step towards the genuine preservation of the individual, while simultaneously connecting us more.