Personal Freedom, AI & Why Elon is Wrong & Why the FB Data Breach is Just the Tip of the Iceberg

Rosi Haft
15 min read · Apr 9, 2018

While America has been up in arms (pun intended) over the right to have guns, there is one fundamental right that is being overlooked and not treated with the same significance: privacy rights. Many are upset about the recent Facebook data breach, but the less obvious, less finger-point-at-able issue is the privacy implications of security cameras. Between Google Maps and the apps and web pages that use our current location, it can be pretty easy for the entirety of the internet to figure out what we’re doing at just about any moment in time. It is surprisingly easy to hack a security camera, and even though you can take steps to prevent it, I still don’t think we’re taking the right approach.

As the founder of an augmented reality company, and having followed the pushback against Google Glass because of its embedded camera, this is something I’ve had to consider and take into account in launching our cutting-edge AR glasses. I’ve already seen people abusing their power, or at least pushing the boundaries of what’s appropriate, because of their access to extra information. With technology, blockchain, and cameras, the world can open up to us: access to jobs and work when it’s needed, transparency to know that companies and products are what they say they are, and a record of who has wronged us. But we can also quickly lose our sense of privacy, give power to people in ways they wouldn’t have it otherwise, and make ourselves vulnerable to misinterpretation and misrepresentation of who we are.

Life, legislation and literature have taught me a few things I’d like to share.

First of all, let me say that I believe in the right to bear arms. In no way do I believe this is exclusively meant to protect guns. Arms, IMHO, should be anything that allows us to protect ourselves. When the Bill of Rights was written, guns were the best way to create balance between the people and the government, between people and predators, and in some instances between people and those with the intent to harm. It is my hope that in the last few centuries we’ve evolved a bit and can look to other means of protecting ourselves. Even being able to protest for or against gun carrying, and being able to write this article, is a fundamental freedom that is not available everywhere. We are living in a day and age where the truth can be found much more easily, and these truths, with reasonable and manageable checks and balances, should be our first line of defense. Ideally, with the right justice system, it would be possible to gain the same protection guns offer (to be let alone), but without the killing.

Secondly, let me present the benefits of computer vision and why we want to keep cameras around. While taking a class at Stanford, I learned a great deal about how cameras, especially when incorporated into AR and VR headsets, can help prevent disease. They can be used to translate languages in real time, to teach people to read, to do personal training, and even to give you nutrition advice, and that’s not even the full list of things we’re working on in confidential mode at Lumenora. These advancements in tech can be incredibly significant and beneficial to so much of the world, but they can also be incredibly dangerous, leading us to the doom that the brilliant, capable, cut-through-any-sort-of-bullsh… well, you know… Elon Musk sees as inevitable. But if even he, who has done things the world’s top engineers and scientists could only have hoped to do, has no hope, then should we all just die now?
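
To make one of those benefits concrete, here is a minimal sketch of real-time text translation through a headset camera. It assumes OpenCV and pytesseract are installed; translate() is a hypothetical stand-in for whatever translation service or on-device model a real headset would actually call.

```python
# Minimal sketch: read frames from a camera, OCR any visible text,
# and hand it to a translation step. Illustrative only.
import cv2
import pytesseract

def translate(text: str, target_lang: str) -> str:
    # Hypothetical placeholder; a real headset would call a
    # translation API or an on-device model here.
    return f"[{target_lang}] {text}"

cap = cv2.VideoCapture(0)  # the headset's forward-facing camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Grayscale tends to help OCR on signs and printed text.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(gray).strip()
    if text:
        # In an AR headset this would be rendered as an in-lens overlay.
        print(translate(text, "en"))
cap.release()
```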

Reexamining Our Defenses

Before we start a mass-suicide mission, my hope is that we take a step back and see how much we are misusing and underutilizing our first lines of defense. Technology has given us transparency in ways that didn’t exist before. It is significantly more difficult to threaten, harass, or harm someone else without leaving some sort of digital footprint: texts, emails, phone calls, Google Maps tracking, security camera footage, etc. This is great when one party doesn’t want to engage in an argument. But when both sides go at each other, or when one person feels totally separated and segregated, we resort to gun violence too quickly, especially without appropriate methods and systems to explore emotions, to set boundaries, and to effectively show another person the pain they are going through alone or that someone else is putting them through. Facebook has an AI algorithm that is able to recognize when people are in distress, and it is very possible that these benefits of AI could be used as an appropriate first line of defense against our hurting and harming one another.
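
Facebook’s actual system is proprietary, but the general technique it points to, supervised text classification, is simple to sketch. This is not Facebook’s implementation; the tiny hand-made dataset below is purely illustrative, and scikit-learn is assumed.

```python
# Toy sketch of distress detection as text classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "I can't take this anymore, nothing matters",
    "I feel so alone and hopeless lately",
    "Had a great hike with friends today",
    "Excited about my new job next week",
]
labels = [1, 1, 0, 0]  # 1 = possible distress, 0 = not

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# A real system would route high-scoring posts to trained human
# reviewers, never act on the score alone.
print(model.predict_proba(["everything feels hopeless"])[0][1])
```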

It’s no big secret that some people think the USA’s legal system is broken. The work of Adam Foss, Dr. Carl Hart, and Alice Goffman has pointed to how many people commit crimes because they don’t know how to provide for themselves any other way, or because they lack the mentorship and support to make good decisions, to get out of bad situations, or to get away from love connections that take them down. And in the all-too-frequent story of people like Chris Wilson, the legal system fails to give them a reason to believe in justice or that they will be protected. So many people are using AI to make every day easier or to optimize how to make money. But for the many who are oppressed and failed by the system, AI isn’t helping; primal protection becomes needed. This is an opportunity not only to find new ways to use AI to solve these problems, but also to understand the boundaries we need to set so that technology does not become our downfall.

The radio was invented in 1895. Radios were used during WWII (and other wars) not only to stay on top of world events but to warn of impending danger. They were used to manage mass hysteria and to influence the war’s death toll. We still have them in cars today, and we’ve improved on the concept by making it more personal.

The television was invented in 1927. It wasn’t until the 1950s that it became a household item, and within 10 years there was significant pushback calling for it to be shut down. As Mr. Rogers shared before Congress not so long after, there is a fundamental need to help others get in touch with their emotions and sort through them.

For decades, we’ve had the choice to use technology for good, to connect the world, to fight the tough fights together. And frankly, I think Musk is wrong: AI won’t be any different. We can choose to use it for good, to figure out what benefits the world, or we can let it destroy us.

Technology has always danced around two topics: what is personal and what is general. Today, tech can be used as a first line of defense, and yes, as a slow, drawn-out, and expensive process to push back against oppressors, to make incremental changes in our lives that strengthen us, and to give others the opportunity to grow. The biggest difference lies in how we use it. If we build AI from the ground up as a system that does good for all, and if, in a global sense, we make sure it always stays within ethical guidelines that respect all human rights, not just the rights of a few, we’ll end up with a very different AI than the one we are building now. AI, in many ways, crunches numbers and optimizes based on the inputs we give it. Right now it’s mostly AI driving robots that walk and transport people, identifying people and holding them accountable. If this is all we build and invest in, I agree: AI will probably figure out how to drive all over us and prevent us from turning it off. If instead we figure out how to use AI as a line of defense for those who have no defense or offense of their own, I believe that AI could combat AI, an AI that inherently knows the difference between killing and not killing, and eventually between self-preservation and non-self-preservation. Yes, I am suggesting that AI should have a purpose beyond being a slave that makes human life easier and more affordable.
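
To make the “it optimizes based on the inputs we give it” point concrete, here is a toy sketch showing the same optimizer producing very different outcomes depending on whose benefit the objective counts. The utility numbers and the maximin objective are my illustrative assumptions, not anyone’s deployed system.

```python
# The same brute-force optimizer, two different objectives.
import itertools

people = ["wealthy", "middle", "oppressed"]
# utility[person] = benefit that person gets per unit of resource
utility = {"wealthy": 5.0, "middle": 3.0, "oppressed": 1.0}
BUDGET = 10  # units of resource to allocate

def allocations(budget, n):
    # every way to split `budget` whole units among n people
    for split in itertools.product(range(budget + 1), repeat=n):
        if sum(split) == budget:
            yield split

def best(objective):
    return max(allocations(BUDGET, len(people)), key=objective)

# Objective 1: maximize total benefit -> everything goes to the wealthy.
total = best(lambda a: sum(u * x for u, x in zip(utility.values(), a)))
# Objective 2: maximize the worst-off person's benefit (maximin).
fair = best(lambda a: min(u * x for u, x in zip(utility.values(), a)))

print("max-total:", dict(zip(people, total)))
print("maximin:  ", dict(zip(people, fair)))
```

Run it and the first objective hands every unit to whoever already converts resources best, while the second spreads them so the worst-off person is as well off as possible. Nothing about the optimizer changed; only the objective did.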

Why hasn’t this been done? Probably because the people with money have been doing the choosing, and they have been choosing things in their own favor. I think it is 100% possible to have AI we never need to worry about, if we make sure we are building AI that optimizes for all humans, and not just the wealthy few.

Diving More Deeply

Before I get too much pushback about how the pen may be mightier than the sword but isn’t a bulletproof vest, I will offer a corollary: how, in the wrong context, technology can also be used to harm us. In this age of digital media, we can be found just about anywhere in public spaces, our every move watched, and it can be much easier to ‘catch’ people doing things ‘wrong.’ Elon Musk recently warned that AI could become an immortal dictator, and after reading Brian Falkner’s Brainjack, I can definitely see where all of the next-gen tech currently being developed could be harmful. And I am sure you have all heard not only of Facebook’s AI developing its own language but also the story (fact or fiction aside) of Genesis, of how humans chose not to choose what was good for all, leading to inevitable cycles of trying to return to something. That, it is my best guess to assume, is what both Musk’s and Sam Altman’s talk of the matrix, or something like that, is getting at.

Somewhere between our primal instinct to kill anything that threatens us, an Orwellian future where there is indifference to anything that doesn’t provide instantaneous stimulation, and Minority Report, where the very idea of someone committing a crime in the future gets them tried for it, I think we need to reorganize and reprioritize what is important. This is especially urgent because in the last week alone, I’ve seen at least three major announcements about brain-computer interfaces (among them MIT’s mind-reading headset and HTC’s focus on multiple inputs). Despite the number of artists and thought leaders who have warned against the dangers of this technology, we continue to forge forward without offering any alternative or respite.

Let me get back to the main reason I am not working on my next-gen prototypes, investor meetings, and customer development, and why I am keeping you from all the very important things you are putting off doing right now: personal rights. I think all of these problems come down to how we treat each other as individuals. When America’s founding fathers created America and its undertones, they knew what they were getting at. It’s been a while, and it seems that many of us need a reminder. This reminder, I hope, will allow us to explore what the hell we are doing with tech, what the appropriate boundaries are, and how to prevent AI from being a b*, I mean plan B, when we fail ourselves.

While many of us have been focused on the right to have guns, we have stopped thinking about what the spirit of the document in which that right exists is trying to say overall:

We all should have a fighting chance to make it in this world, without being oppressed, without being violated, without someone else determining our fate. Our future should be ours and ours alone, and no one should underhandedly be able to take it from us without our first having a chance to choose.

Right now, I am in Berlin. There are many laws, rules, and regulations I was so kindly warned about upon arrival: always have a ticket on the train, and don’t try to torrent anything from the internet, or else a ‘controller’ will be there to set things straight. Both of these raise the question: how much am I being watched? Is it someone’s job to wait and see whether I am 10 minutes over my ticket time? How much of my Google searching is being checked to make sure I won’t be able to overthrow the German regime (yes, they are already outside my window, waiting to look in as I type and edit)?

The United States really became a leader because of a revolutionary concept: democracy and the right to a fair trial. I believe that if we reexamine these foundational concepts, insist that everyone should have rights, and make sure we are building tech and AI for everyone, tech that in fact benefits the many, the oppressed, those without hope, and those in need of protection, we will find a very different future and a brighter hope for AI. What if we were able to build an AI legal system that could settle disputes, or move people to new communities to protect them from gang violence? What if we used AI to crunch the numbers on which laws are actually useful and which ones harm personal rights and freedoms, cause detriment to society, or are skewed in favor of the wealthy? What if AI could become a layer of protection for all, instead of just giving more power to the privileged?

What if we made it a global rule that AI and computer vision couldn’t be used in any way that takes away the rights of another? What if we remembered that we hold the power to turn off, reroute, and reimagine AI at any point in time?

To Be Let Alone.

The United States has some federal rules and guidelines around privacy, under which we are supposed to have a right to ‘be let alone,’ meaning we shouldn’t be policed for every minor thing we do wrong, constantly in fear of laws being developed and enforced that prevent us from ever being able to ‘pursue life, liberty and happ(y)ness.’ Using security cameras to watch over U-Bahn passengers may reduce the cost of riding, and targeting someone within the hour of their having done something wrong may be, or seem, appropriate, but where do we draw the line?

Here is a what-if for you. What if an office manager simply didn’t like someone? Were jealous? Didn’t feel a woman, or a person of a certain race or religion, should be allowed the freedom to operate a business and build themselves up? What if they saw them working in a way that let the manager exercise the gray, interpretable areas of their power? What if those gray areas were captured on security camera footage, and used to push someone around, embarrass them, prevent them from doing business, and eventually justify their eviction? What if that person was really just working hard, and doing what they should? Or if there was a misunderstanding, or things weren’t as they seemed? What if there is no due process, and no clear way to explore the appropriate ways to handle the situation? The abuse of power can happen so much more quickly with technology, and if we aren’t careful, it can kill us as a people.

Everyone is worried about AI taking over, but I think the bigger problem is the way we mistreat one another. The real risk is AI (actual intelligence) not taking over: us not working together as a society to change things, and people who have access to more information about someone, as well as the power to push them around, without due process of law. Technology gives people the power to change the world, and gives many the power to change others’ lives. Laws can be used, as seen in Ferguson, to destroy people’s lives when unjustly and inappropriately enforced (75% of residents had an arrest warrant out for them, for things like jaywalking… is that cruel and unusual punishment?). With my AR glasses, it becomes significantly easier to walk down the street and see who those people are, but should we be allowed to do that? Should police be allowed to do that?

Should someone who is new to an area be held accountable for all of its laws? What do we do when some laws are considered unjust in some areas but not in others (that poor mom who went to jail for breastfeeding in public)? California recently tried to atone for the setbacks the war on drugs caused to communities. Many countries have also found corporal punishment and strict laws and guidelines to be ineffective and unnecessary. In Judaism, there is a saying that you shouldn’t boil a kid in its mother’s milk. While some think this means you can’t eat cheeseburgers, I think it means we shouldn’t punish a child for how it was raised, for the nourishment and mentorship it lacked, or for its inability to know how to do any better.

Maybe our first line of defense is offense: making sure that we support, educate, and allocate resources properly to prevent crimes. After all, the people listed above, along with Barbara Stitt and others, give me the sense that with a little TLC we could prevent most crimes, and in doing so strike a fair balance between agreeing to the laws that guide society and having them forced upon us… but that would ruin an establishment.

IMHO, to prevent technology from causing a significant amount of harm, we must first start with what is considered an appropriate way to treat another human. Musk is one of the most avid warners against AI, but he’s also known as someone who blurs the lines of appropriateness: taking ideas, not giving credit where credit is due, etc. I don’t know him well enough to know if this is true, but I have to wonder, especially from the books and other stories about him, whether he is someone who does not see boundaries in many ways, has hurt others as a result, and consequently hasn’t taken a moment to consider how to integrate values and balance into AI systems. He isn’t typically seen lifting others up, and it is fair to assume this perspective also influences his opinion of what AI can and can’t be. Or maybe it’s people like him, who are respected and aren’t walked all over, who never get the chance to explore the need for boundary setting… I can’t be sure, but I do know something needs to be done: we need to push back and prevent people from being kept down by AI, CV, and the machines we build. (Don’t worry, and don’t get me wrong, I still love you, Elon, and believe in you. xo)

To me, to get AI right, we need to get privacy right, and personal rights right, right(?). We need strict rules and guidelines around how to do this. We’ve been working on this at Lumenora, and I hope that you’ll put pressure on companies to be less corrupt and more transparent, clear, and upfront about policies, and to not compromise on what is considered an appropriate punishment for a crime. We must first make sure people are innocent until proven guilty and have the skills, tools, and knowledge not only to tell right from wrong but to act upon it. Maybe we should optimize AI to first account for a person’s progress and level of interaction, judging appropriateness before dictating, and making sure there is balanced responsibility before accountability.

Ideally, we would not be able to be charged with a crime without some sort of trial. While riding on a ticket that expired 10 minutes ago is pretty black and white (the ticket is expired), I think it should still be a human right, and a proper balance, to have intent established. Using cameras to monitor for crimes and hold people accountable will surely lead to the police state Musk warns about, but I think if we push back for the good of humanity, this doesn’t have to be the case.

If we aren’t able to get personal interactions right, how are we supposed to get human-computer interaction right? (Please don’t send me robotic sex-doll articles to suggest that’s a way to move forward.) The first AI we develop should be for, and in consideration of, one another.

To be honest, AI will be what we allow it to be. It can help us crunch the numbers on what actually gets us what we want: a chance at freedom, to reach our potential, to be recognized for our talents, and to have a fair chance to improve. At the moment, with so much drive toward profit, especially in Silicon Valley, where people are encouraged to cheat one another, to lie, and to be misguided about what to do next, I want to ask our AI leaders to take a step back and see how AI can be built differently so that it does not become a problem, and to recognize the clear boundaries of what is and isn’t okay. That would create a ripple effect leading us away from doom and toward a haven, a world we can believe in, where we are safe not because there are no guns but because we respect one another.

With so much drive to make profits and money, corruption easily sets in. Without consideration for our responsibility to one another, it will be hard to know the foundations for building the AI of the future.

We also want to take this time to announce Gabe Montes, who will be helping us make sure that we are using ethical means to build our AI algorithms.

In Conclusion

To have better AI, we need to start setting better rules and boundaries for what works between humans. We can use what we’ve learned from our interactions with one another, what works, what is fair, and where we are failing, to know which underlying methods to employ in AI.

Once we know how to treat humans appropriately, we will clearly know when to pull the plug on AI and tech. After all, isn’t it our desire for connection that will bring us back to Eden?


Rosi Haft

Founder of @Lumenora. Technologist, humanitarian, truth seeker, and believer in human potential.