Confirm Humanity
Before you read this, a machine needs to confirm that you’re a human.
Look at the image below. What do you feel? Is it happiness, or maybe hunger? Does it make you want to play with the dog? Do you feel sorry for him because he can’t get a piece of the pie?
If the answer is yes to any of these possibilities, then you’re probably a human. The key here? Feeling.
The emotions that images like this one trigger in your brain are considered to be a human-only capability. But perhaps not for long. As the writer and historian Yuval Noah Harari explains, humans are essentially a collection of biological algorithms shaped by millions of years of evolution. There’s no reason to think that non-organic algorithms couldn’t replicate and surpass everything that organic algorithms can do.
We are already teaching machines to identify, for example, the pug in a picture of several dog breeds, to tell whether the subject of a photo is showing some sort of emotion, and to distinguish a human from a bot as part of an authentication process.
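For readers curious what “teaching a machine to identify the pug” looks like in practice, here is a minimal, illustrative sketch using a pretrained image classifier. It is not the setup behind the examples above; it assumes PyTorch with torchvision 0.13 or newer, an arbitrary choice of ResNet-50, and a hypothetical input file named dog.jpg.

```python
# Minimal sketch (illustrative only): classify a dog photo with a pretrained
# ImageNet model. Model choice and file name are assumptions, not the
# article's setup.
import torch
from PIL import Image
from torchvision import models

# Load pretrained weights; the ImageNet label set includes "pug".
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # resize, crop and normalize as the model expects

image = Image.open("dog.jpg").convert("RGB")   # hypothetical input image
batch = preprocess(image).unsqueeze(0)         # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top_prob, top_idx = probs.max(dim=0)
print(f"{weights.meta['categories'][int(top_idx)]}: {top_prob.item():.1%}")
```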
That’s it: we’re teaching machines to think like humans. And guess what, they’re doing a better job than us (think of AlphaGo, which defeated the legendary Go champion Lee Sedol in 2016). But what’s next? Teaching machines to feel like humans? Or upgrading humans to be more like machines?
In this article I want to share with you some of the questions that arose during the panel discussion we hosted at the innogy Innovation Hub’s UnConference 2018.
No worries if you didn’t pass the test above; you can still read on. This article is for both human and machine audiences.
Machines becoming more human
What would a future in which most functions are taken over by machines look like? How can we ensure that humans remain relevant and that machines treat us in the most empathic way?
So let’s get started with the protagonists of this story. On the left side of the ring is Trent McConaghy, a human who’s also a machine, and vice versa. He is the founder of BigchainDB and Ocean Protocol and an avid driver of Moore’s Law. On the right side is Marco Richardson, a human with enhanced capabilities (e.g. super-vision with HoloLens) who is building Inclusify, a startup whose goal is to support inclusion through digitization. In the middle, moderating the debate, is Kerstin Eichmann, human and leader of the Machine Economy investment chapter at the innogy Innovation Hub, with some serious machine and human empathy capabilities.
Technologies, Tools and Incentive Schemes
What types of technologies or tools will have the most impact in fostering ethical machine-human interaction in the future?
Marco: There are some very interesting use cases here. Take impairments, for example: people who have speech difficulties, or who are simply drunk. Say you are drunk and try to call an Uber with your voice assistant (Siri), but it doesn’t understand you. Or what if you have an accident and your voice assistant doesn’t understand you? We need more fault-tolerant voice assistants; current ones are built only for the perfect persona, someone who is 30 years old, healthy, beautiful, smart and perfect. That’s why we need more inclusion in technology, starting with AI and speech recognition.
Trent: The two most ethically infused technologies are AI and blockchain. AI because it cuts to the very heart of what it is to be human. E.g. “I’m creative, that makes me special/human.” Nope, AI can be creative too. And blockchain because it can get people to do things through incentives, using internet money that it prints on the fly. So every line of code might imply an ethical decision; if you’re not explicitly thinking about it, then it’s likely you’ve implicitly made a poor ethical decision.
Liability and Machine Philosophy
Let’s assume Tesla produced two models of a self-driving car: the Tesla Altruist and the Tesla Egoist. The Altruist sacrifices its owner for the greater good; the Egoist tries everything to save its owner. Tesla would leave it to the market to decide: customers could buy or use the car that best fits their philosophical view. Who do you think is liable for an accident then? Tesla, the owner of the car, or the car itself? (This question was inspired by Harari’s 21 Lessons for the 21st Century.)
Trent: We’re in symbiosis with machines. They’ll have rights, we’ll have rights. There will be a mix of human and machine intelligence, and they’ll all have rights.
Marco: Machines work for us human beings, not the other way around. So we should be responsible.
Machines who can feel — Robots with depression
How do you rate the road to game-changing emotion analytics and a real-world implementation of emotion-driven human-machine interaction?
Marco: It is important that machines can deal with human emotions, but it’s a different story if we want them to have “feelings”. WE make the decision. From an inclusion point of view, it is important that we enable THEM to know how WE feel. This makes us humans unique.
Trent: I view emotions as a form of symbolic compression for modeling the world and deciding how to act on it next. Machines will have all sorts of ways to compress information; perhaps some of them will emerge as what we would recognize as “emotion”.
Upgrade of civilization
Do you think increasing human-machine interaction can be seen as an upgrade of civilization? What are the implications if such an upgrade happens disproportionately across different parts of the world? Will it deepen inequality instead of leveling the playing field?
Trent: It will likely be disproportionate at first, whether we like it or not. The future is not evenly distributed. UBI and USI will hopefully help to equalize opportunities. My main fear is that the first tiny group of humans finds a way to amass all the power. Hopefully that doesn’t happen. I think our best bet is to gradually join the machines via ever-higher-bandwidth interfaces between our brains and machines, what I call the BW++ scenario.
Marco: With the fourth industrial revolution and the large number of connected devices and AI, the good thing is that we will see more objective decisions: less emotional, less subjective.
Mechanize humans vs. humanize machines?
Which vision should we be working towards?
Trent: In the near term, we’re already “natural-born cyborgs”: we naturally use tools, and our brains immediately treat those tools as part of our body, whether wielding a pen or a sword, typing on a keyboard, or riding a bike. It’s a natural symbiosis of man and machine. This extends to tools that also use AI. I wrote AI-based CAD tools that circuit designers used to design chips, and we always aimed for a feel of symbiosis between the CAD tool and the designer. I think we achieved that aim.
At the innogy Innovation Hub we are a thesis-driven corporate venture fund. We believe the future client of the energy system will be a smart, autonomous machine able to generate, aggregate, trade and store energy on its own. Thus, we invest holistically in the building blocks of Web3 that enable this vision: protocols, middleware solutions, service-layer components, key financial instruments and DApps that together shape the machine economy.
Sources and some more learning/entertainment items:
📙 Read
- Harari, Y. N. (2018). 21 Lessons for the 21st Century. New York: Spiegel & Grau.
- Harari, Y. N. (2017). Homo Deus: A Brief History of Tomorrow. Vintage.
- O’Connell, M. (2017). To Be a Machine: Adventures Among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death. New York: Doubleday.
- Simonite, T. (2016). Teaching Machines to Understand Us. MIT Technology Review. Retrieved from https://www.technologyreview.com/s/540001/teaching-machines-to-understand-us/.
- Suarez, D., & Tann, C. H. (2015). Darknet: Thriller. Reinbek bei Hamburg: Rowohlt Taschenbuch Verlag.
- Wenger, A. (work in progress). World After Capital. Retrieved from https://legacy.gitbook.com/@worldaftercapital
🚗 Test
- MIT’s Moral Machine (moral decisions made by machine intelligence): http://moralmachine.mit.edu
📺 Watch
- AlphaGo https://www.alphagomovie.com
- [Fiction] Philip K. Dick’s Electric Dreams https://www.amazon.com/dp/B075NV935S
- [Fiction] Altered Carbon https://www.imdb.com/title/tt2261227/
🎤 Debate
- https://www.kialo.com/general-ai-should-have-fundamental-rights-6295/6295.0=6295.1/=6295.1
- https://www.kialo.com/transhumanism-is-the-next-step-in-human-evolution-13564/13564.0=13564.1/=13564.1
- https://www.kialo.com/artificial-intelligence-ai-should-an-artificial-general-intelligence-be-created-3529/3529.0=3529.1/=3529.1
- https://www.kialo.com/who-should-self-driving-cars-kill-1546/1546.0=1546.1/=1546.1