Five Questions with Azeem Azhar

Newlab · 3 min read · Nov 17, 2017

A strategist, product entrepreneur, and analyst, Azeem is known for one of the most respected (and addictive) newsletters in the tech industry, the Exponential View.

1. What is the most human thing you’ve seen a machine do? The question here is really, what makes something human, or human-like? I think machine-learning researchers get frustrated because, from their perspective, the goalposts of that test keep moving. I think the attributes that make something human really come down to agency, and agency is something I’m not sure any machine really has.

2. What type of simulated sentience is required for us to become emotionally attached to an AI device like Siri or Google Home? I expect we don’t need much sentience at all to start building attachment. It helps that the machines have some human-like attributes: Siri will tell you when she doesn’t know something. And Alexa, you know, unfortunately (and I think this is a somewhat hideous sign of where the research is going) is adept at fending off marriage proposals. The bar is pretty low for us to get attached to things.

3. What’s the most nefarious thing you see AI doing right now? Do you see anything that you think is bad for society or humanity at large? Sure: Facebook.

3+. Is that a mic-drop response? I’ll explain. Facebook is the quintessential model for what a company needs to do to execute an AI strategy well. But it’s nefarious. Where to begin? The idea that we are all connected in some fabric is a really powerful one. Unfortunately, the way the company makes its decisions, from a mission-driven perspective, and a very naive perspective on a business’s role in society, is responsible for the mess Facebook creates today. The product designers have elected to make decisions that trigger people to feel jealous, to always be on display: always preening, always showing, always competing with each other. It triggers a fear of missing out. I know we have a shot at building a global social network that doesn’t trigger what are essentially the Seven Deadly Sins.

4. You post a lot about AI research in China. What do you think about cultural differences as they relate to the future of AI? I want to note that I’m an amateur observer, but there are a few things that are quite interesting. Only a couple of studies have been done on this, but China and Western nations respond very differently to the Trolley Problem. In the West, in general, about 70% of us will say, “Pull the lever; kill one and let five live.” In China, it’s the other way around: about 70% say, “Listen, let it run.” That’s because there’s a stronger notion of fate in Eastern theology and Confucianism. Much more research is needed, but it’s interesting to me that that particular investigation pulls out some of the values that might shape the future of AI. Before there can be global best practices, those distinctions may need to be made clearer.

5. If you found out for sure that we were all living in a simulation, what would you change about your daily life? This is a really hard question. On the quotidian side, the things that make our lives normal, nothing would change; we’d get up and get on as best we can. But on the metaphysical side, I think it would kick off decades, even centuries, of arguments. Those arguments would let us move to a higher level, where we’re examining our own lives rather than ogling the Kardashians. Who knows, maybe we all live inside some teenager’s mega-PC.

Newlab is a global venture platform for critical technology startups building our sustainable future. Simply put, Newlab makes startups go faster.