Why You Shouldn’t Always Let Persuasive Arguments Persuade You: An Interview with Byron Reese
“Good debaters can argue either side equally well. So the person who seems right could very well simply be a better debater.”
I had the pleasure of interviewing Byron Reese, the CEO and publisher of Gigaom, and a futurist, technologist, author, speaker and the host of the Voices in AI podcast.
Byron has been building and running internet and software companies for twenty years, and holds patents, with more pending, in disciplines as varied as crowdsourcing, content creation, and psychographics.
Of the five companies he either started or joined early, two went public, two were sold, and one resulted in a merger.
Byron’s newest book is The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity; an excerpt is provided below.
What are your “5 Things I Wish Someone Told Me Before I Became CEO” and why?
- Good ideas and bad ideas look exactly the same before you do them. It is only in retrospect that they become so obviously good or bad. Who would have guessed that Sharknado would become a profitable film franchise? Or that New Coke would fail? So if you have no crystal ball, how do you move forward? Simple: learn how to test ideas without fully committing. After all, enlightened trial and error outperforms the reasoning of a flawless intellect.
- The person who makes the better argument is probably not any more likely to be correct than the person who is less persuasive. Good debaters can argue either side equally well. So the person who seems right could very well simply be a better debater. In other words, don’t let persuasive arguments persuade you.
- You will be wrong more often than you are right. Or at least, I am that way. There are more wrong choices than right ones. The trick is to be wrong about little things and right about big ones.
- When you read about successful companies, it often looks like things went smoothly. In my experience, every business is an awful, painful struggle with constant setbacks punctuated by occasional, very occasional, victories. It is hard to build something from scratch. Maybe for some people it is easy, but I suspect that for most startups, most days feel like you are walking along the ragged edge of disaster.
- So that’s 1 to 4. It is hard to tell bad ideas from good ones, it is hard to discern the path forward, you will be wrong more often than right, and most days will be struggles.
If you are fine with all of that, or even better, love that — then perhaps you can be a great CEO.
Can you please give us your favorite life lesson quote? Can you share how that was relevant to you in your life?
“Why do we still read Shakespeare 400 years later? We read Shakespeare because we still know all of those people because humans don’t change…”
How can our readers follow you on social media?
Voices in AI Podcast https://voicesinai.com/
TED Talk is here.
Speaker Demo Reel: https://vimeo.com/243008383
Thank you so much for joining us. This was very inspirational.
The following is an excerpt from GigaOm publisher Byron Reese’s new book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
The Fourth Age explores the implications of automation and AI on humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices. Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”
One of those deep questions of our time:
If a computer is sentient, then it can feel pain. If it is conscious, then it is self-aware. Just as we have human rights and animal rights, as we explore building conscious computers, must we also consider the concept of robot rights?
In this excerpt from The Fourth Age, Byron Reese considers the ethical implications of the development of conscious computers.
A conscious computer would be, by virtually any definition, alive. It is hard to imagine something that is conscious but not living. I can’t conceive that we could consider a blade of grass to be living, yet classify an entity that is self-aware and self-conscious as nonliving. The only exception would be a definition of life that requires it to be organic, but this would be somewhat arbitrary in that it has nothing to do with the thing’s innate characteristics, but merely with its composition.
Of course, we might have difficulty relating to this alien life-form. A machine’s consciousness may be so ethereal as to be just a vague awareness that occasionally emerges for a second. Or it could be intense, operating at such speed that it is unfathomable to us. What if, by accessing the Internet and all the devices attached to it, the conscious machine experiences everything constantly? Just imagine if it saw through every camera, all at once, and perceived the whole of our existence. How could we even relate to such an entity, or it to us? Or if it could relate to us, would it see us as fellow machines? If so, it follows that it may have no more moral qualm about turning us off than we have about scrapping an old laptop. Or it might look on us with horror as we scrap our old laptops.
Would this new life-form have rights? Well, that is a complicated question that hinges on where you think rights come from. Let’s consider that.
Nietzsche is always a good place to start. He believed you have only the rights you can take. We claim the rights that we have because we can enforce them. Cows cannot be said to have the right to life because, well, humans eat them. Computers would have the rights they could seize, and they may be able to seize all they want. It may not be us deciding to give them rights, but them claiming a set of rights without any input from us.
A second theory of rights is that they are created by consensus. Americans have the right of free speech because we as a nation have collectively decided to grant that right and enforce it. In this view, rights can exist only to the extent that we can enforce them. What rights might we decide to give to computers that are within our ability to enforce? It could be life, liberty, and self-determination. One can easily imagine a computer bill of rights.
Another theory of rights holds that at least some of them are inalienable. They exist whether or not we acknowledge them, because they are based on neither force nor consensus. The American Declaration of Independence says that life, liberty, and the pursuit of happiness are inalienable. Incidentally, inalienable rights are so fundamental that you cannot renounce them. They are inseparable from you. You cannot sell or give someone the right to kill you, because life is an inalienable right. On this view, the inalienable character of fundamental rights comes from an external source: from God, from nature, or from something fundamental to being human. If this is the case, then we don’t decide whether the computer has rights; we discern whether it does. It is up to neither the computer nor us.
The computer rights movement will no doubt mirror the animal rights movement, which has adopted a strategy of incrementalism, a series of small advances towards a larger goal. If this is the case, then there may not be a watershed moment where suddenly computers are acknowledged to have fundamental rights — unless, of course, a conscious computer has the power to demand them.
Would a conscious computer be a moral agent? That is, would it have the capacity to know right from wrong, and therefore be held accountable for its actions? This question is difficult, because one can conceive of a self-aware entity that does not understand our concept of morality. We don’t believe that the dog that goes wild and starts biting everyone is acting immorally, because the dog is not a moral agent. Yet we might still put the dog down. A conscious computer doing something we regard as immoral is a difficult concept to start with, and one wonders if we would unplug or attempt to rehabilitate the conscious computer if it engages in moral turpitude. If the conscious computer is a moral agent, then we will begin changing the vocabulary we use when describing machines. Suddenly, they can be noble, coarse, enlightened, virtuous, spiritual, depraved, or evil.
Would a conscious machine be considered by some to have a soul? Certainly. Animals are thought by many to have souls, as are trees by some.
In all of this, it is likely that we will not have collective consensus as a species on many of these issues, or if we do, it will be a long time in coming, far longer than it will take to create the technology itself. Which finally brings us to the question “can computers become conscious?”
To read more of GigaOm publisher Byron Reese’s new book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, you can purchase it here.