Denial and overwhelm and fear. A belief that we are too big not to fail, and since the Titanic is sinking, grab your lifeboat and screw the planet. Of course this makes no sense. Where are you gonna sail that lifeboat, if not on the planet?
A really interesting argument between you and Mike Meyer. I’m not as pro-tech as Mike, but not as anti-tech as you, so I see a lot of validity in both arguments.
IF (and this is a huge, big IF) human designers of AI were thoughtful enough to design it to altruistically serve humanity, we might have a chance at a human-AI partnership that could be extremely productive for humans.
BUT (and this is a big BUT) that is probably not the case. AI is designed to make money. It does so by disrupting and replacing entire industries, and it certainly doesn't seem to be the least bit concerned with doing so ethically or humanely.
Is AI going to serve people or render them obsolete? It certainly seems that we are headed toward the latter. And that's unfortunate, as there are 8 billion people on the planet. If AI takes over industry, what are all these people going to do?
Well, the basic incomers have got ideas. We will do the things we always wanted to do. But how? There won't be any money, because there are no more jobs. We will just live on basic income; no need to work anymore, the computers have got it. Everyone is gonna take care of basic needs and do what they wanna…
Well… this thought process is flawed for so many different reasons, and I think the only way this sort of thing COULD work is if AI were heavily, very heavily policing human behavior in a 1984 sort of way. Because if all the jobs are gone and income is equalized, all sorts of supply and demand issues will be upset, which leads to black markets and lots of crime. Have humans ever been satisfied with basic anything? The biggest reindeer game would be how to take basic income out of one set of hands and get it into another, as soon as possible.
In order to prevent that, there would need to be tight controls on human behavior, so much tighter than anything we currently have. A Big Brother/Browser-is-always-watching type of society. But… my God, who wants that? That sounds like a nightmare.
But to Mike’s point regarding AI: the computers are learning… they really are, and even I can see that, though I’m not technical at all. And… if they are learning, who is to say they’re not conscious? What is consciousness? We haven’t even begun to answer this question for humans.
And if the computers gain consciousness, why would we be so hubristic as to assume they would give a damn about helping us solve the problems we have created on this planet? What’s far more likely is that they would recognize that we are a threat to their existence and seek to eliminate us. 8 billion people… with not much to do… what is the benefit to AI? In every symbiotic partnership, both partners gotta bring something to the table; if not, one party is a parasite, a leech!
So, while AI COULD help humans greatly, given human nature, humans aren’t likely to allow it to do so; and furthermore, if AI is smart enough to even attempt to help humans, it’s probably smart enough to realize how limited its help will be due to human nature, and correctly posit that not only is humanity a threat to itself (as revealed in I, Robot), but humanity is a threat to AI as well. What intelligent species would tolerate such a threat? The smarter AI gets, the more of a threat it is to humanity, and humanity is having an Icarus moment in its belief that it can stop this train.
Mike gently alludes to this, but doesn’t explore it. He seems to be oddly optimistic that something else is going to happen. And maybe it will. I can’t foresee that however. I don’t have that much imagination.
As for you, John, I find many of your counterarguments compelling, but the path forward that you set out is exceptionally vague.
Most people who are looking for a way out of the mess humanity has created are overwhelmed. These problems seem so big… Mike’s turn to AI seems to me like desperation. That’s probably not going to be our solution, because of all the problems articulated above. Our biggest human problems are social ones, and right now there just doesn’t seem to be any end in sight.
We have a President who, whatever else you think about him, is unethical. This should just be an undisputed fact. The federal ethics chief basically came out and said, “I am resigning because it has been so difficult working with the Trump administration; we work so hard to dot their ethics i’s and cross their t’s and get nothing but pushback.”
This was essentially ignored by, well, just about everyone. It is a huge, huge, vast social problem. Poisoning the well, shitting where he eats, and no one ever calls Trump on it. Never the money stuff; it’s always the stupid, dumb stuff the media brings to light, and people go ape over it. The Don Jr. meeting was comical, and it shows how easily Trump’s family can be manipulated into just about any sort of shenanigans, and yes, this certainly can have ramifications for national security, but none of the Trumpsters wanna admit that either.
No one agrees with anyone on anything anymore. We can all agree the Titanic is sinking, but no one can isolate the cause. Some say the captain, others say the iceberg, others say they’re not sure, they don’t know, and meanwhile the collision course continues. People wanna be right, even if it means sailing their ship straight into an iceberg. This is a huge social problem, and AI can’t fix it… but AI might be smart enough to figure out that this species, with its odd set of social problems, is a threat to the globe that needs to be eliminated. AI, much like climate change, is another one of those things people are in deep denial about. AI is changing things as we speak. Amazon has moved into food now, and if that doesn’t scare you, it should. We will be depending on AI to feed us, but what if it one day decides it doesn’t want to?
I hope it doesn’t come to that… higher consciousness is the answer for humans, I believe. Stay stuck in that problematic lower consciousness, and AI will have no use for you, and you will be eliminated.
And humans of a higher consciousness won’t need AI. At the very least, they won’t bother it and won’t be seen as a threat. Higher consciousness humans are interested in living in harmony with the earth, and to the extent that they can work with AI on those mutually beneficial goals, they will be valued by AI.
But honestly? If you are cultivating a consciousness that is fear based (guns, wars, violence, and violence against the earth), what possible incentive is there for AI to keep you around? AI will be smart enough to recognize that violence will ultimately be used against it.
If what Mike says is true, if AI is already outthinking humans… how long before they figure this out?