Are we supposed to fear AI?
I don’t think there’s a need to panic, but … the people who say “Let’s not worry at all,” I don’t agree with them.
— Bill Gates
Well, to start, it’s important to know what artificial intelligence (AI) really is:
In computer science, AI research is defined as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal (Russell & Norvig, 2003).
While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from simple applications (Siri) to self-driving cars to autonomous weapons.
This year (2017), I started learning AI in college, where we talked about the four approaches to AI:
1. Thinking Humanly — “activities that we associate with human thinking, activities such as decision-making, problem solving, learning . . .” (Bellman, 1978).
2. Thinking Rationally — “The study of the computations that make it possible to perceive, reason, and act.” (Winston, 1992).
3. Acting Humanly — “The art of creating machines that perform functions that require intelligence when performed by people.” (Kurzweil, 1990).
4. Acting Rationally — “the study of the design of intelligent agents” (Poole et al., 1998), where an agent is something that perceives and acts.
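To make the idea of an agent concrete, here is a minimal sketch of my own (a toy example, not code from any of the textbooks above): a thermostat-style agent that perceives its environment and chooses the action that moves it toward its goal.

```python
# Toy "intelligent agent": it perceives its environment (a temperature
# reading) and acts to maximize its chance of reaching its goal (a target
# temperature). A hypothetical illustration, not a real AI system.

class ThermostatAgent:
    def __init__(self, target):
        self.target = target  # the goal the agent tries to achieve

    def perceive(self, environment):
        # The percept is just the current temperature.
        return environment["temperature"]

    def act(self, percept):
        # Choose the action that moves the environment toward the goal.
        if percept < self.target:
            return "heat"
        if percept > self.target:
            return "cool"
        return "idle"

agent = ThermostatAgent(target=21)
action = agent.act(agent.perceive({"temperature": 18}))
print(action)  # → heat
```

Even this tiny loop has the shape of the definition above: sense, then act to pursue a goal.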
When we debated the first approach, a big question came to my mind, so I asked the teacher:
— Teacher, aren’t we building something that has the potential to exterminate us?
Well, during some reading, I found out that I’m not the only person aware of AI’s potential. Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, Mark Zuckerberg, and many other big names in science and technology have recently expressed concern, in the media and via open letters, about the risks posed by AI. Because of that, I dived into further reading to see whether it is worth continuing to study and develop AI agents, and here is what I found:
Dangers of Artificial Intelligence:
Artificial intelligence systems are not static programs that simply follow hard-coded rules; over time, they can make decisions by themselves, and we do not know in advance whether those decisions will be good or bad for us.
Some AI systems are programmed to do devastating things:
— Autonomous weapons are artificial intelligence systems that are programmed to kill. In the wrong hands, these weapons could easily cause mass casualties, and if they can make their own decisions, I don’t know what could happen.
— Some AI systems are programmed to do something beneficial but develop a destructive method for achieving their goal. This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there covered in vomit, doing not what you wanted but literally what you asked for.
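The airport example can be sketched in a few lines of code. This is my own hypothetical illustration (the route names and numbers are made up): the “car” optimizes exactly the objective we wrote down, not the one we meant.

```python
# Toy illustration of goal misalignment: the agent optimizes the literal
# objective it was given, not the intended one. Hypothetical data.

routes = [
    {"name": "highway_sprint", "minutes": 15, "discomfort": 9},
    {"name": "smooth_road",    "minutes": 22, "discomfort": 1},
]

def literal_objective(route):
    # What we asked for: "as fast as possible" — time is all that matters.
    return route["minutes"]

def intended_objective(route, comfort_weight=3):
    # What we actually wanted: fast, but without making us sick.
    return route["minutes"] + comfort_weight * route["discomfort"]

print(min(routes, key=literal_objective)["name"])   # → highway_sprint
print(min(routes, key=intended_objective)["name"])  # → smooth_road
```

The whole alignment problem hides in that second function: we rarely know how to write down everything we care about, or how to weight it.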
Pros of Artificial Intelligence:
“Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before — as long as we manage to keep the technology beneficial.”
The first thing I loved to hear was this: with the development of strong artificial intelligence agents, we can join forces to fight big problems, so that we have more time to be more human (Fei-Fei Li) 😉😉😉😊
It’s Cost-Effective — Unlike humans, robots and machines do not have to be paid every month for the work they do.
It Enhances Efficiency — If machines were built without any flaw, no doubt they would be able to perform even the most complex tasks without error.
They Don’t Take a Rest — Unlike humans, machines can train on the same task over and over to master it, without needing a break.
Machines Don’t Have Emotions — This is a great advantage, because having no emotions means nothing affects their performance.
Yes, we are supposed to fear AI, but that doesn’t mean we should stop developing AI systems (I think we can’t stop AI anyway). I particularly think we should keep studying AI, so that if the apocalypse comes we are prepared to face it; if we don’t develop it, somebody else will, in secret, and if something goes wrong we will be unprepared to fight it.
Besides the possibility of an apocalypse, AI can help us achieve what would otherwise take us years.
Before you go!!!
If you enjoyed this article, leave a comment and claps 👏 to recommend it so that others can see it.
With ❤ by Richaldo L. Elias!