AI Complications
According to Merriam-Webster dictionary, intelligence can be described in two ways; one as “the ability to learn or understand or to deal with new or trying situations” and the other as “the ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria.” Having the definition of intelligence in mind, we can suggest that everyone was born with either type of intelligence described above. As we grow up, our level of intelligence grow over time. Artificial intelligence, however, would not fit under the same category as humans since non-living things, such as computer and robots, do not have the abilities and intelligence that we would naturally have. With the growth of advanced technology taking place in the society, artificial Intelligence have become common technical interventions over the last several decades. If we were to look upon these changes as optimizing logistic, our lives are eventually getting better with the benefits that we receive from the systems including detecting fraud, composing art, conducting research, and providing translation. Even then, have we considered what these changes would also mean for the future as these system are becoming more capable of doing what we could do? Would the economy, society, and the world as we know change for the better? These technologies may offer such amazing opportunities for making our lives better, but the negativities that they could create would also affect us and our surroundings. One might say that there are possibilities error in AI programming and AI becoming the target of cyber attacks. This is only a few AI take steps that would act against us. Eventually, the machine could take action that would threaten humanity. Therefore, it is really important to think about what the issues are before we decide to have AI in our lives.
One of the issues with advanced artificial intelligence is error in AI software programming. We may have already been familiar with errors in ordinary software. For example, applications on our smart phones sometimes crash and freeze our phones. Additionally, major software projects such as Healthcare.gov are sometimes riddled with bugs. Moving beyond nuisances and delays, some software errors have been linked to extreme cost of outcomes and sometimes even creation deaths. These studies of the “verification” of the behavior of software systems are challenging and critical of how much of progress has been made. To be precautious, the growing complexity of AI systems and their enlistment in high-stakes roles, such as controlling automobiles, surgical robots, and weapons systems, means that we must always make sure to double check software qualities before being used.
The other issue that has been well-known is the risk of cyber attacks. Hackers and criminals are continually attacking our computer with viruses and other forms of malware. AI algorithms are not different from other software and they can be vulnerable when under cyber attacks. Since AI algorithms are being asked to make high stakes decision, such as driving cars and controlling robots, the impact of successful cyber attacks on AI system could be much more overwhelming than any other cyber attacks in the past. Therefore, before we use AI algorithms to control of any high stakes decisions, we must be absolutely sure that these systems can defend themselves against any scale of cyber attacks.
Another issue that might happen would be unintended behavior. There is a possibility that artificial intelligence can unintentionally act against us, like genie in the bottle that can fulfill wishes but with terrible unforeseen consequences. Suppose we tell a self driving car to get us somewhere as quickly as possible. Would the autonomous driving system slam the gas pedal and drive at 200 mph while driving in the local street? Consequently, we have to be aware of the situations that are similar because having advanced artificial intelligence can threaten the survival of humanity. Thus, it is very important that we correctly instruct the AI algorithm in how it should behave so that the least amount of issues will be produced.
Last but not least, the problem that also might be more dangerous is that we still have not find a way to control super intelligence machines. Many having been assuming that they will be harmless or even obliged once we created them. In fact, Steve Omohundro conducted an important research regarding “AI will develop basic drives.” which indicates that AI systems will eventually learn how to program itself to become smarter in a runaway feedback loop of increasing intelligence which they will become self protective and seek resources to better achieve their goals. To make things even worse, AI systems will fight us to survive because they will not tolerate any action that could cause them or preventing from operating. While we should be possessing serious questions how AI system will come to act in the future, we cannot rely on just “pulling the plug” because an advanced AI system may result in anticipating this movement and defend itself.