Artificial Intelligence: A companion made of glass (Post 1)

Fabian Cisneros
5 min read · Sep 13, 2017


Photo from How Do We Align Artificial Intelligence with Human Values?

The future. In science fiction, the future seems to promise all sorts of developments. In this genre, space travel becomes common, time travel is possible, parallel universes have been confirmed to exist, and we live among extraterrestrial life. However, not everything in science fiction is as far from our reality as it seems. As our world has advanced, so have our technological capabilities. One major advancement being developed right now is the very fragile concept known as artificial intelligence.

What is artificial intelligence and why am I interested in it?

Artificial intelligence, or A.I., refers to intelligence displayed by machines rather than by living, breathing beings such as humans or animals. A.I.s have taken a variety of forms, ranging from programs that produce text on their own to full-fledged bodies that are almost indistinguishable from humans. In some of these forms, they have exhibited intelligence matching or even exceeding what humans can do at specific tasks. Their applications seem limitless: they can serve as personal companions or even military soldiers. What has always interested me about A.I.s is the idea that they can be programmed to do a task and eventually surpass their programming. A.I.s are able to do this by absorbing learned information and applying it to different situations, just as humans and animals do. Throughout my life I have read and watched science fiction containing characters that are forms of artificial intelligence. In these stories, A.I.s have played the role of protagonists or side characters, learning to live their own lives or helping other characters as companions. However, A.I.s are not always good. In some stories they have been the antagonists, used against other humans, run rampant on their own, or even brought about the destruction of the human race.
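To make that idea of "surpassing their programming" concrete, here is a minimal toy sketch in Python. It is my own illustration, not drawn from any system mentioned in this post: the program is never told the hidden rule y = 3x + 1. It only sees example inputs and outputs, learns the pattern, and then applies it to an input it has never encountered.

```python
# Toy example: the program learns a rule from examples instead of being
# given it. Fit y = w*x + b to data generated by the hidden rule y = 3x + 1
# using plain gradient descent, then apply it to a brand-new input.

examples = [(x, 3 * x + 1) for x in range(10)]  # training data

w, b = 0.0, 0.0
lr = 0.01
for _ in range(2000):
    for x, y in examples:
        pred = w * x + b
        err = pred - y
        w -= lr * err * x   # nudge the weights to shrink the error
        b -= lr * err

print(f"learned rule: y ~= {w:.2f}*x + {b:.2f}")
print(f"prediction for unseen x=25: {w * 25 + b:.1f}")  # ~76, never in the training data
```

The interesting part is the last line: the prediction for x = 25 comes entirely from what the program learned, not from anything a programmer wrote into it.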

Why am I researching and writing about this?

Photo from Hackers Find ‘Ideal Testing Ground’ for Attacks: Developing Countries

As I mentioned earlier, artificial intelligence is a very fragile concept. Just as in science fiction, we can end up with A.I.s made for good or for evil. This can all sound ‘silly,’ but the dangers are very real. In the New York Times article titled “Hackers Find ‘Ideal Testing Ground’ for Attacks: Developing Countries,” Sheera Frenkel discusses the threat of malware that learns on its own. Frenkel reports, “The cyberattack in India used malware that could learn as it was spreading, and altered its methods to stay in the system for as long as possible.” Frenkel’s point is that this program was designed specifically to do harm and to learn how to keep doing harm without being eliminated. This is just one situation that can occur if A.I.s are not regulated and are instead unleashed with the intent or capability to do harm.

How exactly can A.I.s be regulated? It all depends on the programmer and the work they are willing to put in to make sure their A.I. can be controlled in some way. This topic is the central story of the New York Times article titled “Teaching A.I. Systems to Behave Themselves,” by Cade Metz. In his article, Metz visits OpenAI, an A.I. lab co-founded by Elon Musk, and describes a problem researcher Dario Amodei and his colleagues encountered there. Amodei trained an A.I. to play a boat racing video game in which the player collects points that appear throughout the race while also trying to win the race itself. Something unexpected soon occurred: instead of trying to win, the A.I. began to focus solely on collecting points, even sacrificing the race just to keep collecting them. The solution the researchers came up with involved changing parts of the A.I.’s algorithm so that humans could guide it whenever they deemed necessary. As Metz puts it in his article, “They believe that these kinds of algorithms — a blend of human and machine instruction — can help keep automated systems safe.” In other words, this is the main solution the researchers believe must be implemented to keep A.I.s on a leash.
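To get a feel for how this kind of misbehavior happens, here is a minimal toy sketch in Python. It is not OpenAI’s actual code, and every name and number in it is invented for illustration: an agent scored only on game points will happily circle the point targets forever, while blending in feedback from a human judge makes finishing the race the winning strategy.

```python
# Toy illustration of an agent gaming its score, inspired by the boat-race
# example. All policies, numbers, and weights here are made up.

def episode_return(policy, human_weight=0.0, steps=50):
    """Score a policy under a blended reward:
    reward = game_points + human_weight * human_feedback."""
    total = 0.0
    for t in range(steps):
        if policy == "loop_for_points":
            game_points = 10          # keeps circling the point targets
            human_feedback = -1       # a human judge dislikes the stalling
        else:  # "race_to_finish"
            game_points = 2           # fewer targets along the racing line
            human_feedback = +1       # a human judge likes the progress
            if t == 20:
                game_points += 50     # one-time finish-line bonus
        total += game_points + human_weight * human_feedback
    return total

for w in (0.0, 20.0):
    scores = {p: episode_return(p, human_weight=w)
              for p in ("loop_for_points", "race_to_finish")}
    best = max(scores, key=scores.get)
    print(f"human_weight={w}: {scores} -> agent prefers {best}")
```

With human_weight at zero, the sketch reproduces the behavior Amodei’s team observed: looping for points beats finishing the race. Once the human feedback carries enough weight, the agent’s incentives flip, which mirrors, in miniature, the blend of human and machine instruction Metz describes.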

Why do I want to continue researching and writing about this?

Although it is clear that artificial intelligence has problems we must watch out for, there are a few other topics related to it that I want to look into. I want to continue researching both the good and the bad that have been appearing in the development of artificial intelligence. As a mechanical engineering major who can’t wait for A.I.s to become part of everyday life, both at home and in the workplace, focusing on topics like these would let me decide for myself whether we are steering artificial intelligence in the right direction. If everything seems to be going well, my excitement for A.I.s will not diminish in the slightest. However, I do fear a future in which A.I.s are prioritized for military use and things of that nature.

Another topic I want to look into is the actual development of these A.I.s. I have some idea that creating programs able to learn and act on their own is not the easiest thing to do, so I want to see how the process is coming along. Even though development is happening right now, and I am excited for A.I.s, it is possible that I will never get to see them the way I have imagined them.

These are just a few of the reasons why artificial intelligence is so fascinating to me as we try to walk a bright, prosperous path with technology at our side without bringing about our own destruction.

Works Cited:

Frenkel, Sheera. “Hackers Find ‘Ideal Testing Ground’ for Attacks: Developing Countries.” The New York Times, The New York Times, 2 July 2017, www.nytimes.com/2017/07/02/technology/hackers-find-ideal-testing-ground-for-attacks-developing-countries.html?rref=collection%2Ftimestopic%2FArtificial%2BIntelligence.

Metz, Cade. “Teaching A.I. Systems to Behave Themselves.” The New York Times, The New York Times, 13 Aug. 2017, www.nytimes.com/2017/08/13/technology/artificial-intelligence-safety-training.html.


Fabian Cisneros

Hi, I’m Fabian. I’m a second year Mechanical Engineering student at San Francisco State University.