Blog Post 4

Samuel Venick
Published in e110oneohfive
Apr 19, 2018

Every time I hear of artificial intelligence, commonly referred to as AI, I think of either The Terminator movies or Wall-E. Whether it is the extreme of cyborgs and AI trying to wipe the earth of all humans, or people becoming lazy and depending on AI to do everything for them, I am always skeptical of the negatives it brings. My biggest plausible fear with AI is that humans will become overly dependent on it. We will become the people from Wall-E who sit in hovering chairs all day, have machines do everything for us, and are not physically active except for picking up food to eat. I suppose my scope of AI is similar to that of microeconomics: I look at how it affects me personally rather than at the bigger picture of how it affects us all. On a larger scale, I am sure there are great applications for AI technology; I just pray that it is kept in check. As technology continues to grow, so will AI, and so will the responsibilities of those researching and implementing it.

Judith Newman uses the story of her son, Gus, a child with autism, to illustrate that technology is not all that isolating. Newman notes that we live "in a world where the commonly held wisdom is that technology isolates us." In a time when technology is becoming more and more advanced, it has engrossed many people, with some saying it has diminished our ability to interact face to face. Newman offers a different viewpoint, one that challenges this claim at its core. Siri makes sure Gus is polite, which can be seen when his brother encouraged him to say expletives to Siri. She responded, "Now, now," along with, "I'll pretend I didn't hear that." She is teaching him how to be nice to people and how to make polite conversation, in essence showing that bad words should not be used in conversation.

Ted Chiang, on the other hand, focuses not on small-scale interactions with AI but on the large scale. He is responding to the relationship between Silicon Valley capitalists and AI, explaining that both lack insight. Chiang opens his article with an example from Elon Musk about an AI tasked with picking strawberries, and how the AI might determine that the best way to maximize its output would be to turn the entire earth into strawberry fields, wiping out human civilization as a result. This is not what the company had in mind when it gave the AI the task, but the AI lacked the insight to take a step back and think about the situation. Chiang also notes that while corporations are run by actual people, capitalism does not reward them for using insight; it only rewards them for worrying about profits and ensuring the firm is doing the best it can.
