Reading 11: AI

Alejandro Rafael Ayala
Published in Ayala Ethics Blog
3 min read · Nov 13, 2018

Artificial intelligence is the concept of machines having the ability to reason through logical decisions. Oftentimes, this problem solving can be informed through repeated trials (i.e., experience) from which the machine can “learn.” It’s similar to human intelligence in the sense that humans’ reasoning and logic are informed through their experiences, like AI. The difference is that humans are emotional. Our decisions aren’t always founded on logic; they are also informed by what we “feel.” “The idea is that no matter how much mental work we outsource to machines, the emotional and the psychological — the heart and the soul — resist digital enhancement” (Kelly). As of right now, I think that AI systems like AlphaGo, Deep Blue, Watson, and AlphaZero are proof of the viability of artificial intelligence. “We have learned to use computer systems to reproduce at least some forms of human intuition” (Nielsen). We already see through those very examples that AI has been able to surpass human reasoning.

I see how the Turing test could be helpful for evaluating intelligence, but I don’t think it’s sufficient to be a valid measure of it. “An algorithm devised by a team of Russian computer scientists persuaded one in three people on a team of judges, based on short online chats, that it was a real 13-year-old Ukrainian boy called Eugene Goostman” (Ball). Sure, this experiment can help point in the direction of intelligence, but even though the program “passes” the Turing test, that isn’t necessarily indicative of true intelligence, as the conversation at the end of the BBC article shows. I think the Chinese Room argument is a good counterargument because it makes us more aware of the fact that computers are just doing numerical operations on their inputs to spit things back at us. We design them as such, so they really don’t have intelligence. If anything, it’s our intelligence that just makes them do stuff.
To be honest, and this is largely informed by gut reaction, I don’t think the Age of Ultron is going to happen. At least, it won’t happen anytime soon. I’m not even remotely close to the point where I would be afraid of or feel threatened by these AI systems. Sure, they beat humans at certain tasks (like chess), but those are very specialized tasks, and it took their creators a long time to train the AIs to best the humans. I’m not saying that I would never reach the point of feeling threatened, but right now, these machines are too limited and specialized in their capabilities to become our superiors. Making something complex enough to seriously consider AI intelligence superior to ours would probably take longer than my lifetime.

Other concerns, though, I understand. “Yet the potential benefits come with big risks. Of jobs lost to automation. Of privacy sacrificed to unaccountable companies and government agencies collecting the data we emit into the cloud like carbon dioxide” (Kelly). These machines are taking work away from real people, so what becomes of those people? Many of those who would lose their jobs would no longer have a marketable skill, so what do they do? Is mastering another marketable skill really an option when the cost of living and of quality education nowadays is so high?

I don’t really think a computing system could ever be considered a mind, much less have morality. To be honest, we don’t have a firm grasp on all moral issues ourselves; you can see this in our lack of a general consensus on many issues. I doubt that AI would have a “true” morality, because it would likely just be a morality that its creators impose upon it, not one it forms on its own. I also don’t think that humans are just biological computers, because humans show a capacity to feel and to form their own morals, something I’m not convinced computers will be able to do.
If computer systems do have minds and we really are just biological computers, then we need to consider whether these computer systems share the same inalienable rights that we humans hold as intelligent creatures.
