Reading 11 — Intelligence

Bradley Sherman
Ethics Blog
Apr 9, 2018

Artificial Intelligence is a sub-field of Computer Science concerned with making computer programs act intelligently. There are a few different schools of thought, namely “Strong AI,” which attempts to exactly model human cognition, and “Weak AI,” which does not. I think AI is similar to human intelligence in that it uses evidence from previous experiences to make a decision about the current problem. However, unlike human intelligence, I believe these decisions are deterministic: if the same program is fed the same training data and then the same testing example, it will always come to the same conclusion. I don’t think human intelligence works this way, as we have many different emotions that can sway our decisions even when the evidence points in a different direction.
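The determinism claim above can be made concrete with a minimal sketch. This toy 1-nearest-neighbor classifier (the data and labels here are purely hypothetical, not from any real system) is run twice on identical training data and an identical query, and produces identical answers both times:

```python
def nearest_neighbor(training, query):
    """Return the label of the training point closest to the query."""
    best_label, best_dist = None, float("inf")
    for point, label in training:
        dist = abs(point - query)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Hypothetical training set: (feature value, label) pairs.
training_data = [(1.0, "cat"), (4.0, "dog"), (9.0, "bird")]

# Two independent runs with the same data and the same query.
first = nearest_neighbor(training_data, 3.0)
second = nearest_neighbor(training_data, 3.0)
assert first == second  # deterministic: same inputs, same conclusion
```

Of course, real systems can introduce randomness (random initialization, sampling), but with those sources of randomness fixed, the same point holds.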

I believe projects like AlphaGo, Deep Blue, and especially IBM Watson are proof of the viability of artificial intelligence. I understand and agree with Roger Schank when he says that Watson is not strong AI and should not be marketed as such, as IBM has done. However, that doesn’t mean these systems cannot be useful to us. We use them in applications like healthcare and search to improve our lives, make certain decisions easier, and provide insight into problems that would take a human much longer to uncover. Because of that, I definitely think these “toy projects,” as some people call them, are great first steps toward establishing the viability of AI, even if they aren’t exactly cognitive computing.

I don’t think the Turing test is a valid measure of intelligence. It is a good indicator of how convincingly a program can imitate conversation, but I agree with the Chinese Room thought experiment that the computer is not actually thinking; it is simply following a procedure based on its inputs. This is similar to what I was saying earlier: AI is deterministic and, as of now, will always give the same response to the same inputs.

I definitely think that the growing concerns about artificial intelligence and its impact on humanity are warranted. We are just beginning to see what this new technology can do, and we are thinking much more about what we can do than about what we should do. Still, I tend to be optimistic about the future of artificial intelligence in our lives; I think it will open doors for humans to achieve incredible things. However, I also agree with Max Tegmark and Nick Bostrom that we should be very aware of what we are creating and think long and hard about any reasons not to deploy certain artificial intelligence programs.

I don’t think that a computer could ever be considered a mind. In my opinion, we will be able to simulate the brain very closely, but I don’t think we will ever be able to create a computer program that has emotions and consciousness. Computationalism is interesting, but even if we could create a state machine that contained every possible emotional state present in humans, I don’t think it would be the same as the human brain. It might imitate it, but it’s still just a machine responding to inputs and producing an output.
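The state-machine idea above can be sketched in a few lines. The states, events, and transitions here are entirely made up for illustration; the point is that however many states we add, the machine is still just a lookup from (state, input) to the next state:

```python
# Hypothetical transition table over toy "emotional states".
TRANSITIONS = {
    ("calm", "insult"): "angry",
    ("calm", "gift"): "happy",
    ("angry", "apology"): "calm",
    ("happy", "insult"): "calm",
}

def react(state, event):
    """Follow the transition table; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "calm"
for event in ["insult", "apology", "gift"]:
    state = react(state, event)
# calm -> angry -> calm -> happy: the machine imitates a mood, but the
# "mood" is nothing more than which table entry the inputs selected.
```

Scaling this table up to "every possible emotional state" changes its size, not its character, which is the intuition behind the paragraph above.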
