I find myself spending a lot of time in a room named Turing these days. It reminds me of the movie “The Imitation Game” every time. The room, as it turns out, is named after Alan Turing, who is widely considered the father of artificial intelligence.
It was Alan Turing who first considered the idea of entrusting ‘wisdom’ to machines in his renowned 1950 thought experiment, ‘The Imitation Game’.
For years it was assumed that computers were only meant for processing arithmetic, filing and organization, and playing chess. Icons such as Alan Turing developed the ideas that paved the way for artificial intelligence in the modern era. In his 1950 paper “Computing Machinery and Intelligence”, Turing posed the question, ‘Can machines think?’. In it, he expressed his interest in exploring the possibility of producing machines modeled on the workings of the brain, rather than just the practical applications of computing. In fact, Turing had conceived the Turing machine by adhering to that very notion: by considering how a person performs addition in his mind. In other words, Turing wanted to deconstruct the brain so that the machine could be modeled after it. That is the precept of artificial intelligence. It has given us a threshold from which we can address exciting questions: do computers actually think, and does the thinking process of a computer compare with the advanced thinking process of a human being?
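To make that “addition in the mind” idea concrete, here is a toy Turing machine in Python. It adds two numbers written in unary (e.g. “111” means 3) by scanning the tape with a small rule table. The states, symbols, and rules here are my own invented sketch, not anything from Turing’s paper:

```python
# A minimal Turing machine that adds two unary numbers, e.g. "111+11" -> "11111".
# The machine turns the '+' into a '1', then erases one '1' at the end
# to keep the count correct. "_" marks a blank tape cell.

def run(tape):
    tape = list(tape) + ["_"]
    head, state = 0, "scan"
    # transitions: (state, symbol) -> (symbol to write, head move, next state)
    rules = {
        ("scan", "1"): ("1", +1, "scan"),   # skip over the first number
        ("scan", "+"): ("1", +1, "seek"),   # turn '+' into a '1'
        ("seek", "1"): ("1", +1, "seek"),   # skip over the second number
        ("seek", "_"): ("_", -1, "erase"),  # hit the end of the tape; back up
        ("erase", "1"): ("_", 0, "halt"),   # erase one '1' to fix the count
    }
    while state != "halt":
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).strip("_")

print(run("111+11"))  # -> 11111  (3 + 2 = 5)
```

The point of the sketch is not the arithmetic but the shape of the machine: a tape, a head, and a finite table of rules, exactly the kind of mechanical step-by-step process a person follows when adding on paper.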
Turing’s early work helped set the stage for the models we use in developing AI today: deep learning, machine learning, and neural networks. AI achieves its own form of automation and virtual independence through applications such as image/speech recognition, AIML chatbots, sentiment analysis, and natural language generation.
(… more on image/speech recognition in a future post)
AI achieves its sophistication through a process that includes learning from the data it is fed, reasoning based on the rules and clauses set by the developer, and self-correction informed by expert systems.
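Those three ingredients can be shown in miniature. The toy classifier below learns a threshold from data, applies a rule set by the developer (“above the threshold means spam”), and self-corrects whenever it misclassifies an example. Every name and number here is invented for the illustration:

```python
# Toy sketch of learning / reasoning / self-correction in one loop.
# (Hypothetical example; not from any real library or dataset.)

def train(samples, labels, passes=20, step=0.1):
    threshold = 0.0
    for _ in range(passes):
        for x, y in zip(samples, labels):
            guess = 1 if x > threshold else 0   # reasoning: a rule the developer set
            if guess != y:                      # self-correction: adjust on error
                threshold += step if guess == 1 else -step
    return threshold

# learning: the threshold comes from the data the program is fed
spam_scores = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
is_spam     = [0,   0,   0,   1,   1,   1]
t = train(spam_scores, is_spam)

predictions = [1 if x > t else 0 for x in spam_scores]
print(predictions == is_spam)  # -> True: the learned rule fits the training data
```

Real systems are vastly more elaborate, but the division of labor is the same: the data teaches, the rules decide, and the errors drive correction.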
My own journey into programming over the last several months has taught me the power and efficiency brought on by automation, and it has also given me some tools to achieve it. However, given how the models behind any AI-based system are set up, the system is extremely prone to human bias: it is a human who feeds the machine its data and who defines the rules the machine follows to determine its course of action. As developers and future developers, we have a responsibility to be aware of this inherent human bias and to monitor it closely as we build our own AI applications.
We have come a long way since these groundbreaking ideas were first explored. Yet many of the questions posed back then remain relevant, coupled with fear and paranoia on one end, far-reaching expectations on the other, and the ethical implications of using AI in between. More about that in future blogs. Keep blogging :)