Are you afraid of Artificial Intelligence?
Or are you asking for the right solutions?
A few days ago, I read this on idlewords.com:
“Los Alamos physicists performed the analysis and decided there was a satisfactory margin of safety. Since we're all…”
TL;DR: the author claims that we are worrying too much and that this is really a very far-fetched scenario. I feel something similar, but my reason lies in the problem that AI is trying to solve.
Oftentimes we are trying to solve completely wrong problems. We focus on the symptoms rather than the underlying cause. That makes our solutions a band-aid rather than a medicine that cures the disease. I suspect the biggest problem we will face with AI is of the same kind.
Machines exist to do repetitive tasks, or tasks that are too time-consuming or mundane for a human being. Think of a traffic light controller. We employ a machine because periodically switching from red to green and back is not really a job for a human being.
The problem comes later, when we start automating more and more things through a machine. Say you program a machine to report any violation of a traffic rule; that automated algorithm will do so diligently. What if there is an emergency and an ambulance needs to run the signal? Will it be reported?
You'd say that's easy: just program the machine to let emergency vehicles pass through. But then think of a scenario where a pregnant woman goes into labor and her husband cannot afford to wait at a signal on the way to the hospital. This and many other corner cases exist.
Our goal should not be to report traffic violations, but to ensure the smooth flow of traffic. That means in such emergencies, primary importance is given to the value of human life and then to the flow of traffic.
As the programming matures, we add more and more conditions to make the system resilient against these corner cases. That is very natural. In doing so, our focus should always remain on the core problem we are trying to solve.
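To make the contrast concrete, here is a minimal sketch (all names and fields are hypothetical, invented for illustration). The first function bolts on a new branch for every corner case we discover; the second encodes the core rule once, so new kinds of emergencies need no new branches.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    ran_red_light: bool = False
    is_emergency: bool = False        # ambulance, fire truck, ...
    declared_emergency: bool = False  # e.g. a passenger in labor

# Symptom-focused reporter: every newly found corner case
# becomes one more hard-coded exception.
def should_report(v: Vehicle) -> bool:
    if v.is_emergency:
        return False
    if v.declared_emergency:
        return False
    # ...the next corner case adds yet another branch here...
    return v.ran_red_light

# Problem-focused reporter: state the core principle once —
# human life outranks rule enforcement — and derive the answer.
def harms_core_objective(v: Vehicle) -> bool:
    protecting_life = v.is_emergency or v.declared_emergency
    return v.ran_red_light and not protecting_life
```

Both functions give the same answers today; the difference is where the focus lives. The first accumulates patches around symptoms, while the second keeps the disease itself in view.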
So long as our algorithmic conditioning keeps the focus on the right disease rather than its symptoms, and we hard-wire that focus as a constraint of the AI systems we develop, we should be fine.