Fooling Artificial Intelligence
Is it really that simple?
I used to live in Saudi Arabia a few years back, and I remember how the national craze of drifting luxurious cars at high speed forced the traffic police to enforce strict traffic rules to curb the growing number of accidents that were costing enthusiasts their lives. From red-light threshold sensors to speed-monitoring radars, every corner of the street was equipped with AI-guided cameras.
Many Saudi nationals, who have a strange fondness for keeping luxurious cars and showing off stunts such as drifting, speeding, and reckless overtaking, naturally felt suffocated under the traffic rules and restrictions imposed by the government; so much so that they ended up playing tricks to keep the system from identifying car details that would otherwise trace the identity of the owner.
I remember one incident in which a Saudi driver, while coming to a standstill to wait for the signal to turn green, accidentally, or maybe because his car’s brakes faltered (which I doubt could’ve happened :D), crossed the stopping threshold by a few inches. The camera across the street gave an instant flash, and a ticket with a fine of SAR 600 was imposed on the driver. Those were the times when the public was newly introduced to the penalty system and most lacked an understanding of it. Under the new rules, the first ticket amounted to SAR 600, the second doubled that amount, and so on. The driver, confused by the flash, decided to reverse his car back out of the restricted area, instantly receiving another camera flash and a second, doubled fine, bringing his total to a straight SAR 1,800. Of course, that made him astonishingly furious!
That was just one penalty incident I recall. There have been countless others where the rage of spoilt rich kids even resulted in the cameras being torn down or similar barbaric acts. Those who refrained from such acts would instead trick the system by covering part of their number plates, shielding the identity of the car, and ultimately of the driver, from the camera’s eye.
And it is true: the system does get dodged, and the culprit escapes easily!
Hiding number plates is considered a serious offense, and the penalties imposed by the country’s traffic monitoring agencies for it are even heavier. Eventually, however, Hazen proposed replacing the old setup with a new deep-learning-based traffic monitoring system. Rather than relying on number plates alone, the new system also captures a photo of the driver (for face recognition) and matches it against the records of the national database authority. And even though the system has been effective, according to a UC Berkeley researcher:
Any system that uses machine learning for making security-critical decisions is potentially vulnerable to these kinds of attacks — Alex Kantchelian
And this is especially true given the relative speed of the moving car against the static camera that acquires the image. The computer does not see the image the way the human brain does; it weighs the pixel intensities within the acquired frame. Noise added to those pixels, for example by wind, air turbulence, or vehicle movement, can therefore affect the system’s ability to identify the right person.
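To make the pixel-intensity point concrete, here is a minimal sketch, assuming a pretrained PyTorch classifier and a stand-in image tensor (both placeholders, not the actual traffic system): the model only ever consumes an array of pixel values, and perturbing those values with ordinary random noise can be enough to change its prediction.

import torch
import torchvision.models as models

# Placeholder classifier (assumption): any pretrained ImageNet model serves the demo.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

clean_img = torch.rand(1, 3, 224, 224)       # stand-in camera frame, pixel values in [0, 1]
noise = 0.1 * torch.randn_like(clean_img)    # random pixel-level noise (blur, turbulence, motion)
noisy_img = (clean_img + noise).clamp(0, 1)

with torch.no_grad():
    print(model(clean_img).argmax(1).item())  # predicted class for the clean frame
    print(model(noisy_img).argmax(1).item())  # may differ for the noisy frame

Nothing here is adversarial yet; it simply shows how fragile a prediction built purely on pixel intensities can be.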
Deep neural networks, an integral part of these AI systems, are brilliant at detecting patterns, yet a small, carefully crafted perturbation can throw them completely off. In one well-known experiment, the image of a stop sign on the road was modified with a few plain black-and-white stickers and run through a DNN-based detection system. The result?
The system read the sign as saying ‘Speed Limit 45’.
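If you are curious how such a perturbation is crafted, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) in PyTorch. The model, image, and label below are placeholders of my own choosing, not the actual sign detector, and the sticker attack in that experiment was a different, physical-world method; FGSM is shown only as the simplest illustration of the same idea.

import torch
import torch.nn.functional as F
import torchvision.models as models

# Placeholder model (assumption): a pretrained ImageNet classifier.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_attack(image, true_label, epsilon=0.03):
    """Nudge every pixel by +/- epsilon in the direction that increases
    the model's loss for the correct class (Fast Gradient Sign Method)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()                                    # gradient of the loss w.r.t. the pixels
    adversarial = image + epsilon * image.grad.sign()  # tiny per-pixel change, large effect
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage with a random stand-in image in [0, 1].
clean_img = torch.rand(1, 3, 224, 224)
label = torch.tensor([919])  # placeholder class index, for illustration only
adv_img = fgsm_attack(clean_img, label)

print(model(clean_img).argmax(1).item(), model(adv_img).argmax(1).item())

The perturbation stays within epsilon of the original pixel values, so the two images look the same to a human, yet the classifier’s output can flip.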
Imagine this misdetecting AI system installed in a self-driving car. What then would be the consequence of the wrong detection?
Simple! Approaching a stop sign, such a car, rather than slowing down and heeding the warning of the busy intersection ahead, would accelerate straight into it.
That’s how easily AI systems can be fooled!
AI researchers are trying hard to fix these neural-network flaws, but as of now the problem remains unresolved, and we still come across reports of Tesla self-driving failures leading to crashes. It is a long journey yet before researchers actually find a solution to this issue.
Read more of my articles, and content by other outstanding authors, on Medium.
Get Medium’s monthly subscription for just $5 and get instant access to unlimited content on the platform.
Full disclosure: the above link is a membership link, meaning that if you join the Medium platform as a reader or writer via this link, I’ll earn a small commission at absolutely no extra cost to you.