RoboCop: The 180kg mobile police robot.
The panopticon is an 18th-century design for a new kind of prison, created by Jeremy Bentham, a British social reformer of the day. Its key feature is a central observational turret with cells arranged around it in a circle, so that the incarcerated prisoners cannot tell whether the prison guards are present or not.
Through the use of natural light, the guards are obscured from the prisoners' view.
In this way the prisoners are unable to tell whether they are being watched, the idea being that they feel under surveillance all of the time. The purpose was supposedly to engender social reform in criminals, such that when they left prison they would still feel as if they were being watched and would not relapse into old behaviours.
While Jeremy Bentham thought this early form of social engineering would benefit society, commentators have remarked that this kind of thinking opened the way for totalitarian states.
The more enlightened of us might believe this to be outmoded thinking in the 21st century.
Unfortunately not. Enter RoboCop.
While a recent NBC News report showed it has not yet been effective at stopping a crime, its originators express some seriously unethical opinions that closely echo what the panopticon sought to achieve.
“Despite HP RoboCop’s months-long tenure in the park, there is no signage describing what it does or why it is there. Lozano said that’s because the department does not want to falsely advertise the robot, but the information will be posted when the machine’s features are properly connected, he added.
Leveraging people’s uncertainty about the robot is core to its value as a security tool. The fact that the public is unaware of all of the robot’s capabilities is integral to Knightscope robots’ mission to be a physical deterrent to crime, confirmed Stacey Stephens, the company’s executive vice president and chief client officer. ‘They could have any kind of grand thought about what the robot might be able to do, which could lead them to say, you know what, I’d rather not risk it. Let me go somewhere else,’ Stephens said.”
Just as the prisoners in the panopticon did not know whether the prison guards were present, the hiding of the robot’s capabilities is deliberate.
The very act of placing a 400 lb (180 kg) autonomous robot, with deliberate ambiguity about its capabilities, as an anonymous deterrent not just against criminals and “crime fighting” but for blanket surveillance of all members of the public is madness.
The fact that the company’s executives state that the target audience is the general public, and that the robot is sponsored by a police department, means that once again a technology has been introduced with the potential to create harm, in this case through inaction.
This machine (which is greatly aided in its role by police branding) has outpaced society’s ability to place new technology in a legal and ethical framework where the implications are thought through well in advance of its introduction. Even the police are winging it and figuring it out as they go!
Along with the deeply disturbing Realbotix, it is worth thinking about why this is emerging and why technology is yet again being used to pursue a people-control agenda, just as the panopticon was 200 years ago.
The parting words from computer science Associate Professor Ross Knepper (Cornell University) about the robot’s purpose are also along these control lines:
“Once people really understand what’s at stake, I think they will modify their behaviour in much more predictable ways,” he [Knepper] said.
What exactly is at stake? If the controversial Chinese social credit scoring system is supplemented by fleets of RoboCops, ordinary people with no criminal or fraudulent intent will be enmeshed in a network of surveillance designed to catch the minority who have it. And that minority will invent ever more ingenious ways to work around the network, while the majority go about their daily lives under more and more strictures.
It seems that some members of the AI and robotics industrial and academic community need to do some serious soul-searching and get a stiff dose of humanism, since the future of this kind of world looks rigid, boring and fascist for the majority. There must be better answers.
Follow-up email from Associate Professor Knepper, dated October 13th, 2019:
“Dear Dr. Hart,

Thanks for sharing your op-ed piece. I think the connection to the panopticon is apt.

You are right that the quote at the end of Ms. Flaherty’s article, which you excerpted, is unclear out of context. I think that she does capture my meaning accurately elsewhere in her story:

‘As researchers, we are very eager to employ all these new technologies, speech and acting in a socially competent way, but if we don’t have this other side of setting expectations of awareness of how people will interpret these behaviors, we run the risk of actually making an interaction with the robot worse and not better,’ Knepper said.

The overall point is that people will react to the robots by modifying their behavior, but we run the risk, as a society, of letting robots modify our behavior in undesirable ways. Part of that undesirable behavior modification is what you address about feeling always watched. Another part is that when somebody has an emergency, they may over-rely on the robot for assistance that it is not capable of providing. I think the introduction to Ms. Flaherty’s article does a good job of capturing this danger.”