On the Defensive:
“Do you want your children to grow up to be Uber drivers for Chinese tourists?”
That was the opening question posed by an ethicist at Q Berlin, a conference on freedom and responsibility. This quickly segued into a doomsday scenario of AI weapons run amok and the dangers of universal suffering triggered by artificial consciousness.
Anyone with only a limited understanding of artificial intelligence almost certainly left his talk with a palpable sense of dread. This can hardly be considered a responsible approach to the debate around AI.
And it is certainly not the tactic one would have expected from a member of the EU Commission’s High-Level Expert Group on Artificial Intelligence (HLEG-AI), a body assembled to develop the Ethics Guidelines for Trustworthy AI.
This kind of charged apocalyptic language is doing far more harm than good as it ultimately drowns legitimate concerns in a whole lot of drivel.
Yes. AI comes with real social threats as it reshapes the way we live, work, and interact with one another.
Yes. A world with uncontrollable weaponized AI is terrifying. One of the few potential AI applications more terrifying than its dystopian contributions to the “literary” and “musical” worlds…
But when valid arguments on the importance of devising ethical guidelines for AI get packaged in ominous threats, they sound like hysterical pandering to angsty tinfoil-capped conspiracy theorists. So… not quite the target audience of AI industry titans and high-level experts who are ideally positioned to influence ethical regulations.
This kind of polarizing language only creates a starker divide on an issue that would benefit far more from a unified and cross-disciplinary approach.
And compounding the divisiveness created by these scare tactics are the accusations of ethics washing leveled at companies looking to demonstrate their contributions to trustworthy AI. Though ethics committees created by Amazon, Microsoft, Google, and Facebook might be more of a band-aid than a solid solution, labeling corporate efforts “fake ethics to stop lawmakers” is more drawing lines in the sand than establishing a united front.
Devising AI ethical guidelines and committees might be self-serving. It might even be a push to scrub clean a tarnished image in the face of political and consumer criticism. However, it is hardly akin to thwarting real regulatory action.
Let’s be real.
No one actually knows what they’re doing when it comes to AI ethics guidelines. It’s uncharted territory for the policy experts, the ethicists, and big business. Which is why it is essential that those involved waste less time forcing stakeholders in the debate to defend themselves and any initial attempts at ethical protocols and invest more time in sharing expertise and finding common ground.