New laws of robotics suggested by Austrian curator
Isaac Asimov wrote three laws of robotics as science fiction; they have since migrated into the real world as the rules by which the robotics revolution is assumed to be governed, if it is governed by anything.
Now the director of Vienna’s Museum of Applied Arts has come up with three additional proposed laws to bring greater humanity to the interaction between people and robots. Coinciding with an astounding exhibition on robotics at the MAK, Christoph Thun-Hohenstein has published the new rules to supplement the foundations of the Asimov “laws”.
Journalists and media figures attending the Global Editors Network annual conference, the GEN Summit, saw the “Hello Robot” exhibition, which embodies the technology and innovation inherent in the GEN approach to the challenges of modern journalism.
The additional “laws” Thun-Hohenstein has proposed are:
1. Intelligent robots must serve the common good of humanity and help us humans to lead an ecologically, socially, culturally, and economically sustainable life.
2. Intelligent robots may replace human labor only to the extent that this is compatible with humans leading a meaningful life of dignity, culture, and creative self-realization — except where this Rule conflicts with Rule 1. (Note: A conflict with Rule 1 would occur if, for instance, respect for human labor were effectively to prevent intelligent robots from serving the common good of humanity and from helping us humans to lead an ecologically, socially, culturally, and economically sustainable life; this proviso deliberately puts pressure on us humans to live such a life.)
3. Intelligent robots must be programmed to be cooperative, self-learning machines and to always function cooperatively — except where this Rule conflicts with Rules 1 or 2.
And a reminder of those original Asimov rules:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

More on Wikipedia.