Published in Mines Robotics

Using Confucian Ethics to Design Communicative Robots

Something is wrong in the state of natural language generation. While deep neural end-to-end language generation systems have proven to be eminently capable of generating fluent, human-like text, they’ve also been demonstrated to be fatally flawed when it comes to generating text that is accurate and morally sensible. These challenges occur in part because neural language generation systems are trained to bullshit (in the formal linguistic sense), saying whatever will net them sweet, sweet reward, without caring whether what is said is true or false (or offensive).
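
To see where that incentive comes from, here’s a toy sketch (not from the paper; the model and optimizer are stand-ins for whatever network and optimizer you’d actually use) of the kind of training step these systems run: the loss rewards fluent next-token prediction, and nothing in it ever asks whether the resulting text is true, accurate, or appropriate.

```python
import torch.nn.functional as F

def training_step(model, optimizer, token_ids):
    """One standard language-model training step (illustrative only).

    The objective is pure next-token prediction: the model is rewarded
    for sounding like its training data, and the loss contains no term
    for truthfulness or moral acceptability.
    """
    inputs, targets = token_ids[:, :-1], token_ids[:, 1:]
    logits = model(inputs)  # hypothetical model returning (batch, seq, vocab) logits
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # flatten to (batch*seq, vocab)
        targets.reshape(-1),                  # flatten to (batch*seq,)
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```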

In contrast, we’ve been exploring how robots can be designed to carefully, strategically, and intentionally generate language in order to ensure moral sensitivity. To do so, we’re drawing new insights from a (really) old area of the literature: Confucian Ethics. To hear about the three ways we’re using Confucian Ethics to push back against bullshitting-based language generation, you can watch the video below, which we recorded as our remote presentation for HRI 2020.

Read the paper and find out more about the MIRRORLab’s work here.

Tom Williams is an Assistant Professor of Computer Science at the Colorado School of Mines, where he directs the MIRRORLab.