Using Confucian Ethics to Design Communicative Robots

Tom Williams
Mines Robotics
Apr 21, 2020

Something is wrong in the state of natural language generation. While deep neural end-to-end language generation systems have proven to be eminently capable of generating fluent, human-like text, they’ve also been demonstrated to be fatally flawed when it comes to generating text that is accurate and morally sensible. These challenges occur in part because neural language generation systems are trained to bullshit (in the formal linguistic sense), saying whatever will net them sweet, sweet reward, without caring whether what is said is true or false (or offensive).

In contrast, we’ve been exploring how robots can be designed to carefully, strategically, and intentionally generate language in order to ensure moral sensitivity. To do so, we’re drawing new insights from a (really) old area of the literature: Confucian Ethics. To hear the three ways we’re using Confucian Ethics to push back against bullshitting-based language generation, you can watch the video below, which we recorded as our remote presentation for HRI 2020.

Read the paper and find out more about the MIRRORLab’s work here.

Tom Williams is an Assistant Professor of Computer Science at the Colorado School of Mines, where he directs the MIRRORLab.