Human-Centered AI: Reliable, Safe & Trustworthy: HCIL Symposium

--

By John Dickerson, Hernisa Kacorri, and Ben Shneiderman

Introduction

Dramatic breakthroughs in artificial intelligence algorithms have provoked worldwide interest. Popular books promise bountiful benefits from AI while stoking fears of out-of-control systems, deadly robots, and massive unemployment. More than 300 national policy documents, including U.S. White House reports, advocate for fair, accountable, transparent, responsible, secure, auditable, explainable, and interpretable systems.

A sample of books on AI's promises and dangers

To advance these discussions, we organized a workshop on Human-Centered AI: Reliable, Safe & Trustworthy as part of the 37th Annual Symposium of the Human-Computer Interaction Lab at the University of Maryland on May 28, 2020. Some of the speakers' slides and videos are on the workshop website.

We believe that well-designed technologies, which offer high levels of human control and high levels of computer automation, will increase human performance rather than replace people. These Human-Centered AI technologies are more likely to produce designs that are Reliable, Safe & Trustworthy (RST). Achieving these goals will dramatically amplify human performance, while supporting human self-efficacy, mastery, creativity, and responsibility.

--

BEN SHNEIDERMAN (http://www.cs.umd.edu/~ben) is a Distinguished University Professor Emeritus in Computer Science at the University of Maryland and a Member of the National Academy of Engineering.