Human-Centered Machine Learning
Jess Holbrook

An excellent article. I was just writing an essay about some of these issues, though more focused on the general goals. You touch on it a little in #4, but I’m definitely concerned with the concept of teaching the system “values”. As humans, we make many unconscious (and some conscious) decisions based on values (e.g., do no harm, be environmentally proactive, protect privacy), and if we want our systems to come up with good answers, we need to find ways to have them weigh those values against the more obvious ones (e.g., maximize opportunities for profit or success) that we already teach them.

Thanks for sharing this. Very interesting points, and well described.


Response by Ben Langhinrichs