
Machine learning is often considered a young field, despite its roots in established sciences; if that is the case, ML fairness and ethics is still in its infancy, despite similarly deep historical roots. Yet the increasing use of ML in everyday computing makes it a salient and critical area to review for biases of race, gender, class, and other dimensions that affect human lives, sometimes in invisible ways.

BIAS PROBLEMS

In Industry

Scholars identify a multitude of industries that use ML and often struggle with fairness. Four are cited most commonly.

The first of these is criminal justice, where statistical risk assessments are used (Davies), recidivism is modeled (Gebru), and algorithms are used by police to target people or populations (Garcia). There is a profound question of fairness in the use of ML in criminal justice (Benthall), and particular concern about the racial neutrality of sentencing models (Lee). …


In favor of a ten thousand-foot view of design ethics.

Would we always know when our designs do damage, in the long run or short? Would we have the courage to admit it and the initiative to change it?


We live in a culture of instant gratification and persistent distraction. The two qualities go hand in hand, reinforcing each other. Our game designs are fast-paced and reward-laced, addictive. …

About

Xian Gu

Maker, giver, learner, and all-around nerd. UX researcher and strategist with a background in HCI and psychology. Currently @ Microsoft
