For example, the few studies that have been done on the use of AI and algorithmic decision-support systems in core social domains have produced troubling results. A recent RAND study showed that Chicago's predictive policing 'heat list', a list of people deemed at high risk of involvement in gun violence, was ineffective at predicting who would be involved in violent crime. It did, however, lead to increased harassment of those on the list. Similarly, a ProPublica exposé showed that criminal risk-assessment software produced results biased against black defendants. Upholding people's rights and liberties will require validation, auditing, and assessment of these systems to ensure basic fairness.
Artificial intelligence is hard to see
Kate Crawford

Agreed that there is a need for greater transparency, and also for standardization. In the NIST Big Data Public Working Group, we are seeking to identify existing and emerging techniques that support transparency, auditing, governance, and forensics wherever algorithmically modified data or decision-support code is exchanged between producers and consumers.
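
To make the kind of audit discussed above a little more concrete, here is a minimal sketch of one check an auditor might run: comparing false positive rates across demographic groups, the sort of disparity ProPublica reported in its risk-assessment analysis. This is an illustrative sketch only; the field names ("group", "predicted_high_risk", "reoffended") and the toy data are hypothetical, not the schema of any real system.

```python
from collections import defaultdict

def false_positive_rates(records):
    """Return the false positive rate per group: the share of people
    who did NOT reoffend but were still flagged as high risk.
    Record fields here are hypothetical placeholders."""
    flagged = defaultdict(int)    # non-reoffenders flagged high risk
    negatives = defaultdict(int)  # all non-reoffenders
    for r in records:
        if not r["reoffended"]:
            negatives[r["group"]] += 1
            if r["predicted_high_risk"]:
                flagged[r["group"]] += 1
    return {g: flagged[g] / negatives[g] for g in negatives if negatives[g]}

# Toy data: a large gap between groups is the kind of disparity
# an audit should surface for further review.
records = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
]
print(false_positive_rates(records))  # {'A': 0.5, 'B': 0.0}
```

A single metric like this is of course not a full audit; it is one of several checks (calibration, error-rate balance, and so on) that a standardized audit process could require.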
