Algorithmic Discrimination

We’re learning about facial recognition and voice profiling this week, and we’re thinking about how “natural” it has become to accept the use of biometric data as an inevitable (and sometimes welcome) aspect of our digital tools and technologies.

For me, the most important reading of the week was a podcast that features Professor Joseph Turow in a conversation with Bret Kinsella, a podcaster who specializes in voice technologies. The conversation is nuanced, with insights from both experts, each of whom holds a distinctive stance on the potential benefits and harms of voice profiling.

This was the first time I developed a “study guide” for a podcast, which consisted of a series of questions aligned with some of the topics and issues discussed in the book.

  1. What are the differences between voice recognition, voice identification, and voice profiling?
  2. How can voice identification be used in positive ways?
  3. When you call a company to solve a problem, your recorded voice is used “for training purposes.” What does that actually mean?
  4. Why is Professor Turow opposed to the use of voice profiling for upselling?
  5. How could voice profiling be used in negative ways?
  6. What inferences can companies make about people just from voice data?
  7. How can voice data collected in the home be used to monitor and shape people’s technology use?
  8. Is voice data a type of democratization of knowledge? Why or why not?
  9. How does marketing reproduce inequality?
  10. What is “mass customization”?
  11. What are Google’s and Amazon’s policies on the use of voice data?
  12. Is there a need for new laws about the practice of marketing surveillance? Why or why not?
  13. What does Turow mean when he says voice data is a form of “seductive surveillance”?
  14. Location tracking, voice profiling, and facial recognition are all forms of biometric surveillance. How do people evaluate the different kinds of risk associated with these three practices?
  15. ❤️ Why does Professor Turow think it’s important for there to be a balance between segment-based media and society-making media?
  16. What are the pros and cons of regulation of voice profiling for commercial and political marketing?
  17. Why does Professor Turow think contextual advertising is better than personalized advertising?

Students offered some terrific insights on the experience. Among the most thoughtful was Catherine, who explained how most people feel a spirit of resignation when it comes to these technologies. Are we making rational decisions about using voice technologies? Turow points out that people agreed that both of the following statements were true:

“I would like to control my privacy data”

“I feel I have no control over my privacy data”

If the only alternative is not to participate in society, then many people will feel resigned to algorithmic profiling and tracking. That’s why we’ll be studying a bill proposed to address the harms of algorithmic profiling: the Algorithmic Justice and Online Platform Transparency Act, which would prohibit the discriminatory use of personal information by online platforms in any algorithmic process and require transparency in the use of algorithmic processes and content moderation. Transparency alone may not be sufficient to address all the problems with algorithmic discrimination, but it’s an important start.
