Racist Algorithms
The problem with our computer systems getting better and better at anticipating human behavior is that human beings are incredibly flawed.
If our computers are flawed like us, they might make better guesses as to what we will do next. Then again, they might amplify our flaws by burying them in systems, processes, and algorithms that very few people will ever touch.
The data science design choices we make now will stay with us for decades.
Do we want our systems to be flawed like us? If not, how do we make them better?
And inaction is still a choice.