Agree 100%. The current state of machine learning is very much just stereotyping on steroids. All of our human prejudices are encoded directly in the real-world data that we feed into the machine learning model-building process.
For any decision that requires traditional human judgment, as opposed to a defined mathematical formula, it is unreasonable to expect a machine learning model to be any less biased than we are. In fact, it could be more biased, if the real world is less ideal than our own individual beliefs.
Yet many people make the wrong assumption that just because a machine made a recommendation, it is unbiased. Many think that machines, by their very nature, have no bias. Unfortunately, this could not be further from the truth when it comes to machine-learned models.
I believe this is one of the biggest ethical issues we face in the current accelerating adoption of AI and machine-learned models.
We need to educate the public on the dangers of simply trusting anything based on machine learning. We also need to get better at understanding the underlying rationale of machine-learning decisions.
Right now, most machine-learning models are difficult to unravel: identifying the primary factors that influenced a given outcome is hard. This needs to get much, much better.
If an applicant-ranking model rejects some candidates over others, we need to understand why, as opposed to just trusting that it is making statistically valid decisions. Otherwise, our AI will simply help reinforce the unfair biases of the world we currently live in.
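To make the point concrete, here is a minimal sketch of what "understanding why" can look like for the simplest case, a linear scoring model, where each feature's contribution to the score can be read off directly. The feature names and weights are hypothetical illustrations, not any real hiring model; real-world models are far more opaque, which is exactly the problem.

```python
def explain_score(weights, features):
    """Decompose a linear model's score into per-feature contributions.

    For a linear model, the contribution of each feature is simply
    weight * value, so the decision is fully auditable.
    """
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    # Rank features by the magnitude of their influence on the outcome.
    ranked = sorted(contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

# Hypothetical weights and applicant -- illustrative only.
weights = {"years_experience": 0.8, "zip_code_group": -1.5, "referral": 0.6}
applicant = {"years_experience": 3, "zip_code_group": 2, "referral": 1}

score, ranked = explain_score(weights, applicant)
# The top-ranked contribution shows which factor most influenced the
# outcome. If a proxy feature like zip_code_group dominates, that is
# precisely the kind of encoded bias an audit should surface.
```

With a deep model, no such direct decomposition exists, and we are left with approximate explanation techniques; that gap is what makes the transparency problem so pressing.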