It is not clear to me that the problem is that the “algorithm” is biased. An algorithm picks up whatever data is out there and detects patterns therein. If we are being charitable, we might say that the sampling process is biased. If we are being less charitable, we might say that the data-generating process is biased: the sources of the raw data are themselves filled with biases of the kind we would not want to invoke in polite company. If anything, we are asking AI to “lie tactfully,” to ignore what it sees that might offend people. That is something most humans cannot pull off gracefully. Perhaps it is doable as a pure technological endeavor, but it raises the question of whether we are asking the right questions. Algorithms will do what they are designed to do, honestly; they may exceed their programming by “learning,” but how they learn still follows their design. Yet in our own human naivete, we take for granted social conventions that are not altogether “logical,” and we are surprised when honest machines fail to recognize them.
This goes beyond the problem with AI. We need a better understanding of what things like tact, ethics, and decency mean among us humans, not just in moralizing terms but in logical terms that a computer can understand, before we can expect AI to recognize and respect them. We need to figure ourselves out first, before expecting computers to figure us out and treat our sensibilities as we’d like them to be treated.