Overall, I believe we need to make sure this is not a question that data scientists and engineers have to solve on their own. These general ethical discussions should include upper management and legal teams, since being fair and ethical can compete with the firm's short-term profit-seeking objectives.
The good news is that the research community is working on the problem. Here is a paper I like that explicitly embeds this idea of affirmative action into the loss function: http://proceedings.mlr.press/v28/zemel13.pdf. This is one such method, and I'm hoping these become more mainstream, both to prove their merit in real-life settings and to inspire more research. A simplified sketch of the general idea appears below.
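To make the "fairness in the loss function" idea concrete, here is a minimal sketch, not the actual objective from the Zemel et al. paper, just an illustration of adding a fairness penalty to an ordinary loss. The function name `fair_logistic_loss`, the penalty weight `lam`, and the toy data are all my own illustrative choices: a plain logistic loss plus a squared gap in mean predicted score between two groups (a rough statistical-parity proxy).

```python
import numpy as np
from scipy.optimize import minimize

def fair_logistic_loss(w, X, y, group, lam=1.0):
    # Predicted probabilities from a plain logistic model.
    p = 1.0 / (1.0 + np.exp(-X @ w))
    eps = 1e-12
    # Standard cross-entropy (accuracy) term.
    log_loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    # Illustrative fairness term: squared gap in mean predicted score
    # between the two groups (a crude statistical-parity proxy).
    parity_gap = (p[group == 1].mean() - p[group == 0].mean()) ** 2
    # lam trades off accuracy against the parity penalty.
    return log_loss + lam * parity_gap

# Toy usage on synthetic data: fit weights that balance both terms.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
group = (rng.random(200) < 0.5).astype(int)
y = ((X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=200)) > 0).astype(int)
w_fair = minimize(fair_logistic_loss, np.zeros(3), args=(X, y, group, 1.0)).x
```

Turning `lam` up pushes the model toward equal average scores across groups at some cost in accuracy; the published methods are considerably more sophisticated than this, but the trade-off they formalize is the same.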
For a general review of methods for both measuring and removing bias (in the racism/sexism sense), check out this [shameless plug] paper I recently published: http://online.liebertpub.com/doi/pdf/10.1089/big.2016.0048