Moises, this is a very well-researched article. I read all the way to the bottom and then realised it’s YOU from UCSC! So wonderful to read your work!
BUT, I digress…
The issue of algorithmic accountability is very interesting, and one we are constantly trying to address within data science. For example, in some sectors in Australia (such as insurance), it is illegal for organisations to include variables classed as discriminatory (such as gender, income or racial background) in their risk algorithms. Yet, because algorithms are very good at inferring these attributes from other variables, such as address, job and education level, they can end up producing much the same outcome anyway. Gender, though, is an interesting one: if it were used in car insurance algorithms, women would get cheaper premiums, as they are significantly less likely to be involved in an accident. So, is it OK to include variables that “profile” us if doing so produces a better outcome (such as a better-fitting shirt or cheaper car insurance)? Or should they be left out of the calculations altogether?
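To make the proxy point concrete, here is a minimal, hypothetical sketch in Python using synthetic data and scikit-learn (the gender, occupation and postcode variables and their correlations are all invented for illustration, not drawn from any real insurer): even when the protected attribute is withheld from a model, it can often be recovered from correlated features.

```python
# Hypothetical sketch: a protected attribute (gender) is never shown to the model,
# yet "proxy" features that correlate with it can reconstruct it.
# Assumes numpy and scikit-learn are installed; all data here is synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic protected attribute (excluded from the feature set).
gender = rng.integers(0, 2, size=n)

# Proxy features that happen to correlate with the protected attribute
# in this toy setup (think occupation code, postcode cluster).
occupation = 1.5 * gender + rng.normal(0, 1, size=n)
postcode = 1.0 * gender + rng.normal(0, 1, size=n)
X_proxies = np.column_stack([occupation, postcode])

# A classifier trained ONLY on the proxies still learns to predict gender.
X_tr, X_te, y_tr, y_te = train_test_split(X_proxies, gender, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(f"Gender recovered from proxies alone: {clf.score(X_te, y_te):.0%} accuracy")
# Well above the 50% chance level, so a risk model built on these proxies can
# still produce gender-differentiated premiums even with gender removed.
```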
(I too have had trouble finding any clothes that fitted me in several countries in Asia, and in Singapore my colleague told me clothing shops were referred to as being for “twigs” or “trunks”; you can guess which shops were for the Westerners…)