Henry Kim
1 min read · May 1, 2017


It is not clear what exactly it means for the “machine itself” to be held accountable. If the decision rule is, to put it crudely, “if 3, then A,” applied to data where 99% of 3’s are A, the problem is that the decision rule should not have been employed in the first place (if the remaining 1% carries serious consequences), and if the designers of the algorithm did not even anticipate those consequences, that should count as awful malpractice in data use.

I imagine things get hazier when we are dealing with mechanisms where the causal linkage is murkier (say, connectionist models à la Deep Learning), but even then, what exactly would it mean for the “machine to be aware of the consequences of its actions”? The machine was not exactly “brought up” by humans, so to speak, that is, trained on data typical of humans of a “responsible” age, leaving aside the fact that Deep Learning is not quite the same as how the brain works. What could “a machine being aware of the consequences of its actions” possibly mean?
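To make the toy rule concrete, here is a minimal sketch of the arithmetic behind it. Everything in it (the function name, the 99% figure, the sample size) is illustrative, taken only from the crude example above, not from any real system:

```python
import random

random.seed(0)

def decision_rule(feature):
    """The crude rule from the text: if the feature is 3, decide A."""
    return "A" if feature == 3 else "not A"

# Hypothetical records whose feature is 3; 99% of them truly are A.
true_labels = [("A" if random.random() < 0.99 else "not A") for _ in range(10_000)]

# The rule decides A for every one of them, so it is wrong on the ~1%.
errors = sum(1 for truth in true_labels if decision_rule(3) != truth)
print(f"misclassified: {errors} of {len(true_labels)} ({errors / len(true_labels):.1%})")
```

If that roughly 1% carries serious consequences, the failure is a design decision about whether to deploy the rule at all, not anything attributable to the “machine itself.”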
