Technological Criticism as Politics by Other Means
An interesting, and revealing, story about an IT mishap:
With a few taps on a computer keyboard, a student’s entire school history from kindergarten to high school graduation was supposed to show up on the screen. That perfect score on a third-grade spelling test, that trip to the principal’s office for talking too much in class, that day of ditching math as a senior.
The computer software was supposed to help school officials schedule the classes a student needed to earn a diploma or attend college and to allow parents to track their children’s grades and attendance.
These kinds of problems recur routinely across so many facets of our lives. We allow things like scheduling programs and other mundane forms of automation to control our lives, and yet there are no demands for “scheduling algorithm transparency.” And why was the system adopted in the first place? Oh, wait: because analog, non-automated solutions led to systematically bad outcomes for a disadvantaged group:
The district agreed to implement the student information system as a result of a federal class-action lawsuit two decades ago. The suit alleged that LAUSD violated special education students’ rights, in part, by keeping such disorganized records that it sometimes lost track of those students’ needs.
The truth is that “algorithms” have been ruling our lives for a very long time now. But the only time we care is when Facebook can make money off of them. If the nature of the objection is “X decision system that optimizes Y normative goal is embodied in a computer system,” then I do not see how one can hold it without knee-jerk opposition to every instance of such systems, not just the private sector’s: payroll software, for example, or the multi-threaded algorithms running under the hood of the very word processing applications that “algorithms” fearmongers use to type out their diatribes. We don’t use the same loaded, fearmongering language or demand transparency when it comes to most other ways automation structures our lives. Once again, this quote is relevant:
Second, because what the algorithm is designed to optimize is generally going to be something like ‘maximize ad revenue’ and not anything explicitly pernicious like ‘screw over the disadvantaged people’, this line of inquiry will raise some interesting questions about, for example, the relationship between capitalism and social justice. By “raise some interesting questions”, I mean, “reveal some uncomfortable truths everyone is already aware of”. Once it becomes clear that the whole discussion of “algorithms” and their inscrutability is just a way of talking about societal problems and entrenched political interests without talking about them, it will probably be tabled due to its political infeasibility.
That is (and I guess this is the third point) unless somebody can figure out how to translate the social justice goals of the activists/advocates into a goal function that could be implemented by one of these soft-touch expert systems. That would be rad. Whether anybody would be interested in using or investing in such a system is an important open question. Not a wide open question–the answer is probably “Not really”–but just open enough to let some air onto the embers of my idealism.
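To make the idea concrete, here is a purely hypothetical sketch of what such a goal function might look like. Every name, number, and weight below is invented for illustration; the point is only that the trade-off between an institution’s revenue goal and an activist’s fairness goal can be written down as a single explicit objective, at which point the argument is about the weight, not the computer.

```python
# Hypothetical sketch: an objective that trades off revenue against a
# fairness penalty, rather than optimizing revenue alone.
# All names and weights here are invented for illustration.

def disparity(outcomes_a, outcomes_b):
    """Absolute gap in average outcomes between two groups."""
    mean_a = sum(outcomes_a) / len(outcomes_a)
    mean_b = sum(outcomes_b) / len(outcomes_b)
    return abs(mean_a - mean_b)

def objective(revenue, outcomes_a, outcomes_b, fairness_weight=50.0):
    """Score a candidate policy: revenue minus a weighted disparity penalty.

    fairness_weight encodes how much revenue the operator is willing to
    give up per unit of inter-group disparity -- which is exactly the
    political question hiding inside the "goal function".
    """
    return revenue - fairness_weight * disparity(outcomes_a, outcomes_b)

# A lower-revenue policy that treats groups equally can outscore a
# higher-revenue, more disparate one, depending on fairness_weight.
equal = objective(100.0, [0.5, 0.5], [0.5, 0.5])   # no disparity penalty
skewed = objective(120.0, [0.9, 0.9], [0.1, 0.1])  # heavily penalized
assert equal > skewed
```

Note that nothing here is technically hard; the hard part is getting anyone with decision authority to agree on a nonzero `fairness_weight`.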
Criticism of algorithms often amounts to little more than advancing other policy preferences through the back door. Unsurprisingly, those preferences aren’t shared by the people with relevant decision authority over how the algorithms are coded. So where does that leave algorithmic transparency?
Dropping “algorithmic” from “transparency” would focus a lot of energy on actual efforts to change our relationships with the large institutions that have power and authority over us, instead of just yelling “ooh scary scary something something computer AI HAL Skynet” and hoping the fearmongering works out.