Great article, but as a bit of a side-joke I can only imagine the outcry of “activists” complaining that their scale is fatphobic because it’s taking a stance on gaining weight being something worthy of an alert.
I agree that we need to be brave… or even, if not brave, at least use the damn data for something, since most companies just sit on terabytes of data that’s of no use to anyone. But it’s also important to understand the social repercussions of using that data. Users can feel like they’re being watched, or that their privacy is being violated… and as I said, somewhat tongue-in-cheek, they might even complain about the data taking a stance (a stance they might disagree with).
I mean, this is the same internet that accused an automated Twitter bot of becoming a Nazi supporter and a homophobe… a chat bot. The same thing happened to Facebook, with people claiming that an automated algorithm had a liberal bias. There’s a fine line where people forget they’re interacting with data sets and inert pieces of plastic and metal, and start acting and feeling like they’re interacting with real, cognitive beings.
Great read and great examples. Thanks for sharing!