Critical Object Final Proposal

Brent Bailey
Published in Critical Objects
Apr 29, 2019

Topic: Data transparency

Limitations: Make our own models; be transparent about every step.

Framework: Daniel Weil’s “interpretation, representation and communication.”

For our final project, Adi and I plan to make an object that critiques the notion of a “black box” in artificial intelligence: the excuse, used by many executives to explain away misuse of their products (or ill-advised products in the first place), that the algorithms themselves are inexplicable, somehow above rationality, or unrelated to the humans who make and implement them. We want to create a transparent form of AI, prompting the audience to ask themselves whether a transparent form is better, what forms transparency might take, and what it means for these “black boxes” to be opened.

We drew our initial inspiration from AI Now’s Discriminating Systems report. It documents a culture wherein algorithms have literal life-or-death consequences: who gets hired, who gets health care. These decisions are trained on previously existing data from systems that are well documented to be biased: a hiring AI is trained on a system that has historically privileged white men who went to Ivy League schools; a health care AI is trained on data from a system that has historically paid more attention to, and taken better care of, white people than people of color. When these systems are questioned, the people who implemented them often dismiss the questioner or plead innocence, since the algorithms are supposedly acting on their own. When you examine the data being fed to them, however, the patterns that lead to this behavior become clear. Biased existing systems are being weaponized at scale by artificial intelligence that relies on the biased datasets those systems have created. The AI Now report asks what must be done to create a better form of AI, and questions whether such a form can even be created. How to make datasets that represent people traditionally excluded from them, or whether a dataset can ever be unbiased, are problematic questions in their own right.

We hope to engage the user in this debate, though from a more metaphorical perspective than my previous attempt. Right now, we’re envisioning a camera that runs image classifier algorithms, but communicates the process that’s happening at every step of the way, inspired by Daniel Weil’s Radio In A Bag. Rather than hide the algorithm, we intend to try to visually communicate it through the screen, describing both the process of image classification and the dataset that it’s pulling from. In doing so, we plan to train multiple image classification models on different datasets to make it clear to the user how important input data is to the model itself: there are no universal or infallible models. A model trained on flowers is different from a model trained on dogs. In form, we’re envisioning a transparent sphere so all its component parts are visible, inspired by the classic crystal ball or magic 8 ball.
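Something like the minimal sketch below is what we have in mind for the “multiple models, same camera” idea: the same frame classified by two models trained on different datasets, each producing a very different answer. The checkpoint paths, class labels, and image file are hypothetical placeholders for models we would train ourselves; this is a sketch of the comparison, not a finished implementation.

```python
# Minimal sketch: the same photo classified by two models trained on different
# datasets. The checkpoint files ("flowers_model.pt", "dogs_model.pt"), the
# label lists, and "camera_frame.jpg" are hypothetical placeholders.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def load_model(checkpoint_path, num_classes):
    """Load a ResNet-18 whose final layer was retrained on one dataset."""
    model = models.resnet18(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, num_classes)
    model.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
    model.eval()
    return model

def classify(model, labels, image_path):
    """Return the model's top label and its confidence for one image."""
    image = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(image), dim=1)[0]
    conf, idx = probs.max(dim=0)
    return labels[idx.item()], conf.item()

if __name__ == "__main__":
    flower_labels = ["daisy", "rose", "tulip"]    # placeholder classes
    dog_labels = ["beagle", "poodle", "terrier"]  # placeholder classes
    flowers = load_model("flowers_model.pt", len(flower_labels))
    dogs = load_model("dogs_model.pt", len(dog_labels))
    # The same image produces two very different "truths":
    for name, model, labels in [("flowers", flowers, flower_labels),
                                ("dogs", dogs, dog_labels)]:
        label, conf = classify(model, labels, "camera_frame.jpg")
        print(f"Model trained on {name}: sees '{label}' ({conf:.0%} confident)")
```

Showing both outputs side by side on the sphere’s screen is one way to make the point that the model is only ever a reflection of its training data.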

What we’re still struggling with is the final output: we’d like to make something poetic rather than merely descriptive, but we’re not sure of the best way to move forward on that. We’ve discussed seeding a randomly selected RNN with the image classifier’s output: for example, generating a text from the work of James Baldwin or from a set of police reports. Hopefully, this will demonstrate to the viewer how important the input data is to the output of a model, and how biased or different a model may be depending on its training data.
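As a rough sketch of that output step, the classifier’s label could seed a text generator trained on one of several corpora chosen at random. The corpus files below (“baldwin.txt”, “police_reports.txt”) are hypothetical placeholders, and a simple word-level Markov chain stands in here for the RNN we would actually train; the point is only the seeding-and-corpus-selection logic.

```python
# Minimal sketch of the "poetic" output step: a corpus is picked at random,
# and the classifier's label seeds a generator built from that corpus.
# The corpus filenames are placeholders; a Markov chain stands in for the RNN.
import random
from collections import defaultdict

def build_chain(corpus_path):
    """Map each word in the corpus to the words that follow it."""
    words = open(corpus_path, encoding="utf-8").read().split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, seed_word, length=40):
    """Generate text starting from the classifier's label, if the corpus contains it."""
    word = seed_word if seed_word in chain else random.choice(list(chain))
    output = [word]
    for _ in range(length):
        word = random.choice(chain[word]) if chain[word] else random.choice(list(chain))
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    corpora = ["baldwin.txt", "police_reports.txt"]  # placeholder corpora
    chain = build_chain(random.choice(corpora))
    classifier_label = "rose"                        # output from the camera's model
    print(generate(chain, classifier_label))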

Some sketches/ideation are below.

We’re inspired in this by Zach Blas, Ross Goodwin, and Daniel Weil.

Mood board:
