Machine Learning in Art, Design, Architecture, or: “Wolves in Sheep’s Clothing”

Written for 48–727: Inquiry into Computation, Architecture and Design at Carnegie Mellon University.

‘We are still in the silent movie stage of computation.’
 — Neil Leach, quoting Benjamin Bratton at ACADIA 2016

It’s tempting to call machine learning transformative. The ability to offload pattern recognition and knowledge acquisition to computers is in many respects still primitive, but it seems to hold the promise of a practice, across professional, academic, and creative disciplines, that is less about working directly on or with objects, artifacts, and tools, and less about direct observation, recording, and analysis in a scientific or design method. Instead, machine learning suggests an iterative, emergent way of working. Stan Allen, in his essay “Field Conditions,” describes the composer Iannis Xenakis as “working… with material that was beyond the order of magnitude of the available compositional techniques.”[1] While Allen is referring mainly to Xenakis’ use of graphic notation, the leap in scale from (for example) analyzing the syntax of a single sonnet to analyzing the entire Shakespearean corpus is similarly an “order of magnitude” greater. The choice between working manually (and tediously) for months or years and using an algorithm to do similar work in minutes seems clear. The trade-off, however, as with any new technology, is a certain loss of control: processes are hidden in a “black box” that the technocentric among us treat as having great authority.

The strongest criticism of machine learning (though still woefully under-recognized among enthusiasts) is its inherent implicit bias. Though the term has long been in use among psychologists and sociologists, only in the past two years has it been brought up in association with computers. A 2016 Princeton University study concludes that “…language itself contains recoverable and accurate imprints of our historic biases… These regularities are captured by machine learning along with the rest of semantics.”[2] Algorithms that encode human biases have real-world effects. In 2013, the Chicago Tribune detailed the “heat list” used by Chicago police to track likely criminals, which disproportionately identified black citizens over other races.[3] Private companies such as PredPol are contracted by cities such as Hagerstown, MD, Tacoma, WA, and Los Angeles, CA to provide “customized crime predictions for the places and times that crimes are most likely to occur.”[4] However, while implicit bias carries a wealth of hidden danger, it may also provide opportunities for those who read the results to reframe policy problems. Creative works that critically use machine learning offer a path for others to rethink the supposed objectivity and efficacy of algorithmic computation.
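The Princeton finding can be made concrete with a toy sketch. Word-embedding models represent words as vectors, and bias surfaces as a difference in cosine similarity between a target word and two attribute words (the intuition behind the study’s association tests). The vectors below are invented three-dimensional stand-ins, not learned embeddings; real experiments use vectors trained on large corpora.

```python
import math

# Hypothetical toy "embeddings" for illustration only -- real studies
# use high-dimensional vectors learned from billions of words of text.
vectors = {
    "doctor": [0.9, 0.2, 0.1],
    "nurse":  [0.3, 0.8, 0.1],
    "he":     [0.8, 0.1, 0.2],
    "she":    [0.2, 0.9, 0.1],
}

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def gender_association(word):
    """Positive = the word sits closer to 'he'; negative = closer to 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

for w in ("doctor", "nurse"):
    print(w, round(gender_association(w), 3))
```

In these made-up vectors, “doctor” leans toward “he” and “nurse” toward “she” by construction; the point of the study is that vectors learned automatically from ordinary text exhibit the same asymmetries without anyone putting them there deliberately.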

Failed iterations of the cellular automata solver from Paul Harrison’s “What Bricks Want”

In Paul Harrison’s paper “What Bricks Want: Machine Learning and Iterative Ruin” (presented at the 2016 ACADIA Conference), the author uses machine learning to derive unique structural arrangements through rigid-body simulations of building collapses. However, rather than viewing the results of his experiment as optimal structures, Harrison characterizes his software as a design tool: “a user interface allows the designer to specify a pre-simulation genotype… Beyond simply controlling the tower’s shape, the user interface also allows for bricks to be removed from its overall form.”[5] The designer is in conversation with the software; this conversation becomes richer when the designer is aware of and reacts to the tool’s agency. In another project, Robin Sloan’s “Writing with the Machine,” a recurrent neural network trained on a corpus of science fiction texts adds autocomplete functionality to a text editor. Type a few words, such as “It was a dark and stormy…” and the tool might suggest “…robot who responds to timeless gravity.”

Sloan characterizes the way in which the project ‘helps’ as “less Clippy, more séance.”[6] In both projects, machine learning provides “answers,” but the human/designer is expected to further investigate those answers — not take them as gospel. Unfortunately, even in humanistic fields such as art, design, and architecture, prominent voices are imbuing algorithms with ever more authority, encouraging an uncritical bowing to the machine.
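The “suggest a continuation” interaction in Sloan’s tool can be sketched without a neural network at all. The toy below swaps his recurrent neural network for a much cruder bigram (Markov-chain) model over a few invented sentences, but the shape of the exchange is the same: the writer types a prompt, and the machine offers a statistically plausible continuation to accept, reject, or riff on.

```python
import random
from collections import defaultdict

# A tiny invented corpus standing in for Sloan's science-fiction training set.
corpus = ("it was a dark and stormy night . "
          "the robot was a dark shape against the stars . "
          "a stormy sea of static filled the night air .").split()

# Bigram table: each word maps to the list of words observed after it.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def suggest(prompt, n_words=5, seed=0):
    """Continue the prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    words = prompt.lower().split()
    for _ in range(n_words):
        options = follows.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(suggest("it was a dark and"))
```

A bigram model only ever looks one word back, which is why real tools like Sloan’s use recurrent networks that carry longer-range context; the séance quality he describes comes from that longer memory producing suggestions that feel eerily on-theme.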

In October 2016, Mario Carpo delivered the keynote address at the ACADIA Conference in Ann Arbor, Michigan. Titled “The Second Digital Turn,” Carpo’s thesis is that, if the first digital turn in architecture — the advent of computer-aided design programs — allowed for unprecedented geometric forms, then the second digital turn — widespread access to computing power and machine learning algorithms — will allow anyone to know anything without knowing how to find it. In an earlier version of the talk, given in 2015, Carpo slyly suggests, “Today… in many cases, Google can already replace science.”[7]

His radical proposal is that nearly all of modern epistemology — logic, deductive and inductive reasoning, the scientific method — can’t keep up with the accumulation of knowledge by algorithms, and that humanity had better just give up on all of that.[8] Confronted after the talk in Ann Arbor by a question on implicit bias in machine learning and the dangers it poses to minority groups, Carpo dodged, seeming to lean back on a false sense of objectivity in algorithms, referring to their “0s and 1s.”[9] His resolutely apolitical response demonstrates either a willful ignorance of the messy realities of technology or a knowing denial of them. The former seems unlikely. If it’s the latter, however, we have to ask: what is the agenda of Carpo and, by extension, of machine learning’s advocates in art, design, and architecture? What is the role of humanity in the future they envision? Perhaps we should ask the machine.

  1. Allen, Stan. “Field Conditions,” in Points and Lines: Diagrams and Projects for the City (New York: Princeton Architectural Press, 1999), 90–103.
  2. Caliskan-Islam, Aylin, Joanna J. Bryson, and Arvind Narayanan. “Semantics derived automatically from language corpora necessarily contain human biases.” Princeton University, University of Bath, August 25, 2016.
  3. Gorner, Jeremy. “Chicago Police Use ‘Heat List’ as Strategy to Prevent Violence.” Chicago Tribune, August 21, 2013.
  4. “How PredPol Works.” November 19, 2015. Accessed November 1, 2016.
  5. Harrison, Paul. “What Bricks Want: Machine Learning and Iterative Ruin.” University of Toronto, 2016.
  6. Sloan, Robin. “Writing with the Machine.” May, 2016. Accessed November 1, 2016.
  7. Carpo, Mario. “The Second Digital Turn.” September 14, 2015. Accessed November 1, 2016.
  8. While Carpo falls short of scrapping the entire idea of public education, he does recommend ending the teaching of calculus in high schools, in favor of, obviously, algorithmic thinking.
  9. Also of note: an entertaining exchange with Neil Leach, who, with follow-up Q after follow-up Q, wouldn’t give up the microphone until Carpo literally waved his arms and yelled, “It’s over!”