Semantic Arithmetic

Eamon Abraham
DBRS Innovation Labs
Jul 8, 2016
(image: Fletcher Bach)

“… like mathematics, literature could be manipulated.” - Jean Lescure

Allison Parrish is a computer programmer, educator, and poet whose work deals with the materiality of language. Her background in both the formal study of language (she received her BA in Linguistics from UC Berkeley) and her years of experience as a software developer have led her to explore words not only as symbols but also as objects in their own right, teasing the boundaries between medium and message to ask questions about how communication happens in a digital context.

Electronic media open up entirely new possibilities for the manipulation of language, but they also lay bare some of the thorniest problems of textual interpretation. If we are writing for other humans we can assume a baseline understanding of how language works, an understanding which computers still do not share. If this seems abstract, consider the way that language comes to be represented as a text file in a computer: words are compressed into ASCII characters and stored as bits on a hard drive; the fundaments of human communication, having emerged out of obscure prehistory and evolved for centuries, accumulating layers of connotation and nuance along the way, are now encoded as tiny electrical charges in a matrix of transistors.
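As a concrete illustration of that compression, here is how a single word looks once it has been reduced to ASCII codes and bits (plain Python; the word is arbitrary):

```python
# One word, reduced to the numbers a machine actually stores.
word = "language"

codes = [ord(c) for c in word]                            # ASCII code points
bits = [format(b, "08b") for b in word.encode("ascii")]   # the bits on disk

print(codes)  # [108, 97, 110, 103, 117, 97, 103, 101]
print(bits)   # ['01101100', '01100001', '01101110', ...]
```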

“Despite the proliferation of new media in recent years, words are still the medium of almost all serious business and scholarship,” said DBRS Innovation Labs Director Amelia Winger-Bearskin. “That is why it is so important to develop robust and dependable tools for natural language processing. If we want to use computers to augment our understanding in these areas, we need methods by which computers can interpret the subtleties, idioms, and contextual cues that crop up in even the most official of documents.”

This is where machine learning comes in.

Left: Vectors in three-dimensional space with a male-female semantic relationship. Right: the same vectors flattened onto two-dimensional space. (image: Joelle Fleurantin)

For this project, Parrish used a machine learning algorithm called Word2Vec, a neural network that takes text as input and assigns each word a vector as output. (Don’t be misled, as I was, by the word “vector”: in this context it means an ordered list of numbers, not a force with a direction as it does in high school physics.) Word2Vec determines the values of each vector by comparing instances of a given word across the various contexts in which it appears; the resulting vector constitutes something analogous to the word’s meaning.

“With Word2Vec, words that have similar meanings should have closer vectors than words with divergent meanings,” said Parrish. “So for instance, the vectors for ‘cat’ and ‘kitten’ will be closer in value than the vectors for ‘abacus’ and ‘mastodon’ will be.”
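That comparison can be sketched in a few lines using gensim’s pretrained Google News Word2Vec vectors (an assumption made for illustration; the article does not say which trained model Parrish worked with):

```python
import gensim.downloader as api

# Pretrained Word2Vec vectors (a large one-time download).
wv = api.load("word2vec-google-news-300")

print(wv["cat"].shape)                      # (300,) -- one vector per word
print(wv.similarity("cat", "kitten"))       # comparatively high
print(wv.similarity("abacus", "mastodon"))  # much lower
```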

Image: @aparrish

The process of converting words into vectors is called “word embedding,” which Parrish summarizes as “basically a way of translating a word into a particular coordinate in high-dimensional space.” “High-dimensional space” here refers to the number of dimensions along which a given vector extends. Because any given word can be used in a wide variety of contexts, its use is contingent upon its relationship to a whole host of other words and other contexts. It is easy to visualize a comparison made along one, two, or three axes, but beyond that our naive visual understanding begins to betray us. High-dimensional space can thus be understood as a mathematical abstraction that accounts for comparisons made across many different axes at once. (Those with humanities backgrounds may find useful analogies in Derrida’s différance or Wittgenstein’s formulation of the “family resemblance,” but it is important to remember that machine learning algorithms deal in statistics and linear algebra, not philosophy.)
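In practice, those many-axis comparisons reduce to ordinary linear algebra. The standard measure is cosine similarity, which collapses a comparison across hundreds of dimensions into a single number; here is a minimal sketch with stand-in vectors:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Compare two vectors along all of their dimensions at once."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two stand-in 300-dimensional "word vectors":
rng = np.random.default_rng(0)
cat, kitten = rng.normal(size=300), rng.normal(size=300)

print(cosine_similarity(cat, kitten))  # one number summarizing 300 axes
```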

The challenge for Parrish on this project was: “How do we help people understand the nature of word embedding?” These techniques are a source of great excitement in the field of natural language processing, but communicating how they work to a lay audience is no easy task.

One way to approach this difficulty was to represent these operations in a format that people already understand, a technique that Parrish has been refining for some time. For example, as part of her master’s thesis at NYU’s Interactive Telecommunications Program (where she now teaches), she created several interfaces that borrowed the physical vocabulary of musical tools and applied it to written words.

“I’m interested in coming up with new ways to get text into computers,” said Parrish. “Normally we do that with a keyboard, but we’re also used to hybrid interfaces like spell-check or auto-complete. Almost all of our writing is mediated in some way, often by sophisticated machine learning.”

To illustrate how word embedding works, Parrish borrowed from the world of image-processing software; most people already have a feel for how visual effects like Instagram filters or Photoshop adjustments work. Because Word2Vec converts words to numbers, it is possible to perform what Parrish calls “semantic arithmetic” on them. In other words, she could apply to text the same logic that image-processing algorithms use.

Pixels in an image represent underlying numerical values. In grayscale, for example, a white pixel has a value of 255 and a black pixel a value of 0. (image: Fletcher Bach)
In image processing software, pixels from source images are averaged. Pixels with new averaged values make up the resulting image. (image: Fletcher Bach)
When words are represented by a series of numbers, we can apply the same processing algorithms to them. In this case, “cliff” is the average of “verge” and “rock”. (image: Fletcher Bach)
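The word-averaging step in the last caption can be sketched in a few lines, again assuming gensim’s pretrained vectors; whether “cliff” actually lands nearest the midpoint depends on the model:

```python
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")

# Average the two word vectors, then ask which vocabulary words sit
# closest to that midpoint -- the textual equivalent of blending pixels.
midpoint = (wv["verge"] + wv["rock"]) / 2
print(wv.similar_by_vector(midpoint, topn=5))
```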

“The overarching idea was, ‘What if you had Photoshop for text?’” she said. “What would that look like?” She began experimenting with different effects, finally settling on three: Gaussian Blur, Blend, and Resize. Parrish applied these effects to two source texts: Jane Austen’s Pride and Prejudice and the Declaration of Independence (both available online for free from Project Gutenberg).
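The article does not spell out what a Gaussian blur means for text, but one plausible reading treats a passage as a sequence of word vectors and smooths each dimension along the word order, just as a blur smooths pixel values along an image row. A sketch of that reading (not Parrish’s actual code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Stand-in embeddings: a 10-word passage, 300 dimensions per word.
vectors = np.random.default_rng(0).normal(size=(10, 300))

# Blur along the word axis: each word's vector becomes a Gaussian-weighted
# average of its neighbors. Each blurred vector would then be mapped back
# to the nearest word in the vocabulary to produce the output text.
blurred = gaussian_filter1d(vectors, sigma=2.0, axis=0)
```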

To illustrate how the different effects work, Sigtxt features images of Jane Austen and Thomas Jefferson, authors of the two source texts. This one corresponds to the interface below, with Blend in the middle position and Blur and Resize set to zero. Play with the demo at dbrslabs.com/sigtxt. (image: Joelle Fleurantin)

With the assistance of the DBRS Labs team, Parrish formatted the output of her algorithmic experiments as an interactive demo called Sigtxt. The name comes from “signal processing,” the radio engineer’s term for the techniques used to pull a human- or machine-readable signal out of the noise of electromagnetic current.

“Eventually as radio became a main means of communication in places like battlefields in WWI, ‘signals intelligence’ became a thing and called ‘sigint,’ and since then, the ‘xyzint’ moniker became popularized in intelligence agencies for different types of intelligence collection,” explained DBRS Labs Developer David Huerta. Like its namesake, Sigtxt processes one signal and translates it to another, taking classic prose passages of the Western canon and converting them to machine-readable vectors.

Sigtxt runs in the browser. Each effect is controlled by a slider, a familiar user-interface element that should feel intuitive to anyone who has spent time on the web.

“Almost everyone speaks a language, and almost everyone is really good at it,” said Parrish. “And we love to play with it… The more you can give an interface that helps people make little changes and see what the result is, that opens up an intuitive understanding that would be difficult to come across otherwise.”

“I performed a series of experiments with that as its premise,” she explained. “Since we can take a text and translate it into a vector, what are the transformations that we can apply to that list of values? Because it’s just floating-point numbers in a list, it is trivial to apply the same algorithms that we use for audio or image data; you can just plug the Word2Vec numbers in and apply them wholesale.”

First she converted the words in her source texts to vectors using Word2Vec and spaCy (another powerful machine learning library). Then she applied algorithms to those vectors, changing their values. Finally, she re-rendered the text from the new values to produce entirely new text. The resulting demo lets users interact with machine learning and perform “semantic arithmetic” in what feels like real time.
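An end-to-end sketch of that three-step pipeline, assuming spaCy’s en_core_web_md model (which ships with word vectors) and a simple scaling as the transformation; the exact models and effects Parrish used are not specified here:

```python
import numpy as np
import spacy

nlp = spacy.load("en_core_web_md")  # a vector-bearing English model
doc = nlp("It is a truth universally acknowledged")

# Step 1: words -> vectors.
vectors = np.stack([token.vector for token in doc])

# Step 2: transform the vectors (here, a "resize"-style scaling).
vectors = vectors * 1.25

# Step 3: re-render by mapping each vector back to its nearest word.
keys, _, _ = nlp.vocab.vectors.most_similar(vectors, n=1)
print(" ".join(nlp.vocab.strings[int(k)] for k in keys.flatten()))
```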

Parrish’s project is a playful way of demonstrating the principles of machine learning techniques as they apply to text. The texts generated as you manipulate the sliders are determined by rigorous computation, even if they don’t always read like something a rational human mind could have produced. Depending on the configuration of the sliders, the texts can be by turns poetic, disarming, nonsensical, and sometimes quite offensive.

For all of their remarkable sophistication, computers are still a long way from truly understanding much of what is most crucial about language. The algorithm is wholly indifferent to human social and cultural mores. In demonstrating this, the “failures” of such an approach are more instructive than its “successes” would be.

When you play with Parrish’s interface, you are interacting with an application of math to text. It carries no preconceptions about what is appropriate, or even coherent, to say. When the script juxtaposes an anatomical word with the name of a divinity, for instance, it is in a sense only exposing connections that were already present, if not in the spirit of the text then in the letter, in the data.

“[If people are] at least getting the understanding that, ‘Okay, computers are converting those texts to numbers,’ then I think that is a good insight,” Parrish said.

“The point of approaching this from an artist’s perspective is to be critical about it,” she continued. “This stuff does not work the way that you think it works intuitively… In the process of exploring this conceptual space I’m hoping that people will see that there is no inherent, objective midpoint between ‘hot’ and ‘cold’ for example.”

As natural language processing and machine learning become more and more common features of our digital lives, it is important to understand how they work, what they can do, and what they can (or cannot) care about.

“Computers are never going to replace humans thinking about language.”

Our team consists of engineers and mathematicians, storytellers and data artists. We interrogate big datasets to uncover hidden trends, make animations that set beautiful geometries in motion, and train machine-learning algorithms to hew insights from raw numbers. Our tools allow us to examine the details of our economy and our world with extreme precision, and to simplify complex information accurately. We are dedicated to finding exciting new ways of helping people see the insights beyond the rating. Learn more at http://dbrslabs.com/
