Machine Learning Literacy Workshop at SFPC

Naoto Hieda
6 min read · Feb 21, 2018

From February 12 to 18, I participated in the Machine Learning Literacy workshop at the School for Poetic Computation. Machine learning and artificial intelligence are gaining more attention, and so are their ethical concerns. I have seen some workshops and talks in Montreal too; however, the intersection of machine learning, art and ethics is still a niche topic, which is why I flew to New York to take this workshop. This article is intended to gather information, to note the hacks made during the workshop, and to list what to do next for the GitHub group. Since day 1 was an introduction, the article starts from day 2.

Day 2 — Introduction to Natural Language Processing and Word Vectors by Allison Parrish

The first hands-on session was about text processing and was presented by Allison from NYU. Prior to the workshop, we set up Anaconda 3, since the code is provided as IPython notebooks, which can be found here. We used the spaCy library to process texts. As we followed the notebooks, she introduced word vectors, with which arithmetic operations can be performed on words just like numbers. As an exercise, I used the cosine similarity provided in her IPython notebook to search for words related to “movement” in Ko Murobushi’s texts and animated these words on screen. Since we had less than an hour for hacking, instead of interfacing Python with a graphics framework (e.g., Processing), I injected CSS into the IPython interface based on this Stack Overflow answer. This snippet extracts such words and adds an animation property to their div tags.

from IPython.core.display import display, HTML

# sentences, spacy_closest_sent(), vec() and cosine() are defined
# earlier in Allison's notebook
for sent in spacy_closest_sent(sentences, "movement", n=10):
    ft = '<style>@-webkit-keyframes mymove {from {left: 0px;} to {left: 800px;}}</style>'
    i = 0
    for w in sent:
        v = w.vector
        sim = cosine(v, vec('movement'))
        if sim > 0.4:
            # words similar to "movement" are wrapped in an animated div;
            # more similar words move faster, later words start later
            ft += ('<div style="width:10%;position:relative;animation: mymove '
                   + str(5 - sim * 5) + 's infinite ' + str(i) + 's;">'
                   + w.text + '</div> ')
        else:
            ft += w.text
        ft += " "
        i += 1
    display(HTML(ft))
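The vec(), cosine() and spacy_closest_sent() helpers come from Allison’s notebook; roughly, they look like this (a sketch from memory, not her exact code):

import numpy as np
import spacy

nlp = spacy.load('en_core_web_md')  # a spaCy model that ships word vectors

def vec(s):
    # word vector for a single word
    return nlp.vocab[s].vector

def cosine(v1, v2):
    # cosine similarity; 0 if either vector is all zeros
    n1, n2 = np.linalg.norm(v1), np.linalg.norm(v2)
    return float(np.dot(v1, v2) / (n1 * n2)) if n1 > 0 and n2 > 0 else 0.0

def spacy_closest_sent(space, input_str, n=10):
    # the n sentences whose mean word vector is closest to the query word
    input_vec = vec(input_str)
    return sorted(space,
                  key=lambda sent: cosine(np.mean([w.vector for w in sent], axis=0), input_vec),
                  reverse=True)[:n]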

The result is the video below, which is quite rough, but you can get the idea. As a next step, I would like to embed p5.js in an IPython notebook for more complex visualization. The results could also be interfaced via OSC, or the text processing algorithm could be ported to other frameworks (e.g., Processing, openFrameworks).
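For the OSC route, a minimal sketch with the python-osc package might look like this (the port and address pattern are my own choices):

from pythonosc.udp_client import SimpleUDPClient

# send each similar word and its score to e.g. a Processing sketch
# listening on port 12000
client = SimpleUDPClient("127.0.0.1", 12000)
for sent in spacy_closest_sent(sentences, "movement", n=10):
    for w in sent:
        sim = cosine(w.vector, vec("movement"))
        if sim > 0.4:
            client.send_message("/word", [w.text, float(sim)])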

Later in the week, Gene introduced Google Colab, a platform for executing IPython notebooks on Google’s servers (with GPU support). I would love to work on porting Allison’s examples to Google Colab.

Day 3 — Emotional / Subjective data and dataset creation by Hannah Davis

Among the several topics that Hannah covered during her workshop, the activity I liked the most was categorizing “emotional texts.” Before the workshop, participants were asked to submit a paragraph that is emotionally moving. She collected the texts and we worked on categorizing them. The format was open, so we could create any number of categories. Since I knew this would be a difficult task if I worked on the texts naively, I began by sketching the textures evoked while reading. Then, I tagged the texts according to the sketches (e.g., lines, solid, curved) without reading them again.

Since my hand-drawn sketches were inspired by Processing, after the workshop I started converting the drawings into Processing sketches. The code is hosted here, and here is an example. The first two workshops of the week opened up the potential of working with texts and graphics in different ways.

Day 4 — Generative art with neural networks (pix2pix) by Gene Kogan

The workshops on days 4 and 5 were presented by Gene. Pix2pix is a library that converts an image into another representation of it, for example, grayscale to color, aerial photo to map, or a photo taken during daytime to night. Among its several implementations, we used the TensorFlow version. The material can be found here, and to execute the IPython notebook we used Paperspace, a virtual machine service with GPU support. Although we only used the facade dataset during the workshop, I later created a dataset of depth images (left) paired with images with lighting (right).
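The TensorFlow port expects each training example as a single image with input (A) and target (B) side by side; a minimal pairing script, assuming matching file names in depth/ and lit/ directories (the directory layout is my own), might look like this:

import os
from PIL import Image

depth_dir, lit_dir, out_dir = "depth", "lit", "paired"
os.makedirs(out_dir, exist_ok=True)
for name in os.listdir(depth_dir):
    a = Image.open(os.path.join(depth_dir, name)).resize((256, 256))  # depth (input)
    b = Image.open(os.path.join(lit_dir, name)).resize((256, 256))    # lit render (target)
    pair = Image.new("RGB", (512, 256))
    pair.paste(a, (0, 0))    # A on the left
    pair.paste(b, (256, 0))  # B on the right
    pair.save(os.path.join(out_dir, name))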

Then I trained the neural network to simulate the lighting of depth images. The result is shown below. I only ran around 20 epochs of training, and there is an overlap between the training and validation datasets; therefore the result is not scientifically interesting, but I would like to explore further with different shapes, lighting and colors. Also, since I started porting the code from Paperspace to Google Colab, I need to document that process. Currently, uploading and downloading the dataset takes too much time on Colab, so I want to speed up the process, for example by exploiting IPython HTML.
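Another workaround (different from the IPython HTML idea above) is transferring the dataset as a single archive instead of many small files; a sketch of such a Colab cell, with paired.zip as a placeholder name:

from google.colab import files

uploaded = files.upload()       # choose paired.zip in the browser dialog
!unzip -q paired.zip -d paired  # unpack once on the Colab VM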

Day 5 — Generative art with neural networks (Neural Synthesis) by Gene Kogan

For the second day of Gene’s workshop, we tried neural synthesis, which is based on Google’s DeepDream. The material again uses IPython notebooks and can be found here. This time we switched from Paperspace to Google Colab. Gene had already prepared an API for neural synthesis, so we could easily experiment with the algorithm by swapping images and changing the parameters of the activation neurons and masks. Masks are generated by NumPy functions or from images, which I think would benefit from being interfaced with Processing to feed generative graphics. At the end of the workshop, I created a short video by slightly rotating the image every frame with the scipy.ndimage.interpolation.rotate function.
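As an illustration, here is a small sketch of the two ingredients, with function names of my own: a radial mask generated with NumPy, and the per-frame rotation used for the video:

import numpy as np
from scipy.ndimage.interpolation import rotate

def radial_mask(h, w):
    # 1.0 at the center fading to 0.0 at the corners
    y, x = np.mgrid[0:h, 0:w]
    d = np.sqrt((y - h / 2) ** 2 + (x - w / 2) ** 2)
    return np.clip(1.0 - d / d.max(), 0.0, 1.0)

img = np.random.rand(256, 256, 3)  # stand-in for a synthesized frame
frames = []
for i in range(30):
    # rotate slightly every frame; reshape=False keeps the canvas size
    img = rotate(img, angle=1.0, reshape=False, mode='reflect')
    frames.append(img * radial_mask(256, 256)[..., None])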

Day 6.1 — Deep Learn Web by Cristóbal Valenzuela and Yining Shi

On Saturday morning, Cris and Yining from NYU (with Daniel Shiffman as TA) presented ML5.js, a deep learning framework for creative coders. ML5.js is a wrapper around deeplearn.js, runs in the browser, and offers several examples that interface with p5.js. The project is still a work in progress and there are several limitations; for example, at the moment you cannot train a model. Nevertheless, examples such as image labeling, text generation and style transfer are inspiring, and we worked on a small project for the presentation. For my project, I decided to use generative graphics instead of a camera feed as the input. Some functions (a constant value, Perlin noise and a sine function) are mapped to polar coordinates, and the resulting image is fed to the classifier. The label with the highest confidence is parsed and rendered as shown, inspired by a museum label. There was a small issue: the classifier is supposed to run asynchronously, but it was occasionally blocking the rendering, perhaps because deeplearn.js also uses a video texture for computation.
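The original input generator is written in p5.js, but the mapping idea is easy to sketch in Python with NumPy (my own rough re-sketch, not the project code):

import numpy as np
from PIL import Image

def polar_image(f, size=224, samples=4000):
    # rasterize r = f(theta): the function value becomes the radius
    img = np.zeros((size, size), dtype=np.uint8)
    theta = np.linspace(0, 2 * np.pi, samples)
    r = np.clip(f(theta), 0, 1) * (size / 2 - 1)
    x = (size / 2 + r * np.cos(theta)).astype(int)
    y = (size / 2 + r * np.sin(theta)).astype(int)
    img[y, x] = 255
    return Image.fromarray(img)

polar_image(lambda t: 0.8 + 0.0 * t).save("constant.png")          # a circle
polar_image(lambda t: 0.5 + 0.3 * np.sin(7 * t)).save("sine.png")  # a flower shape
# Perlin noise would need an extra package (e.g. the `noise` module)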

Day 6.2 — Creative games with AI by Stefania Druga

In the afternoon, Stefania introduced Scratch for prototyping with robots and artificial intelligence. During the first session, we hacked robots such as Alexa and Jibo. I teamed up with Derrick, and we first tried Alexa, but the authentication process has to be repeated every time we interact with Alexa, which was very tedious. So we switched to a Lego WeDo and created a reactive tail: I attached two ping pong balls hanging from the shaft of a motor so that whenever the motor stops, the balls hit each other; the motor is triggered by a tilt sensor.

During the second session, we created a character that reacts to user input via sentiment analysis on IBM Watson, also developed in Scratch. We manually input several sentences with labels (“kind things” and “mean things”) to train the system. It is a simple example, but it showed us how the dataset biases the results. For example, we created a cat character and added “you are smelly” as a “mean thing.” Then, when “you smell like roses” was input, although the sentence is supposed to be positive, the sentiment analysis algorithm labeled it as a “mean thing” because of the limited dataset. Although dataset bias was a topic we discussed throughout the week, creating a dataset is time consuming, and most of the time we used pre-trained models. Therefore, I was glad that we could finally create a dataset and test it in such a short time with a playful example.
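The effect is easy to reproduce with a toy classifier (this is my own illustration, not how IBM Watson works internally):

# a few labeled training sentences, as in the workshop
kind_things = ["you are lovely", "you are a good cat"]
mean_things = ["you are smelly", "go away", "you are a bad cat"]
STOPWORDS = {"you", "are", "a", "i", "is", "it"}

def content_words(sentence):
    return [w for w in sentence.lower().split() if w not in STOPWORDS]

def score(sentence, examples):
    # crude stemmed overlap: "smell" matches "smelly"
    words = content_words(sentence)
    return sum(1 for e in examples for we in content_words(e)
               for w in words if w.startswith(we) or we.startswith(w))

def classify(sentence):
    k, m = score(sentence, kind_things), score(sentence, mean_things)
    return "kind things" if k > m else "mean things"

# "smelly" only appears in the mean set, so a positive sentence about
# smell is still labeled mean: the dataset's bias in action
print(classify("you smell like roses"))  # -> mean things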

I would like to thank all the organizers, teachers, and students for the amazing workshop.
