Artificial Startup Style

Neural art about startup fashion

Jeff Smith · Published in Data Engineering · Jan 12, 2016 · 9 min read


Note: this post is now obsolete in various ways. It’s left up purely for archival purposes.

Deep learning can now be used to mimic artistic style. This paper kicked off all of the fun, and some people have really taken up this technique as an interesting approach to understanding the intersection of art and artificial intelligence.

Kyle McDonald was one of the first to begin digging into how this technique could be applied. Working in the same area, Alex J. Champandard built a thoroughly entertaining bot called The Deep Forger.

There’s also been a ton of related activity in other areas. For an introduction to the recent Cambrian explosion of activity in this space, I recommend this overview on Medium.

To explore the possibilities of deep learning-powered art, I decided to engineer a small Turing test of sorts. As a subject for this project, I chose another strange marriage: startups and fashion.

The Looks

The looks in this spread are inspired by the fast-paced world of New York startups.

The atomic unit of startup fashion is the giveaway.

It is usually a 100% cotton garment with some obscure technical reference printed on it.

These giveaways are driven by various motivations: companies looking to hire developers, tech vendors trying to get developers to use their products, and startups using their developers as inarticulate but motile recruiting billboards.

Nearly all of the items in this collection are freebies picked up at conferences I’ve spoken at or startups I’ve worked at. This collection represents a week of work clothes for the modern hacker. They are equally appropriate for coding at a hackathon or networking in the keg line at a WeWork.

From the top-left:

The Styles

Paired with these looks are five paintings from modern masters. They are a mix of the figurative and the purely abstract.

All of these masterworks reflect the powerful styles of their artists, giving the artistic style model ample material to draw from.

From the top-left:

  • Monday: Woman V, Willem de Kooning
  • Tuesday: Nude (Study), Sad Young Man on a Train, Marcel Duchamp
  • Wednesday: Number 8, Jackson Pollock
  • Thursday: Senecio, Paul Klee
  • Friday: Composition VII, Wassily Kandinsky

Learning the Styles

Using Anish Athalye’s implementation of style net, I produced the following styled versions of the startup looks. My expectation was that each would turn out quite similar to its source style. Not all of the combinations matched my expectations. You can see my (entirely subjective) ratings of how similar each resulting image is to its source style in terms of palette, texture, and overall impression.
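For readers curious what the style net objective actually balances, here is a minimal sketch of its two loss terms, using random NumPy arrays as stand-ins for real VGG feature maps. The shapes and weights below are illustrative assumptions, not values taken from Athalye’s code.

```python
# A minimal sketch of the loss behind the style net technique, on toy arrays.
# Real implementations compute these terms from VGG activations at several
# layers; the shapes and weights here are made up for illustration.
import numpy as np

def gram_matrix(features):
    """Correlations between feature channels; this is what captures 'style'."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)
    return flat.T @ flat / (h * w * c)

def style_transfer_loss(gen_feats, content_feats, style_feats,
                        content_weight=5.0, style_weight=100.0):
    # Content loss: keep the generated image's features close to the photo's.
    content_loss = np.mean((gen_feats - content_feats) ** 2)
    # Style loss: match the channel correlations (Gram matrices) of the painting.
    style_loss = np.mean((gram_matrix(gen_feats) - gram_matrix(style_feats)) ** 2)
    return content_weight * content_loss + style_weight * style_loss

# Toy usage with random "activations" standing in for real VGG features.
rng = np.random.default_rng(0)
gen = rng.standard_normal((32, 32, 64))
content = rng.standard_normal((32, 32, 64))
style = rng.standard_normal((32, 32, 64))
print(style_transfer_loss(gen, content, style))
```

The generated image is then optimized, pixel by pixel, to drive this combined loss down.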

de Kooning Monday

Similarity Ratings

Palette: Questionable

Texture: Questionable

Overall: Questionable

Duchamp Tuesday

Similarity Ratings

Palette: High

Texture: Moderate

Overall: Moderate

Pollock Wednesday

Similarity Ratings

Palette: Moderate

Texture: Moderate

Overall: Moderate

Klee Thursday

Similarity Ratings

Palette: High

Texture: High

Overall: High

Kandinsky Friday

Similarity Ratings

Palette: Questionable

Texture: Questionable

Overall: Low

Analysis

The above images are actually the result of a crude form of manual grid search. That is, I tried a bunch of stuff and picked what I liked. Tuning the hyperparameters of an algorithm of artistic style is an intrinsically challenging problem. There is no objective function I can apply to these output images that captures my largely qualitative judgment of artistic style.
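For concreteness, the manual grid search amounted to something like the loop below. The parameter names and the stylize() helper are hypothetical stand-ins for whatever implementation you run; the selection step is just a person looking at the candidates.

```python
# A sketch of the crude manual grid search described above. stylize() is a
# placeholder for a call into a style transfer implementation; the "objective
# function" at the end is a human looking at pictures and keeping a favorite.
from itertools import product

content_weights = [1.0, 5.0, 10.0]
style_weights = [10.0, 100.0, 1000.0]

def stylize(content_path, style_path, content_weight, style_weight):
    # Placeholder: run style transfer and return the output image path.
    return f"styled_cw{content_weight}_sw{style_weight}.png"

candidates = []
for cw, sw in product(content_weights, style_weights):
    out = stylize("monday_look.jpg", "woman_v.jpg", cw, sw)
    candidates.append((cw, sw, out))

# No metric ranks these; print them all and let a human pick.
for cw, sw, out in candidates:
    print(f"content_weight={cw}, style_weight={sw} -> {out}")
```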

One could conceive of an approach that involved a bunch of human labelling of these images, but for a concept as complex as artistic style, I shudder to think of the average discordance in concept label values amongst human labellers.

That said, this technique is interesting. It does not often produce the output that I was expecting. But given a not-too-bland source image and some great art, it can consistently produce images that are less boring than their source photographs.

Turing Testing Art

The highest measure of any AI-related technique is generally considered to be human-equivalent capabilities. In the Turing test, the interactions of the AI software must be indistinguishable from interactions with a human. My old boss, Ben Goertzel, founded a branch of AI known as artificial general intelligence which aims to build systems that can achieve direct human equivalence in rich, complicated contexts such as obtaining higher education in the same way that a human might.

This leads me to the larger question that I have about algorithms for artistic style:

What is the Turing test for an AI forger?

Copying paintings is not the highest and best use of human intelligence. The history of imitating famous art makes for engaging reading but is not a record of mankind at its noblest. However, it is an instructive example of what natural general intelligence can achieve. A skilled forger can produce a “new” Kandinsky that experts on Kandinsky will attest was produced by the master himself.

So, a truly human equivalent AI forger should be able to produce a “new” Kandinsky that I (and people far more knowledgeable than me) believe is a new Kandinsky.

I suspect that the initial implementations of the style net algorithm are nowhere close to this capability. When I look at most of my results, I find them to be amusing photo filters, not novel compositions. “Instagram on Steroids” is not quite a new Composition VII.

People seem to be particularly enamored with using Van Gogh for their experimentation with neural art. I think this is because most of what impacts us when we see a Van Gogh is his radical approach to the texture of paintings. The style net technique seems to be at its best when it’s working with textures, rather than full compositions.

In the interest of giving the algorithm a sporting chance, I decided to throw it a softball. This is my favorite picture of my dog, one that I’ve used in several contexts before.

Couture dress (Kansas City)

I used Van Gogh’s Starry Night as the style image (like everyone else seems to do).

Starry Night, Van Gogh

In the Turing test formulation of this exercise, the algorithm should be able to produce a new image which I would believe is a “new” Van Gogh.

Here’s what I was able to produce.

Starry nom

This is a pleasing result. I like this picture. But, of course, I like the source picture. The technique merely added the texture of Van Gogh’s coarse brush strokes to accentuate my dog’s comparatively fine fur. This isn’t a bad image, but it is in no sense novel art. No one would mistake this for a “new” Van Gogh.

Contrast that image with this truly original composition by Dawn Verbrigghe, based on the same photograph.

Sunday Morning in Meatpacking, Verbrigghe

Though my photograph of the painting doesn’t do it justice, you can still see far more evidence of general intelligence applied in the creation of this image. Like the neural Van Gogh, the artist has chosen a coarser, more expressive texture than the purely representational source photo.

But Verbrigghe has gone far beyond merely applying a texture to an image. Subtle changes simplify the background and focus the viewer’s eye on the subject. Color is used in ways that originate less in the image itself than in the artist’s feelings about it. All of the edges are loosened into mere indications of rough, furry, frilly forms, and they make sense in a way that is fundamentally absent from the source image.

The Future of AI Art

Let’s be clear: I’m not an AI pessimist. I work on an artificial intelligence that runs the gauntlet of the Turing test all day, every day. Artificial intelligence is here, and it’s starting to take over some of the truly crappy jobs that humans used to have to do.

But I think that, as proponents of investment in AI, we as a community should take care not to overstate our progress or our near-term expectations. That’s how AI winters come to pass. I’ve not seen or been able to produce neural art that passes my personal Turing test. But when I walk down the right street in Manhattan, I can find plenty of examples of natural general intelligence that is more than capable of whipping me up a new Van Gogh.

Human art is not dead.

The Deep Forger is no Wolfgang Beltracchi. Art forgery, like chess in the early 90s, remains a game that humans can still best machines at.

AI might get there someday. Perhaps making novel art is just a bit easier than the substantial challenge of doing laundry. A priori, it’s hard to say. Such is the path of progress in AI.

However, I think that the style net technique is already a powerful new tool for artistic intelligence augmentation. Kyle McDonald’s studies using this technique are fascinating, even if they show a fair number of algorithmic wrong turns. In the hands of a competent artist, the grid search process I executed could be used to far greater creative effect than I’ve achieved.

I can’t say that I’m disappointed. It’s great news when AI takes over some tiresome chore. But the creation of art is anything but a chore. Art is territory that I’d prefer not to cede to the machines just yet. Capable artists exhibit far more general artistic intelligence in the composition of novel pieces of art than I’ve seen algorithms achieve thus far.

So, sorry, bots. I’m not willing to put your artwork on my wall just yet. You’ll have to content yourselves with merely scheduling my meetings, picking my music, and driving my car.

Update

Alex J. Champandard responded to this post with a few example outputs from his Deep Forger. First, he produced his version of my dog as Starry Night.

This is indeed closer to the source Van Gogh. One potential partial explanation is that Anish Athalye’s implementation relies on Adam rather than the L-BFGS optimizer used in the original style net implementation. There may be more going on here, though. Alex J. Champandard’s Deep Forger is described as “inspired by” the original style net paper but not as a literal implementation of it. Instead, it’s a new algorithm with different capabilities.
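To make the optimizer distinction concrete, here is a toy sketch (in PyTorch, purely for illustration) of the two ways of adjusting the generated image’s pixels: many small first-order Adam steps versus L-BFGS, which uses curvature information. The quadratic loss below is a stand-in for the real content-plus-style loss; none of this is Athalye’s or Champandard’s actual code.

```python
# Toy comparison of Adam vs. L-BFGS for optimizing an "image" tensor.
# The quadratic loss stands in for the real style transfer objective.
import torch

target = torch.randn(3, 64, 64)

def loss_fn(img):
    return ((img - target) ** 2).mean()  # stand-in for content + style loss

# Adam: many small first-order steps on the pixels.
img_adam = torch.zeros_like(target, requires_grad=True)
adam = torch.optim.Adam([img_adam], lr=0.1)
for _ in range(200):
    adam.zero_grad()
    loss_fn(img_adam).backward()
    adam.step()

# L-BFGS: a quasi-Newton method, closer to the original paper's choice.
img_lbfgs = torch.zeros_like(target, requires_grad=True)
lbfgs = torch.optim.LBFGS([img_lbfgs], max_iter=200)

def closure():
    lbfgs.zero_grad()
    loss = loss_fn(img_lbfgs)
    loss.backward()
    return loss

lbfgs.step(closure)
print(loss_fn(img_adam).item(), loss_fn(img_lbfgs).item())
```

On a loss this simple both optimizers converge to the same place; on the real, highly non-convex style loss they can wander to noticeably different images, which is one plausible reason the outputs differ.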

In Alex’s opinion, this is his best result with my source image.

It’s yet another interesting image, but I don’t think that it would pass any form of a Turing test. But that’s not the goal, for Alex at least, and perhaps for a lot of other users. His characterization of what he’s doing is:

I should also add that I consider @DeepForger (the Twitter bot) a form of Art project that emerges from the algorithm, semi-random painting selection, and social media. It’s more prolific than human artists, but also produces many mediocre paintings — about 10% of amazing ones with lucky or skilled users.

This definition makes it clear that there is a role for natural intelligence in this process. Interestingly, this definition isn’t merely about intelligence augmentation as it’s usually discussed. In the usual formulation, intelligence augmentation is when software helps a human do something they were already doing but better. In this case, I’d say that we’re talking about humans augmenting the intelligence of the machine, or at least collaborating with the machine in a way beyond mere concept labelling.
