Interactive Application & Summary


The application we developed for this project is relatively simple. It is a shell program that takes an image filename, runs the final model on that image, and outputs the relative probability of the image belonging to each class the model recognizes.
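The shape of that program can be sketched as follows. This is a hypothetical reconstruction, not the project's actual code: the class labels are placeholders (the real model covered 12 styles and 28 artists), and the trained network's forward pass is stubbed out as a `predict` callable.

```python
import sys

import numpy as np

# Placeholder class labels; the real model covered 12 styles and 28 artists.
CLASSES = ["Impressionism", "Cubism", "Baroque"]

def softmax(scores):
    """Turn raw class scores into relative probabilities that sum to 1."""
    z = np.exp(scores - np.max(scores))
    return z / z.sum()

def classify(filename, predict):
    """`predict` stands in for the trained network's forward pass:
    it takes an image filename and returns one raw score per class."""
    return dict(zip(CLASSES, softmax(predict(filename))))

def report(probs):
    """Print classes from most to least likely, as the shell program did."""
    for cls, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        print(f"{cls}: {p:.1%}")

if __name__ == "__main__":
    # A dummy scorer in place of the real model, for demonstration only.
    dummy = lambda _filename: np.array([2.0, 1.0, 0.5])
    report(classify(sys.argv[1] if len(sys.argv) > 1 else "sample.jpg", dummy))
```

Swapping the dummy scorer for a real model load-and-predict call gives the command-line behavior described above.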

Sample command line input and image output

We achieved roughly 41% accuracy for style and 53% for artist on previously unseen images. Much of this shortfall was due to a relatively small database of images and a model with few trainable layers. This was a computational constraint: larger networks can take up to a week to train, and we did not have that kind of time. It's worth noting that with 12 styles and 28 artists, the probability of a correct classification by chance alone is 8.33% and 3.57%, respectively, so our results are a significant improvement. With more images and more time to train, a better score should be attainable.
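The chance baselines quoted above are just the reciprocals of the class counts:

```python
# Random-guess accuracy for a uniform pick over the classes.
styles, artists = 12, 28
chance_style = 1 / styles    # one chance in 12
chance_artist = 1 / artists  # one chance in 28
print(f"style: {chance_style:.2%}, artist: {chance_artist:.2%}")
# prints "style: 8.33%, artist: 3.57%"
```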

All 28 artists with their painting counts

Indeed, scraping and computation proved to be the two bottlenecks that limited our results. That said, what we created is a solid starting point for future work. Repeating the process with more time, more computing power, and more images is the obvious next step, but an easily overlooked accomplishment is the start of a database of painting images that can be readily updated and community-sourced. Certain professors have expressed interest in using the images and databases as a starting point for students in an upcoming class.

There are also more application options available. The command-line program we created is a sufficient proof of concept, but better interfaces are possible. If all of the machine learning code were packaged behind a service, we could build a web-based version with a full graphical interface. We also attempted to use the TensorFlow Projector to generate a full 3D projection of our dataset, but again, generating the embeddings required as input to the projector took more time than we had. It would make for an appealing visualization, for sure.

The TensorFlow Projector can visualize high-dimensional data with ease. Too bad we couldn't get the embeddings in time…
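For reference, the standalone TensorFlow Projector (projector.tensorflow.org) accepts tab-separated vector and metadata files, so preparing its input is straightforward once embeddings exist. The sketch below uses random vectors as stand-ins for the per-painting feature vectors we never finished computing; the file names and the 128-dimensional size are illustrative assumptions.

```python
import numpy as np

def write_projector_files(embeddings, labels,
                          vec_path="embeddings.tsv",
                          meta_path="metadata.tsv"):
    """Write the two TSV files the standalone TensorFlow Projector loads:
    one row of tab-separated floats per item, and one label per row."""
    np.savetxt(vec_path, embeddings, delimiter="\t", fmt="%.6f")
    with open(meta_path, "w") as f:
        f.write("\n".join(labels) + "\n")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Random stand-ins: 5 "paintings" with 128-dimensional feature vectors.
    fake_embeddings = rng.normal(size=(5, 128))
    write_projector_files(fake_embeddings, ["Monet"] * 5)
```

With real embeddings in place of the random matrix, uploading the two files to the projector site yields the 3D view described above.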

A final point of interest is the barrier to obtaining high-quality images, even at an academic level. We contacted library resources and made multiple phone calls to various art galleries and databases, only to find that we either could not be granted access or faced limits on the images we could retrieve. Of the resources we could reach, only Harvard provided an API, and that API was fully featured but not particularly well documented. For this work to reach its full potential, a greater degree of collaboration and standardization among these resources is required. Still, exploratory studies like this one are a great way to start.
