GLAM Guide

Will TensorFlow be your next gallery or museum guide?

Tord Nilsen
4 min read · May 31, 2019


What is GLAM Guide?

GLAM Guide is a small app I made for fun, but also a proof of concept meant to verify that AI in camera lenses has practical potential in the GLAM (galleries, libraries, archives, and museums) sector.

Why GLAM Guide?

Looking at a piece of art has always made me (and probably others) want to learn more about the piece. That is why curators put their little white signs next to the art. But what if you're not as lucky as my daughter and I were, able to get close to the art and read the signs?

Sometimes it is like this:

So with that in mind I wanted to create an app that recognizes a painting, a sculpture, or a part of a building from a distance, not by scanning a QR code.

Let’s get started

Disclaimer! This is a problem with many possible solutions. It could be done with ordinary TensorFlow and Python, but if the GLAM institution has lots of items to present, the model will be very large and not well suited to handheld devices. TensorFlow Lite could also be used, but as far as I know that is platform-dependent (Android and iOS). The most elegant, production-like solution would probably be TensorFlow Serving with a REST API, but for that I want to wait for TensorFlow 2.0, so I went with TensorFlow.js.

TensorFlow.js is a library for developing and training ML models in JavaScript, and for deploying them in the browser or on Node.js.

Train a model in tf.keras with Google Colab

I chose to train the model in Keras (read how here) in my new favorite service: Google Colab.

Google Colab is a free cloud service and it supports free GPU!

My initial approach was to take a lot (hundreds) of photos of the objects, but after some testing I saw that it was not necessary. TensorFlow gave me a thumbs-up after just a few photos (3 to 6 per object). But I will run a more intensive test with more photos and more processing power when I try out TensorFlow 2.0 Serving.

My Keras model was saved via model.save(filepath), which produces a single HDF5 (.h5) file containing both the model topology and the weights. This file has to be converted with a tool called tensorflowjs_converter before it can be imported into tf.js.
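The conversion is a one-liner; the file and directory names below are placeholders, not the ones from my project:

```shell
# Convert the saved Keras HDF5 model to the TensorFlow.js web format.
# The output directory gets model.json plus one or more binary weight shard files.
tensorflowjs_converter --input_format keras model.h5 web_model/
```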

Then I could load the model into my js:
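The original gist isn't reproduced here, but a minimal sketch of that step looks like this (the `model/` path and the label list are placeholders, not the actual files):

```javascript
// Sketch: load a converted Keras model in the browser with TensorFlow.js.
// Assumes the page includes the tfjs script tag (which provides the global `tf`)
// and that the converter output (model.json + weight shards) is served from ./model/.

// Placeholder labels, in the same order as the training classes.
const LABELS = ['Painting A', 'Painting B', 'Sculpture C'];

// Return the index of the highest score (plain JavaScript, no tensors needed).
function argmax(scores) {
  let best = 0;
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] > scores[best]) best = i;
  }
  return best;
}

async function loadModel() {
  // Fetches model.json and then the weight files it references.
  return tf.loadLayersModel('model/model.json');
}

async function classify(model, imgElement) {
  const scores = tf.tidy(() => {
    const input = tf.browser.fromPixels(imgElement)
      .resizeBilinear([224, 224]) // match the training input size
      .toFloat()
      .div(255)                   // same normalisation as during training
      .expandDims(0);             // add the batch dimension
    return model.predict(input).dataSync();
  });
  return LABELS[argmax(Array.from(scores))];
}
```

`tf.tidy` disposes the intermediate tensors after each prediction, which matters once you start calling `classify` over and over.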

Note that you need to upload not only the JSON file, but all the files generated by the converter: the JSON contains references to the additional weight files.

I created a simple app that analyzes camera images in real time. I used approximately 10 images for each painting. This is the result:
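In outline, the real-time part works like the sketch below. All names here are assumptions rather than the original code, and the throttle helper is my addition, since running inference on every single frame is heavy on a phone:

```javascript
// Sketch: classify frames from the device camera with a loaded TensorFlow.js model.
// Assumes a <video> element on the page and the global `tf` from the tfjs script tag.

// Start the camera, preferring the rear-facing lens on phones.
async function startCamera(video) {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: 'environment' },
  });
  video.srcObject = stream;
  await video.play();
}

// Run inference at most once per `intervalMs` to limit heat and battery drain.
function shouldRun(lastRun, now, intervalMs) {
  return now - lastRun >= intervalMs;
}

function classifyLoop(model, video, onResult, intervalMs = 500) {
  let lastRun = 0;
  function tick(now) {
    if (shouldRun(lastRun, now, intervalMs)) {
      lastRun = now;
      const scores = tf.tidy(() => {
        const input = tf.browser.fromPixels(video)
          .resizeBilinear([224, 224]) // match the training input size
          .toFloat()
          .div(255)
          .expandDims(0);
        return model.predict(input).dataSync();
      });
      onResult(scores); // e.g. show the top label in the UI
    }
    requestAnimationFrame(tick); // schedule the next frame check
  }
  requestAnimationFrame(tick);
}
```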

Conclusion

First: the approach I used in this example, with real-time image recognition, is NOT a sustainable solution. Your handheld device will overheat, and the constant calculations and video stream will drain your battery in less than 30 minutes.

When walking through a huge museum, a better approach would be to have the user take snapshots of the art. But for smaller installations, real-time recognition could work.

There are a lot of possible scenarios for such an app. Combined with augmented reality, some very cool applications could be developed: cultural heritage organisations could show how buildings were constructed, and tourist organisations could create a public art guide.

The idea can also be reversed: a museum could create a minigame where visitors get points for photographing certain artworks.

What’s next?

There are two things I want to explore: 1. the TensorFlow 2.0 REST API, which I think would be very suitable for production sites, and 2. how to use object recognition in a minigame.



Tord Nilsen

Digital innovator passionate about the cultural sector. Exploring new ways to engage audiences through strategy, technology, and creativity.