Feed your curiosity — Exploring one ornament at a time

Marleen Grasse
Published in NEO Collections · Mar 22, 2024

Creating new access to the museum’s collection with sticky, addictive practices and AI.

A blogpost by Marleen Grasse, Philo van Kemenade, Igor Rjabinin and Antje Schmidt

Exhibition “Ornament. Exemplary Beauty” at MK&G. Image: MK&G, Henning Rogge

Our goal in the NEO Collections project is to find different ways of exploring the collection of the Museum für Kunst und Gewerbe Hamburg and to create new forms of access to it. Therefore, in April 2023 we ran a Data Exploration Sprint to make all our existing collection data accessible to a group of international experts. And by all, we mean ALL, not only the checked and polished parts. We invited designers, software developers, academics, and information and museum specialists to a five-day onsite workshop in the museum to dive into the data and experiment with it. One of the ideas that evolved during this week was closely connected to the gallery spaces themselves. It turned into a first version of an application, now called Ornament Explorer, that aims to enhance your visit to the museum. But how did we get there?

Project team looking at the vitrine with objects of the month at the Museum für Kunst und Gewerbe Hamburg

Spontaneous encounters instead of a search slot

In a vitrine in the museum, the “objects of the month” are exhibited based on a seasonal topic. It happened to be spring, which inspired Igor Rjabinin, Philo van Kemenade and Michal Čudrnák to create a prototype for a digital application using a sample set of roughly 3,000 museum objects tagged with the term “flower”.

Screenshots from the “Objektforscher” prototype focusing on the theme of flowers

The prototype (called Objektforscher) allowed users to explore more objects connected to this common theme. Using the application, you can navigate from the ancient world to modern design with only a few clicks, guided by your preferences, no previous knowledge needed. Focusing on one image, four new related images load alongside it. For us, this kind of access was very insightful and led to surprising discoveries, while we became almost addicted to navigating through the richness of the archive, always curious which object would show up next and which we might never have seen before.

Exploration for the User Interface

Before, we could either see a fraction of objects in our Collection Online or research them within the Collection Management System, which, to be fair, like most such systems, does not have the most user-friendly interface (a lot of search slots). The Explorer instead offers space for serendipity and spontaneous encounters with objects by letting users explore visually similar images. The idea by Igor, Philo and Michal was to show, on the horizontal axis, images of objects that are visually similar to the object in the middle, and to use metadata to show objects above or below that are newer or older.
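
This two-axis selection can be sketched in a few lines of Python. The sketch below is purely illustrative and not the application's actual code: it assumes each object comes with a production year and a precomputed similarity score to the centre object, picks the top matches for the horizontal axis, and the nearest older and newer objects for the vertical axis. All identifiers, years and scores are invented.

```python
# Hypothetical data: identifier -> (production year, visual similarity
# to the centre object). In the real app, similarity comes from image
# embeddings and the year from collection metadata.
objects = {
    "centre": (1750, 1.00),
    "plate":  (1730, 0.91),
    "tile":   (1890, 0.88),
    "carpet": (1600, 0.75),
    "poster": (1920, 0.60),
}

def layout(centre_id, k=2):
    """Pick k visually similar objects (horizontal axis) and the
    nearest older / newer objects by date (vertical axis)."""
    others = {i: v for i, v in objects.items() if i != centre_id}
    # Horizontal axis: top-k by visual similarity.
    horizontal = sorted(others, key=lambda i: others[i][1], reverse=True)[:k]
    centre_year = objects[centre_id][0]
    newer = [i for i in others if others[i][0] > centre_year]
    older = [i for i in others if others[i][0] < centre_year]
    # Vertical axis: closest in time on either side of the centre object.
    above = min(newer, key=lambda i: others[i][0] - centre_year, default=None)
    below = min(older, key=lambda i: centre_year - others[i][0], default=None)
    return {"horizontal": horizontal, "above": above, "below": below}

print(layout("centre"))
```

With the toy data above, the two most similar objects fill the horizontal axis, while the 1890 tile and the 1730 plate appear above and below as the nearest newer and older neighbours in time.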

A Screenshot of the Ornament Explorer application with explanation of the concept

Igor &amp; Philo: In order to know which images are similar to each other, we used an AI technique called “image embeddings”. The images are fed into an artificial neural network pre-trained on millions of images, which calculates the position of each image in a high-dimensional space where similar images lie close to each other. This means that a drawing of a red flower is closer to another flower drawing with similar colours than it is to a photo of a ceramic vase with a floral pattern. We tested this with a simple proof-of-concept web application that takes an input image and shows the most similar and the least similar objects within the flower sample set, and we were quite happy with these first results.
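
The core of that proof of concept can be sketched as follows. This is a simplified illustration, not the project's code: the tiny hand-made three-dimensional vectors stand in for real embeddings (which have hundreds of dimensions and come from a pretrained network), and all identifiers are made up.

```python
def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" chosen so that the two flower drawings
# lie close together and the vase photo lies further away.
embeddings = {
    "red_flower_drawing":  [0.9, 0.1, 0.0],
    "pink_flower_drawing": [0.8, 0.2, 0.1],
    "ceramic_vase_photo":  [0.1, 0.9, 0.4],
}

def rank_by_similarity(query_id):
    """Return (identifier, similarity) pairs, most similar first."""
    query = embeddings[query_id]
    others = [(k, cosine_similarity(query, v))
              for k, v in embeddings.items() if k != query_id]
    return sorted(others, key=lambda kv: kv[1], reverse=True)

ranked = rank_by_similarity("red_flower_drawing")
print(ranked[0][0])   # most similar: pink_flower_drawing
print(ranked[-1][0])  # least similar: ceramic_vase_photo
```

The first and last entries of the ranked list correspond to the “most similar” and “least similar” views of the proof-of-concept web application.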

Experiments with visual similarity

New layers of insights and interaction

With an exhibition on ornaments coming up, we knew that the “Objektforscher” (translated: Object Researcher) would be ideal for further exploring this theme and adding another layer of insight and interaction for visitors. “Ornament” is one of the most frequently used tags in our database: more than 12,000 objects from all departments are annotated with this term. We knew this from looking at the whole dataset, and we also visualized it during the Data Exploration Sprint.

This information not only demonstrates powerfully how important ornaments are in the context of arts and crafts; having access to the images also gives visitors the possibility to experience the vast variety themselves. Browsing the images offers a faceted insight into the creativity of the craftspeople and designers represented not only in the exhibition but across the whole collection. By making accessible all the objects that could have been part of the exhibition (but are not), we would like to encourage critical reflection on collections, curating and exhibiting.

Screenshots from the “Ornament Explorer”

Yet between the first prototype developed during the week-long data sprint and its presentation in the exhibition, there were more steps to take and various challenges to tackle. It was clear from the beginning that this version is only another step in the tool’s development and that the work will continue. The exhibition is a chance to test it, gather visitors’ feedback and further improve the Objektforscher, which we renamed “Ornament Explorer” for the exhibition.

Screenshot of object view in “Ornament Explorer” with selected information from the collection management system and link to the Collection Online

How to create a scalable and sustainable product?

Igor &amp; Philo: A main technical challenge lay in upgrading the data sprint prototype to work with a larger collection of objects. With the limited number of around 3,000 images in the first prototype, it was still feasible to use exported ‘vector embeddings’ files and calculate the distances between an input image and all other images in order to arrive at a set of most similar ‘nearest neighbours’. For a collection of over 12,000 objects, this would take too long for a performant user experience. We needed a more efficient approach.
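
To make the scaling problem concrete, here is a sketch of the brute-force approach, with synthetic vectors standing in for real embeddings. Every query has to compare the input against every stored vector, so the work per navigation step grows linearly with the collection size.

```python
import random

random.seed(42)
DIM = 64  # real image embeddings typically have hundreds of dimensions

def random_vector(dim=DIM):
    return [random.random() for _ in range(dim)]

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest_neighbours(query, vectors, k=4):
    """Exhaustive k-NN: O(N * DIM) distance computations per query."""
    scored = sorted(vectors.items(),
                    key=lambda kv: squared_distance(query, kv[1]))
    return [identifier for identifier, _ in scored[:k]]

# With ~3,000 objects this is still workable; with 12,000+ objects every
# single navigation step repeats all those distance computations, which
# is why an indexed vector database becomes attractive.
vectors = {f"obj_{i}": random_vector() for i in range(3000)}
query = vectors["obj_0"]
print(nearest_neighbours(query, vectors, k=4))
```

A vector database avoids this exhaustive scan by building an approximate-nearest-neighbour index over the embeddings, so each query only inspects a small fraction of the collection.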

Embeddings are becoming a common approach in various domains from e-commerce to cultural heritage and a lot of tools are available to make working with them easier. We used a vector database called Weaviate to efficiently create, store and query embedding vectors for all our ornamental object images. Weaviate comes with a lot of functionality included that makes it easy to find similar images while being highly performant on large datasets.

For example, below is a simplified code sample that gets the identifier and distance for the four objects from the Ornament collection that are nearest to the object with the id “OBJECT_ID_XYZ”:

{
  Get {
    Ornament(
      limit: 4
      nearObject: {
        id: "OBJECT_ID_XYZ"
      }
    ) {
      identifier
      _additional {
        distance
      }
    }
  }
}

We also found it helpful to see high-level stats on the included objects, gain insight into how often objects are shown, and find images by their object id. To facilitate this, we developed an admin interface enabling filtering and searching for the specific objects included in the app.

Screenshot Admin Interface

An important aspect of the admin interface was assessing the factor of ‘serendipity’ when navigating through the collection of artworks. One common issue in using AI for ‘more like this’ functionality is getting stuck in a bubble of commonly viewed, similar artworks.

This situation shows up as the well-known ‘long tail’ shape when sorting artworks by view count. To address it, we implemented ‘negative ranking’, which prevents previously viewed artworks from reappearing when loading another ‘similar’ artwork.
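
The idea can be sketched in a few lines, under the assumption that the app tracks which identifiers a visitor has already seen and drops them from the similarity-ranked candidates before picking the next set. The function name and data are illustrative, not the application's actual code.

```python
def next_similar(ranked_ids, seen, k=4):
    """From a similarity-ranked list of identifiers, return the k best
    that the visitor has not already seen ('negative ranking')."""
    fresh = [i for i in ranked_ids if i not in seen]
    return fresh[:k]

# Candidates ranked by visual similarity, most similar first.
ranked = ["A", "B", "C", "D", "E", "F"]
seen = {"A", "C"}  # already shown to this visitor

print(next_similar(ranked, seen))  # ['B', 'D', 'E', 'F']
```

In the actual application this exclusion happens inside the database query itself, via the `where` filter in the GraphQL sample below.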

This approach allows for a much broader reach of artworks, and the views are distributed more evenly, as evidenced by data statistics.

{
  Get {
    Ornament(
      limit: 4
      nearObject: { id: "OBJECT_ID_XYZ" }
      where: {
        operator: And
        operands: [
          { path: ["identifier"], operator: NotEqual, valueText: "OBJECT_ID_ABC" }
          { path: ["identifier"], operator: NotEqual, valueText: "1919.363.a-b" }
          { path: ["identifier"], operator: NotEqual, valueText: "OBJECT_ID_DEF" }
        ]
      }
    ) {
      identifier
      _additional {
        distance
      }
    }
  }
}

Untapped data potential

We have learned that, despite concerns regarding data accuracy (of course, data sets are never complete; some contain errors, and some images we would rather use internally than share), our data has enormous potential. It even proved very useful for this kind of exploration, where we deliberately show only a small selection of the available metadata fields to keep things simple. (We still believe it is necessary, though, to communicate to our visitors that the data sets are not quality checked, to manage expectations.)

Next, we would like to implement the Ornament Explorer, which is currently only on view in the gallery, within our Collection Online. In addition, we would also like to explore how we could use AI to improve internal workflows of documentation, e.g. with the help of visual similarity search. We are gathering user feedback to improve this version for its next iteration.

View of the exhibition, image: MK&G, Henning Rogge

Want to try it yourself? Let’s get in touch!

If you want to come by and try out the Ornament Explorer in the gallery, you can do so until 28 April 2024. If you wish to try it out online, please drop us a line at neo@mkg-hamburg.de.

We would be very happy to see the Ornament Explorer applied to other themes and other museums. The code is openly licensed and you can find it on GitHub.
