Introducing the M+ API

Micah Walter
Published in M+ Labs
Dec 18, 2018
The M+ API will act as the “brain” for the museum’s websites, gallery technology, and apps. (Illustration by Justine Braisted)

My name is Micah Walter, and I am the founder of Micah Walter Studio, a digital studio based in New York City focused on helping museums around the world create digital products and experiences. I’ve been working with M+ for the past year and a half to help the museum bring its collections online through an open access initiative, and to help build the tools and infrastructure required to do so. What began as an exploration of the museum’s rapidly growing collections quickly evolved into a long-term project aimed at making these collections as open and accessible as possible. It’s been an amazing journey, and it’s really exciting to finally be able to write about it in depth.

Central to our work with M+ has been the development of its open access dataset and API (Application Programming Interface), which will give developers, students, collaborators, and M+ itself programmatic access to its collections. At the start of September, M+ released its first public dataset to the world and held its inaugural symposium on digital art, M+ Matters: Art and Design in the Digital Realm. Along with the public data, the museum also launched its first public API, offering developers programmatic access to the same underlying dataset.

This post will dive into some of the more technical aspects of the API itself, and all of the fun things you can do with it now that it’s available to anyone in the world. If you want further inspiration, please check out M+ team member Diane Wang’s recap of the M+ Data Design hackathon — held this past September as part of the M+ Matters symposium — celebrating the launch of the API and the first M+ open dataset.

How To Use the M+ API

The M+ API, in its current form, is a mirror of the data that has been made available on the museum’s GitHub page. For now, the M+ API exists to satisfy the needs of the developer community, and it is the thing to use if you are interested in building an app or a bot that needs access to M+’s data in real time. If you are doing data analysis or visualization, you should probably consider downloading the dataset as a single CSV file from GitHub.

With the M+ API, you can easily explore the M+ Collections data. You can develop your own complex queries and integrate M+’s data into your application. The data found in the API is refreshed nightly, which means your app will always be working with up-to-date data as well.

To get started, head over to https://api.mplus.org.hk and create an account. You can sign up using your email address, or log in with Twitter or Facebook. Once you are logged in, the system will create a unique API key for you. This is the key you will use to make requests to the API in your application. The dashboard and API Playground will automatically add your API key to example code snippets and playground requests.

Now that you’ve got an API key and an account, you can begin by reading through the documentation, trying out some of the example code snippets, or launching the API Playground, where you can create and run as many test queries as you’d like and see their results dumped out in the browser. The API Playground is a great place to prototype ideas and build queries. It has an explore tool, which you can use to find out all of the ways you can query the dataset, and it keeps a running history of any test calls you make, so that you can look back and try things over and over.

The documentation also includes a number of sample requests that demonstrate the things you can query. Each one has an example code snippet written in JavaScript, and a link to the Playground where you can quickly run the example.

For example, you can run the following query to get started:

query {
  hello
}

This query sends a simple request to the API, which should always result in the following:

{
  "data": {
    "hello": "world"
  }
}

It’s an easy way to make sure your API key is working.
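
Under the hood, every one of these calls is just an HTTP POST with a JSON body containing your query. Here is a minimal sketch in Node of building such a request; the endpoint path and the Authorization header scheme are assumptions on my part, so check the dashboard’s generated snippets for the exact values to use.

```javascript
// A minimal sketch of what a call to the API looks like from Node: an HTTP
// POST with a JSON body containing the query. The endpoint path and the
// Authorization header scheme are assumptions; the dashboard's generated
// snippets show the exact values to use.
function buildRequest(query, apiKey) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${apiKey}`, // assumed auth scheme
    },
    body: JSON.stringify({ query }),
  };
}

// With the built-in fetch in Node 18+ (endpoint path assumed):
// fetch("https://api.mplus.org.hk/graphql", buildRequest("query { hello }", "YOUR_KEY"))
//   .then((res) => res.json())
//   .then((json) => console.log(json.data.hello));
```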

This is of course going to get a little tired after a while, so next up, try the following:

query {
  objects(page: 0, per_page: 5, area: "Moving Image") {
    id
    objectNumber
    title
    displayDate
    medium
    classification {
      area
      category
    }
  }
}

In this example, we are asking the API for a list of objects that all fall under the area of “Moving Image”. What’s more, we’ve asked the system for some specific fields: we want to know each object’s id, objectNumber, title, displayDate, medium, and any classifications it belongs to. We’re also asking for just the first five results.
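
If you are building queries like this in code, GraphQL variables let you parameterize the arguments instead of splicing values into the query string by hand. Here is a sketch; the variable types (Int, String) are my assumptions about the schema, so verify them with the Playground’s explore tool.

```javascript
// Sketch: the same objects query, parameterized with GraphQL variables
// instead of hard-coded values. The argument names come from the example
// above; the variable types (Int, String) are assumptions about the schema.
const OBJECTS_QUERY = `
  query Objects($page: Int, $perPage: Int, $area: String) {
    objects(page: $page, per_page: $perPage, area: $area) {
      id
      title
    }
  }
`;

// Build the JSON body for an HTTP POST to the API.
function buildBody(page, perPage, area) {
  return JSON.stringify({
    query: OBJECTS_QUERY,
    variables: { page, perPage, area },
  });
}
```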

This illustrates a core feature of the API architecture we’ve chosen, which is called GraphQL. GraphQL is a query language for APIs that allows developers to customize their requests so that they receive only the data they want, instead of an enormous data dump that could be mostly useless and confusing. It also means that developers already know what to expect in the response, since they defined the shape of the request themselves.
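
To make that shaping idea concrete, here is a toy, API-free illustration: a pick() helper that returns only the requested fields from a record, much like a GraphQL server shapes its response to mirror the query. The record and its values are made up for the example.

```javascript
// Toy illustration (no API involved): a GraphQL server returns only the
// fields a query asks for. This pick() helper mimics that shaping step.
// The record and its values are made up for the example.
function pick(record, fields) {
  const out = {};
  for (const f of fields) {
    if (f in record) out[f] = record[f];
  }
  return out;
}

const fullRecord = {
  id: 123,                   // hypothetical values
  objectNumber: "2012.1",
  title: "Example work",
  medium: "Video",
  internalNotes: "not requested, so never returned",
};

// Only the requested fields come back:
console.log(pick(fullRecord, ["id", "title"]));
```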

My First Project

During the M+ Data Design Hackathon, fellow hackathon leader Jane Pong and I decided that we should participate by creating our own little prototypes. Jane chose to create an analog visualization of the collection data using Post-it notes, and I decided to try my hand at building a simple app with the new API that translates artwork titles into emojis.

Here’s a copy of the code: https://gist.github.com/micahwalter/8c4c2fca2f12ab85157736b2d751229f

It’s pretty simple.

I created a query to get the titles of a bunch of objects like this:

{
  objects(per_page: 200) {
    id
    title
  }
}

I then put the object titles through a text-to-emoji translator package called ‘moji-translate’. The result is rendered to a web page and looks like this:

Demonstrating the results of my titles-to-emoji app during the M+ Data Design Hackathon.

It’s amazingly simple, but a fun way to make sure I can connect to the API and receive the data I am requesting. Plus, it makes for a fun show and tell!
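
The translation step itself boils down to a word-by-word lookup. Here is a simplified stand-in for that idea; the real app used the moji-translate package, which ships a much larger dictionary, and the entries below are made up for the example.

```javascript
// A simplified stand-in for the translation step: swap known words in a
// title for emoji via a small lookup table. The real app used the
// 'moji-translate' npm package, which ships a much larger dictionary;
// the entries below are made up for the example.
const EMOJI = { dog: "🐶", city: "🏙️", water: "💧" };

function emojify(title) {
  return title
    .split(/\s+/)
    .map((word) => EMOJI[word.toLowerCase()] || word)
    .join(" ");
}

console.log(emojify("Dog in the City")); // → "🐶 in the 🏙️"
```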

The Tech Stack

While museums all over the world have been making their data accessible through APIs for years, for M+ we chose a technology stack that can give the museum some flexibility in the future. Here’s how it works behind the scenes:

  1. We harvest data from the museum’s collections management system each night via a stored procedure. This results in a very big XML file that gets saved to a location on one of our servers.
  2. We then take this XML file and filter it, massage it, and bend it so that the data flows nicely into an Elasticsearch index. This gives us a powerful search engine with which we can query our dataset.
  3. Finally we expose this search engine to the world through our public API that is built using Node, Express, and GraphQL. This gives developers a handy tool to create their own queries, and allows us to continue to evolve the underlying tech stack in the future.
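
Step 2 above can be sketched as a transform into Elasticsearch’s bulk-index format. The index name and field names here are illustrative, not the museum’s actual schema.

```javascript
// A sketch of step 2: reshape one harvested record into the two-line
// action/document pair that Elasticsearch's _bulk endpoint expects. The
// index name and field names are illustrative, not the museum's schema.
function toBulkLines(record) {
  const action = { index: { _index: "objects", _id: record.id } };
  const doc = {
    title: record.title,
    medium: record.medium,
    area: record.area,
  };
  return JSON.stringify(action) + "\n" + JSON.stringify(doc) + "\n";
}

// The nightly job would concatenate these pairs for every record and POST
// the result to Elasticsearch's /_bulk endpoint.
```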

Soon, we will be adding new capabilities to the API: the ability to form advanced text-based queries, the ability to filter the data on a variety of fields, and the addition of more and more data as it develops in the museum. The API will also evolve to increase the capabilities of the museum itself. Eventually, all data related to the M+ Collections will flow through the API to in-gallery technology, apps, and digital experiences, as well as to any future web properties the museum decides to build. This means M+ museum staff can continue to use TMS, their collections management system, as their source of truth and as their content management system for collections, while the API delivers all of that content and more to a wide variety of applications and tools.

Over the course of the next year, our studio will continue to work with M+ to develop the API. We will be adding more and more fields, and the overall dataset will continue to grow, with the API offering more capabilities as we build them. For now, we’d love it if you’d take it for a spin and see what you can come up with!
