Second UX Workshop for Science in the Making

Tom Crane
Published in digirati-ch
Feb 9, 2019

This article was first published May 1, 2017

This time, the project team assembled at Digirati London HQ. Using the ideas and sketches we produced in the initial workshop, Diego Lago (our head of UX) had prepared some wireframes to help us explore elements of the user interface. As always, the wireframes are not polished designs, but devices to drive thinking about what things appear on a page and where they go.

The home page is a busy mixture of content with many calls-to-action, and we had a think about what should be curated, what should be completely dynamic, and whether some content is a mixture of both.

The team wanted the content at the top to be wholly curated, but some aggregations could be a mixture, with favoured material more likely to appear but always with a random element, to encourage discovery of new things.

The problem of titles

Existing archive material rarely comes with snappy, web-ready descriptions that entice users. Clickbait was not foremost in the archivists’ minds when the record was created. Some archive titles are very long and might not reveal much in the first few words. Others are succinct, but aren’t necessarily a complete description when taken out of their context in an archival hierarchy. It turns out that all the material we are looking at for the pilot has reasonably clear descriptions at the item level, such as “The knoll and slopes of Mount Terror from sea ice”. But they are still sometimes long: “P. A. M. Dirac’s second referee report on D. R. Bates, A. Fundaminsky, H. S. W. Massey and J. W. Leech’s paper ‘Excitation and Ionization of Atoms by Electron Impact-The Born and Oppenheimer Approximations’”. In this example, the clickbait that might hook you is the subject (the Born-Oppenheimer approximation), but it appears right at the end. If the user interface seeks balance by truncating long titles with an ellipsis, the user might not have enough of a reason to click on the truncated link.
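To make that concrete, here is a minimal sketch (illustrative only, not project code) of the naive truncation we were wary of: cut the Dirac title at a fixed length and the Born-Oppenheimer hook at the end disappears.

# A sketch of naive fixed-length truncation of archival titles.
def truncate(title: str, max_chars: int = 80) -> str:
    """Truncate a title to max_chars, adding an ellipsis if it was cut."""
    if len(title) <= max_chars:
        return title
    return title[:max_chars].rstrip() + "…"

dirac_title = (
    "P. A. M. Dirac's second referee report on D. R. Bates, A. Fundaminsky, "
    "H. S. W. Massey and J. W. Leech's paper 'Excitation and Ionization of Atoms "
    "by Electron Impact-The Born and Oppenheimer Approximations'"
)

# Prints only the first 80 characters plus an ellipsis; the Born-Oppenheimer
# phrase at the end of the title, the most clickable part, is gone.
print(truncate(dirac_title))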

We talked a bit about the possibility of editorial intervention — to associate a more web-friendly title with an item when manually selecting it as a feature. This has its attractions, although it does introduce more workflow. You would still get a mixture of long and short titles when the UI is partly generated dynamically. There may be a concern over the integrity of the title.

For the pilot we decided that we would probably stick to the archival description and see how it goes.

Topics

As mentioned in the previous workshop post, topics are for the most part entirely machine-generated pages (or rather, they are generated by aggregating existing metadata, annotations and content).
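One plausible shape for those aggregating annotations is a simple tagging annotation that links a page of an item to the entity for a person, so a topic page can be built by querying for everything tagged with that entity. The sketch below uses the W3C Web Annotation model; the identifiers are placeholders, not the project’s actual URIs or serialisation.

# A sketch of a tagging annotation in the W3C Web Annotation model.
# All identifiers below are placeholders, not real project URIs.
tag_annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "id": "https://example.org/annotations/tag-1",
    "type": "Annotation",
    "motivation": "tagging",
    # the body identifies the topic entity...
    "body": "https://en.wikipedia.org/wiki/Thomas_Henry_Huxley",
    # ...and the target is the canvas (page image) it applies to
    "target": "https://example.org/iiif/ref-report-0001/canvas/0",
}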

We took an example of a topic-as-entity: a person, Thomas Henry Huxley.

We talked through what would be machine-generated and what would be curated. Huxley will be one of our three case studies, our enhanced topic pages to which we can add anything we like beyond the default aggregations that all topic pages get. For the most part though, we used Huxley as a stand-in for any topic page.

We talked about the extent to which external sources can provide content, for example the image and text coming from Wikipedia. This topic features a timeline, and we do have some information that could be used to automatically populate it. But it would be variable across different people pages; for some people we would have an interesting timeline, for others, a sparse one.

People are connected to archive material through a role, and this topic page uses that information to generate links to content under the headings “Huxley the author”, “Huxley the referee”. As in other areas of this project, the workshop suggested many interesting things that could be done with the content which are worth exploring and discussing — while keeping an eye on the time and budget for the pilot!
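As an illustration of that role-based grouping (a sketch only; the field names and titles are hypothetical, not the project’s data model), the headings can fall out of a simple group-by over the people linked to each item:

# A sketch of deriving role-based headings such as "Huxley the author"
# and "Huxley the referee" from per-item people/role metadata.
from collections import defaultdict

items = [
    {"title": "Report on a submitted paper",
     "people": [{"name": "Thomas Henry Huxley", "role": "referee"}]},
    {"title": "Letter to the Secretary",
     "people": [{"name": "Thomas Henry Huxley", "role": "author"}]},
]

def links_by_role(items, person):
    """Group item titles under the roles a given person plays in them."""
    groups = defaultdict(list)
    for item in items:
        for p in item["people"]:
            if p["name"] == person:
                groups[p["role"]].append(item["title"])
    return groups

for role, titles in links_by_role(items, "Thomas Henry Huxley").items():
    print(f"Huxley the {role}:")
    for title in titles:
        print(f"  - {title}")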

It’s good that we all appreciate that it is a pilot, and that we’re not going to get the right answers straight away. The pilot is just the first iteration of the platform, so we have to be selective about what we decide to develop, reach a decision quickly and build it fast to see what happens.

When we develop the Huxley topic page as a case study, we can add in some more visualisations of correspondence, we can add editorial content more suitable to the page’s role in the platform, and other content and functionality — but we’ll get to that a bit later!

The item

Now to the most difficult part. How to present the archival item?

We had a really interesting discussion around this, and it raised user experience challenges that affect presentation of digitised material and its associated content across many cultural heritage projects. It concerns the focus of the user’s attention when looking at archive (and in fact, any) digitised material. I go into detail in another blog post.

The Science in the Making project has a special focus — it presents the particular archive material associated with the published Philosophical Transactions of the Royal Society, and we verified that, for all the material in the pilot, there will be published material on the journal platform that the archival material should link to. This will often be a journal article, but it can be other published material such as editorials and letters. The user can always go from the archival material to the journal article on the Highwire platform — but how do they get back? We can do our best to drive traffic from the archive material to the journal platform, but what does the user journey look like for someone exploring across both resources?

Next steps

Now we have some work to do. We’re using the Omeka S collection management system for the pilot, along with modules we’ve developed for other projects.

  • Put all the archival images in a hosted instance of the DLCS platform, so they have IIIF Image API endpoints
  • Build just enough of a model in Omeka S to produce first implementations of the wireframes
  • Generate IIIF manifests for each of the archival items, and create Omeka items from them
  • Translate the metadata we do have into annotations to be stored in our annotation server
  • Index everything in our IIIF-aware search server

First off, let’s get all the items visible, which means IIIF.
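As a rough sketch of what that involves (a IIIF Presentation 2.x manifest built here with placeholder identifiers, not the project’s actual output), each archival item needs a manifest whose canvases point at the Image API services on the hosted DLCS instance:

# A stripped-down sketch of a IIIF Presentation 2.x manifest for one
# archival item. All identifiers below are placeholders.
import json

image_service = "https://dlcs.example.org/iiif-img/2/1/ref-report-0001"

manifest = {
    "@context": "http://iiif.io/api/presentation/2/context.json",
    "@id": "https://example.org/iiif/ref-report-0001/manifest",
    "@type": "sc:Manifest",
    "label": "P. A. M. Dirac's second referee report ...",
    "sequences": [{
        "@type": "sc:Sequence",
        "canvases": [{
            "@id": "https://example.org/iiif/ref-report-0001/canvas/0",
            "@type": "sc:Canvas",
            "label": "page 1",
            "width": 2000,
            "height": 3000,
            "images": [{
                "@type": "oa:Annotation",
                "motivation": "sc:painting",
                "on": "https://example.org/iiif/ref-report-0001/canvas/0",
                "resource": {
                    "@id": image_service + "/full/full/0/default.jpg",
                    "@type": "dctypes:Image",
                    # the Image API service exposed by the DLCS for this image
                    "service": {
                        "@context": "http://iiif.io/api/image/2/context.json",
                        "@id": image_service,
                        "profile": "http://iiif.io/api/image/2/level1.json",
                    },
                },
            }],
        }],
    }],
}

print(json.dumps(manifest, indent=2))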
