Digitization tends to squash things. It quite literally turns three-dimensional objects into two-dimensional ones. It’s not as bad as it used to be — just one look at a bitonal microfilm scan will show you how far we’ve come — but it’s still easy to forget, when you’re looking at a digital surrogate, that the original is out there somewhere: something of weight and thickness, something you could hold in your hands.
It’s also easy to forget that the object represented on your screen may have had a long, rich life. Odds are it’s been handled, spilled on, annotated, stuffed in a trunk, sewn up and eaten by worms, over decades or centuries or millennia of use. Digitization tends to represent a single state, a single interpretation, and it’s easy to forget that there are and were others.
In some ways this forgetting is a good thing. If we were constantly reminded of the inadequacy of our digital surrogates, our office morale would probably be much worse. Still, it’s important to recognise the gaps between the physical original and the digital copy. In that spirit, we’re going to take two weeks, here on Medium and also on Twitter, to celebrate the physicality of our collections, and to examine the strengths and weaknesses of our digital surrogates in conveying that physicality.
We’ve gathered up examples of volvelles, sheep windows, myrioramas, foldout maps, gold leaf, blind-tooled bindings, iron-gall ink burn and more. On Twitter, we’re sharing images with the hashtag #booksquashing (because #bookandmanuscriptandmapandletterandephemerasquashing was too unwieldy). Here on Medium, we’ll have longer posts on the same theme.
My colleague Tim will be writing next week about our collections as interactive physical objects: things people held, rearranged, wrote in, sewed up, and on occasion held a candle too close to. Below, I’ll be starting us off with the fundamentals of what we can and can’t capture, from the smoky smell of burnt parchment to the shine of gold leaf.
What we can’t capture
When I first started working at the Bodleian, my secret goal was to touch parchment. (I’m following the Bodleian conservators’ convention here of using “parchment” as a catch-all term for all types of membrane support: vellum, goatskin, sheepskin, etc.) Not to view many thousands of images of parchment online, but to actually touch it: to find out whether it’s smooth or rough, how thick it is, how different it feels from paper as you turn the pages.
I found out eventually that all of these things vary hugely from manuscript to manuscript, and even from page to page, depending on the expense and purpose of the parchment. These qualities are hard to quantify and hard to deduce from a digitized image; you really need to feel the material between your fingers and see and hear how it moves when you turn the page. You need to be able to inspect it from multiple angles, and in multiple lighting conditions, to see the three-dimensionality of the surface. You also, ideally, need to be able to smell it.
The same thing is true of any other material: a two-dimensional image captured under neutral light simply can’t convey all the details. In our images of 15th- and 16th-century printed books, it’s possible to see the faint chain and lay lines created during papermaking, but photos taken from a low angle in natural light throw these indentations — as well as the bite of the press into the soft rag paper — into sharp relief.
The need for raked and variable lighting is even greater when looking at uninked surfaces like blind-tooled boards, cuneiform tablets and etched palm-leaf scrolls.
Finally, it’s difficult to convey the physical dimensions of digitized objects. This isn’t strictly a 3D-vs.-2D problem; it’s more a problem of context. While we include a ruler in most of our images for scale, this doesn’t have the impact of sharing space with a palm-sized dictionary or a Hebrew Bible as long as your arm.
It is possible to convey some of these details through a bit of technological ingenuity. The IIIF viewer Mirador can be configured with a virtual ruler to compare different digitized objects at the same real-world size.
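The arithmetic behind such a virtual ruler is straightforward: if you know an object’s real-world width and the pixel width of its image, you can work out how large to draw it on a given screen. A minimal sketch (the function name and parameters are mine for illustration, not Mirador’s API):

```python
def true_size_zoom(image_width_px: int, physical_width_mm: float,
                   screen_px_per_mm: float) -> float:
    """Zoom factor that renders a digitized object at its real-world size.

    image_width_px:    width of the digitized image in pixels
    physical_width_mm: measured width of the original object
    screen_px_per_mm:  resolution of the viewer's display
    """
    # Pixels the object should occupy on screen, divided by the
    # pixels the image actually has.
    return (physical_width_mm * screen_px_per_mm) / image_width_px

# A palm-sized dictionary (90 mm wide, scanned at 4000 px across)
# next to a large Bible (450 mm wide, also 4000 px across),
# on a typical ~3.8 px/mm (96 dpi) display:
small = true_size_zoom(4000, 90, 3.8)
large = true_size_zoom(4000, 450, 3.8)
# Rendered at these zoom levels, the Bible appears five times
# wider than the dictionary, just as it would on a reading desk.
```

Scaling both images this way restores the size relationship that identical thumbnails erase.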
Some institutions have created 3D models of cuneiform tablets, or supplied an interface where users can toggle between raked-light and conventional images. At the Bodleian, in order to capture the shine of gold leaf in our digitized manuscripts, our photographer John Barrett takes two shots — one with conventional flash and one with ring flash — and then combines them into a single image. (John has written more about his work in a blog post.)
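John’s actual compositing is more sophisticated than anything shown here, but the core idea of merging two exposures can be illustrated with a toy sketch: keep, for each pixel, the brighter value from the two shots, so the specular shine caught by the ring flash survives alongside the evenly lit detail of the conventional shot. (This per-pixel maximum is my assumption for illustration, not the Bodleian’s production method.)

```python
import numpy as np

def combine_exposures(conventional: np.ndarray,
                      ring_flash: np.ndarray) -> np.ndarray:
    """Merge two exposures of the same page by taking the brighter
    pixel from each.

    Both inputs are H x W x 3 uint8 arrays. The ring-flash shot
    catches the shine of the gold leaf; the conventional shot holds
    detail everywhere else.
    """
    return np.maximum(conventional, ring_flash)
```

A real pipeline would also register the two frames and blend more selectively, but even this crude merge shows why a single exposure can’t do the job.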
What we don’t capture
When it comes to bindings and whole volumes, what we see digitally is restricted by what we choose to capture as well as how we capture it. In most cases, the Bodleian photographs the boards of its digitized codices — the upper and lower covers — but until recently, we haven’t generally captured the spines and edges.
We also generally capture one page at a time, rather than a whole opening (in order to minimize the strain on the volume and capture as much of each page as possible), and we interleave pages with a black cloth in order to avoid bleedthrough from other pages. The result is that it’s hard to get a sense of the bound volume as a whole object: the thickness of the codex, the warping of the text block, any decoration or writing on the fore-edge or spine, and details of binding and collation.
Even if we did capture spine and edge images in every case, we wouldn’t be able to capture the experience of opening the book, turning the pages, and viewing it from different angles. For particularly special items, like our Kennicott and Gutenberg Bibles, we have captured a variety of views of the whole volume, but that is the exception, not the rule.
The good news
Conventional digitization does have one big advantage over in-person consultation: scale.
Going back to parchment, for example: in our thousands of digitized books and manuscripts, there’s creamy parchment so smooth that you can hardly tell the difference between hair side and flesh side; cheap parchment cut from the edge of the animal skin, with an irregular border around the neck and leg holes; parchment with holes in it; parchment with needle piercings (or actual thread) where the holes were stitched up before stretching; parchment with sheep windows; parchment that’s mouldy or worm-eaten or water-damaged. You can see the full range, and the deep zoom functionality of today’s image viewers means you can examine every detail.
The same is true of paper, paint, gold leaf, etc. In order to find an image of shell gold decoration, for instance, you currently have to know what you’re looking for, as we don’t describe decoration in detail in most of our electronic descriptions. Someday, however, we would like to draw on Bodleian conservators’ records and user tagging to create searchable records of decoration and damage.
Digitization also potentially allows us to recreate physical experiences of library collections that are no longer possible in real life. In the reading room, you would only be able to consult a few items at a time, and even in the stacks, because we keep everything in acid-free archival boxes, there’s no way to get a sense of how all the manuscripts in a collection look together. In Digital Bodleian, however, we display the upper cover of each item as a thumbnail image.
If you browse our Hebrew manuscripts, you’ll see stamped leather Laudian boards rubbing elbows with Islamic envelope bindings; if you browse our incunabula, you’ll see marbled paper covers next to blind-tooled and clasped contemporary bindings. Someday, especially if we do start capturing more spine images, we might be able to turn our digitized collections into a virtual bookshelf, for an easy visual comparison of size, quality and provenance.
Of course, we don’t just digitize bound volumes. There are also posters, letters, games, maps, scrolls, guardbooks, papyri, unbound fragments, seals, etc. Many of these are richly three-dimensional objects, and in digitization, as discussed above, their three-dimensional qualities are somewhat flattened out. In some cases, however, particularly with large and unwieldy items, digitization can actually help give a better sense of the whole object than you would be able to get in person. Take a 30-foot scroll, for instance.
If you were consulting the original, you might only be able to unroll a small section of it at a time. This would be true of the photographer digitizing the scroll as well, of course — but the photographed sections could then be stitched together digitally, allowing the user to pan and zoom across a reconstructed image of the whole scroll.
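In outline, the stitching step looks like this (a sketch assuming pre-aligned photographs with a known fixed overlap, not our production workflow, which also aligns and blends the seams):

```python
import numpy as np

def stitch_sections(sections: list[np.ndarray],
                    overlap_px: int = 0) -> np.ndarray:
    """Join photographed scroll sections left to right into one panorama.

    Each section is an H x W x 3 array. Adjacent photographs share
    overlap_px columns, which are trimmed from the left edge of every
    section after the first before concatenating.
    """
    trimmed = [sections[0]] + [s[:, overlap_px:] for s in sections[1:]]
    return np.concatenate(trimmed, axis=1)
```

Three 5-pixel-wide sections with a 1-pixel overlap, for instance, stitch into a single 13-pixel-wide strip: the digital equivalent of unrolling the whole scroll at once.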
We took this one step further with a roll-codex, MS. e Musaeo 42, which we digitized for Daniel Sawyer’s Rolling History project. This manuscript is a genealogical roll, tracing the English royal lineage back to Adam and Eve, except that it’s not a roll; it’s bound as a book. Consequently it’s extremely difficult to read in person.
For Daniel’s project, we photographed the book as a book, and then used the Universal Viewer to display cropped versions of each page image as a continuous sequence. The result is a very long, slightly wobbly-edged scroll, which lets you see the entire genealogy without having to turn a page. (Daniel has written more about this on his Rolling History microsite.)
This is where digitization really comes into its own: when it lets you look at something in a way that would otherwise be impossible — whether via a 3D model, a stitched-up map, or even just eye-wateringly deep zoom.
We’ll be posting more on this subject next week, and in the meantime we’re sharing lots more images of challenging physical items on Twitter. Be sure to keep an eye out for the #booksquashing hashtag.
This post was written by Emma Stanford, Digital Curator at the Bodleian Libraries.