An enjoyable audit

Scott Williams
Dec 10, 2018


This is a response to Chad Weinard’s Data as Medium entry in the first issue of Thoughtforms. I’ve been out of the muse-tech field for a number of years, but his piece conjured a rush of memories of projects, half-baked ideas, and institutional inflection points that rekindled a real sense of why I loved working in museums.

Much of what I’m going to write about is from 7+ years ago, so I will try to keep it at a high level to minimize specific misrepresentations.

What happens when everyday museum work adds to the collection knowledge?

One of my first experiences working with a museum collections management system (CMS) was helping migrate from Questor’s ARGUS to KE [now Axiell] EMu. During this 5+ year endeavor, we dreamed of the day when the lives of collections managers, registrars, exhibition staff, curators and conservators would be made infinitely better by the panacea of new software features and improved workflows. While those dreams were never fully realized [they never are], we did find a diamond in the rough: the EMu Audit Module.

Like most museums, we could not effectively track when changes were made. At best, changes would be described or explained in a separate notes field. Usually they looked something like the example below, and if we were really lucky they were signed or included a reference to justify the change.

Changed original culture value from “Iroquois (uncertain)” to Oneida based on further research. — Jane Doe [intern, summer 2003]

A Way-Back Machine for Collections Data

After the migration to EMu, we were instantly handed a 10-ton hammer to quash our ‘who changed this record?’ problem.

EMu 3.2.04 sees the introduction of a fully integrated auditing facility. The facility includes a new module, Audit Trails (eaudit), which contains complete audit trails for all operations registered for monitoring. There are five levels for which audit records may be generated:

change (insertions, updates, deletions)
search (queries)
display (viewed, sorted, reported)
login (first access of a module in current session)
all (all database operations)

Each of these may be set on a per module basis, which means that it is possible to monitor all changes to records in the Parties module, for instance, while Multimedia records may be audited for changes, searches and record views.
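
To make that per-module model concrete, here is a toy Python sketch of the behavior described above. To be clear, this is not how EMu is actually configured (that happens through its own settings, not code like this), and the module names below are just my recollection of EMu’s back-end table naming; the sketch only models the idea of levels being registered per module.

```python
# Illustrative only -- not EMu syntax. This just models the idea that each
# module can be registered for monitoring at different audit levels.
AUDIT_LEVELS = {"change", "search", "display", "login", "all"}

audit_config = {
    "eparties":    {"change"},                       # monitor edits only
    "emultimedia": {"change", "search", "display"},  # also monitor queries/views
}
assert all(levels <= AUDIT_LEVELS for levels in audit_config.values())

def is_audited(module: str, operation: str) -> bool:
    """Would `operation` on `module` generate an audit record?"""
    levels = audit_config.get(module, set())
    return "all" in levels or operation in levels

assert is_audited("emultimedia", "search")    # Multimedia searches are logged
assert not is_audited("eparties", "display")  # Parties views are not
```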

The programmatic uses for this kind of information are far-reaching, though, and extend well beyond answering that basic question. When we began thinking about how to tap into this new set of collections data points, we started by trying to understand who was using the CMS and how:

  • Focus user training around how specific users were interacting with the CMS [e.g. identify users who were not complying with established data standards].
  • Provide automated daily summaries or real-time notifications to staff about changes to records. In practice, this was a daily report to collections managers so they could monitor the progress and standards compliance of students doing data entry, letting them head off data entry problems before they became systemic (see the sketch after this list).
  • Programmatically quantify (in meaningful ways!) the never-ending task of improving record “quality”.
  • Perform security audits that were outside the scope or capability of the existing security policy architecture. The EMu security policy design was already one of its strongest features, but there were some things it still could not achieve.
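
Of those, the daily summary is the easiest to picture in code. Here’s a minimal sketch, assuming the day’s audit records have been exported from the eaudit module as XML; the element and field names (record, AudUser, AudTable, AudOperation) are my best guesses at the Aud-prefixed columns from memory, so treat them as placeholders to check against a real export.

```python
# A sketch of the daily digest: tally the day's exported audit records by
# (user, table, operation) so a collections manager can scan for anomalies.
import xml.etree.ElementTree as ET
from collections import Counter

def summarize_audit_export(path: str) -> Counter:
    """Count audit records per (user, table, operation)."""
    tally = Counter()
    for rec in ET.parse(path).getroot().iter("record"):
        user = rec.findtext("AudUser", default="unknown")
        table = rec.findtext("AudTable", default="unknown")
        op = rec.findtext("AudOperation", default="unknown")
        tally[(user, table, op)] += 1
    return tally

if __name__ == "__main__":
    for (user, table, op), n in summarize_audit_export("eaudit-today.xml").most_common():
        print(f"{user:<12} {table:<14} {op:<8} {n}")
```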

Outside of these specific experiments, it was immediately apparent just how useful it was for staff to have proximate access to a record’s complete version history. They could rely on the system to track progress, to quantify the work they do every single day that so often feels like it disappears into a black box.
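
That version history also means an old state of a record is recoverable by replaying changes, which is what made the way-back machine framing feel apt. Here’s a minimal sketch of the idea, using made-up change entries modeled on the culture-field note above; the dict shapes are illustrative, not EMu’s actual audit schema.

```python
# Rebuild a record's state at a point in time by replaying field-level
# old/new changes, the way change-level audit entries record them.
from datetime import date

def record_as_of(base: dict, changes: list[dict], cutoff: date) -> dict:
    """Replay changes dated on or before `cutoff` onto the base record."""
    state = dict(base)
    for change in sorted(changes, key=lambda c: c["date"]):
        if change["date"] > cutoff:
            break
        state[change["field"]] = change["new"]
    return state

changes = [  # invented examples, echoing the intern's note above
    {"date": date(2003, 6, 12), "field": "Culture",
     "old": "Iroquois (uncertain)", "new": "Oneida"},
    {"date": date(2004, 1, 9), "field": "Culture",
     "old": "Oneida", "new": "Oneida (Onyota'a:ka)"},
]
print(record_as_of({"Culture": "Iroquois (uncertain)"}, changes, date(2003, 12, 31)))
# -> {'Culture': 'Oneida'}
```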

Amid the late-aughts rush to apply metrics of “success” and “progress” to every conceivable process, this new layer of information gave staff the power to quantify their everyday work and show it to administrators.

These examples barely begin to scratch the surface of the challenges that Chad Weinard lays out in his piece, but tools like the Audit module make answering the call feel a lot less daunting. There is so much untapped potential here, and it extends beyond just monitoring for changes, quantifying our work, or restoring an old version of a record after a careless mistake.

Just because an object sits in storage does not make it static. There are untold activities, changes, forces and interactions that play out across the collections data every day that are just as important to understanding the collection, and these audit trails transparently capture many of those interactions. As Chad so eloquently says:

The whole dataset is important, not only the part that’s clean, or complete or presentable, not highlights or greatest hits. The collection data will never be complete, never perfect, never finished or publishable.

Metric-ton of metrics

There are certainly ways in which these metrics fall short: they can’t show how much research it took to move a value from ‘uncertain’ to ‘certain’, they can’t help us define what record quality means, nor are they self-explanatory; someone still needs to sift through them to separate signal from noise.

But we had to start somewhere, and this worked for us at the time.

So if there are any EMu clients out there using the Audit module, or other vendors and products doing similar things, let’s hear about it @ MCN 2019!
