Gego, Aplique de Reticulárea, 1969. (Fundación Gego)

Open Research Platforms

Towards an ecosystem of open and interconnected tools for knowledge work

Casey Gollan
17 min read · Jun 6, 2022


Why is it so easy to import data into research platforms but surprisingly difficult to get data out? This question has animated the past few years of my work in Research Ops. I’d like to share some in-progress ideas around research platforms, interoperability, and why ReOps practitioners are in the right place at the right time to create a more open future for research data. If you’re interested in discussing this further — I’d love to be in touch! You can find my contact info at the bottom of this post.

Ready to jump ahead?

You can download the Open Research Platforms Evaluation Cheatsheet now. Or read on to learn more.

Interested in sharing your findings or getting involved in creating a shared platform for evaluations? Email Casey at hello@caseyagollan.com

This blueprint shows various cross-sections and design elements of a typical fire hydrant. Circa 1906. (NYC Water)

Interoperability is Everywhere ( …except user research? )

A building is on fire! The fire department arrives at the nearest hydrant, but they can’t get it open. Only the company that manufactured the hydrant has the right kind of wrench. Through a feat of strength, they pry the hydrant open, only to find that the hose doesn’t fit! Water splashes to the ground and everyone rushes to scoop up handfuls of water and throw them into the fire.

The idea that a firefighter couldn’t get water out of a hydrant because of a design flaw is hard to imagine. This would never happen, thanks to agreements between the makers of hydrants, wrenches, and hoses. There are professional associations that maintain interoperable fire hydrant standards, city councils that support regulation, and wider laws that require standards in the name of safety.

Interoperability is the ability of a product or system to work with other products or systems.

In user research, things are quite the opposite. At every stage in the process, I’ve observed user researchers experiencing unnecessary friction and limitations because of difficulty accessing their own raw data and moving it between platforms.

In the way that it makes sense for a hose to connect to a hydrant — there are lots of reasons why you might want to connect different research tools to each other:

  • Syncing the same list of research participants across platforms for consistency and compliance
  • Accessing the same list of active projects across tools
  • Working on the same project in different ways (planning in a doc, synthesizing on a virtual whiteboard, tagging and filtering in a database, and presenting in a slideshow)
  • Sharing a common taxonomy between a synthesis tool and a more high-level repository of research reports
  • Tallying incentives paid out across payment platforms
  • Making research studies findable via a company-wide search engine
  • Validating content against a style guide, and providing researchers with helpful suggestions
  • Pushing actionable recommendations from a research workspace to an issue tracker, where product and development teams can plan and build
  • Tracking research impact by counting citations of research studies across platforms
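The first bullet above, syncing participants across platforms, can be sketched in a few lines. Everything here is hypothetical: the field names and export shapes are invented, precisely because no shared schema exists today:

```python
# Hypothetical participant exports from two research platforms.
# Column names are invented for illustration; real exports differ per vendor.
platform_a = [
    {"email": "jean@example.com", "name": "Jean", "consented": True},
    {"email": "carl@example.com", "name": "Carl", "consented": True},
]
platform_b = [
    {"Email Address": "jean@example.com", "Full Name": "Jean R."},
    {"Email Address": "ana@example.com", "Full Name": "Ana"},
]

def merge_participants(a, b):
    """Merge two exports into one deduplicated list, keyed on email address."""
    merged = {p["email"].lower(): dict(p) for p in a}
    for p in b:
        email = p["Email Address"].lower()
        merged.setdefault(email, {"email": email, "name": p["Full Name"]})
    return sorted(merged.values(), key=lambda p: p["email"])

participants = merge_participants(platform_a, platform_b)
print([p["email"] for p in participants])
# ['ana@example.com', 'carl@example.com', 'jean@example.com']
```

Even this toy version has to hard-code a translation between the two vendors’ column names, which is exactly the friction a shared schema would remove.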

So what would it take to achieve interoperability in user research platforms? We can look to general office work for an example of interoperability in action. This is a scenario so common that you might not even realize it’s happening:

  1. Carl opens his Windows computer and uses Microsoft Outlook to send an email to Jean.
  2. Jean receives the message in Google Gmail on her Apple computer. Carl’s email contains a calendar invite and a Microsoft Word doc.
  3. Jean hits accept on the calendar invite and an event pops into her Google Calendar.
  4. Jean clicks a Word file attached to the email and it opens in Google Docs — her preferred word processor.
  5. Jean hits print and the file is translated to a PDF before being sent to her Brother Printer.

It’s easy to share a document and set up meetings across platforms. But if it weren’t for shared protocols like SMTP (Simple Mail Transfer Protocol) and IMAP (Internet Message Access Protocol), it wouldn’t be possible to even send or receive a message between tools made by different companies. And you could forget about opening files if we didn’t have standardized file formats like .ics (iCalendar), .docx (Office Open XML), and PDF (Portable Document Format).
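To see how small a standardized format can be, here is a sketch that builds a minimal iCalendar (.ics) event using only the Python standard library. The structure follows RFC 5545, though real calendar files carry more properties than this:

```python
from datetime import datetime, timezone

def make_ics_event(summary, start, end, uid="example-uid@example.com"):
    """Build a minimal iCalendar (.ics) VEVENT that most calendar apps accept."""
    fmt = "%Y%m%dT%H%M%SZ"  # UTC timestamps per RFC 5545
    lines = [
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//Example//Research Ops//EN",
        "BEGIN:VEVENT",
        f"UID:{uid}",
        f"DTSTAMP:{datetime.now(timezone.utc).strftime(fmt)}",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{end.strftime(fmt)}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
        "END:VCALENDAR",
    ]
    return "\r\n".join(lines)  # RFC 5545 requires CRLF line endings

ics = make_ics_event(
    "Research readout",
    datetime(2022, 6, 6, 15, 0, tzinfo=timezone.utc),
    datetime(2022, 6, 6, 16, 0, tzinfo=timezone.utc),
)
```

Because the format is an open standard, Outlook, Google Calendar, and Apple Calendar can all open the same file. That is the kind of plumbing research tools currently lack.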

Over the past few years, there has been a shift in the user research industry from working primarily in documents and spreadsheets, to working within specialized platforms that store research in a more structured way. Unlike documents that lack semantic structure, these tools relate notes to specific segments of highlighted data and capture synthesis “atomically” at the level of individual insights.

To better understand the structure of this data, you can export it to a CSV (comma-separated values) file and view it using spreadsheet software. But is it interoperable? Not exactly. Each company has its own approach to what the data are called and how fields are structured.

The broader problem is that companies and researchers lack shared definitions. What is an insight? …a finding? …a highlight? What about “golden research nuggets”?

“The atomic unit of a research insight” by Tomer Sharon

In a 2016 post, Tomer Sharon and Benjamin Gadbaw proposed an idea (and a schema) for “atomic research nuggets”, which has become a touchstone in conversations about user research. They defined a nugget as containing the following metadata: “Title, Directory, Date, Source name, Source type, Sensemaker name, Media type, Research method, Nugget (the observation), Observation Directory, Experience Vector, Magnitude, Frequency, Emotions, Props, Journey, and Characters”. They envisioned that having 1,000 research nuggets, “properly tagged, well defined, easily searched and found”, would ultimately be far more useful to an organization than a stack of long unstructured research reports that pile up and go unread.
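Sharon and Gadbaw’s schema translates naturally into a structured record. The sketch below adapts their field list into a Python dataclass; the snake_case names and the datatypes are my own guesses, since the original post lists the metadata but not its types:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ResearchNugget:
    """One "atomic research nugget" per Sharon & Gadbaw's 2016 schema.

    Field names are adapted to snake_case; types are illustrative guesses,
    since the original post describes the metadata but not datatypes.
    """
    title: str
    directory: str
    date: str
    source_name: str
    source_type: str
    sensemaker_name: str
    media_type: str
    research_method: str
    nugget: str                 # the observation itself
    observation_directory: str
    experience_vector: str
    magnitude: int
    frequency: int
    emotions: List[str] = field(default_factory=list)
    props: List[str] = field(default_factory=list)
    journey: str = ""
    characters: List[str] = field(default_factory=list)
```

A shared, vendor-neutral definition along these lines is precisely what the tools described below have not converged on.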

In the years since, tools purpose-built for atomic research have sprung up left and right, but they still lack agreement on the structure of research. Under the surface of breezy marketing pages promoting a “single source of truth”, it quickly becomes clear how opinionated — and how differently opinionated — each of these research tools can be. To give an example: what Tomer Sharon calls a “Nugget”, Dovetail calls an “Insight”, EnjoyHQ a “Story”, and Condens a “Conclusion”. These formats are all intended to achieve something pretty similar, but there is striking divergence in language and structure, and it only gets more complicated when you dig into the metadata.
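In practice, working across such tools means maintaining translation tables. This sketch normalizes exports into one common shape; the per-tool column names are invented for illustration and do not reflect any vendor’s actual export format:

```python
# Each platform names its top-level synthesis unit differently.
TERM_MAP = {
    "dovetail": "Insight",
    "enjoyhq": "Story",
    "condens": "Conclusion",
}

# Invented per-tool column names mapped onto one common schema.
FIELD_MAPS = {
    "dovetail": {"Insight title": "title", "Created at": "date"},
    "enjoyhq": {"Story Name": "title", "Date": "date"},
    "condens": {"Conclusion": "title", "Created": "date"},
}

def normalize(tool, row):
    """Rename one exported row's columns into the common schema."""
    mapping = FIELD_MAPS[tool]
    renamed = {mapping.get(k, k): v for k, v in row.items()}
    renamed["kind"] = TERM_MAP[tool]
    return renamed

row = normalize("enjoyhq", {"Story Name": "Users abandon signup", "Date": "2022-06-06"})
# {'title': 'Users abandon signup', 'date': '2022-06-06', 'kind': 'Story'}
```

Every team that wants to move data between tools ends up rebuilding a table like this from scratch, and it silently breaks whenever a vendor renames a field.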

Without agreement between toolmakers, researchers are scooping up insights like spilled water and throwing them into the fires of product development. On a computer, this takes the form of endless copying-and-pasting and re-formatting of the same content as a transcript, a highlight, a sticky note, a collection of insights, and a summary report. Copy-and-paste may get the job done, but it doesn’t maintain meaningful links between the same piece of content across workspaces, and metadata is often discarded in the process.

To solve this problem, research platforms are generally working to bring entire companies and processes onto one platform (their own!), but there are reasons to be wary.

A tale of too many mergers

Among the 300+ vendors in the ReOps Toolbox, several mega-platforms are now competing to become the one research platform to rule them all.

Most research tools start small, arriving humbly on the scene to disrupt one particular area of the research cycle (e.g. participant recruitment, active research, or insight management). But in the world of software-as-a-service, each month brings new feature releases. As platforms clone features to keep up with their competitors, customers struggle to parse the overlaps between tools’ capabilities, and end-users experience confusion about “which tool to use for what”.

Venture capitalists and private equity firms are pumping large investments into the companies that create software for user research. These market dynamics force a “hypergrowth” approach that exacerbates scope-creep and paves the way for mergers and acquisitions:

  • Two of the older and larger players in the research tooling space are Momentive and Qualtrics, founded around the year 2000. Over the last decade, Momentive (formerly known as Survey Monkey) acquired 8 other companies, before getting acquired itself by Zendesk last year for $4 billion. Qualtrics, with $400m in funding, has acquired 6 companies over the last 5 years.
  • A later generation of mega-platforms founded closer to 2010 includes User Testing and User Zoom. With $150m in funding, User Testing has acquired 4 companies over 3 years to create their “Human Insight Platform”. User Zoom, not far behind with investments totaling $136m, has acquired 5 companies to build out its “UX Insights System”.
  • Founded in 2017, Dovetail is a popular contender in the latest generation of platforms, and recently surpassed $70m in funding. Dovetail recently announced its expansion into 3 standalone products covering not only qualitative analysis, but participant management, and insight discovery.

This “big bang” proliferation and consolidation of user research platforms has led to a growing tangle of capabilities that is becoming increasingly challenging to grasp, but can be best represented by User Interviews’ annual UX Research Tools Map:

Simple, right?

Where the “tube map” metaphor falls apart is that navigating the UX tools landscape is A LOT harder than navigating a transit system. For example, when I enter the subway in NYC, one swipe of my Metrocard allows me to move through the entire system, and I can use (literal) subway platforms to move between lines and change directions. Given the current state of interoperability in research platforms, the “connections” between tools in the map above are symbolic at best. A better analogy for UX research tools might be traveling between countries by airplane: fares are expensive, you’re barraged with upsells for extra leg room, you have to take everything out of your bag and then hastily put it all back together to pass through security, and once you arrive you won’t get very far without converting your currency and understanding a new language.

In a recent audit of the Re+Ops Toolbox, Caro Morgan discovered that over 50 of the listed tools are deprecated. Software shutdowns happen not only when a startup fails by running out of money, but oftentimes also when a startup succeeds: the “acqui-hired” team’s efforts are redirected to working on their parent company’s products. The sudden disappearance of software (or “sunsetting” as it’s known in the tech industry) is so common that there is a whole blog dedicated to these announcements. Phil Gyford, the blog’s author, has collected over 130 of these announcements. He asks:

Is [venture capital] the best way to structure and grow businesses? Is this the best long-term model for keeping people interested in making and doing amazing things on the internet? Why does almost no website or online service (my own included) have a plan for what happens to their users’ content over the long term?

As user research platforms become increasingly critical infrastructure on which organizations create and manage knowledge (the data repository, the workspace, and the archive), they are at once driving innovation and also adding challenging new layers of complexity and fragility.

A more thorough analysis of the openness of various user research platforms is certainly warranted, but I can share a few anecdotal findings from having evaluated many research platforms over the past year:

  • Almost all tools advertise integrations, but in practice, these are so limited in their functionality that researchers mostly end up copy-and-pasting between tools.
  • There are far more ways to push data into platforms than to get data out.
  • Most platforms offer some kind of integration via Zapier, but upon digging into the available “triggers” and “actions”, you will find large pieces of the application missing. Zapier is also “event-driven” so it can’t migrate old data, and in large companies, it represents yet another vendor that teams may struggle to get budget and approval for.
  • Most purpose-built user research tools have some form of data export (usually in the form of a CSV), but these files need to be manually requested one at a time at the level of a project or even an individual page.
  • No purpose-built tool for atomic research has robust API access. (EnjoyHQ gets an honorable mention for being the only one of these tools to offer an API at all. But Enjoy’s API is missing critical pieces of research data such as highlights.)
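If platforms did offer robust export APIs, a full backup could be a simple loop that pages through an endpoint until it is exhausted. The sketch below is entirely hypothetical; `fetch_page` stands in for whatever paginated call a real API might offer:

```python
def export_all(fetch_page, page_size=100):
    """Drain a paginated export endpoint into one list.

    `fetch_page(offset, limit)` is a stand-in for any paginated API call;
    it should return a (possibly empty) list of records.
    """
    records, offset = [], 0
    while True:
        page = fetch_page(offset, page_size)
        if not page:
            break
        records.extend(page)
        offset += len(page)
    return records

# Fake backend with 250 records, standing in for a real platform's API.
DATA = [{"id": i} for i in range(250)]

def fake_fetch(offset, limit):
    return DATA[offset:offset + limit]

backup = export_all(fake_fetch)  # all 250 records, in three requests
```

Twenty lines of code is all a full, automatable backup would take, if the endpoint existed. Today, the equivalent is requesting CSV files one project at a time.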

So is your user research platform locking in your data? If you use purpose-built user research tools, the answer is almost certainly yes.

In today’s landscape, attempting to work across platforms poses serious challenges. With only limited access to data via manual exports and third-party integrations, it’s not practical to maintain ongoing links between research platforms. And without robust APIs, there is no possibility of building something new and innovative on top of your existing research infrastructure. User research tools are evolving towards monolithic mega-platforms, with greatly overlapping feature-sets, but frustratingly dissimilar ways of organizing information. ReOps teams are put into the position of making tough choices between vendors, as maintaining subscriptions to overlapping tools can be cost-prohibitive, duplicative, and difficult to manage.

It doesn’t have to be this way! What if researchers could open their toolbox, pick out the best tool for the job, and it just worked?

Envisioning Open Futures

What might an ecosystem of open and interconnected research platforms look like? 🔮

  • 🌊 Researchers would move fluidly between tools.
  • 🔗 Research workspaces would be deeply interconnected.
  • 🚚 Migrating everything to a new platform would be effortless.
  • 🔄 Information would flow to meet collaborators wherever they work. Instead of corralling cross-functional teams to log into a specialized tool once every month or quarter, insights would appear within the tools where product teams are already working.
  • 🌐 ReOps teams would build new ways of conducting research on top of robust cross-platform APIs.
  • 📦 Research teams would truly own their data. Teams could download every last piece of data, and integrate it automatically and continuously across their tool stack.
  • 💡 Competition between vendors would be based on innovation instead of lock-in.
  • 🆓 Non-profit and open-source research platforms would be vibrant alternatives to venture-backed offerings.

While almost none of these things are a reality in today’s tooling landscape, shared efforts between ReOps practitioners and vendors could help realize ideas like these.

How to choose open tools

Not every ReOps team has the time or technical resources to evaluate data practices (or parse the differences between a CSV, a Zap, and an API), but it’s important for anybody procuring software to be able to demystify buzzwords. Three key areas to pay special attention to when procuring tools are Data Portability, Integrations, and Extensibility.

If you can only ask three questions when procuring a new tool, make it these:

  1. Can I get my data in and out in standard formats? (Data Portability)
  2. How does this platform connect with others? (Integrations)
  3. Can I build on this platform? (Extensibility)

How do you know you’re getting to the bottom of it? When you start asking the right questions, most companies will quickly pull in an implementation specialist or even an engineer.

Here’s a bit more about each of these areas:

Data Portability: “Data portability is a concept to protect users from having their data stored in “silos” or “walled gardens” that are incompatible with one another, i.e. closed platforms, thus subjecting them to vendor lock-in and making the creation of data backups difficult.” (Wikipedia)

  • Examples of Data Portability: Data Imports, Data Exports, Account Backups, Migration Assistance

Integrations: “Integrations platform as a service (iPaaS)…enables customers to develop, execute and govern integration flows between disparate applications. Under the cloud-based iPaaS integration model, customers drive the development and deployment of integrations…to achieve integration without big investment.” (Wikipedia)

  • Examples of Integrations: Automations, Workflows, Zapier, Make, n8n

Extensibility: “An application programming interface (API) is a connection between computers or between computer programs. It is a type of software interface, offering a service to other pieces of software…In contrast to a user interface, which connects a computer to a person, an application programming interface connects computers or pieces of software to each other.” (Wikipedia)

  • Examples of Extensibility: APIs, Scripting, Plugins, Developer Communities

In my conversations with vendors, I’ve heard that the lack of robust APIs and data portability is because customers haven’t asked for these features, so they’re not being prioritized.

There currently isn’t an easy way for ReOps practitioners to understand what kinds of limitations they may face in the longer term when attempting to build on a particular platform. An “open research platforms” initiative could gather information on vendors’ data practices, beyond the boilerplate “yes, we have integrations!” marketing copy. By setting up scorecards for the openness of research platforms, ReOps practitioners could make better-informed decisions.
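As a starting point, such a scorecard could be as simple as a checklist per area. The criteria and scoring below are my own suggestions, derived from the three procurement questions above, not an established rubric:

```python
# A sketch of an openness scorecard; the criteria and weighting are
# illustrative suggestions, not an established evaluation standard.
CRITERIA = {
    "data_portability": ["bulk export", "standard formats", "full backup"],
    "integrations": ["native integrations", "zapier/make/n8n", "webhooks"],
    "extensibility": ["public API", "API covers highlights", "plugins"],
}

def score(tool_answers):
    """Score a vendor: fraction of criteria met per area, from 0.0 to 1.0."""
    return {
        area: round(sum(tool_answers.get(c, False) for c in checks) / len(checks), 2)
        for area, checks in CRITERIA.items()
    }

# Hypothetical evaluation of an imaginary vendor.
answers = {"bulk export": True, "standard formats": True, "public API": True}
print(score(answers))
# {'data_portability': 0.67, 'integrations': 0.0, 'extensibility': 0.33}
```

Published side by side for many vendors, even a rough rubric like this would make openness legible during procurement in a way marketing pages never will.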

A goal for the ReOps community could also be to create shared educational materials around procurement, so that when organizations are making decisions about going with one platform or another, they can vote for openness with their organization’s budget. There is so much important and exciting work to do in creating greater data literacy so that all ReOps practitioners are empowered to ask technical questions and thoroughly evaluate platforms.

Open Platforms: How We Get There

“Ops is not, at the start, an administrative job,” writes Kate Towsey. “It’s first a service design job.” This is a monumentally important reframing for the field of ReOps, which is sometimes reduced to paper pushing, compliance, and process. ReOps, to me, is about understanding the social life of research within an organization, removing barriers to research success, and creating new kinds of possibilities in parts of the research journey that are so dreadful in the present day that nobody can even imagine improving them.

I would add to this that Research Ops is also a job where you are “designing dark matter”. As Dan Hill describes:

“I’ve been using the concept of dark matter as a metaphor in strategic design to describe the often imperceptible yet fundamental facets — the organizational cultures, the regulatory or policy environments, the business models, the ideologies — that surround, enable and shape the more tangible product, service, object, building, policy, institutions etc.

One of our…major innovations was in helping change the building codes in Helsinki — classic dark matter, that — to enable large timber buildings. That outcome is systemic, beyond a simple plot of land in Helsinki. Now there are several large timber buildings going up in southern Finland. In a city, the building regulations are the code that writes the city, the dark matter that enables, or inhibits, particular patterns of development. Architecture could spend a little more time focusing on redesigning building codes, and not simply buildings, in order to have a greater effect on the city.”

In Research Ops, when you’re meeting with legal to remove policy-related roadblocks, that’s designing dark matter! All the meetings and negotiations can feel somewhat intangible, but this is the organizational context that makes the work possible. Without creating the conditions in which researchers can succeed, an organization will be limited to the lowest common denominator of possibility. Designing dark matter can take you beyond your company, your job, and even your scope of influence, into proposing a redesign of your vendor’s data policies. If we, as ReOps practitioners, don’t push for broader access to data, who will? In the end, you may find yourself redesigning your team’s sense of possibility, opening the floodgates to immense ideas that researchers have tucked away, feeling that they were “impractical” or would never succeed.

ReOps practitioners who procure tools have incredible leverage to push our ecosystem of software platforms towards greater interoperability. With collaboration and a bit of rallying, we can stake out a more open future of tooling for the Research Ops community.

Appendix

Assorted inspiring initiatives & alternatives…

Research Repositories: A ResearchOps Community Program of Work by Brigette Metzler, Bri Norton, Dana Chrisfield, and Mark McElhaw

Since the very beginning, Research Operations (ReOps) people have wanted to talk about research repositories, insights registers/hubs/repos or libraries. They’ve all expressed difficulties with what they’ve made, bought or used. That’s not a slight on any of the tools available on the market — the process is hard, getting buy in is hard. Knowledge management and library sciences are professions in and of themselves for good reason. Knowing exactly what’s required for each person’s context makes it very hard for research teams to get it right, at least at first. We’re pleased to have noticed, even over the course of the project, that more and more people are reporting that things are progressing.

The Minimum Viable Taxonomy Project

The Minimum Viable Taxonomy project came from the larger Research Repos project when the team identified a need to help create a taxonomy for organisations and projects. In this community call, the team showcased the MVT followed by attendees testing it out live!

Integrating Research by Jake Burghardt

Tech organizations are acting like laboratories without collective notebooks, unlocking only limited value from their diverse customer-focused research investments. We’re at local maximums for research utilization, and we need to grow our expectations. I spent 6 years tackling this problem at Amazon, and I’m continuing to investigate programmatic methods for collating research and activating it in planning processes. This post kicks off a new “Integrating Research” Medium blog to share some ideas.

Data Transfer Project

The Data Transfer Project was launched in 2018 to create an open-source, service-to-service data portability platform so that all individuals across the web could easily move their data between online service providers whenever they want.

Microformats

Designed for humans first and machines second, microformats are a set of simple, open data formats built upon existing and widely adopted standards. Instead of throwing away what works today, microformats intend to solve simpler problems first by adapting to current behaviors and usage patterns.

Semantic Web

The Semantic Web, sometimes known as Web 3.0, is an extension of the World Wide Web through standards set by the World Wide Web Consortium (W3C). The goal of the Semantic Web is to make Internet data machine-readable.

Open Data Certificates

Open Data Certificate is a free online tool developed and maintained by the Open Data Institute, to assess and recognise the sustainable publication of quality open data. It assesses the legal, practical, technical and social aspects of publishing open data using best practice guidance.

Object Concordances

Part of our ambition is to hold hands with as many other sources on the internet as possible. Currently we are only doing this for “people” (individuals or corporations) but our goal is to create mappings for as many first class objects in our collection (media, styles, periods and eventually objects themselves) as possible.

n8n

A free and open alternative to platforms like Zapier and Make, n8n (pronounced n-eight-n) helps you to connect any app with an API with any other, and manipulate its data with little or no code. n8n is privacy-focused and can be self-hosted for additional security.

Open Subscription Platforms

A shared movement for independent subscription data. With everyone getting into subscriptions, it’s never been more important to be in control of your customer data.

Open API Initiative

The OpenAPI Initiative (OAI) was created by a consortium of forward-looking industry experts who recognize the immense value of standardizing on how APIs are described. As an open governance structure under the Linux Foundation, the OAI is focused on creating, evolving and promoting a vendor neutral description format.

Platform Cooperatives

Platform cooperatives are businesses that sell goods or services primarily through a website, mobile app, or protocol. They rely on democratic decision-making and shared platform ownership by workers and users.

Casey Gollan is Research Ops Project Manager for Client Insights at IBM. The above article is personal and does not necessarily represent IBM’s positions, strategies, or opinions.

If you’re working on (or just interested in) any of these topics, I’d love to hear from you! You can connect with me on LinkedIn, Twitter, Medium, or the Re+Ops Community Slack.
