Using research to define a developer experience strategy

Because developers are people too

Daniel Erwin
GE Design
Feb 5, 2018 · 7 min read


Welcome to the Industrial Internet of Things

At GE Digital, we’re revolutionizing industry with an edge-to-cloud platform that simplifies building apps. These apps support the operators of manufacturing lines, power plants, refineries, and other industrial sites. The Predix platform enables software engineers and data scientists — the Builders — to create applications that improve output, optimize maintenance schedules, and the like.

An overview of the “Builders” part of the Predix persona map

In the era of instant gratification, the developers who build those apps expect basic setup to be easy — in fact, reducing developer effort is the core value proposition of a platform like ours. The experience consists of many of the same elements found in other platforms like Salesforce or Amazon Web Services: tutorials, demos, sample applications, CLIs, APIs, administrative consoles, the catalog, the forum, and third-party sources (e.g. Stack Overflow).

How do we decide which developer tasks to make easier, and how do we know if we’ve achieved an elegant experience?

There are many design methods for taming ambiguity and paving a path forward — here’s a look at how we took an open-ended research method and applied it to the current experience to drive our platform strategy.

Demonstrating a strategic need for user research

There are two big issues that led us to conduct research studies. First, since learning a new software platform can take months, investing in one that doesn’t take off could be a career-limiting move. Developers are wary of new platforms, and they are stingy with their time before a platform is established. We need to make the experience as clear and easy as possible so that developers are willing and able to clear the barrier to entry and start building apps.

via https://goo.gl/dX4hxV

Second, many people — even some product owners — assumed that since command line interfaces (CLIs) and application programming interfaces (APIs) can be accessed programmatically, they don’t need to be designed for use by people. This ignores all the times developers must interact with those interfaces manually, such as during building, testing, and debugging — in other words, almost all of a developer’s tasks. Developers are people too — and we needed research to raise awareness about the business and competitive value of providing a great developer experience to drive adoption of the platform.
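To make that concrete, here is a minimal sketch, in Python, of the kind of manual interaction we mean. The endpoint, payload, and error code below are invented for illustration (this is not a real Predix API); the point is that even an API call made from code ends with a person reading the response while debugging.

```python
import requests

# A sketch of "programmatic" API use mid-debugging; the endpoint and
# payload are made up for illustration, not a real Predix API.
try:
    resp = requests.post(
        "https://api.example.com/v1/timeseries/ingest",
        json={"tag": "turbine-42", "value": 98.6},
        timeout=10,
    )
except requests.RequestException as err:
    # Even the failure path is read by a person, not a program.
    print(f"request failed: {err}")
else:
    if not resp.ok:
        # The human moment: a developer reads the status code and error
        # body by hand. A vague body like {"error": 1201} forces a trip
        # to the docs; a descriptive message lets them fix the request
        # in place.
        print(resp.status_code, resp.text)
```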

Questions that we aimed to answer through research included:

  • Who wants to build apps on our platform?
  • Are they building sample apps, test apps, or production apps?
  • How do they get started building an app?
  • How should we prioritize parts of the experience like APIs and CLIs?

Background research

A round of one-hour interviews gave us a general sense of what developers expect from these touchpoints, such as:

1) Make my life easier — Like any platform, Predix’s value for developers is to abstract away the infrastructure required to run a secure, scalable app.

2) Visualizations — Developers typically struggle to integrate multiple data sources and make them compatible with the visualization framework. Predix promises to make this straightforward.

3) New architecture — Users are “tired of being told utilities can’t behave like Facebook” (such as by continuously pushing new code). They want support for modern practices.

But these interviews didn’t give us a good sense of how well the platform is meeting these expectations. We typically use usability testing to evaluate products, asking users to think aloud as they use the platform (explained well in this article from Google’s Developer Experience practice). But usability testing requires at least 2 hours of researcher effort for each hour of product use — there’s no way our small team could get comprehensive coverage of the whole getting-started experience with this method.

After struggling to gather the observations we needed to understand this critical getting-started period, I went back to my methods toolbox to look for a way to get more detail without walking users through every platform component. I had used remote journaling studies before as a low-cost way to get close to users — I realized that in this context, the method could also provide the comprehensive coverage we were missing.

Remote journaling study

A remote journaling study asks users to make a record (photos, video, text, collage, etc.) of their experience. Researchers then ask follow-up questions. This last step is crucial: it lets the researcher dig into the context around each entry — what was happening when the record was made. Tying the discussion to particular experiences the user actually had reduces subjectivity compared to other interview styles, so the insights drawn are better grounded. The study can also be easily scaled to fit both the product scope and the available research resources.

A sample of a user’s journal entry

We recruited 18 developers who would be getting started on the platform during our study, drawing from internal social media, surveys, and our network of contacts from previous research. After a participant responded and confirmed their qualifications via email, we held an initial 1-hour meeting to build rapport, learn their background and expectations, and go over the study process.

The biggest struggle with a longitudinal study is participants’ lack of compliance with the journaling activity. To encourage them to keep writing to us, we did several things:

  • weekly check-ins
  • a protocol of reminders after 3, 5, and 7 days if a participant stopped responding (sketched below)
  • an email template to make it easy to respond
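As a rough illustration, that reminder protocol boils down to logic like the following sketch. The function name and data model are hypothetical; our actual process ran on calendars and email rather than code.

```python
from datetime import date

# A minimal sketch of the reminder protocol, assuming we only track the
# date of each participant's last journal entry.
REMINDER_OFFSETS = (3, 5, 7)  # days of silence before each nudge

def due_reminders(last_entry: date, today: date) -> list[int]:
    """Return the day offsets whose reminders are due as of `today`."""
    days_quiet = (today - last_entry).days
    return [offset for offset in REMINDER_OFFSETS if days_quiet >= offset]

# A participant who last wrote six days ago gets the day-3 and day-5
# nudges; the day-7 reminder is not yet due.
print(due_reminders(date(2018, 1, 1), date(2018, 1, 7)))  # -> [3, 5]
```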

Each week, a researcher reviewed each participant’s submissions to clarify ambiguities and ask for more details. Since the participants generally focused on describing what they were building, we used this time to ask about their tools, their process, and the thoughts and feelings they had along the way. At the end of a month, the research team gathered to share our participants’ stories with each other and surface patterns. This helped us identify powerful questions to ask in the final session with each user, where we talked about how they had progressed through the learning process.

With the resulting data, we created a visual map of areas where developers:

  • switched between touchpoints to find information
  • failed to achieve a sub-goal and had to follow a new path
  • gave up on the task entirely

For example, the user mapped below started out reading the “Getting started” documentation (the darkest circles), constantly felt like he was on a tangent, and eventually got bored and stopped. When he started over with the “Guides” (lighter circles), he ran into a couple of issues but was quickly able to achieve success. These high-level maps allowed us to look across the experience for several people and get a comprehensive view of the platform’s strengths and weaknesses.
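To give a sense of how journal entries become a map, here is a simplified sketch of the coding step. The touchpoint names, outcome codes, and entries are illustrative stand-ins for the study data, not our actual records.

```python
from collections import Counter

# A simplified sketch of turning coded journal entries into journey-map
# signals. Entries pair a touchpoint with an outcome code.
journal = [
    ("getting-started docs", "detour"),   # felt like a tangent
    ("getting-started docs", "detour"),
    ("getting-started docs", "gave up"),  # got bored and stopped
    ("guides", "issue"),                  # started over on a new path
    ("guides", "issue"),
    ("guides", "success"),
]

# Two of the signals we mapped: how often the user switched touchpoints,
# and how their attempts ended.
switches = sum(1 for a, b in zip(journal, journal[1:]) if a[0] != b[0])
outcomes = Counter(outcome for _, outcome in journal)

print(f"touchpoint switches: {switches}")  # -> 1
print(f"outcomes: {dict(outcomes)}")
```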

We used these maps to show just how difficult and protracted the path to experiencing value was. This presentation helped our stakeholders — product managers and engineering leaders — prioritize their work and focus the team’s efforts where they make the most difference for our users.

Strategic impact

By highlighting the key user issues, and backing them up with solid, well-documented research, we subtly shifted the efforts of many teams. After we shared with leadership the problems (and successes!) we observed, the platform started to evolve in a new direction, one more in line with user needs.

Here are just a few of the issues we highlighted, and the current state of the platform.

Pain point: Missing explanations of value and impacts of decisions

Solution: Short contextual descriptions are now featured everywhere, such as in the primary navigation

Short descriptions make it easier to decide which option to start with.

Pain point: Resources assumed that users understood modern software paradigms, e.g. the command line and the Node toolchain

Solution: Guides give enough of an overview for users to Google for details

Operational details are now highlighted in guides, and this in-depth guide to Proxy issues was added.

Pain point: Insufficient support for debugging

Solution: Logs and events are shown in the console

The “Activity” panel lets developers see what’s up without having to remember console commands.
