Remote Debugger

Grace Kwan
Apr 14, 2018 · 10 min read

Designing a user-friendly interface for testing mobile integrations.

The Problem

The Solution

A demo request in the Remote Debugger UI.

The Team

Needfinding

To that end, the idea for this project stemmed from our Partner Engineering (PE) team's dissatisfaction with one of their tools, Charles Proxy. Since using Charles requires custom proxy settings, our Partner Engineers were wasting a considerable amount of time adjusting settings each time they switched test devices. Worse, much of the information displayed in the Charles UI was irrelevant to what they were trying to debug.

Configuring settings in Charles Proxy–not the most intuitive experience.

Our team’s Tech Lead had a hunch that since the requests the PE team was inspecting were to our own APIs, those requests could instead be captured by one of our own internal backend services. We could then display these requests in a web interface in Mission Control, our partner management dashboard. Our PM proposed that we display key information from these requests in a variable watch panel, a common feature of debuggers in IDEs that displays the most recent value of variables the user has chosen to monitor.
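
In essence, the proposal boiled down to something like the sketch below (the request shape and field names are illustrative placeholders, not our actual API):

```typescript
// A rough sketch of the variable-watch idea. Field names and paths here are
// assumptions for illustration, not Button's actual API.
interface CapturedRequest {
  path: string;                         // e.g. "/v1/get-button" (assumed)
  requestBody: Record<string, unknown>;
  responseBody: Record<string, unknown>;
}

// Human-readable labels mapped to extractors for the values a tester watches.
const WATCHED_FIELDS: Record<string, (r: CapturedRequest) => unknown> = {
  "Request ID": (r) => r.responseBody["request_id"],
  "Publisher ID": (r) => r.requestBody["publisher_id"],
};

// Build the rows shown in the watch panel for the most recent captured request.
function watchPanelRows(latest: CapturedRequest): Array<[string, unknown]> {
  return Object.entries(WATCHED_FIELDS).map(
    ([label, pick]): [string, unknown] => [label, pick(latest)]
  );
}
```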

This sounded compelling, but rather than jump straight into wireframes, I wanted to confirm that (1) this project was the best use of the team’s time, and (2) our tech lead’s proposed solution addressed the PE team’s actual pain points with the tool. To do so, I sat down with Chris, our stakeholder on the PE team, to chat about his debugging workflows.

User Research

What are the current use cases for Charles Proxy?
What are the team’s pain points with Charles?
What information is the team looking for in the HTTP requests?

To begin, I asked Chris to walk me through an actual test case. He adjusted the proxy settings on his personal phone, then opened a test build of a partner app and walked through the purchase flow for a product. At each juncture, he explained which values he was looking for and what they meant. After the session, I reviewed my notes and made the following key observations:

  1. For the most part, only two request types matter. The vast majority of the information Chris was looking for could be found in just two API requests: get-button and get-links. With a few exceptions, the other requests displayed in Charles were irrelevant.
  2. BigQuery is expensive. A key piece of information in each debug session was the request ID. This enabled the tester to look up the request in BigQuery, our data warehouse, to determine if our system returned the correct information. Since each query costs money to execute, this process was both time-consuming and literally expensive.
  3. Without prior knowledge, the requests are unintelligible. Interpreting the request stream would be impossible for anyone not already familiar with our data models, since the variable names in the requests often had no relationship to the terms we used around the office. The relevant information was also scattered throughout the request and response, forcing the tester to jump back and forth.

In an effort to break these observations down into actionable pieces, I translated them into user stories. A few of those stories were:

As a Partner Engineer…

  • I can inspect a relevant subset of requests from my mobile device to our APIs in a web UI, so I don’t have to spend time configuring proxy settings.
  • I can see if a get-button or get-links request was handled correctly in Button’s backend so I don’t have to use BigQuery.
  • I can view key variables from get-button and get-links requests in a user-friendly format, so I don’t have to dig through raw JSON.

Having identified my goals for the project, I then sat down with our Tech Lead to understand our technical constraints. He explained that one of our internal services could provide the request details and walked me through the existing documentation. He also agreed to add the campaign ID to the response for get-button and get-links requests, which was a huge win even before the rest of the UI was built: our PE team was able to significantly reduce the time and money spent on BigQuery.
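
Purely for illustration, the enriched response is conceptually something like this (field names are assumptions, not our real schema):

```typescript
// Illustrative only: a trimmed get-links-style response after the change.
// With the campaign ID surfaced directly, the tester no longer needs a BigQuery lookup.
const exampleGetLinksResponse = {
  request_id: "req_123",   // previously the only handle for tracing the request in BigQuery
  campaign_id: "camp_456", // newly included in the response (name assumed)
  links: [{ url: "https://partner.example.com/product" }],
};
```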

Finally, in an attempt to figure out what to name the project, I searched the web for similar tools. As it turns out, Google has a similar feature for Android devices that it calls Remote Debugging. As a strong believer in the value of consistency and familiarity, I dubbed our project the Remote Debugger.

Design, Test, & Iterate

Wireframes Review

Early sketches used for the Wireframes Review.

I kicked off the review by briefly discussing the goals, non-goals, and user stories for the project. Then, I walked through the prototype screen by screen, taking questions along the way. Using these obviously low-fidelity sketches for the review helped keep the feedback focused on the UX. Happily, all stakeholders were on board with the overall direction. To close the loop, I compiled a list of action items based on the feedback from the review, including the following:

  • Summary views only for two request types: get-button and get-links contained the vast majority of the relevant information. Each Summary view required custom design and development work, so it wouldn’t be worth the investment for other types of requests.
  • Clarify whether a device is iOS or Android: Allowing users to specify whether a device is iOS or Android when saving a new device to the tool would allow us to display an icon for it in the UI, making it easier to identify the correct device at a glance.
  • Support downloads (v2): One suggestion from the review was to support downloading logs for debug sessions. In the interest of shipping incrementally, our PM and I agreed that the feature was out-of-scope for the MVP, but was worth adding in a future release.

User Test

A snapshot of a request from the user test

Following our classic user testing formula, I got Michael’s permission to record the session. Then, I asked him to run through the prototype as if he were performing an actual debug session. He was able to intuit what the tool was used for without an introduction, but there were a few key points at which he expressed concern, such as how to save the debug session. After the session, I compiled my notes and the recording into a list of action items.

A few of the most notable changes we made based on his feedback included:

  • Expandable and copiable requests: Though most values were easily available in the Summary view, viewing the JSON version of a request would still occasionally be necessary. Since the JSON view was much longer than the Summary view, I added options to expand the request card horizontally and vertically. In the same vein, Michael mentioned that he occasionally copied the code into a text editor, so I added a Copy button as well (sketched just after this list).
  • Email logs: Though I’d previously dismissed downloading (and subsequently re-uploading) logs as out-of-scope for the MVP, Michael’s concern over how to save the debug session indicated that it might be more important than I had previously thought. When I brought this observation back to the team, we agreed that it would be worth the time to leverage the email client we’d already built for our internal dashboard to support sending logs of debug sessions via email.
  • Alerts: Due to the static nature of the prototype, the state change when a debug session began wasn’t as obvious as I would have liked. Button’s UI Library includes Transient Alerts, which are notifications that float down from the top of the screen to indicate a response to a user’s action. Adding these made it clear when debugging began and ended.
The Transient Alert that indicates when a debug session has started.
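
As for the Copy button mentioned above, the action itself is nearly a one-liner against the standard browser Clipboard API; the request object is simply whatever the expanded card is rendering:

```typescript
// Sketch of the Copy action: serialize the captured request and place it on the
// clipboard via the standard Clipboard API.
async function copyRequestJson(request: unknown): Promise<void> {
  const pretty = JSON.stringify(request, null, 2); // the same JSON shown in the expanded card
  await navigator.clipboard.writeText(pretty);
}
```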

90% Design Review

The final major task was to nail down exactly which values to support for the Summary views. I created a table for each request type in a Paper doc and sat down with our PE team to fill in the blanks. Finally, we were ready to implement.

Implementation

One example is how new requests appear in the stream. In the first version of the tool, when new requests appeared, they’d bump the requests the user was currently viewing off-screen. Since the user didn’t precisely control the appearance of new requests and there were no animated transitions, the experience was jarring. To address this issue, I collapsed the requests by default and animated their appearance and expansion.
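
In code, the fix amounts to something like the following React sketch; the component and styling details are illustrative rather than our production implementation:

```tsx
import React, { useEffect, useState } from "react";

// Illustrative sketch: each new request card fades in and starts collapsed,
// expanding only when the user clicks it, so incoming requests don't shove
// the card the user is currently reading off-screen.
export function RequestCard({ summary, json }: { summary: string; json: string }) {
  const [mounted, setMounted] = useState(false);   // drives the fade-in on first render
  const [expanded, setExpanded] = useState(false); // drives the expand/collapse animation

  useEffect(() => {
    setMounted(true); // flip after mount so the opacity transition runs
  }, []);

  return (
    <div
      onClick={() => setExpanded((e) => !e)}
      style={{
        opacity: mounted ? 1 : 0,
        maxHeight: expanded ? 600 : 48, // collapsed cards show only the summary row
        overflow: "hidden",
        transition: "opacity 200ms ease, max-height 200ms ease",
      }}
    >
      <div>{summary}</div>
      {expanded && <pre>{json}</pre>}
    </div>
  );
}
```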

An example of the fade-in and expand animations. (The content of the requests has been omitted.)

Once the changes were in, our PM wrote an internal-facing User Guide and coordinated an in-person training with our PE team. At long last, we formally launched v1 of the Remote Debugger.

Impact

Number of debug sessions per day from Mar 1 to Apr 3, 2018. (Dips correspond to weekends.)

We’ve stayed true to the spirit of “ship and iterate” and shipped a few improvements based on requests from the PE team. One example is the ability to review previous debug sessions by uploading JSON logs, so issues can be investigated even when they can’t be reproduced live.
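
Loading a saved session is conceptually as simple as parsing the uploaded file; the session shape below is an assumption for illustration, not our actual log format:

```typescript
// Rough sketch of replaying a saved debug session from an uploaded JSON log file.
interface DebugSession {
  startedAt: string;
  requests: Array<{ path: string; requestBody: unknown; responseBody: unknown }>;
}

async function loadSessionFromFile(file: File): Promise<DebugSession> {
  const text = await file.text(); // File.text() is part of the standard File API
  return JSON.parse(text) as DebugSession;
}
```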

The tool’s user base has also expanded considerably. Based on a suggestion from our Head of Partner Engineering, we also trained our Partner Success team to use Remote Debugger to provide helpful context when filing issues with integrations. Finally, all employees at Button use the tool to record logs of test purchases as part of our new QA program, Test Your Market, which helps ensure the continued health of our existing partnerships.

Appreciation

Written by

Grace Kwan

Interaction / Software Designer @ IDEO Tokyo

Grace Kwan is a UX Engineer who specializes in designing and developing for the web. She currently works at Button, a mobile commerce startup in NYC. When she’s AFK, you can find her in the kitchen thinking up recipes for her food blog, Grey & Grapes.