Remote Debugger

Grace Kwan
Apr 14, 2018 · 10 min read

Designing a user-friendly interface for testing mobile integrations.

The Problem

Unlike the web, where sophisticated debugging tools are built into the browser, debugging on mobile is a challenge. Prior to this project, Button’s Partner Engineering team relied on a third-party tool called Charles Proxy to inspect HTTP requests, which required tiresome configuration and presented as much irrelevant information as relevant.

The Solution

I designed Remote Debugger, an internal tool that allows Button’s Partner Engineers to easily test integrations by pulling out key information from test sessions into a clear, human-readable UI.

A demo request in the Remote Debugger UI.

The Team

I served as the Design Lead for the project, reviewed the code of the full-stack engineer leading the frontend implementation, and coordinated the rollout to our Partner Engineering team. The rest of the team consisted of a Product Manager and Tech Lead.

Needfinding

My team, Insights & Controls, is named for our dual missions of (1) providing our partner-facing teams a window into the performance of our products and (2) enabling them to launch new integrations as effortlessly as possible.

To that end, the idea for this project stemmed from our Partner Engineering (PE) team’s dissatisfaction with one of their tools, Charles Proxy. Since using Charles requires custom proxy settings, our Partner Engineers were wasting a considerable amount of time adjusting settings each time they switched test devices. Worse, much of the information displayed in the Charles UI was irrelevant to what they were trying to debug.

Configuring settings in Charles Proxy–not the most intuitive experience.

Our team’s Tech Lead had a hunch that since the requests the PE team was inspecting were to our own APIs, those requests could instead be captured by one of our own internal backend services. We could then display these requests in a web interface in Mission Control, our partner management dashboard. Our PM proposed that we display key information from these requests in a variable watch panel, a common feature of debuggers in IDEs that displays the most recent value of variables the user has chosen to monitor.
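
To make the idea concrete, here is a rough sketch of how that kind of capture might work, assuming an Express-style API service and an in-memory session store. The names below are illustrative stand-ins, not Button’s actual services.

```typescript
import express, { Request, Response, NextFunction } from "express";

// Hypothetical record of a captured request. In practice, the captured data
// would live in an internal backend service rather than in the API process.
interface CapturedRequest {
  deviceId: string;
  path: string; // e.g. "/v1/get-button" or "/v1/get-links"
  requestBody: unknown;
  responseBody?: unknown;
  capturedAt: Date;
}

// Starting a debug session elsewhere would call activeSessions.set(deviceId, []).
const activeSessions = new Map<string, CapturedRequest[]>();

// Middleware that tees requests from devices with an active debug session into
// the store, without changing the normal request/response flow.
function captureForDebugging(req: Request, res: Response, next: NextFunction) {
  const deviceId = req.header("X-Device-Id");
  if (!deviceId || !activeSessions.has(deviceId)) return next();

  const captured: CapturedRequest = {
    deviceId,
    path: req.path,
    requestBody: req.body,
    capturedAt: new Date(),
  };

  // Wrap res.json so the response body is captured alongside the request.
  const originalJson = res.json.bind(res);
  res.json = (body?: unknown) => {
    captured.responseBody = body;
    activeSessions.get(deviceId)!.push(captured);
    return originalJson(body);
  };

  next();
}

const app = express();
app.use(express.json());
app.use(captureForDebugging);
```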

This sounded compelling, but rather than jump straight into wireframes, I wanted to confirm that (1) this project was the best use of the team’s time, and (2) our tech lead’s proposed solution addressed the PE team’s actual pain points with the tool. To do so, I sat down with Chris, our stakeholder on the PE team, to chat about his debugging workflows.

User Research

My goal for the session was to walk out with answers to the following questions:

What are the current use cases for Charles Proxy?
What are the team’s pain points with Charles?
What information is the team looking for in the HTTP requests?

To begin, I asked Chris to walk me through an actual test case. He adjusted the proxy settings on his personal phone, then opened a test build of a partner app and walked through the purchase flow for a product. At each juncture, he explained which values he was looking for and what they meant. At the end of the session, I reviewed my notes and made the following key observations:

  1. For the most part, only two request types matter. The vast majority of the information Chris was looking for could be found in just two API requests: get-button and get-links. With a few exceptions, the other requests displayed in Charles were irrelevant.
  2. BigQuery is expensive. A key piece of information in each debug session was the request ID. This enabled the tester to look up the request in BigQuery, our data warehouse, to determine if our system returned the correct information. Since each query costs money to execute, this process was both time-consuming and literally expensive.
  3. Without prior knowledge, the requests are unintelligible. Interpreting the request stream would be impossible for anyone not already familiar with our data models, since the variable names in the requests often had no relationship to the terms we used around the office. The relevant information was also scattered throughout the request and response, forcing the tester to jump back and forth.

In an effort to break these observations down into actionable pieces, I translated them into user stories. A few of those stories were:

As a Partner Engineer…

  • I can inspect a relevant subset of requests from my mobile device to our APIs in a web UI, so I don’t have to spend time configuring proxy settings.
  • I can see if a get-button or get-links request was handled correctly in Button’s backend so I don’t have to use BigQuery.
  • I can view key variables from get-button and get-links requests in a user-friendly format, so I don’t have to dig through raw JSON.

Having identified my goals for the project, I then sat down with our Tech Lead to understand our technical constraints. He explained that one of our internal services could provide the request details and walked me through the existing documentation. He also agreed to add the campaign ID to the response for get-button and get-links requests, which–even before building the rest of the UI–was a huge win, as our PE team was able to significantly reduce the time and money spent on BigQuery.

Finally, in an attempt to figure out what to name the project, I searched the web for similar tools. As it turns out, Google has a similar feature for Android devices that it calls Remote Debugging. As a strong believer in the value of consistency and familiarity, I coined our project the Remote Debugger.

Design, Test, & Iterate

Based on my observation of Chris’s test session, I realized that the variable watch panel originally proposed by our tech lead wasn’t the right mental model. Unlike a software debug session, in which the value of variables may change according to the particular line of code being executed, the values of interest in this case were not consistent from one request to another. Rather, each request could be viewed as a collection of useful values. What would accelerate the PE team’s workflow would be easy access to only the relevant values of the relevant requests. This way, users wouldn’t have to dig through the nitty-gritty of HTTP requests to find the information they needed. With this new mental model in hand, I sat down with a pencil and paper to sketch out some ideas.
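
To make that model concrete, here is a rough sketch of how a captured request might be flattened into the handful of labeled, human-readable values shown in a Summary view. The raw field names are hypothetical stand-ins, since this post doesn’t reproduce the real get-button schema.

```typescript
// A single row in a Summary view: a human-readable label paired with a value
// pulled out of the raw request or response.
interface SummaryField {
  label: string; // the term used around the office, not the raw variable name
  value: string;
}

// Hypothetical raw payload shape; the real get-button schema differs.
interface RawGetButtonRequest {
  btn_ref: string;
  pub_user_ref: string;
  request_id: string;
}

// Translate a raw get-button request into the handful of values a Partner
// Engineer actually cares about, in the order they would scan them.
function summarizeGetButton(raw: RawGetButtonRequest): SummaryField[] {
  return [
    { label: "Request ID", value: raw.request_id },
    { label: "Button ID", value: raw.btn_ref },
    { label: "Publisher User ID", value: raw.pub_user_ref },
  ];
}
```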

Wireframes Review

Once I had a direction I was happy with, I decided to hold a Wireframes Review with the project’s stakeholders (PM, Tech Lead, our PE representative, and the rest of the Design Team) to ensure I was on the right track before investing any time in visual design. I took photos of the sketches with my phone and turned them into a 2-screen InVision prototype, which became the basis for the review.

Early sketches used for the Wireframes Review

I kicked off the review by briefly discussing the goals, non-goals, and user stories for the project. Then, I walked through the prototype screen by screen, taking questions along the way. Using these obviously low-fidelity sketches for the review helped keep the feedback focused on the UX. Happily, all stakeholders were on board with the overall direction. To close the loop, I compiled a list of action items based on the feedback from the review, including the following:

  • Summary views for only two request types: get-button and get-links contained the vast majority of the relevant information. Since each Summary view required custom design and development work, the investment wouldn’t be worth it for other request types.
  • Clarify whether a device is iOS or Android: Allowing users to specify whether a device is iOS or Android when saving a new device to the tool would allow us to display an icon for it in the UI, making it easier to identify the correct device at a glance.
  • Support downloads (v2): One suggestion from the review was to support downloading logs for debug sessions. In the interest of shipping incrementally, our PM and I agreed that the feature was out-of-scope for the MVP, but was worth adding in a future release.

User Test

With all the stakeholders on board, the next step was to test the flow with actual users. I created a higher-fidelity version of the wireframes in Sketch, incorporating the feedback from the Wireframes Review. I then linked up the wireframes into an InVision prototype and ran a user test with Michael, a Technical Project Manager on our Partner Engineering team who hadn’t yet been involved with the project.

A snapshot of a request from the user test

Following our classic user testing formula, I got Michael’s permission to record the session. Then, I asked him to run through the prototype as if he were performing an actual debug session. He was able to intuit what the tool was used for without an introduction, but there were a few key points at which he expressed concern, such as how to save the debug session. After the session, I compiled my notes and the recording of the session into a list of action items.

A few of the most notable changes we made based on his feedback included:

  • Expandable and copiable requests: Though most values were easily available in the Summary view, viewing the JSON version of a request would still occasionally be necessary. Since the JSON view was much longer than the Summary view, I added options to expand the request card horizontally and vertically. In the same vein, Michael mentioned that he occasionally copied the code into a text editor, so I added a Copy button as well.
  • Email logs: Though I’d previously dismissed downloading (and subsequently re-uploading) logs as out-of-scope for the MVP, Michael’s concern over how to save the debug session indicated that it might be more important than I had previously thought. When I brought this observation back to the team, we agreed that it would be worth the time to leverage the email client we’d already built for our internal dashboard to support sending logs of debug sessions via email.
  • Alerts: Due to the static nature of the prototype, the state change when a debug session began wasn’t as obvious as I would have liked. Button’s UI Library includes Transient Alerts, which are notifications that float down from the top of the screen to indicate a response to a user’s action. Adding these made it clear when debugging began and ended.

The Transient Alert that indicates when a debug session has started.
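
As a small sketch of how the copy action and its confirmation might fit together, the snippet below uses the standard browser Clipboard API and a hypothetical showTransientAlert helper standing in for the component from Button’s UI Library.

```typescript
// Hypothetical stand-in for the Transient Alert component in Button's UI Library.
declare function showTransientAlert(message: string): void;

// Copy the raw JSON of a request card to the clipboard and confirm with an alert.
async function copyRequestJson(requestBody: unknown): Promise<void> {
  const json = JSON.stringify(requestBody, null, 2);
  try {
    await navigator.clipboard.writeText(json); // standard Clipboard API
    showTransientAlert("Request JSON copied to clipboard");
  } catch {
    showTransientAlert("Couldn't copy to clipboard; please copy manually");
  }
}
```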

90% Design Review

After implementing the changes from the user test, I held a 90% Design Review with the same stakeholders. As is ideal for a review near the end of the design process, the designs received signoff from all parties, and I left with only a few minor action items.

The final major task was to nail down exactly which values to support for the Summary views. I created a table for each request type in a Paper doc and sat down with our PE team to fill in the blanks. Finally, we were ready to implement.

Implementation

One learning from previous projects that we put into play with the Remote Debugger was the value of incremental rollout. Since our users were internal and highly technical, we had a lot to gain and little to lose from releasing the tool before it was perfectly polished. Launching early got the tool into our PE team’s hands sooner, making their lives easier and giving us the opportunity for early feedback.

One example is how new requests appear in the stream. In the first version of the tool, when new requests appeared, they’d bump the requests the user was currently viewing off-screen. Since the user didn’t precisely control the appearance of new requests and there were no animated transitions, the experience was jarring. To address this issue, I collapsed the requests by default and animated their appearance and expansion.

An example of the fade-in and expand animations. (The content of the requests has been omitted.)
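
Here is a minimal sketch of the collapsed-by-default behavior. The class names, transition details, and plain-DOM approach are illustrative rather than a description of Mission Control’s actual frontend.

```typescript
// Render a newly captured request as a collapsed card and animate it in, so it
// doesn't abruptly push the card the user is reading off-screen. The CSS class
// names are assumed to define opacity/height transitions for the fade-in.
function appendCollapsedRequestCard(
  stream: HTMLElement,
  title: string,
  body: HTMLElement
): void {
  const card = document.createElement("div");
  card.className = "request-card request-card--entering"; // starts transparent

  const header = document.createElement("button");
  header.textContent = title;
  card.appendChild(header);

  body.hidden = true; // collapsed by default
  card.appendChild(body);

  // Expand or collapse the card body on click. (The real tool animates the
  // expansion; toggling `hidden` keeps this sketch simple.)
  header.addEventListener("click", () => {
    body.hidden = !body.hidden;
  });

  stream.appendChild(card);

  // Force a layout pass so the browser registers the initial styles, then drop
  // the class so the CSS transition fades the card in instead of popping it in.
  card.getBoundingClientRect();
  card.classList.remove("request-card--entering");
}
```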

Once the changes were in, our PM wrote an internal-facing User Guide and coordinated an in-person training with our PE team. At long last, we formally launched v1 of the Remote Debugger.

Impact

The Remote Debugger was even more impactful than we had hoped. Our PE team was incredibly excited about the improvements to their workflow, and the metrics bear this out. According to Google Analytics, we’ve seen an average of over 20 debug sessions per day–not bad for a tool built to serve a 5-person engineering team!

Number of debug sessions per day for the Mar 1 — Apr 3 2018 timeframe. (Dips correspond to weekends.)

We’ve stayed true to the spirit of “ship and iterate” and shipped a few improvements based on requests from the PE team. One example is the ability to view previous debug sessions by uploading JSON logs, even if the problem can’t be reproduced live.
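
Here is a rough sketch of how such a log upload could be read on the client; the log format shown is illustrative, not the tool’s actual schema.

```typescript
// Shape of an entry in an uploaded debug-session log (illustrative fields only).
interface LoggedRequest {
  path: string;
  requestBody: unknown;
  responseBody: unknown;
  capturedAt: string;
}

// Read a JSON log file chosen via an <input type="file"> and return the
// captured requests so the UI can render them like a live session.
async function loadSessionLog(file: File): Promise<LoggedRequest[]> {
  const text = await file.text(); // standard File API
  const parsed = JSON.parse(text);
  if (!Array.isArray(parsed)) {
    throw new Error("Expected the log to be a JSON array of requests");
  }
  return parsed as LoggedRequest[];
}
```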

The tool’s user base has also expanded considerably. Based on a suggestion from our Head of Partner Engineering, we also trained our Partner Success team to use Remote Debugger to provide helpful context when filing issues with integrations. Finally, all employees at Button use the tool to record logs of test purchases as part of our new QA program, Test Your Market, which helps ensure the continued health of our existing partnerships.

Appreciation

This project is a result of the combined efforts of the entire Insights & Controls team. Ian Halbwachs and Daniel McGrath led the frontend and backend implementation respectively. Our PM, Daniel Lee, wrote the user guide, coordinated the rollout of the tool, and held internal trainings. A huge shoutout also goes to everyone on the Partner Engineering team for their support and feedback throughout the process of building the Remote Debugger.
