On the Convergence of Design and Research

Tony Wang
Zensors MHCI Capstone 2018
5 min read · Jun 8, 2018
Scholar John Harthan’s introduction to the book of hours, a type of medieval manuscript documenting the number of work hours it takes to develop a novel computer vision product’s MVP.

As any development team knows, one of the most frightening aspects of a fast-moving project is the potential for it to go off the rails due to poor sprint planning. This is definitely a worry for our team of students: How do we prioritize new information as it comes in each sprint? How do we balance the input of the various stakeholders in our project? What do we want to do as a team, and what do we need to do as a team, to accomplish our goal?

We’ve been hacking away at the beginnings of a front end for Zensors for the past week or so, and our goal for this week has been to implement a functioning prototype of the core question authoring part of the interface. Question authoring is how users of the system will write their own questions so that Zensors can begin collecting data for them. It needs to be a smooth process because it’s part of the charm of the entire project. There are a multitude of possible designs to consider, dozens of ways to conduct user testing to validate them, and a plethora of possible technical issues that could prevent us from accomplishing our goal.
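For context, a Zensors question pairs a natural-language prompt with a camera feed. As a rough sketch of the kind of data an authoring flow like ours needs to capture (the field names and types below are illustrative assumptions on my part, not the actual Zensors schema):

```typescript
// Hypothetical sketch of the data a question-authoring flow might capture.
// Field names and types are illustrative assumptions, not the Zensors schema.
interface ZensorsQuestion {
  prompt: string;          // the natural-language question a user writes
  cameraId: string;        // which camera feed the question applies to
  region?: [number, number, number, number]; // optional crop: x, y, width, height
  intervalSeconds: number; // how often an answer should be collected
  active: boolean;         // whether data collection is currently running
}

const example: ZensorsQuestion = {
  prompt: "Is the conference room occupied?",
  cameraId: "cam-lobby-01",
  intervalSeconds: 300,
  active: true,
};

console.log(example.prompt);
```

Even in a toy sketch like this, it’s clear there are several decisions a user has to make in one flow, which is why getting the authoring experience right matters so much.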

Somehow we’re still on track. It wasn’t easy, and here’s how we handled it.

Design Iteration

One of the coolest parts of working on Zensors is being part of the Future Interfaces Group at CMU. Working in close proximity to the researchers who originally developed the technology, our team has the unparalleled experience of being embedded within the larger development team bringing Zensors into the world.

Yet with great access comes great responsibility. As a team, we’re constantly in dialogue with a number of stakeholders who are invested in the success of Zensors. There are a ton of ideas being thrown around that could have a major impact on the outcome of Zensors as a product, and these ideas come not only from our team of MHCI students but also from the other people working on the project. Often, these interactions take the form of design feedback sessions, project sync-up meetings, or even one-on-one conversations.

What it feels like to be designing, testing, and developing at the same time. The level of complexity is defined as 10 + n⁹⁹⁹ for n people (both internal and external) involved in a project based on the axioms of string theory.

You probably see where I’m going here: there’s a ton of information floating around, and we’re designing at a frantic pace. How can we make sure that important feedback and requests aren’t lost, while keeping our sprints orderly enough that each of our processes stays on track?

Solution: design backlog.

We started recording important ideas that pop up mid-sprint and should be integrated into future sprints. By assigning a priority to each of these ideas, we can decide whether the sprint plan needs to be adjusted to accommodate new work. This tactic also eases the tension between design and user testing: even if we hypothesize that a new design B would improve on design A, testing might simply show that design A doesn’t suffer from the problems we thought it would. With a backlog, we can keep listening to our clients’ concerns without introducing risk to our progress mid-sprint.
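To make the tactic concrete, here’s a minimal sketch of how such a backlog might be triaged. The entry shape, the priority levels, and the rule that only top-priority items interrupt a sprint are illustrative choices for this example, not a prescribed process:

```typescript
// A minimal sketch of a design backlog entry and a triage rule.
// The shape and priority levels are illustrative, not a prescribed format.
type Priority = "must-do" | "should-do" | "nice-to-have";

interface BacklogEntry {
  idea: string;       // the design idea or piece of feedback
  source: string;     // who raised it: a teammate, client, or researcher
  priority: Priority; // assigned when the idea is recorded
}

// Only top-priority items justify adjusting the current sprint plan;
// everything else waits for the next planning session.
function triage(backlog: BacklogEntry[]): BacklogEntry[] {
  return backlog.filter((entry) => entry.priority === "must-do");
}

const backlog: BacklogEntry[] = [
  { idea: "Clarify home screen action labels", source: "user test", priority: "must-do" },
  { idea: "Explore an alternative question layout", source: "client", priority: "nice-to-have" },
];

console.log(triage(backlog)); // only the terminology fix interrupts the sprint
```

The key design choice is that recording an idea and acting on it are separate steps, which is exactly what keeps mid-sprint risk low.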

Francis Bacon giving the stink eye to UX practitioners who talk in abstraction instead of conducting user testing.

External User Research

This week saw our first round of testing with external users. We conducted think-aloud testing with seven participants recruited via Craigslist from coworking spaces. Each test included a brief warm-up interview, a scenario to set context, three tasks to evaluate the success of our current design, and a short debrief covering overall impressions of Zensors. All sessions were conducted remotely over Google Meet, with observers taking notes throughout each session or capturing it with a screen recording.

We developed a scenario in which testers were asked to use the prototype as if they were required to adopt the new technology at work. Since most of our participants were directly involved in facilities management, the scenario was never met with any confusion. The tasks we chose to test covered important features of the Zensors MVP: question authoring, managing questions, and starting or stopping data collection.

A screenshot from a test recording: the user shares their screen with us as we watch where they click and listen to them think aloud. Names have been blurred for privacy.

After conducting our tests, we synthesized by consolidating notes and observations from each session and identifying themes shared across participants. Broadly summarized, our results showed three important findings:

  1. In general, the flow for authoring a question was easily understood by all of our testers, suggesting that the current design is accessible even to completely new users.
  2. Our terminology needs to be more consistent. This was particularly noticeable on the home screen, where testers were led astray by the meaning of certain actions.
  3. Features can be made easier to discover by improving the design of signifiers and/or the visual hierarchy.

By validating our current interface, we’ve gained confidence that what we have is at least functional. There are a number of additional areas to improve, but since time is of the essence, we’re taking this feedback and focusing on the aspects that break the entire system. Any questions remaining after the user tests can also go into the backlog for future validation.

Development

On the development front, we’ve begun fully implementing the question authoring UI that has been in the works since the previous sprint. The features we tested and will be refining will be built on top of this technical base. There’s a lot of juicy stuff that went into amping up our development process, but I’ll leave that for our next Medium post.

Convergence (?)

Next week, we’ll finally see these various tracks converge into a further refined question authoring experience. Our user testing data has also shown that there’s plenty of work to be done fleshing out other important aspects of the Zensors interface. Our team is looking forward to delivering an MVP within the next few weeks, so stay tuned as we talk a bit more about our journey as the front-end team for Zensors.
