The coordination of development, design, and research
On the development front
Part of our commitment to the clients for the MVP pilot deployment is to develop a functional frontend UI informed by our UX research. We entered a full-on development phase this sprint and reached our first milestone: we demoed the revamped question creation interface at the Zensors all-hands meeting earlier this week. In this article, I want to cover some of the approaches, considerations, and challenges on the development front.
Tech Stack
We decided to stick with the tech stack we inherited from the current version of the Zensors UI. It uses React in TypeScript as the frontend framework, implementing the Flux architecture, and provides a lightweight GraphQL client to communicate with the Zensors backend server. To visualize the data Zensors generates, we plan to use Palantir's Plottable data visualization library, which is built on top of the widely used D3 library. Inheriting the current Zensors tech stack lets us reuse much of the application logic and many of the UI components developed by the engineering team, and it makes it easier for us to collaborate with them during development.
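To give a sense of how Plottable fits in, here is a minimal sketch of a line chart rendering hypothetical sensor readings. The `Reading` shape, the sample data, and the `svg#sensor-chart` selector are illustrative assumptions rather than actual Zensors code; only the Plottable calls reflect the library's public API.

```typescript
import * as Plottable from "plottable";

// Hypothetical shape of a Zensors answer stream; illustrative only.
interface Reading {
  time: Date;
  value: number;
}

const data: Reading[] = [
  { time: new Date("2019-06-03T09:00:00"), value: 3 },
  { time: new Date("2019-06-03T10:00:00"), value: 7 },
  { time: new Date("2019-06-03T11:00:00"), value: 5 },
];

// Plottable scales and axes, built on D3 under the hood.
const xScale = new Plottable.Scales.Time();
const yScale = new Plottable.Scales.Linear();
const xAxis = new Plottable.Axes.Time(xScale, "bottom");
const yAxis = new Plottable.Axes.Numeric(yScale, "left");

// A line plot wired to the dataset via accessor functions.
const plot = new Plottable.Plots.Line<Date>()
  .x((d: Reading) => d.time, xScale)
  .y((d: Reading) => d.value, yScale)
  .addDataset(new Plottable.Dataset(data));

// Lay out the axes and plot in a table, then render into an
// existing <svg id="sensor-chart"> element on the page.
new Plottable.Components.Table([
  [yAxis, plot],
  [null, xAxis],
]).renderTo("svg#sensor-chart");
```

In the real application, the readings would of course arrive through the GraphQL client rather than a hard-coded array.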
Challenges
Since we are inheriting and building onto Zensors' existing code base, we had to thoroughly understand the structure and data flow of the existing application before adding our new designs. This added complexity just as we were trying to ramp up the pace of development. We started slowly, but quickly picked up the pace as we got a better grasp of both the tech stack and the code base. Working alongside the Zensors engineering team is a big advantage for our development workflow: we can reach out to them with questions and with requests for backend updates to support the features we designed.
Another challenge was restyling the components used in the existing UI. Our new interface follows the design specifications our team established last semester to achieve an approachable and engaging experience, but we ran into speed bumps as we tried to restyle the existing components to match them. The problem is that the existing UI uses components from the BlueprintJS toolkit. Although BlueprintJS offers a wide range of components and was helpful for quickly sketching out an application, it couldn't accommodate the stylistic changes and interactions we wanted in the interface. After two days of wrestling with Blueprint components to make them comply with our style, we decided to rewrite the components we need following our design guidelines and phase out Blueprint's components as we go. This gives us more control and flexibility in both visual style and interaction patterns as we move forward with development.
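To illustrate the direction (not the actual Zensors component), a Blueprint replacement might look something like the sketch below: a plain React component whose class names we own, so restyling no longer means fighting Blueprint's built-in rules. The `ZenButton` name, props, and classes are all hypothetical.

```tsx
import * as React from "react";

// Hypothetical props for an in-house button that replaces Blueprint's
// <Button>; names and styles are illustrative, not real Zensors code.
interface ZenButtonProps {
  label: string;
  variant?: "primary" | "secondary";
  onClick: () => void;
}

// Styling lives entirely in our own classes (e.g. "zen-button--primary"),
// so visual tweaks no longer fight Blueprint's default stylesheet.
export const ZenButton: React.FC<ZenButtonProps> = ({
  label,
  variant = "primary",
  onClick,
}) => (
  <button className={`zen-button zen-button--${variant}`} onClick={onClick}>
    {label}
  </button>
);
```

Phasing Blueprint out one component at a time like this lets us keep shipping features while our own design system grows underneath them.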
Approach
Our current goal is to deliver a fully functional frontend interface for the MVP by the end of June. To make sure we meet this goal, we have broken the interface down into a list of core features, each of which is self-contained in terms of development. And to fit our development cycle into both our sprint plan and the client's sprint plan, which are misaligned by one week, we set weekly milestones of one completed core feature per week. Weekly milestones also make it easier for us to coordinate development with design and research, so each track can prioritize its tasks based on which feature comes next in the pipeline.
Earlier this week, we completed our first milestone, the question authoring flow, which we demoed at the Zensors all-hands meeting. This week we are working on the question management screen, and we will tackle the dashboard screen afterward.
Reimagining the Dashboard
In our previous round of external user research, we found that many users were confused by our dashboard wireframe. To fix this, we decided to revisit our approach to the dashboard screen and start over from the ideation phase.
Empathy mapping
We started with an empathy mapping exercise to better understand the requirements for the dashboard and to ground the design in the user needs we uncovered in our foundational research. We broke the mapping into three sections: thinking (what users are thinking as they approach this part of the interface), doing (the actions users can expect to take), and feeling (how a user might feel as they engage with this component). As part of the exercise, we collaboratively filled in what each of us envisions users experiencing when they use the dashboard screen. From this mapping, we generated a list of requirements for the dashboard:
- Obvious — there should be little room for ambiguity; an interface that clearly presents information and the actions users can take will set them up for continued success as they keep using Zensors.
- Interactive — one of the most compelling aspects of Zensors is the dynamic nature of its data acquisition and analysis. A tool that enables users to actively learn more about their own environments needs to encourage interaction.
- Accommodating and friendly — while Zensors has the potential to give users great power, it can also be overwhelming to those unfamiliar with computer vision, machine learning, or even data analytics.
- Respectful — we've heard time and again from target users that privacy is a major concern. Given the evolving nature of data security, we've come across a healthy level of distrust when it comes to cameras in common areas. Determining how best to address this desire for privacy while still delivering value could be a major market differentiator for Zensors as a brand.
Ideation sketches
With these requirements in mind, we created sketches of the dashboard, each meeting one or more of the requirements. From the sketches, we generated two concept screens, one more traditional and one taking a different approach to information presentation, and reached out to our cohort and faculty advisors for expert evaluation.
Expert evaluations
We recruited two MHCI students and two faculty advisors to note key features and their assumptions about each design's functionality. From there, we asked them to relate these to the perceived value and overall success of the design.
Our key findings include:
- Participants generally understood the more traditional design more easily, as it follows a known pattern and condenses information and actions into an easy-to-digest layout
- The timeline layout provides a more tactical view for day-to-day work, but runs the risk of being too noisy without the ability to filter out unnecessary events
- How-to directions are helpful, but only for the first few times a user logs in, so a way to minimize them or provide them only contextually would be preferable
- Live visualizations in the more traditional view are appreciated, but they provide longer-term strategic value rather than immediate operational value
Moving forward, the MVP will follow the more traditional layout, as it provides immediate understanding and value to users. The timeline concept remains compelling enough to develop in the future, once increased backend support can make it valuable for users.
Coming up
Next week we will conduct the second round of external user research. In this round, we want to get a better understanding of users' mental models in addition to validating their workflows. We have revamped our research protocol to include a broader range of research methods. Stay tuned for next week's update!