Story Map Your Way to a Summer Backlog
Starting our next phase of work with a clear idea of what exactly to build and how to validate our assumptions
Continuing our planning discussions from the end of the first phase of work, we reviewed our overall plan for this phase in terms of tracks of work, end deliverables, and team member roles. Essentially we’ll have tracks for research, design, and development running throughout the summer, with attention paid to setting up and managing a pilot.
We plan to build an iterative cycle into our work in each of these tracks so that research informs everything that we do, design provides direction to what we create, and development executes the vision of what we’re trying to provide for users. We’ve mapped out our process below based on a traditional lean methodology.
Story Map Work
Process in mind and timeline set, it was time to define what should go into our first version of the product. One way to do this is to think through the user needs and consider the sequence of events that will need to transpire for them to have a successful experience with Zensors. The exercise we opted to conduct is called story mapping, a classic agile method that breaks an experience into hierarchical pieces that can shape the product roadmap, guide prioritization, and fill the backlog.
A product story map consists of three key levels of hierarchy:
- Actions are what users are trying to accomplish in order to achieve a larger goal. They typically represent larger chunks of activity, but each is also one of a series of actions within a user’s day or week. For those familiar with agile methodology, these typically translate to epics.
- Tasks are a grouping of events that need to happen in order for an action to be completed.
- Steps are any other events that need to take place in order for a task to be completed. Depending on how a workflow is set up, tasks and steps could translate to stories.
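As a concrete sketch of this hierarchy, the three levels can be modeled as nested structures that flatten into a backlog. The type names and sample entries below are our own illustration, not part of any story-mapping tool:

```typescript
// Illustrative sketch of the story map hierarchy described above.
// Actions (epics) contain tasks, which break down into steps (stories).
interface Step {
  description: string;
}

interface Task {
  name: string;
  steps: Step[];
}

interface Action {
  name: string; // maps to an epic in agile terms
  tasks: Task[];
}

// Flatten a story map into a backlog: one entry per step,
// ordered by action, then task, then step.
function toBacklog(actions: Action[]): string[] {
  return actions.flatMap((action) =>
    action.tasks.flatMap((task) =>
      task.steps.map((step) => `${action.name} > ${task.name} > ${step.description}`)
    )
  );
}

const map: Action[] = [
  {
    name: "Create a question",
    tasks: [
      {
        name: "Select a camera",
        steps: [
          { description: "Browse camera feeds" },
          { description: "Pick a region of interest" },
        ],
      },
      { name: "Author the question", steps: [{ description: "Type the question text" }] },
    ],
  },
];

console.log(toBacklog(map));
```

Walking the map top-to-bottom like this is what lets the same artifact serve as both a roadmap (read across the actions) and a backlog (read down the steps).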
In our exercise, we identified the high-level areas that we’d need to focus on for the MVP: setup and management, question creation and monitoring, and insight access (either viewing in Zensors or exporting to a different tool). From here, we moved into some initial wireframe sketching.
One of the key points of discussion addressed the need for a hierarchical relationship between cameras, questions, and answers. A user setting up a question may want to apply that question to multiple cameras or vice versa. For example, a community manager at a coworking space may want to monitor the utilization of a room of hot seats (unassigned seats for subscribers). To do this, it’s likely that they will have to create a question and then leverage multiple cameras trained on different parts of the room to answer it. In this scenario, in the user’s mind, there is a single question, multiple cameras, and a single answer, like the rough diagram on the left in the above photo. However, the way that the Zensors system is designed, each camera has its own iteration of the question and is collecting data separately that will then need to be aggregated to respond to the original question (much like the diagram on the far right). This is something that the system can handle, but design will need to assist in creating a workflow that supports users well.
Additionally, there could be a use case in which a user might have a single camera that they use for multiple questions. For example, that same community manager may have a single camera in the shared kitchen positioned so that it can monitor different supplies, utilization, and other activity. In that case the hierarchy described above is a little simpler for users, as the system sees their questions individually and each simply references the shared camera. With some agreement on this approach, we next wanted to put together a rough version of what this might look like for users.
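The relationship we settled on can be sketched as a simple data model: one logical question fans out into per-camera instances, whose answers are aggregated back into the single answer the user expects. Everything below (the type names, the sum-based aggregation) is our illustrative assumption, not the actual Zensors schema:

```typescript
// Illustrative sketch, not the real Zensors schema: one logical question
// fans out into per-camera instances, and their answers are aggregated
// back into the single answer the user has in mind.
interface QuestionInstance {
  cameraId: string;
  latestAnswer: number; // e.g. people counted in this camera's region
}

interface Question {
  text: string;
  instances: QuestionInstance[]; // one per camera trained on the space
}

// The right aggregation depends on the question; for a count-style
// question like "How many hot seats are occupied?", summing the
// per-camera counts is a reasonable default.
function aggregateAnswer(q: Question): number {
  return q.instances.reduce((total, i) => total + i.latestAnswer, 0);
}

const hotSeats: Question = {
  text: "How many hot seats are occupied?",
  instances: [
    { cameraId: "cam-north", latestAnswer: 4 },
    { cameraId: "cam-south", latestAnswer: 3 },
  ],
};

console.log(aggregateAnswer(hotSeats)); // 7
```

The single-camera, multiple-questions case falls out of the same model: several `Question` records each holding one instance that points at the same `cameraId`.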
Wireframe Sketching
Our sketches focused on the areas above and, more specifically, drilled down into the core screens that would make up the experience: navigation, visualization/dashboard, question list, question detail, new question/question creation, and system management.
Navigation
For this component, we’re envisioning a straightforward horizontal global navigation with just a few navigation elements for data visualization, questions, and system and account management.
Visualization/dashboard
This screen encompasses the insight access functionality for the system. Users will be able to do some relatively straightforward visualization of their data and export raw data files that they can then import into other tools like Tableau or Microsoft Power BI for any additional needs they may have. This screen is also where users will be able to aggregate camera data for a single question.
Key features:
- Widget gallery
- Visualization creation
- Visualizations
- Question aggregation
Question list
This screen lists all questions, giving users the ability to quickly see what’s running as well as key information about each. Considerations here are ways to differentiate between questions with similar names and provide users with relevant filters. From here, users can access detailed information on any question in the form of the question detail screen. This is also an access point for users to duplicate questions (or add additional instances of a question for a new camera).
Key features:
- List view of questions
- Grid view of questions
- List/Grid toggle
- Filter and sort
- Question tiles with relevant information displayed
- Duplicate question entry point
- Question detail entry point
Question detail
This screen provides both information about a specific question and a look at the raw data associated with it. Considerations here revolve around anticipating what users will need to see immediately versus what can be disclosed progressively, and around how best to position a question instance as one piece of an overall question, since this view is designed to give a closer look at how a single camera is answering a question that may have additional camera feeds associated with it.
Key features:
- Complete question and camera identification information
- Question specifications (creation date, duration, sample rate, location)
- Edit specifications
- Data visualization of responses over time
- Raw data feed visualization
New question
This screen provides users with the ability to create new questions in the system. Major considerations here are the sequence of steps and how best to message what each parameter means in terms of data collection and insight development. For this stage in our process, we’ve opted to create a workflow that prompts users to first select a camera and a region of interest, and then author a question and set its parameters. Once all this is in place, the user will have the option of duplicating the question across multiple cameras.
Key features:
- Progress indicator
- Camera feed selector
- Region selection
- Question authoring (text field, recommendations, validation)
- Parameter selectors (sample rate, duration, notifications)
- Sample data and answer submission for training
- Review and confirmation
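The sequence above can be sketched as a minimal wizard state machine. The step names are our own labels for the flow described, not final UI copy:

```typescript
// Minimal sketch of the question-creation wizard order described above.
// Step names are our own labels, not final UI copy.
const creationSteps = [
  "select-camera",
  "select-region",
  "author-question",
  "set-parameters",
  "train-with-samples",
  "review-and-confirm",
] as const;

type CreationStep = (typeof creationSteps)[number];

// Advance to the next step, staying on the last one when finished;
// the progress indicator can be driven off the step's index.
function nextStep(current: CreationStep): CreationStep {
  const i = creationSteps.indexOf(current);
  return creationSteps[Math.min(i + 1, creationSteps.length - 1)];
}

console.log(nextStep("select-camera")); // "select-region"
```

Encoding the order in one array keeps the progress indicator, navigation buttons, and any validation gating in sync if the sequence changes during testing.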
System management or control center
This series of screens is where a user can monitor and manage both their own account and system-wide settings. At this stage, we’d like to provide users with easy access to functionality that will allow them to assess value for their business and reduce friction in their own workflows.
Key features:
- Account settings (profile, billing, account type, contact information)
- System settings or control panel (question running, cameras online, users and permissions)
Research Planning
Our work this week on what we can do to meet our users’ needs sets us up well to start validating the key assumptions we’ve made so far. Essentially we have three different avenues that we’ll leverage to gather feedback and insights into our work:
Internal
For this type, we’re looking to conduct fast, directional research with geographically convenient participants.
Participants: Team, lab members, MHCI students and staff
Potential methods: Heuristic evaluation, speed dating, think alouds
Outcomes: Immediate understanding of feature clarity, usability, and positioning; resolve major issues before external user testing
External
Here, we’ll work with representatives of our target users (coworking space staff) to validate our assumptions that require relevant context and experience.
Participants: Coworking space or communal space managers, staff, and operations and product-focused employees
Potential methods: Usability testing (moderated and unmoderated), think alouds, speed dating
Outcomes: Assess individual feature success (usability, sequence position, effectiveness); further understand market fit and user needs
Pilot
Finally, with this channel of research, we’re looking to validate our assumptions that require longer term context and understanding with more personalized features and data.
Participants: Specific coworking space employees
Potential methods: Feedback surveys and interviews, support tickets and messages
Outcomes: Assess overall MVP success (value, workflow efficiency); further understand market fit and user needs
Pilot logistics are being worked out, recruiting is underway for our rounds of user testing, and we’ll soon have a protocol ready as well.
Architecture Planning
In preparation for actually building this out into the product, we’re also looking into the existing Zensors tech stack and developing a proposed approach to how best to build on top of it. Currently, we’re looking into how best to work with GraphQL and which visualization libraries or frameworks will provide the level of polish we’d like within the amount of bandwidth we should ultimately dedicate to this feature, given the product direction. Zensors is a data- and insight-gathering tool, but isn’t meant to be a robust visualization tool.
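To give a flavor of the kind of call the frontend might make, here is a hypothetical GraphQL query for a question’s recent answers. Every field name here is invented for illustration; the actual Zensors schema may look quite different:

```typescript
// Hypothetical GraphQL query for a question's recent per-camera answers.
// All field and type names are invented for illustration; the actual
// Zensors schema may differ.
const recentAnswersQuery = `
  query RecentAnswers($questionId: ID!, $limit: Int!) {
    question(id: $questionId) {
      text
      instances {
        cameraId
        answers(last: $limit) {
          value
          timestamp
        }
      }
    }
  }
`;

// A GraphQL endpoint accepts the query and its variables as a POST body:
const requestBody = JSON.stringify({
  query: recentAnswersQuery,
  variables: { questionId: "q-123", limit: 10 },
});
```

One appeal of GraphQL here is that a dashboard widget can request exactly the fields it renders, which matters if visualization stays deliberately lightweight.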
Coming up
Next week we’ll continue our planning for the summer and conduct some internal research to validate our sketches. Development will soon be underway as well.