Designing a Virtual Assistant for the National Park Service — Our Process, Part 2

“Hi! This is Sam.”

Hannah Koenig
Interaction-Driven Communication
12 min read · Oct 2, 2019

--

Team National Parks: amanda sánchez, Anuprita Ranade, Hannah Koenig, and Michelle Chou from Carnegie Mellon School of Design

Our Mission: design a virtual assistant for an organization that has not launched one. Part 1 of this project focused on mobile interactions between users and our virtual assistant. Part 2 expands on mobile by considering how our assistant will integrate with other components or digital products, such as kiosks and the web.

This post captures our process for Part 2. Read about our process for Part 1 here.

Our team comprises students in the Fall 2019 Interaction Design Studio at Carnegie Mellon University’s School of Design. We are not affiliated with the National Park Service, and any opinions expressed here are our own. Thank you to the Harper’s Ferry Center at NPS for giving us permission to use NPS brand assets for the purposes of this project (only).

Week 4: September 30 — October 6

Introduction to Part 2

In class on Thursday, Daphne, Dina, and Q reviewed the brief for Part 2 and invited questions. We shared with them our reflections on Part 1 and what we discussed in the Team Contract exercise. We then jumped into brainstorming and discussing ideas for how our virtual assistant would expand into a new platform with new functionality, and incorporate a social component for two users who already know each other.

Work session capture, 10/2/2019

We discussed an ecosystem involving a more robust mobile app as the digital home base for park visitors; a smartwatch that would provide minimal highlights throughout the visit; and a tablet that would allow Sam to help rangers and educators host public events and lectures at parks. We agreed to meet on Friday to make a journey map for this new experience and decide what scenarios we imagine users moving through.

Journey Mapping

We met on Friday morning to share our individual journey maps. After reviewing each in turn, we set about identifying the key moments we wanted to show for Part 2. We eliminated the tablet platform in favor of a smartwatch, as we felt we had an abundance of ideas within that platform alone.

Our consolidated journey map

We chose to focus on a combined mobile app and smartwatch experience covering three main stages of the journey. First, building on our Part 1 concept of deciding on a park, we laid out key moments in planning a visit to a national park (in this case, Bryce Canyon). Second, we edited down to a handful of key moments taking place at the park. Third, we specified some things that our users could experience after leaving the park.

The scenario we chose will allow us to integrate the social component of the experience called for in the project brief. Our original user, Alex, is planning a family trip to Bryce Canyon in April. She is planning to meet her brother Paul and his wife Jen at the park. They will share a campsite and do activities as a family. We are excited about the ideas we have for letting Sam shine as an assistant. Next steps: researching virtual assistants on wearable devices, developing wireframes for mobile and watch, and drafting our script.

Week 5: October 7 — October 13

Virtual Assistant Research

For class on Monday, we chose to research the visual identity (location, scale, and transitions) of two virtual assistants on wearables like watches: Siri and iTranslate Translator.

Our main takeaways:

  • Siri is consistent across mobile and watch platforms. The logic of appearance/disappearance, visual identity elements, color, and scale use the relative screen size of each platform in the same way.
  • iTranslate differs more between mobile and watch. The watch app has less functionality than the mobile app. Colors differ as well: the UI is white on iPhone (with a new option for dark mode), while the watch version uses dark mode exclusively. The iPhone app shows text in motion, utilizing the entirety of the screen, while iTranslate on Apple Watch centers everything.

This research oriented us to some potential ways we could think about Sam’s behavior across mobile and smartwatch platforms. A key question for us is whether Sam’s ellipse and ball form needs to be visible on every screen. Given the limited screen real estate on a smartwatch, this is an important consideration.

Initial Wireframes and Script

In class on Monday, we showed our initial wireframes for mobile and smartwatch as well as a draft script.

We got some feedback from Q about how we could push our ideas for Sam’s new features and capabilities. An important piece of feedback concerned virtual assistant vs. mobile app functionality: our objective is to integrate our virtual assistant into a service ecosystem, not simply to design a capable mobile app. Q challenged us to think about how Sam could be smarter and make complex tasks easier for users, such as integrating data sets, analyzing them, and making recommendations (“Which trail is closest to my campsite?”).
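To make that kind of request concrete, here is a minimal sketch of how a “closest trail” answer could be computed, assuming Sam has access to trailhead coordinates and the user’s campsite location. The trail names, coordinates, and distance function are illustrative placeholders, not part of our actual design.

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical trailhead coordinates near Bryce Canyon (illustrative only).
TRAILHEADS = {
    "Navajo Loop": (37.6229, -112.1660),
    "Queens Garden": (37.6283, -112.1633),
    "Fairyland Loop": (37.6498, -112.1470),
}

def haversine_km(a, b):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def closest_trail(campsite):
    """Return the trailhead nearest the campsite and its distance in km."""
    name = min(TRAILHEADS, key=lambda n: haversine_km(campsite, TRAILHEADS[n]))
    return name, round(haversine_km(campsite, TRAILHEADS[name]), 1)

# "Which trail is closest to my campsite?"
print(closest_trail((37.6412, -112.1696)))
```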

New States for Sam

Based on our script and wireframes, we identified some new states for Sam. While some of us worked on the next round of wireframes, others focused on exploring motion for the new states.

We updated the colors of Sam’s ball and ellipse to address feedback we received at the end of Part 1 (increase overall brightness and add more contrast between the two colors). We also increased the size of the ball relative to the ellipse — we felt the ball was too small in the mobile UI, and would definitely be too small when translated to the watch. In class on Wednesday, we showed our motion explorations and identified a next step of deciding on a logic for Sam’s behavior when the states are integrated into the UI.

Next Round of Wireframing

We integrated feedback about making Sam smarter into the next round of wireframes by focusing on how Sam might make recommendations based on what they learn about the users.

In class on Wednesday, we talked with Dina and Matt about this latest work. Based on our conversation, we came away with two points of feedback that we felt were important to integrate into our platforms going forward. The first was showing some of the collaboration between Alex and her brother Paul when planning the trip — Dina and Matt felt this was an opportunity to develop the social component of the brief further. The second was continuing to make Sam smarter by combining biometrics and completed hikes at Bryce Canyon so that Sam can recommend additional hikes (“You did a great job on Hike A. Based on this, I think you’d be able to handle Hike B with no trouble.”). For the rest of class, we transitioned into working on our video storyboard for the Part 2 concept video.
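As an illustration of what that recommendation might look like under the hood, here is a rough sketch that pairs completed hikes with a simple effort signal from the watch. The hike names, difficulty scores, and effort threshold are all hypothetical; the point is only the shape of the logic, not an implementation plan.

```python
# Hypothetical difficulty scores for Bryce Canyon hikes (illustrative only).
HIKE_DIFFICULTY = {
    "Queens Garden": 1.8,
    "Navajo Loop": 2.5,
    "Peekaboo Loop": 3.6,
    "Fairyland Loop": 4.2,
}

def recommend_next_hike(completed, avg_effort):
    """
    Suggest the easiest not-yet-completed hike that is a step up from the
    hardest hike completed so far, but only if the recorded effort (0-1,
    e.g. derived from watch heart-rate data) suggests the visitor has headroom.
    """
    if avg_effort > 0.85:  # visitor was near their limit; don't push harder
        return None
    hardest_done = max(HIKE_DIFFICULTY[name] for name in completed)
    candidates = [(score, name) for name, score in HIKE_DIFFICULTY.items()
                  if name not in completed and score > hardest_done]
    return min(candidates)[1] if candidates else None

# "You did a great job on Navajo Loop. I think you could handle Peekaboo Loop."
print(recommend_next_hike({"Queens Garden", "Navajo Loop"}, avg_effort=0.6))
```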

Video Script and Storyboard

On Thursday, we met virtually to take stock of progress and make a new task list for the weekend in preparation for peer review. We finalized the script for our video with all the scenarios we wanted to showcase. This was important because we needed a single source of truth to reflect across our UI components and the video storyboard. With this in hand, we were able to plan our next round of UI edits and complete a storyboard for our concept video. We chose to use live action footage showing each of the three stages in our design scenario’s journey map (planning the trip, at the park, and after leaving). As with our Part 1 concept video, we will highlight the UI of our (updated) mobile platform and our new smartwatch platform.

Week 6: October 14 — October 20

Peer Reviews

We conducted our mid-project peer review with the teams making virtual assistants for Blue Apron and Fitbit. We showed our storyboard and some screens from our smartwatch and mobile apps, and asked for feedback on overall balance between the brief, the brand, and our work. After peer reviews, we spoke with Daphne about our storyboard and next steps.

Peer review: mobile UI in action
Scenes from our concept video storyboard
Mobile UI progress
Watch UI progress

Feedback included:

  • Refine icons across platforms and address the contrast in the watch UI between the green background and green buttons.
  • Think more about the watch flow for plant ID. Why not just take a picture? If the goal is to help NPS track things, be up front about that.
  • Be clear about the decisions for how and when we use watch vs mobile platforms.
  • Consider the placement of the virtual assistant on the watch UI — what about a verbal cue to appear, or something else, instead of always keeping the form of the assistant present in the upper right of the watch face?
  • Think about script and tone of voice. Make sure it’s an assistant, not an IVR system.

Virtual Assistant Across Platforms

After peer reviews, we got to work on integrating feedback and tackling next steps. Chief among them was to decide on a logic for the behavior of our virtual assistant. When should Sam be seen on the screen? Where should Sam be placed, and at what scale?

We wanted to ensure consistency of experience across platforms. We agreed to test out a logic where Sam is visible on mobile when listening, waiting, and thinking, but not when speaking. On watch, Sam is invisible while speaking/sharing information, and visible at a smaller scale when listening, thinking, and confirming. Sam is visible at a larger scale when celebrating and warning. We finished class on Monday with a game plan for how to move UI, motion, and our concept video forward.
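For reference, the visibility logic we agreed to test can be summarized as a simple per-platform lookup. This sketch is just a way of writing down the rules above; the state names follow our script, and the structure is illustrative rather than an implementation plan.

```python
# State-to-presentation rules for Sam, written down as a per-platform lookup.
SAM_PRESENTATION = {
    "mobile": {
        "listening": "visible",
        "waiting": "visible",
        "thinking": "visible",
        "speaking": "hidden",
    },
    "watch": {
        "speaking": "hidden",
        "listening": "visible-small",
        "thinking": "visible-small",
        "confirming": "visible-small",
        "celebrating": "visible-large",
        "warning": "visible-large",
    },
}

def sam_presentation(platform, state):
    """Look up how Sam should appear for a given platform and assistant state."""
    return SAM_PRESENTATION[platform].get(state, "hidden")

print(sam_presentation("watch", "celebrating"))  # -> "visible-large"
```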

Deep Dive into UI

In class on Wednesday, we took advantage of the work session and discussions with professors to get into the weeds on UI, motion, and our concept video.

Watch UI coming along

We had a long discussion about the best way to ensure style consistency across platforms, including whether to use light or dark mode on each. We decided to go with light mode for the mobile app and dark mode for the watch based on conventions we found in our research. We also evaluated our color choices to ensure appropriate contrast on each platform and refined our typography by prototyping on the appropriate devices.

On mobile, we fleshed out the collaboration screens and nailed down our scenarios. We tested whether it would make sense for Sam’s form to disappear when speaking on mobile, and felt that it worked, especially with additional transition time in the audio between Sam and the user. We also agreed on a final set of new states for Sam on the watch platform.

Mobile UI coming along
Rough transition test between states of VA

Game Plan for Video Shoot

With UI well underway, we turned to planning the details of our concept video shoot. We timed our script and decided it was too long and needed condensing. Once we made our edits, we created a shot list and picked up our camera rental.

Video shoot: plan and outdoor shot list

Shooting the Video

Our initial plans to shoot live action video on Thursday morning were foiled by rain. And so, we reconvened on Friday morning to shoot at a teammate’s apartment and outside at Schenley Park.

Concept video behind the scenes with team mascot Fern

Week 7: October 21 — October 23

Final Working Sessions

In class on Monday, we had our last working session before final presentations on Wednesday. We tweaked our UI for each platform and made final notes of what to change in order to record prototypes for the concept video. We hit the booth to record new audio (many thanks Amrita!) and cut the audio into scenarios. We did a rough cut of the live action footage and new audio to ensure what we had was usable.

Home stretch to-do list
Thanks Alex for jumping into a mockup!

We also finalized the colors for Sam and the other UI components, made our digital passport badge for our final scenario, and created a ranger-view desktop mockup showing user alerts coming in. This is intended as an illustrative cameo in the video rather than a glimpse of a deeper service design project, which we deemed outside the scope of the brief. We met on Tuesday after classes to close out the remaining tasks for Wednesday’s presentation. It was a long night.

Final Presentations on Part 2

In class on Wednesday, we presented our project for a final critique. We got questions from the audience about the behavior of the virtual assistant (when Sam is visible vs. hidden) and our strategy for a consistent Sam experience across platforms. Some of our peers expressed concerns about the feasibility of the smartwatch platform in areas with poor connectivity, which we acknowledged. Our peers were also interested in and enthusiastic about the idea of Sam interfacing with the NPS organization and employees, based on the scenario we showed in our concept video.

Concept Video

At the end of our presentation, we were asked what we learned from this project.

  • Importance of research: in order to effectively use Sam as an assistant for NPS employees, we need to know much, much more about their service ecosystem, roles, and interactions. We could have benefitted from more research throughout the project.
  • Value Sensitive Design: in our seminar class, we were asked to analyze our in-progress virtual assistant project from the perspective of value sensitive design. Our team came away with some concerns about the value tension between environmental sustainability and convenience. If we were to take this project forward, we would focus more on the ways that Sam could help NPS educate visitors on the importance of caring for the environment while mitigating the effects of increased volume of visitors at parks.
  • Scaling to new platforms: we had a lot of discussion and exploration around the best way to ensure a consistency of experience for the virtual assistant from one platform to another. We evaluated each of our elements, including form, color, scale, proportion, and motion, and tested them in various combinations on mobile and smartwatch. This was valuable learning in how these components contribute to a coherent set of interactions and larger sense of brand (and keywords).
