TaskWiz: A 10-Week Journey Through Interaction Design

Jade Shyu
9 min read · Nov 16, 2016


Over a year ago, I embarked upon a rigorous, yet rewarding, design journey. In July 2015, I enrolled in UC San Diego’s Interaction Design specialization, a certificate program encompassing 8 courses, including a 10-week Capstone Project that draws on the skills taught throughout the program and used by interaction designers in practice. Throughout the journey, I was challenged to discover and understand users’ needs, create prototypes and visual designs that address those needs, and test the designs with real people.

To wrap up the final week of the course, I’d like to share the mobile app concept I worked on and how the design changed over time.

TaskWiz began humbly, with the design brief:

Mission:
Redesign the way we experience or interact with time.

Inspired by the mission and the idea of a calendar that assigns tasks based on energy level, I set off to work.

Discovery

My goal was to study how mood and energy affect the way people schedule their time, their current task management process, and any frustrations they face.

After creating a study plan, I asked participants to walk me through their process, complete an energy diary, and answer follow-up interview questions.

I observed the steps and tools participants currently use to plan out their day. Because I was interested in whether mood and energy influence the planning process, I also had participants keep a diary.

Results of the study showed that:

  • the majority of my participants planned and completed more tasks when they were in a good mood than when they were in a bad mood, especially social engagements
  • mood and energy had less of an effect on work-related tasks, likely because those tasks were mandatory

Ideation

Next, I brainstormed ideas based on findings from the study. I connected the user’s needs with a high-level concept that could help make their lives easier.

Two findings stood out:

1. The executive assistant completes more tasks and activities when she is in a positive mood. She needs a place to queue and sort different categories of tasks, so that she can complete them when she is available and in good spirits.

2. The teacher is an intrinsically motivated individual who rewards herself with stamps for completing days’ worth of tasks in advance. She needs a way to gamify the planning process, so that she has fun planning and finishing tasks, and can celebrate her accomplishments with a symbol of completion.

I pondered deeply and developed a point of view for the design brief:

The act of planning and completing tasks/activities is naturally a challenge, as a negative change in mood can hinder completion. What if planning and accomplishing tasks was as fun as playing a game? Creating an engaging, context-based atmosphere where users are rewarded for taking action can help them feel encouraged and accomplish more.

While brainstorming, I surveyed and compared the task management tools currently on the market. Most task management and to-do list apps I found emphasized function over delightful design, while a task gamification app lacked the contextualization aspect. With this newfound enthusiasm, I thought about what this experience could look like.

Storyboarding

With the point of view in mind, I sketched two storyboards of the experience around a context-based task assistant with gamification qualities. I focused on users’ needs and how this tool could simplify their lives.

In the storyboard below, a young professional picks up a book with the goal of finishing it soon.

The task assistant helps the young professional to find time in his busy schedule.
Completing a task is more delightful with in-app incentives, like coins and badges.

Paper Prototyping

Using my storyboards as inspiration, I crafted two paper prototypes from Post-its and paper cutouts, for a total of 36 “screens”. Here is a snapshot of the prototypes:

Prototype 1 focused on manual input, while Prototype 2 focused on a chat-based experience.

At the core of both prototypes was task management: users could enter tasks into fields the traditional way, or have a task assistant set the tasks up for them. The prototypes featured contextual reminders for location, availability, mood, and weather, as well as gamification elements.
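
As an aside, here is a minimal, hypothetical sketch (in Python, purely for illustration; the real prototypes were paper, and the names below are my own invention) of how a task with contextual reminder conditions and a coin reward might be modeled:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical data model: a task triggers its reminder only when the
# contextual conditions (location, availability, weather) are satisfied.
@dataclass
class ReminderContext:
    location: Optional[str] = None   # e.g. "home" or "office"
    needs_free_time: bool = False    # only remind when the calendar is free
    weather: Optional[str] = None    # dropped in a later iteration
    mood: Optional[str] = None       # dropped in a later iteration

@dataclass
class Task:
    title: str
    context: ReminderContext = field(default_factory=ReminderContext)
    coins_on_completion: int = 10    # gamification reward
    completed: bool = False

def should_remind(task: Task, current_location: str,
                  calendar_is_free: bool, current_weather: str) -> bool:
    """Fire a reminder only when every contextual condition is met."""
    c = task.context
    if c.location and c.location != current_location:
        return False
    if c.needs_free_time and not calendar_is_free:
        return False
    if c.weather and c.weather != current_weather:
        return False
    return not task.completed

# Example: the young professional's reading goal from the storyboard.
book = Task("Finish reading the book",
            ReminderContext(location="home", needs_free_time=True))
print(should_remind(book, "home", calendar_is_free=True, current_weather="rainy"))  # True
```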

Next, I walked through the prototypes to prepare them for user feedback.

Heuristic Feedback

After making minor revisions, I shared my paper prototypes in person and online through Google Hangouts, and evaluated my classmates’ prototypes. We gave each other feedback based on Jakob Nielsen’s heuristics, noting which heuristic was violated and the severity of each violation.

Here are a few highlights from the hearty list of feedback:

  • Limitations from contextual options (Recognition rather than recall). If a user set up a task so that he had to be “happy” to complete it, the app could fail to check him in properly if he forgot to choose the same emotion when the reminder popped up. Likewise, limiting a task to a certain type of weather could be a hindrance. I decided to leave out emotion and weather to focus on other contextual elements.
  • The check-in assumes task progress (Flexibility and efficiency of use). One tester noted that the check-in asks for progress the moment the reminder appears, when his mindset was that he was just getting started on the task. He suggested breaking the check-in into two parts: the reminder and the follow-up.
  • Task check-in was burdensome (Aesthetic and minimalist design). Users found the check-in process confusing and felt it had too many steps: they navigated through dialog windows, added progress to the task, attached a photo, and entered other progress details. For simplicity, I decided to scale the prototypes down to one-time tasks.
  • No way to track completed tasks (Recognition rather than recall). As the prototypes were in their infancy, marking tasks as complete had not been accounted for. A user suggested having a completed tasks screen.

Wireframing

From the heuristic feedback, I reflected on changes to be made, and decided to combine the two prototype ideas into one task assistant app.

To ensure I finished my project on time, I created a development plan, listing the assignments, estimated/actual hours, and deadlines for each remaining week of the course.

I also drafted wireframes of key screens within my app:

Starting the wireframing process with the Task, Settings, and Profile screens.

I replaced the paper prototype images in my InVision prototype with the wireframes. Gradually, I also began to conceptualize the visual design.

Testing

Next, I tested the app in person and online to determine areas of improvement and understand what’s currently frustrating or delightful about the experience.

Part 1: In-Person Testing

To prepare for in-person testing, I designed additional screens, brainstormed an app name, wrote the testing protocol and documentation (consent forms, instructions, interview questions), and made tasks for testers. Testers were tasked with the following:

1. Create two new tasks: one using the bot and one using manual input.
2. Edit a task.
3. Once a reminder comes up for a task, check in to the task, and find out where your coins can be viewed.

I sought out three iPhone users who were current or past users of task management apps.

A sampling of my findings included:

  • Difficulty understanding TaskWiz. When creating a new task, testers were given the choice to use “TaskWiz” or manual input, and all three testers asked, “What’s the difference between TaskWiz and manual input?”
  • Confusion over coins. Testers could earn coins by checking in, and all were unsure what the coins were for and how they could be spent.
  • Funny story. When I asked a user whether he would use the app again, he said yes, because he wanted to get some coins. I asked, “But what if you don’t know what they’re for?” He replied with a grin, “It doesn’t matter, I just want them!”

I brainstormed solutions and sketched an alternate design for each issue.

Part 2: Online A/B Testing

For my second test, I planned an A/B (split) test between the original and alternate “Create a New Task” dialog designs. My goal was to observe which design more effectively conveyed the goal of the app as a smart task assistant. I hypothesized that the new design would be easier to understand.

*Note: A high-fidelity version of Design B was tested. Scroll down for the big reveal. ;)

After preparing prompts and interview questions, I divided my testers into two groups, one that tested the original design (Group A), and one that tested the alternate design (Group B). The testing was completed with UserTesting.com.
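
Purely as an illustration of the setup (the real study was run through UserTesting.com, and I reviewed the sessions qualitatively rather than with a script), here is a toy Python sketch, with made-up tester IDs, of randomly splitting testers into the two groups and tallying how many in each group understood the new-task dialog:

```python
import random

# Hypothetical tester IDs; the real test used UserTesting.com participants.
testers = ["tester_1", "tester_2", "tester_3", "tester_4"]
random.shuffle(testers)
half = len(testers) // 2
groups = {"A": testers[:half], "B": testers[half:]}  # A = original, B = alternate

# Filled in by hand after watching each session recording:
# did the tester understand what the TaskWiz option does?
understood = {t: False for t in testers}  # placeholder values

def comprehension_rate(group: str) -> float:
    results = [understood[t] for t in groups[group]]
    return sum(results) / len(results)

for g in ("A", "B"):
    print(f"Group {g}: {comprehension_rate(g):.0%} understood the new-task dialog")
```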

I was excited to go through the results, which, amazingly, were ready in about an hour. I dug through video recordings of testers talking through the tasks on their iPhones, and analyzed their written responses.

I uncovered more findings and a few surprises:

  • The alternate design (B) was easier to understand. One tester reviewing the original design (A) said, “TaskWiz, I’m guessing, is some other piece of the software that does something for you… maybe it syncs something.” Conversely, both testers of the alternate design (B) delightedly exclaimed, “Ohh!” as they read through the dialog and chose the “wizard” option with no hesitation.
  • Still confusing: coins. Both groups shared confusion about the coins. A user from Group A said, “the coins I would probably leave out if they serve no monetary value,” while a user from Group B said, “I am still unsure about what the coins are for!” To reduce confusion, I added more details in my final video walkthrough below.
  • Delightful features: hidden surprises. Group B shared that they appreciated the personal touches that help the app to stand out among functional task apps. The features included gamification with coins and the bot sharing random funny videos.
  • Potential killer feature: reminder follow-ups. Group B also appreciated having the reminder divided into two parts — the reminder and the follow-up. This feedback was originally suggested by my first paper prototype tester, and it was satisfying to see that this feature was well-received by others.
  • Limitations: static prototype. The prototype was built from a set of static images on InVision, so it offered a glimpse into the intended interactions, but had to be completed in a certain order to demonstrate the full range of features. Having to go in order was a source of frustration for testers.
  • Confounding variables: tech savviness and experience level in user testing. Both groups of testers included people in their early 20s and early 30s, but Group A expressed more hesitation and confusion and vocalized more frustration while going through the app. For example, one tester spent a few minutes analyzing the InVision loading screen before realizing she was supposed to tap the link shown.
  • What I would do differently: clarify directions. I double-checked the testing prompts, but missed a few places that needed more clarity. In the future, in addition to performing comprehensive pilot tests, I could write out the name of the screen that a question references.

Introducing TaskWiz

The TaskWiz Walkthrough.

Meet TaskWiz, a smart task assistant app for iPhone. This prototype includes an assistant chatbot, custom task input, contextual reminders, and fun gamification elements.

Final Thoughts

I’ve made it to the end, and if you’re reading this, you have, too.

I’m grateful to have traversed this epic, 8-course journey, which provided me with knowledge, tools, and methodologies for future opportunities in interaction design. Combined with my previous design experience, it has given me a greater depth of design knowledge, and I can take on any design challenge confidently. Bring it on, world!

  • In 10 weeks, I completed an intensive design process from beginning to end, learning about user needs, designing for these needs, and testing the designs with users, both online and offline.
  • In 8 courses, I completed a certificate spanning the topics of human-centered design, design principles, social computing, research, prototyping, information design, and experiment design and analysis.

I’ve enjoyed this learning experience, and want to say a big thank you to Professor Scott Klemmer, Professor Elizabeth Gerber, and Professor Jacob O. Wobbrock for creating this program. I also want to thank my classmates for sharing their knowledge of each subject and providing feedback on my work.

For anyone who wants to learn about this program, check out UC San Diego’s Interaction Design specialization on Coursera.

Thanks for stopping by! I hope you enjoyed reading about my process. I invite you to share any comments, questions, or feedback you have.

Don’t forget to connect with me via Facebook, LinkedIn, or Twitter! I love meeting fellow designers, makers, and curious minds.
