Presenting Halo! and User Testing
August 22nd, 2017
We kicked off the day with a presentation of our personas and storyboards to the class. Our goal with this presentation was to show the different types of people and situations where our application could shine. Although our intent is to target college students, our other potential audiences include project managers, wedding planners, and all types of people who need better organization in their lives. With Halo!, our objective is to make tedious and stressful group collaboration a thing of the past.
Oftentimes, the design of our application makes perfect sense to us because we know the thought process behind it. Unfortunately, this means we can be blind to poor design decisions in our prototypes, even when they are glaringly obvious to others. That brings us to the next step in developing our application: user testing! We split into two groups and paired up with another team so that we could test each other's prototypes. By exchanging feedback with the other team, we gained new insight into our own design. We realized that the icons we chose and the navigation we created were not as clear or intuitive to other users as we had assumed. We also learned about features people wanted, such as a better way to communicate via comments and activity logs. We will be taking these new observations into consideration as we redesign our next prototype.
With these new ideas in hand, it was time to reconvene as a team. We collected our data and shared our feedback, and we used a whiteboard to help us visualize a newly designed prototype that incorporated our testers' ideas. By the end of the day, we had come up with new sketches for our second prototype. Overall, we're satisfied with the redesign, and we're excited to show it at Thursday's second round of user testing.
Written by Brendon Taing
August 24, 2017
On Thursday we split the team into two new groups to test our combined prototypes. This proved valuable: we learned that some of our design decisions resulted in ambiguity. Users complained that the visualization of task completion was too confusing; using colors to denote status invited multiple subjective readings, and users found the navigation between screens unintuitive.
At this point it was clear that we had to rework the way tasks were displayed. The list view combined with multiple colors created too much visual clutter: in addition to reading the task itself, users also had to determine its status. We solved this by opting for a "card view." Users first select a category of task completion (not started, in progress, or completed), and then the list of tasks in that category is displayed. The cards are stacked on top of each other like tabs in an internet browser, signifying that the categories are clickable. This solved our issue because the user filters the tasks before reading their contents, instead of hunting for a specific task and then working out its status.
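The card-view interaction described above boils down to a filter step before display. A minimal sketch of that logic, with hypothetical names (our actual prototype is a UI mockup, not code):

```python
from dataclasses import dataclass

# Hypothetical task model; field names are illustrative, not from the Halo! prototype.
@dataclass
class Task:
    title: str
    status: str  # one of "not started", "in progress", "completed"

def tasks_in_category(tasks, category):
    """Return only the tasks under the selected card-view category."""
    return [t for t in tasks if t.status == category]

tasks = [
    Task("Write report intro", "completed"),
    Task("Design slides", "in progress"),
    Task("Book meeting room", "not started"),
]

# The user taps a card (e.g. "in progress") and sees only those tasks.
print([t.title for t in tasks_in_category(tasks, "in progress")])
# → ['Design slides']
```

The point of the design is that the category choice happens first, so the task list a user reads is already filtered by status.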
After we got the initial prototype fleshed out, we ran into a new issue: how was our app different from competing apps? Why should students feel compelled to use our app over others? We decided that making the task manager collaborative was not enough to motivate accountability between users. To solve this, we added a rating feature to each task. By rating the difficulty of each task, the app can calculate the amount of work each user needs to do for a roughly even distribution of work. Users who had done less work would see their score in relation to others in the group and would feel social pressure to balance the spread.
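One way to sketch the scoring idea above, assuming each user's score is simply the sum of difficulty ratings for the tasks they completed and the "fair share" is the group total split evenly (the function and data here are hypothetical, not the app's actual implementation):

```python
def work_scores(completed):
    """completed maps user -> list of difficulty ratings for tasks they finished.

    Returns user -> (score, delta), where delta is how far the user is
    above (+) or below (-) an even split of the group's total work.
    """
    totals = {user: sum(ratings) for user, ratings in completed.items()}
    fair_share = sum(totals.values()) / len(totals)
    return {user: (score, score - fair_share) for user, score in totals.items()}

# Example group: total work is 15, so an even split is 5 per person.
scores = work_scores({"alice": [3, 5], "bob": [2], "carol": [4, 1]})
print(scores)
# → {'alice': (8, 3.0), 'bob': (2, -3.0), 'carol': (5, 0.0)}
```

A negative delta (like bob's) is what the app would surface to create the social pressure described above.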
Written by Carter Duong