Day 4: Revised Prototype & User Feedback

Mariya Kopynets
COGS 187A Summer 2016
11 min read · Aug 20, 2016

--

Emily Small, Mariya Kopynets, Chris Tetreault

Initial Prototyping by Emily Small

After performing a competitive analysis of existing apps such as MonkeyParking, BestParking, and ParkMe, and conducting interviews with potential users, the Beep team agreed we had enough qualitative feedback to warrant a low-fidelity prototype. We began by looking at the user interviews from the initial need-finding stage and discussing which features we would need to include to address users’ current frustrations with UCSD parking. We constructed storyboards for various anticipated scenarios based on the interview data, then brainstormed which features would best address these needs.

We began by sketching a rough hierarchy of features to address user needs. The Beep team next constructed a flow chart and discussed how transitions would carry information from one screen to the next in a pleasant, intuitive manner for the user.

Our initial screen would include a log-in feature and a sign-up feature. Introducing individual accounts would allow users to personalize their lot preferences based on their Beep usage history. A personal password would secure the sensitive information stored in the account (e.g., credit card details used to pay for visitor passes). Once signed in, the user could choose to remain signed in, without having to re-enter a username and password, for a set period of up to 365 days.

Once signed in, the user would flow from the log-in screen to a fresh screen offering two primary options: two banners that could be swiped for either “looking” or “leaving”. This screen guided users toward the main function of the app as identified by our user group. It would simultaneously coordinate data points from all subscribers leaving or arriving in particular lots, providing the real-time data behind the informative analytics to come.

After logging into the app and moving past the looking/leaving choice, the user would have the option to scroll through various analytics screens, depending on the selected layout. The user could view either a blueprint-style spatial representation of all available lots at UCSD or a drop-down list naming every parking lot, accompanied by availability percentages (calculated from lot history as well as real-time data). This list would be filterable according to preferences built from the user’s history. The list and map views of current lot status would let users “subscribe” to their favorite lots and receive real-time updates on traffic flow around campus throughout the day. Users could now maximize their parking potential!
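How such an availability percentage would actually be computed was left open at this stage. The Swift sketch below is one hedged interpretation, blending a lot’s historical occupancy for the current hour with the live balance of “leaving” and “looking” reports from subscribers; the names (`LotSnapshot`, `estimatedAvailability`) are illustrative assumptions, not part of an actual Beep codebase.

```swift
import Foundation

// Illustrative only: a rough availability estimate for one lot,
// blending historical occupancy with live "looking"/"leaving" reports.
struct LotSnapshot {
    let capacity: Int                 // total spaces in the lot
    let historicalOccupancy: Double   // 0.0–1.0, average for this hour/day from lot history
    let liveLeavingReports: Int       // subscribers who just reported leaving
    let liveLookingReports: Int       // subscribers currently hunting in this lot
}

func estimatedAvailability(for lot: LotSnapshot) -> Double {
    // Start from history: fraction of spaces expected to be free right now.
    let historicalFree = Double(lot.capacity) * (1.0 - lot.historicalOccupancy)
    // Adjust with real-time signals: leavers free spaces, lookers claim them.
    let liveAdjustment = Double(lot.liveLeavingReports - lot.liveLookingReports)
    let freeSpaces = max(0.0, min(Double(lot.capacity), historicalFree + liveAdjustment))
    return freeSpaces / Double(lot.capacity) * 100.0   // percentage shown in the list view
}

// Example: a 400-space lot that is usually 85% full at this hour.
let lot = LotSnapshot(capacity: 400, historicalOccupancy: 0.85,
                      liveLeavingReports: 6, liveLookingReports: 10)
print(String(format: "%.0f%% available", estimatedAvailability(for: lot)))
```

How heavily the live reports should outweigh history is a tuning decision that would only be settled once real usage data existed.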

Additionally, we constructed a “purchase parking” feature, a “LotChat” feature, and an exit survey. We believed the “purchase parking” feature would allow visitors and faculty in a rush to purchase online permits remotely, at their convenience, from pre-loaded funds. We hoped the “LotChat” feature would provide a sense of camaraderie among the Beep community and allow users to instantaneously update their network. Lastly, we included an exit survey to improve the accuracy of real-time updates by incorporating user feedback on their experience.

Design Changes Over the Week by Mariya Kopynets

There were approximately 26,590 undergraduate students enrolled at UC San Diego during the 2015–2016 academic year. Given the large size of this population, our team decided to analyze the BEEP app prototypes from the Student persona’s perspective. We wanted to understand which features of the app would allow students to navigate most efficiently to their parking destination of choice (Rosenzweig, 2015, p. 146). We 1) obtained feedback from students on team Rising Edge and later 2) assessed our prototypes against Jakob Nielsen’s 10 Usability Heuristics for User Interface Design, Scott Klemmer’s lectures on design heuristics, and Successful User Experience: Strategies and Roadmaps by Elizabeth Rosenzweig. These two exercises allowed our team to recognize current pitfalls and to form concrete strategies for an aesthetic, minimalist, and feasible prototype design.

Team BEEP Feedback Exercise

Part 1: Feedback

When the initial prototypes were ready for user testing, we invited team Rising Edge to join our workspace to view, test, and comment on the prototypes. Our team members projected the prototypes one by one on the large TV screen available in the classroom. Team BEEP described in detail all the features available, displaying one prototype slide at a time. The app was praised in many ways by the Rising Edge students, specifically that it “is pretty clear”, “not convoluted”, and “all the core functionalities are in place and the logic app works correctly”. It was exciting to hear that our initial ideas were headed in the right direction and that a novice user could potentially navigate through the app with ease. However, the task was to obtain feedback on faults and suggestions for improvement, so we proceeded with more specific usability-test questions to gather qualitative data and recommendations on what the app was still missing (Rosenzweig, 2015, p. 134). Twenty minutes into the ardent discussions we learned the following:

Feedback Records for Low-Fidelity Prototype

Insights from User Feedback

There were 5–8 users simultaneously testing several different tasks on the displayed prototypes. The usability test also incorporated structured scenarios, such as reserving or searching for a parking spot near preferred buildings and filtering for S and V parking lots. This allowed our team to evaluate whether the app is usable in a real-life setting. Users suggested finding ways to incentivize users to answer the surveys, including more data on the X and Y axes of the graph, and incorporating a filter that helps locate the “closest available parking spots to the ultimate destination”. Users also advised integrating a background video, potentially from the https://github.com/movielala/VideoSplashKit source, decreasing the number of survey questions, and using a meter instead of stars in the survey. In the meantime, we heard approval from users on aspects that worked well in our prototype, specifically: the option to filter the parking-lot search to S only or the S/V combination, the ability to reserve a spot with a single tap on the icon, the projected reliability statistics, the ability to view nearby restaurants/cafeterias, the ability to remember the car’s current location, and the ability to purchase permits remotely.

After receiving this information, we in turn provided feedback on the prototypes presented by team Rising Edge. Afterward, a self-report survey was distributed for each team to complete, providing even more detailed suggestions and critique. From the survey we gathered more qualitative data, specifically suggestions to offer three states (looking, found, leaving) to select from on the front page, to add parking information for each floor of a garage, and to consolidate the found-parking page with the subscription page. With such a broad spectrum of usability testing by various users, we were able to bring the problems with the initial prototypes to the forefront and acquire actionable design recommendations (Rosenzweig, 2015, p. 153).

Part 2: Design decisions, based on heuristics and user feedback

Fig. 1 The curve represents the average of six case studies of heuristic evaluation.

Following Jakob Nielsen’s guidance, we used six evaluators to conduct the heuristic evaluation. We held a Google Hangouts video conference, and our team systematically went through the prototype interfaces, first solo and then as a group, examining the myriad features and comparing them against Nielsen’s 10 Usability Heuristics for User Interface Design. This process allowed independent, unbiased evaluations of the interface by each team member. Some evaluators verbalized their findings, while others wrote reports. We obtained substantially better evaluations by aggregating data from the different evaluators on our team, which helped in making design decisions (Nielsen, 1995).

Heuristic Evaluations of Low-Fidelity Prototypes

In order to follow the User Control and Freedom rule, we decided to add a front page with options to sign in/out or proceed as a “guest”, and to maintain the app’s undo/redo functionality by keeping the error message at the top of each screen. We ensured that the language we used was familiar to our intended audience: UCSD students. We also established more flexibility by introducing more defaults with options to select from (Klemmer, 2016) and by including additional ambient resources, which provide a little extra information and allow users to make better decisions. For the Aesthetic & Minimalist Design rule, we decided to keep the filter option for A, B, S, and V parking spots and to include collaborative filtering to suggest nearby open parking lots and nearby restaurants/cafeterias. To ensure Recognition Over Recall, which decreases the user’s need to remember information, we clearly defined default buttons to avoid unnecessary intermediate steps. Finally, for the Help and Documentation heuristic, we collectively decided to create a video tutorial so that novices can better understand and efficiently navigate the BEEP app (Klemmer, 2016).
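We did not pin down an algorithm for those collaborative-filtering suggestions. The sketch below shows one minimal, hypothetical approach in Swift, scoring lots a user has not yet tried by how heavily users with a similar parking history rely on them; the `LotHistory` representation and `suggestLots` function are assumptions for illustration only, not the app’s actual design.

```swift
import Foundation

// Hypothetical sketch: user-based collaborative filtering over lot-visit counts.
// Each user's history is a dictionary of lot name -> number of visits.
typealias LotHistory = [String: Double]

/// Cosine similarity between two users' lot-visit vectors.
func similarity(_ a: LotHistory, _ b: LotHistory) -> Double {
    let lots = Set(a.keys).union(b.keys)
    var dot = 0.0, normA = 0.0, normB = 0.0
    for lot in lots {
        let x = a[lot] ?? 0, y = b[lot] ?? 0
        dot += x * y; normA += x * x; normB += y * y
    }
    let denom = (normA * normB).squareRoot()
    return denom == 0 ? 0 : dot / denom
}

/// Suggest lots the target user has not tried, weighted by similar users' habits.
func suggestLots(for user: LotHistory, others: [LotHistory], count: Int = 3) -> [String] {
    var scores: [String: Double] = [:]
    for other in others {
        let weight = similarity(user, other)
        for (lot, visits) in other where user[lot] == nil {
            scores[lot, default: 0] += weight * visits
        }
    }
    return scores.sorted { $0.value > $1.value }.prefix(count).map { $0.key }
}
```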

The main difference between the user testing (the feedback we obtained) and the heuristic evaluation was that, during the heuristic evaluation, our team members were willing and open to answer questions from the evaluators. We also noticed considerable overlap between the user-testing findings and the heuristic evaluations. After evaluating each usability problem separately and identifying specific mistakes, we made collective decisions for an aesthetic, discrete, and congruent prototype design. This allowed us to generate a new, revised model of the prototype (Nielsen, 1995).

Revised Prototype by Chris Tetreault

With the user feedback reviewed and the heuristics examined, the last piece of the puzzle was deciding what actually needed to change. We discussed the data and settled on several design issues to address. Chief among them was the found-parking screen.

A screen with no reason to exist

The user testing determined that this screen served no real purpose. It showed roughly the same information as the subscriber list and map view, but less efficiently. It was decided that this screen should be completely eliminated. The subscriber screens would gain a “found parking” button that would bring the user to the found parking survey.

The next issue identified was a nomenclature issue. Throughout our app, we use the word “subscribe.” Typically, when one subscribes to something, they subscribe for some long period of time. When one subscribes to a magazine, they subscribe for a year. When one subscribes to a news feed, they subscribe indefinitely. When one subscribes to a lot in Beep, they subscribe until they find parking, usually within an hour. This sort of fleeting relationship isn’t normally what one thinks of when they use the word “subscribe,” so we decided to change it. “Subscriber” would become “lot hunter” and “subscribe to” would become “search in.”

The new landing screen

The next issue in line was the lack of a landing screen. Nielsen’s ten heuristics value recognition rather than recall, as well as visibility of system status. Neither is upheld when the first screen a user sees is a login screen. What are they logging into? Facebook? Their bank account? Beep? All are equally likely. To remedy this we chose a simple solution to a serious problem: we added a landing screen.

The evolution of the survey screens

The next issue is twofold: the survey screen has redundant questions, and it has a questionable meter. There are two survey screens in the app: one when a user parks, and one when a user leaves. When the user parks, they are presented with three questions, one of which is unnecessary: “How long did it take you to find parking?” The answer can be determined programmatically; we know how long the user has been subscribed to the lot, so we have a good idea of how long it took them to find parking. Similarly, the leaving screen has a redundant question, “Is this parking lot a good choice to park in right now?”, alongside “How much competition is there in this lot?” and “How full is this lot?” Obviously a full lot with a lot of competition is a poor choice.
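To illustrate what “determined programmatically” could look like, here is a hedged Swift sketch that derives the time-to-park from the moment the user started hunting in a lot; the `LotHunt` structure and function name are hypothetical, not taken from an existing implementation.

```swift
import Foundation

// Hypothetical sketch: derive "how long did it take you to find parking?"
// from app state instead of asking the user in the survey.
struct LotHunt {
    let lot: String
    let startedHunting: Date   // when the user tapped "search in" for this lot
}

/// Minutes elapsed between starting the hunt and tapping "found parking".
func minutesToFindParking(hunt: LotHunt, foundAt: Date = Date()) -> Int {
    let seconds = foundAt.timeIntervalSince(hunt.startedHunting)
    return max(0, Int(seconds / 60))
}

// Example: the parking survey can pre-fill or silently record this value.
let hunt = LotHunt(lot: "Gilman Structure", startedHunting: Date().addingTimeInterval(-14 * 60))
print("Found parking after \(minutesToFindParking(hunt: hunt)) minutes")
```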

On top of these redundant questions is the questionable star meter. In the real world, we associate stars with goodness: a five-star restaurant is the pinnacle of culinary perfection. It should follow, then, that a parking lot with four stars of “fullness” is a good place to park, right? You’d be wrong if you assumed this, and that is the problem. We decided to implement a meter instead; after all, a full “lot fullness” meter means the lot is full.

The evolution of the analytics screen

Next on the block is the analytics screen, which allows the user to view historical data in order to make more informed parking decisions. Unfortunately, testing determined that this screen could use some polish: users were confused by the graph of “Days of Week” over “Time of Day.” Because of this, we decided to provide more detail on this screen.
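As one hedged sketch of how that extra detail might be backed by data, the Swift snippet below buckets historical occupancy samples into a day-of-week by hour-of-day table that the analytics graph could plot; the `OccupancySample` type and `hourlyAverages` function are illustrative assumptions, not the app’s actual data model.

```swift
import Foundation

// Illustrative sketch: bucket historical occupancy samples by weekday and hour
// so the analytics screen can plot "Time of Day" detail for each "Day of Week".
struct OccupancySample {
    let date: Date
    let fractionFull: Double   // 0.0–1.0 occupancy observed at that time
}

/// Returns [weekday (1 = Sunday ... 7 = Saturday): [hour of day: average occupancy]].
func hourlyAverages(samples: [OccupancySample], calendar: Calendar = .current) -> [Int: [Int: Double]] {
    var sums: [Int: [Int: (total: Double, count: Int)]] = [:]
    for sample in samples {
        let weekday = calendar.component(.weekday, from: sample.date)
        let hour = calendar.component(.hour, from: sample.date)
        let entry = sums[weekday]?[hour] ?? (0, 0)
        sums[weekday, default: [:]][hour] = (entry.total + sample.fractionFull, entry.count + 1)
    }
    var averages: [Int: [Int: Double]] = [:]
    for (weekday, hours) in sums {
        for (hour, entry) in hours {
            averages[weekday, default: [:]][hour] = entry.total / Double(entry.count)
        }
    }
    return averages
}
```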

Other, much more minor issues were discovered as well. Among these: user testers wanted more granularity in the temporary-pass purchasing system, and LotChat needed to support anonymous users. Overall, we gathered a lot of good data, and the end result is a much stronger concept.

References:

Nielsen, J., How to Conduct a Heuristic Evaluation, 1995, Nielsen Norman Group, Web, https://www.nngroup.com/articles/how-to-conduct-a-heuristic-evaluation/

Nielsen, J., 10 Usability Heuristics for User Interface Design, 1995, Nielsen Norman Group, Web, https://www.nngroup.com/articles/ten-usability-heuristics/

Rosenzweig, E., Usability Testing, Chapter 7, Successful User Experience: Strategies and Roadmaps, 2015, 131–154, Print

Figure 1: The curve represents the average of six case studies of heuristic evaluation. From How to Conduct a Heuristic Evaluation by Nielsen, J., Jan. 1, 1995, Retrieved from https://www.nngroup.com/articles/how-to-conduct-a-heuristic-evaluation/

Klemmer, S., HCIOnline (2016, Apr. 11), Lecture 4.2 Design Heuristics (Part 1/3), Retrieved from https://www.youtube.com/watch?v=gSm6bOw-KcQ&list=PLNtQfKgd43l2ybf4ukgGz5513zKBXCMgM&index=2

Klemmer, S., HCIOnline (2016, Apr. 11), Lecture 4.2 Design Heuristics (Part 2/3), Retrieved from https://www.youtube.com/watch?v=Hi6YO1tTqTk&index=3&list=PLNtQfKgd43l2ybf4ukgGz5513zKBXCMgM

Klemmer, S., HCIOnline (2016, Apr. 11), Lecture 4.2 Design Heuristics (Part 3/3), Retrieved from https://www.youtube.com/watch?v=tLFrVe4o_98&list=PLNtQfKgd43l2ybf4ukgGz5513zKBXCMgM&index=4

The Design Team

Saveen Chadalawada, John Wishon, Chris Tetreault, Emily Small, Eric Kingsley, Mariya Kopynets

Read Previous: Personas & Storyboards, Need Finding & Competitive Analysis, Team Building and Logo Design

Thanks to Mary ET Boyle, Rahul Ramath, Kandarp Khandwala for all their support.
