CentHog — A Concept for Financial Habit Change

Kevin
5 min read · May 19, 2019

--

Eleven months ago, I came across the Interaction Design Specialization on Coursera, offered through UC San Diego and taught by the knowledgeable Scott Klemmer along with several guest lecturers.

Over that time, I was given an opportunity to refresh my interaction design skills and to really apply the concepts I had been reading about for several years prior. The refresher was welcome, and now, as the capstone course draws to a close, I feel more ready than ever to transfer my two years of web design experience into a more UX-centric position.

My capstone project grew out of “change,” one of the themes we could choose for the course, and revolved around helping people enjoy changing their spending habits. It attempted to make penny-pinching cool, and to facilitate saving through a conceptualized product-value AI, Vincent the Centhog, as well as through social interaction.

Needfinding & Ideation

To begin defining the needs that were to be answered by this application, I organized a simple diary study for three coworkers/friends.

Each participant was to go through their day as normal, but to try to record instances where they found themselves making a value comparison of some kind, prompted by either a physical or digital stimulus.

These findings were used to ideate a list of concrete user needs, and to construct a point-of-view about the application:

Most people know they should practice good spending and saving habits, but this isn’t always easy. One primary reason is that spending is so ingrained in everyday life that the amounts spent usually fade into the background. A better way to visualize continuous saving habits, and otherwise bring them to the user’s attention, would provide an opportunity to set these positive habits in motion through reminders, incentives, and other design patterns.

Paper Prototyping & Evaluation

Research findings and the resulting conclusions were then turned into a preliminary list of features, explored through two storyboarded scenarios for use of the app. After the user flow was finalized, I created a low-fidelity paper prototype/wireframe of how the UI should look and function.

Through heuristic evaluation both in-person with an acquaintance, and online with another colleague taking the course, I came across a number of potential usability issues. Major changes in the next iteration addressed:

  • Placement of UI elements and screen crowding
  • Screen architecture confusion
  • Text label hierarchy
  • Miscellaneous other issues

User Testing

The digital prototype took on a style of rounded corners; casual, fun typography; and a greater emphasis on the activity of saving rather than the actual dollar amounts. This is meant to help the user enjoy developing better spending habits, rather than having to count every single cent (that’s Vince’s job 😁).

A combination of in-person and online testing was used to find any remaining minor or major issues with the application before final polishing. I had two friends run through an in-person usability test to surface issues ahead of the comparative online usability test that would occur the next week. The protocol for these tests was:

Preparation

  • Download application to present prototype on my mobile device
  • Seek out testers amongst my friends and family
  • Gain informed consent using the consent form
  • Get notepad and camera ready to take pictures and jot notes

Usability Script

  • Tell participant the brief and general purpose of the app
  • Give participant phone, and have them give first impressions of the landing screen
  • Have participant go through onboarding
  • Instruct them to set their first goal, either by themselves or versus a friend
  • Instruct participant to pretend that they are in a Best Buy, and to use the app to check the value of the television in front of them
  • Ask participant if they would make the trip or not for the extra value
  • Let participant browse other screens of the app
  • Conduct post-test interview

From these tests, I distilled several more improvements to be made to the prototype, as well as a significant change to test the next week. This involved combining two screens into one to facilitate quicker price comparison, as well as to help the user stay within the “value-score” paradigm.

The comparison test followed much the same script as in-person, although this time, tasks were automated through UserTesting.com.

Alternative A | Alternative B (prototype screens)

Alternative A kept the two screen design, with the addition of an image of Vincent the Centhog “saying” the value score. This was to help construct the association between the personified AI and the value scores that were previously just put on the screen with no real context.

Alternative B instead sought to confine all value comparison to one screen, in order to facilitate a quick comparison, and avoid the cognitive load of having the user keep these scores in their head in addition to dollar value.

Overall, I had hoped that both of these improvements would help add meaning to the evidently confusing value-score system, and minimize confusion at the lack of dollar values.
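To make the paradigm concrete, here is a minimal sketch of how a value score might be computed. The article never defines the actual formula Vincent uses, so the function below, its name, and its weighting are all hypothetical: it assumes the score reflects net savings (after the cost of the extra trip) as a fraction of the local price.

```python
# Hypothetical sketch of a "value score" for a price comparison.
# The post never defines the real formula; this assumes the score blends
# the percentage saved with a penalty for the effort of an extra trip.

def value_score(local_price: float, alt_price: float, trip_cost: float = 0.0) -> int:
    """Return a 0-100 score for buying at the alternative price.

    local_price: price of the item in front of the user
    alt_price:   best price found elsewhere
    trip_cost:   estimated cost (gas, time) of making the extra trip
    """
    savings = local_price - alt_price - trip_cost
    if savings <= 0:
        return 0  # the trip isn't worth it
    # Scale net savings as a fraction of the local price, capped at 100.
    return min(100, round(100 * savings / local_price))

# Example: a $499 TV in-store is $449 elsewhere, but the trip costs ~$10.
print(value_score(499.0, 449.0, trip_cost=10.0))  # → 8
```

A single integer like this is exactly the kind of abstraction the one-screen Alternative B tries to keep in front of the user, sparing them from juggling raw dollar amounts.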

In Conclusion . . .

Comparison testing proved useful for determining that the application could use several more revisions, though users did seem to better grasp the “value score” paradigm. Alternative A, predictably, was more efficient in terms of task time, but Alternative B left more room to add dollar values or other potentially relevant information later.
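Comparing the two alternatives on task time can be done with simple descriptive statistics. The post reports no raw numbers, so the task times below (in seconds) are placeholder values for illustration only, not study data:

```python
# Illustrative only: these task times (seconds) are placeholders,
# not measurements from the actual CentHog tests.
from statistics import mean, stdev

alt_a_times = [32.0, 28.5, 35.0, 30.2, 29.8]  # two-screen design
alt_b_times = [41.5, 38.0, 44.2, 39.7, 42.1]  # single-screen design

def summarize(label: str, times: list[float]) -> None:
    # Print the mean and standard deviation of one condition's times.
    print(f"{label}: mean={mean(times):.1f}s, sd={stdev(times):.1f}s")

summarize("Alternative A", alt_a_times)
summarize("Alternative B", alt_b_times)
```

With samples this small, a mean difference is only a hint; a platform like UserTesting.com mostly earns its keep through the recorded think-aloud sessions rather than the timing numbers.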

One great insight from this test was how different testing at home can be from the intended context of use, and how much that context influences how clear a label or icon is to the user. A contextual, in-person usability test would be the most important next step for development.

Want to try out CentHog? View the latest prototype.
