The Trust Exercise: a UX case study

Research on website credibility, and the UX strategies used to improve a tool built to help achieve it.

Kevin Ciputra
13 min read · May 16, 2022

Why I designed this

During my time here at General Assembly, I was fortunate enough to take part in my first client project with StackGo and its founders. The project I worked on focused on their close-to-launch product SitePerfect, a post-publish site spellchecking tool. Our team developed LandPerfect™ — a suite of present and future recommendations to better convey the value of the software to SitePerfect’s intended users.

*My main mission was to help users better understand the value of the client’s platform. Here is the client’s main website: SitePerfect

Click here to jump straight to the interactive prototype!

Project Details

  • Collaborators: Chris Piper (Team Lead), Holly Milling (Strategy Lead), Yasha (Ideation/Testing Lead)
  • Duration: 18 days.
  • Tools: Trello, Google Suite, Miro, Figma, and lots and lots of paper.

What I did

  • Research Lead
  • User Interviews & Task Analysis
  • Preliminary & Secondary Research
  • Persona Creation
  • User Flow
  • Team Design Studio
  • Wireframing
  • Prototyping
  • A/B Testing
  • Recommendations

The Opportunity

StackGo’s business is primarily in building tools that assist and integrate with a marketplace of platforms, including software such as Xero, Shopify, and Atlassian, to name a few. The client’s value proposition sits in the enterprise market, where an ecosystem of these ‘tools’ is available seamlessly while the user works within the partner software.

SitePerfect was one such tool, set to launch at the end of April 2022 to a combined partner customer base of 7 million. At launch it was a standalone, post-publish, backend website spellchecker, with integration into the partner platforms planned for the future.

  • StackGo’s need: to identify issues on and improve the website’s current usability for SitePerfect’s launch into the market.
  • Users’ need (users being those who would subscribe to SitePerfect): to communicate the value of their own products/services and increase customer conversions on their websites.

The Recommendation

My objective here was to develop LandPerfect™: an end-to-end design direction for StackGo to enhance their users’ landing and value-introduction experience. The solution pack was divided into five (5) MVP enhancements, namely:

  1. Revamped front-page tutorial. An easily digestible and high-quality front-page tutorial to convince potential users to try SitePerfect.
  2. Instant basic results. Users are able to have a sample report generated without having to sign up for an account or provide payment details.
  3. Three-column pricing panels. Self-explanatory, based on industry best practices and examples of successful brand adoption.
  4. Time frame estimate. Presenting how long the report will take to be generated helps alleviate users’ confusion.
  5. Report progress bar. A progress bar helps signpost which stage of the preparation the report is at.

An additional four (4) higher-effort additions were also highlighted for StackGo’s future considerations.

My Design Process

I found that this client project required a solution for an existing, low-fidelity product draft. Hence, I utilised the Double Diamond framework and adjusted it to best suit the client’s business goals, as follows:

Step 1: Research

1.0.* Reconciling Two Needs
In order to meet both SitePerfect’s audience need and business need, we first had to understand the client’s own perspective on the product, as well as who they perceived as potential users.

Thus, even before conducting some of our own research, we decided to explore what information had already been established.

  • Stakeholder Workshop
    We conducted an interactive workshop where the client expounded on the industry SitePerfect was entering, the challenges within it, and their relevant goals. We also fleshed out what they had in mind when talking about customers and competitors, as well as the key metrics they were after.
Trails of our workshop board 04/04/2022

1.1. Testing The Hypotheses
Taking existing insights from the client, I conducted initial research on two main areas: (A) spellchecker market/mechanics, and (B) how websites are perceived as credible.

(A) Since the product by nature deals with the English language, I launched into an investigation of what a lack of good spelling and grammar does to business websites. Two notable studies are summarised here:

Furthermore, I discovered a myriad of texts all pointing to three major findings, namely:

  1. There is a powerful and direct correlation between spelling mistakes and customer conversion rates.
  2. Grammar errors are somewhat correlated to a loss in brand image.
  3. Searchability of a website was positively correlated to good language, whereas user retention was indirectly linked to grammar.

(B) A significant aspect of SitePerfect’s value proposition is that its services help establish its users’ websites as credible. Key attributes were:

1.2. Learning From The Best

The team researched our two top competitors and comparables in the market, sketched their task flows, and noted their stand-out features.

We parked these learnings to later identify where we could improve SitePerfect’s value offering.

1.3. Piloting Surveys
Following on from this, I recognised the need to define our final set of interviewees. I circulated a multi-step survey to confirm our team’s assumptions about who our main target audiences were, including their common behaviours and tools for checking site content and language.

Key takeaways:

  • Most respondents were small business owners!
  • 27.3% of users update their site content monthly.
  • 68.8% rely on existing tools such as Grammarly.

1.4. Stories of Aussie Webnauts
At this stage we finalised our research plan and put forward two main target groups: startups and Australians who regularly update their website content. We utilised current StackGo users, social media groups, LinkedIn connections, and a sprinkle of personal connections to find our 15 target users.

Our interviews covered topics such as how users managed their site content, their processes and tools, and the personal needs voiced by the community of site editors and writers. We also conducted task analyses in tandem with a few of the interviews and shelved the insights on Miro for later (see 2.1).

Key takeaways:

  • Users showed clear patterns in how they prioritised brand, language, and site functionality in their work.

Step 2: Synthesis

2.1. The User Experience
By looking at the results of our task analyses and the emotional experiences they revealed, we mapped users’ overall comments across everything they went through, from the Landing stage all the way to Report Generation.

We compiled all insights per stage and summarised the following:

  • Landing page: poor accessibility in colours and carousel, confusion in product offering, no option to access pricing.
  • Sign up page: unclear signposting, unclear features, overall lack of trust and motivation to sign up.
  • Selecting a plan: confused about differentiation between plans, surprised by the requirement to pay at this point.
  • Dashboard/Report: confusion on report generation timeframe.

2.2. Discovering The Trinity
From our user interviews we gathered hundreds of insights, including what users thought about their current spellchecking processes, preferences, and frustrations with those processes. We mapped these under behavioural commonalities and formed empathy maps of the following roles: marcomms, engineering, and startup owners.

From there we extended the archetypes to three main personas:

  • Sam the Start-Up Owner
  • Jasmine the Marketing Lead
  • Barry the Software Developer

We also discovered from our research that some of the roles and perspectives of the three personas could intersect. They are not all mutually exclusive; there would be times when, for example, Sam would wear Jasmine’s or Barry’s hat during the early stages of his startup’s growth.

Across all three personas, we noted that these site managers valued credibility, design quality, accessibility, transparency, and user reviews.

Furthermore, we noted the greatest impact could be achieved through none other than Sam the Start-Up Owner, who represented the largest user base for SitePerfect.

2.3. The Main Unmet Need
The insights showed that Sam required tools reliable enough to maintain his website. His utmost priority was supporting his customers, and having a website that could guide and understand them would help improve customer engagement, trust, and ultimately conversion.

Step 3: Testing

3.0.* Pre-Testing Ideation Explosion
As a team we held a studio day, which helped us craft How Might We (HMW) statements relating to communicating and improving product value, increasing sales, and maintaining trust, tone, and branding.

Our efforts garnered over one hundred sketches. These were then ranked, culled, and prioritised on the Impact/Effort Matrix below:

From the top three features, we first decided to recreate the entire user journey in order to understand which points in the journey were the most urgent to improve first.

We then created low-fidelity screens and flows to A/B test with users the features sitting in the high-impact/low-effort quadrant; a rough sketch of how that kind of prioritisation can be expressed follows below.
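To make the prioritisation concrete, here is a minimal sketch of how such an Impact/Effort ranking could be expressed, assuming hypothetical 1–5 impact and effort scores assigned during the studio; the feature names and scores below are illustrative only, not our actual matrix.

```typescript
// A studio idea scored 1-5 for impact and effort (illustrative values only).
interface Idea {
  name: string;
  impact: number; // 1 (low) to 5 (high)
  effort: number; // 1 (low) to 5 (high)
}

type Quadrant = 'quick win' | 'big bet' | 'fill-in' | 'money pit';

// Classify an idea into an Impact/Effort quadrant.
function quadrant({ impact, effort }: Idea): Quadrant {
  if (impact >= 3) return effort <= 3 ? 'quick win' : 'big bet';
  return effort <= 3 ? 'fill-in' : 'money pit';
}

// Illustrative subset only; the real board held over one hundred sketches.
const ideas: Idea[] = [
  { name: 'Instant basic results', impact: 5, effort: 2 },
  { name: 'Revamped front-page tutorial', impact: 4, effort: 2 },
  { name: 'Tone analyser', impact: 4, effort: 5 },
];

// Surface the high-impact / low-effort quadrant first, best candidates on top.
const quickWins = ideas
  .filter((idea) => quadrant(idea) === 'quick win')
  .sort((a, b) => b.impact - a.impact || a.effort - b.effort);

console.log(quickWins.map((idea) => idea.name));
```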

3.1. Test 1 — Report Generation Process
In this round of testing we designed all the low effort features within the user journey flow, from landing to report generation.

The purpose was to test whether users preferred flow A (entering a website URL upfront as part of the sign-up process) versus flow B (signing up first, as in SitePerfect’s existing model but streamlined).

We also included a limited sample report and the revamped pricing panels in each case to find out how users perceived value in both flows.

To compare the flows A.1 and B.1, follow this link and select the correspondingly titled flows.

Key Findings:

  • 66% of users preferred flow A because they could understand the site premise at a glance.
  • Users who chose flow B thought it seemed less ‘dodgy’ and was less of a ‘bait-and-switch’.

3.2. Test 2 — Website Tutorial
In the second round of testing, I realised from our research that much of SitePerfect’s value proposition still had not been communicated clearly enough for users to decide to subscribe. Returning to our research on credibility and bounce rates, our team moved on to investigating best practices for demonstrating how SitePerfect works.

As a result, we developed a second set of flows based on the two leading industry standards for product tutorials: carousel and walkthrough styles.

Style #1: Carousel

This style took SitePerfect’s initial design and improved the usability of its flashing images by turning them into a slow-moving carousel. The new tutorial also included annotated images, giving users the option to click through to the relevant info.

Style #2: Walkthrough

Basically, a scroll-down version that lets users learn how the product works at their own leisure.

To compare the Carousel versus the Walkthrough Test, follow this link and select the correspondingly titled flows.

Key findings:

  • Users found that the carousel style now effectively educated them about SitePerfect, but only once it was pointed out! They scrolled straight past the carousel without really registering it at the top of the visual hierarchy.
  • All users took their time reading the walkthrough. It was the preferred option, being more digestible at their own pace.

We also noted the walkthrough-style tutorial still had consistency issues, so we iterated on the visuals until they were well received by users.

Step 4: Recommendations

4.0.* We Have A Liftoff

On the morning of 27 April 2022, the four of us were on our best behaviour. We pulled out our formal gear, then rehashed and rehearsed our presentation on Google Slides (which we crafted in the client’s medal blue).

At 10am the StackGo team and founder arrived and we launched into our in-depth presentation on our solutions for SitePerfect.

The clients asked us relevant questions, such as how we arrived at the effort side of the MVP matrix. Others fell into the recommendations category; for example, recommendation 4.2 raised a ‘bait-and-switch’ concern, and the client asked for further ideas on how to address it.

Our team answered all of their questions based on existing research and testing, and the client members were very satisfied with the answers and took them on board. It was a success! You can find our recommendation points from 4.1 onwards.

4.1. Revamped Front-Page Tutorial

“ A well structured, easily digestible, and high quality front page tutorial to convince potential users to try the product.” — Team StackGo

  • The purpose was to guide and handhold users through the steps required for them to receive their first report.
  • Clarity was important: the tutorial had to show how the report would look visually and be transparent about the value it adds to the user’s spellchecking process.

*Key Metric: Landing page CVR
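For context, landing-page CVR refers to the conversion rate of the landing page: the share of visitors who go on to take the target action (for SitePerfect, something like requesting a report or signing up). A minimal sketch of how the metric might be computed, with hypothetical field names:

```typescript
// Hypothetical analytics counts for the landing page over a reporting period.
interface LandingPageStats {
  uniqueVisitors: number; // visitors who landed on the page
  conversions: number;    // visitors who completed the target action (e.g. requested a report)
}

// Landing-page CVR = conversions / unique visitors, expressed as a percentage.
function landingPageCvr({ uniqueVisitors, conversions }: LandingPageStats): number {
  if (uniqueVisitors === 0) return 0;
  return (conversions / uniqueVisitors) * 100;
}

// Example: 42 sign-ups from 1,200 visitors ≈ 3.5% CVR.
console.log(landingPageCvr({ uniqueVisitors: 1200, conversions: 42 }).toFixed(1));
```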

4.2. Instant Basic Results

“ Users are able to have a report generated without having to sign up for an account or provide payment details. They then receive a sample report of their chosen website which provides a few free errors, with the remaining errors being locked behind a pay-wall.” — Team StackGo

  • The results provide clarity on the “errors to be fixed” on the user’s website.
  • In testing, showing only 2 out of 45 detected errors felt like a ‘bait-and-switch’. Further testing on the ideal ratio of errors detected vs. shown is the next step for StackGo.
  • Alternatively, a full example report of a model site could be shown.

*Key Metric: Landing page CVR
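As a rough illustration of the gating mechanic described in this recommendation (not StackGo’s actual implementation), a sample report could expose only the first few detected errors and surface how many remain locked; the types and the default free-error count below are assumptions:

```typescript
// Hypothetical shape of a detected spelling/grammar error.
interface DetectedError {
  page: string;
  snippet: string;
  suggestion: string;
}

interface SampleReport {
  visibleErrors: DetectedError[]; // shown for free on the landing page
  lockedCount: number;            // remaining errors behind the paywall
  totalCount: number;
}

// Build a teaser report: show a handful of errors, lock the rest.
// The right freeErrorCount (or ratio) is exactly what StackGo still needs to test.
function buildSampleReport(errors: DetectedError[], freeErrorCount = 5): SampleReport {
  return {
    visibleErrors: errors.slice(0, freeErrorCount),
    lockedCount: Math.max(errors.length - freeErrorCount, 0),
    totalCount: errors.length,
  };
}
```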

4.3. Three-Column Payment Plan Details

“It's digestible, clear and scannable: but most importantly it doesn’t differ from what users would expect in terms of pricing layouts.” — Team StackGo

  • Labels are simple, with a clear hierarchy for comparison.
  • The differences between the pricing options are not yet clear enough (as of 27/04/2022). Better copy tailored to the characteristics of each plan will require additional testing and iteration.

*Key Metric: Landing page CVR

4.4. A Bonus Recommendation

From the report-flow testing, users were shown to become uncertain about the product if report generation had no timeframe and/or progress status. Here we recommended two complementary additions: a conservative timeframe estimate and a progress bar, located on the dashboard right after the potential user requests the report.

*Key Metric: Net Promoter Score (NPS)
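Since NPS is the key metric here, a quick refresher: it is derived from the standard 0–10 “how likely are you to recommend us?” question as the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6). A minimal sketch:

```typescript
// Compute Net Promoter Score from 0-10 survey responses.
// Promoters score 9-10, detractors score 0-6; passives (7-8) only dilute the percentages.
function netPromoterScore(scores: number[]): number {
  if (scores.length === 0) return 0;
  const promoters = scores.filter((s) => s >= 9).length;
  const detractors = scores.filter((s) => s <= 6).length;
  return Math.round(((promoters - detractors) / scores.length) * 100);
}

// Example: [10, 9, 8, 6, 10, 4] -> 50% promoters - 33% detractors ≈ 17.
console.log(netPromoterScore([10, 9, 8, 6, 10, 4]));
```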

4.5. Higher-Effort Additions

Over the next few days, we endeavoured to finalise our recommendation report for StackGo. Simply bundling our entire collection of research, synthesis, and ideation materials with the written solutions would already have gone above and beyond the client’s goals. However, we wanted to highlight that SitePerfect could have an even brighter future in its next stages of development.

One way to do this was by highlighting long-term concepts that never made it into our solutions but emerged from our user research. We asked the question, “what will happen after they let go of our thrusters?” These became part of our next steps, so that StackGo could start thinking about the technical feasibility of the features listed, how they might be implemented and tested, and how they would add significant value to users’ current needs.

Key additions to LandPerfect™:

  1. Tone analyser
  2. Established brand identity
  3. Dictionary packs
  4. SEO tool

*Further details on the rationale behind these additions can be found by striking up a conversation with me or my team!

Key Learnings

Being part of this project with StackGo both taught and validated my real-world UX/UI and design skills. These included, but were not limited to:

  • Holding a design workshop
  • Leading research efforts
  • Working with clients
  • The end-to-end design thinking process
  • And to aspire to always improve myself and my designs, just one percent more than yesterday.

Thank you for your time.

Photo by Jeff Kingma on Unsplash

Bonus — Disaster Struck!

During this engagement, I was struck with my first COVID symptoms. However, my previous case studies reminded me of two blasts from the past:

“When the going gets tough, the tough get going.”

“Some problems were bigger than what UX/UI can offer…”

Therefore, I decided to press on toward the finish line. But I understood that I was still human, and so took the following week off to recuperate.

PS. Look forward to my next case installment!
