Part I: How to unify multiple products into a single experience

Cristina Feijó
Published in Feedzai Techblog
Sep 20, 2022 · 14 min read

It’s with a lot of mixed feelings that I’m writing this post: my journey at Feedzai has come to an end. Feedzai has been my home for the past four years, and ohhh what a home! This was the place where I can say I grew the most, not only as a professional, but also as a person, as a human being who needs to deal with and understand other very different human beings.

Feedzai started, as most start-ups do, as an engineering-driven company with a technology-first approach. After a lot of effort from the UX team in promoting our mission within the company (holding workshops and informal gatherings, and helping other teams succeed by volunteering to support them with UX processes), we witnessed the company slowly shift towards being user-centred, a move that fills me with pride. All projects done in our product team now have at least one UX designer involved, and discovery and research are highly valued and considered fundamental tools for projects to be robust and scalable.

A major UX-driven project, a milestone achieved

One of the most important milestones for our team was what we call the RiskOps Studio (or just Studio). This is likely the most challenging UX-driven project ever undertaken at Feedzai.

RiskOps Studio not only makes our product more user-friendly, it also increases our users’ efficiency, changes their way of working, and decreases their costs, a lot of costs. Our goal was to set the baseline for an even brighter future at Feedzai, a future where not only functionality and technology matter, but where our users can see their mental model reflected in the product they use on a daily basis. A product which is reliable, where they can feel confident and supported in their decisions.

To make this happen, our team looked strategically into every single product in Feedzai’s offering; into every use case and persona that uses our products; into their needs; their habits; edge cases and tested, tested, tested. This was no ordinary project, as we would be unifying Feedzai’s multiple products, use cases, and journeys into a single experience with a consistent and user-centred interface.

Matching our users’ mental model

The first step of this new adventure was to understand how the multiple products Feedzai offers would come together and make sense as a whole. We looked deeply into which user journeys each product supported to understand how they could function as a whole in a new information architecture. The goal was to create a blueprint for the RiskOps Studio to scale in a healthy and sustainable way.

Next, we broke down our findings into more granular information, where products would not only link to use cases, but also include context around specific features and functionalities. We had to understand how, when, and why our user personas would go through these journeys, in order to make sure they would continue to be, or would eventually become, supported with the changes we were planning to make.

We collected all the concepts and translated them individually into cards. We also needed to assess which functionalities made sense to keep and which ones we could drop, either because they weren’t being used anymore or because we would strategically change them into a different, better feature.

I estimated that roughly 30% of functionalities either weren’t needed anymore or could evolve into something that would suit our users’ needs in a more global and strategic way. For example, changing the way users write rules (“if, then” conditions) to fight financial crime patterns, making this experience more efficient, less time-consuming, and with a much smaller learning curve. (If you want to learn more about how we redesigned our rules experience at Feedzai, check part II of this blog post.)

While planning and preparing the card sorting exercise our team faced two major challenges:

The first one was that we needed to make some compromises over the granularity of features we planned to show in the cards. UX best practices indicate there should be a maximum of 40–50 cards for each user to organize, especially in open card sorting exercises like the one we primarily planned to hold. Around 50 cards is generally considered the upper limit of information a participant can process and act upon.

This proved to be a real challenge, since in the financial crime industry use cases differ a lot from one another, meaning the cards available to sort would differ between use cases. For example, in a Transaction Fraud Banking use case there is no need to have concepts like Suspicious Activity Reports (SARs), which are reports filed by institutions when there is a suspicion, backed up by data and specific transactions, of possible fraudulent activity like money laundering. On the other hand, in the Anti-Money Laundering (AML) — Transaction Monitoring use case, filling in and filing SARs is one of the main journeys of our users.

Defining cards and running workshops

The second challenge was around naming the cards. Many clients aren’t yet 100% autonomous in the journeys they need to make, so they rely on our customer success teams to guide and support them along the way. This means that many of our users use different language to describe their journeys, language that isn’t directly linked to our product terminology. Describing features in a way everyone would find intuitive and understandable proved to be a real pickle.

Time was of the essence. Leadership had granted me some extra time to dedicate to the project, but they wanted to see results and understand how impactful these changes could be. This meant we didn’t have a lot of bandwidth to test names and improve accordingly.

We decided to reframe the existing terminology into labels that communicated the goal of each feature instead of using the product copy. Anticipating that users might still struggle, we also kept the original name and added a short description to the cards.

Example of card for the sorting

After defining all the concepts that would translate into cards to be sorted into multiple categories, we used FigJam to conduct the workshops. It proved to be a very flexible and surprisingly fun tool for our users. We planned and divided sessions into the following structure:

Card sorting structure diagram

Self-sessions

Before kick-starting the official sessions, the product manager and I decided to do our own card sorting exercise. This would be especially important to document our unbiased view before observing how participants performed and which ideas were explored. The goal was to compare and understand how far or similar our assumptions were to the end results.

Internal sessions with an open card sort

We first conducted internal sessions both with product stakeholders — including product managers, directors, SMEs, and so on — and with internal users, such as data scientists and analysts. These groups contributed to the sessions, respectively, with their wide product knowledge and with their ability to put themselves in the users’ perspective.

Additionally, we asked the participants to mark the cards they interacted with regularly. This allowed us to understand which options should be available for the different user personas/roles when accessing the UI.

Example of how participants marked cards that aligned with their persona needs

We split people into different groups of three, mixing expertises to generate healthy discussions, and conducted an open card sorting workshop. This means users had the flexibility to create their own categories to group cards under, change card names, duplicate them, and even remove them from the sort if they felt those concepts wouldn’t serve the new experience any longer. Our team would guide them through the process and ask for clarification and feedback on why they made their decisions.

Card sorting instructions

Client sessions with a closed card sort

After running a pilot with one of our clients, we quickly understood that an open card sort workshop was too complex for our users to grasp and produce meaningful results. Bear in mind that our clients mostly belong to somewhat conservative and traditional industries, like banking, which usually means they have little, or zero, contact with these types of exercises.

We decided the best approach would be a closed, moderated card sort, whereby users would still have the chance to provide feedback and organize concepts, but would benefit from more guidance: the categories to organize individual cards into would be predefined according to our internal results, and there would still be the option to remove cards they felt were unnecessary to include in the new experience.

After conducting the sessions, we sent clients our internal results and asked them to compare these with their own outcomes and provide feedback on what could be improved.

Client feedback instructions

Breathe in, breathe out, and analyze results

Since we decided to use FigJam to collect insights, we needed to have an efficient way to analyze the results produced, both internal and client outputs. Luckily, we were able to find a template that would automatically compile results into a CSV format, which made it much easier to analyze and document. You can find a copy here.
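
To give a concrete idea of what that analysis can look like, here is a minimal Python sketch that tallies, for each card, how often participants placed it under each category and reports the level of agreement on its most common placement. The file name and column names (participant, card, category) are hypothetical, not the actual fields of the FigJam template export; this is an illustration of the kind of per-card agreement number that later feeds the thresholds described below.

```python
import csv
from collections import defaultdict

# Tally how often each card was placed under each category across participants.
# Column names ("participant", "card", "category") are hypothetical; adjust them
# to whatever the exported CSV actually uses.
placements = defaultdict(lambda: defaultdict(int))
participants_per_card = defaultdict(set)

with open("card_sort_results.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        card, category = row["card"], row["category"]
        placements[card][category] += 1
        participants_per_card[card].add(row["participant"])

# Agreement: share of participants who placed a card under its most common category.
for card, counts in placements.items():
    total = len(participants_per_card[card])
    top_category, top_count = max(counts.items(), key=lambda kv: kv[1])
    agreement = 100 * top_count / total
    print(f"{card}: {top_category} ({agreement:.0f}% agreement)")
```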

One of the main challenges we faced when conducting the analysis was that our product couldn’t be compared to a simple information architecture (IA), like a website’s, for instance. We observed that users needed more than one category level in order to map the available cards, which meant our analysis couldn’t be a 1:1 process; it was closer to a 4:1 analysis. We first needed to understand how common the top-level categories were and how they mapped to the other, lower-level categories.

In the end we would come up with a hierarchy result similar to this:

Information architecture structure diagram

Investigation vs. Strategy

Two main top level areas were identified as primary product areas. After some research and careful evaluation we named them:

  • Investigation: an area where risk analysts conduct their investigations by reviewing alerts and analyzing fraudulent patterns and behaviors.
  • Strategy: a product area where data scientists, analyst managers, and fraud directors define their risk strategy. Rules and models compose the strategy, which is responsible for scanning transactions and deciding whether they should be alerted and manually reviewed by the analysts’ teams.

We then analyzed the lower-level categories’ correlation to the individual cards and were able to create a unique structure. Cards that scored between 67–100% correlation were placed under the respective category without any further discussion, whilst the ones that scored between 33–50% went through a more careful analysis between the product manager and UX.
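
Purely as an illustration, those thresholds could be applied to the per-card agreement numbers along the lines of the sketch below. The in-between and very low ranges are not spelled out in our process, so treating them as case-by-case discussion is an assumption, and the sample cards and percentages are made up.

```python
# Illustrative triage of cards by category agreement, using the thresholds
# mentioned above. Ranges not covered by the text (50-67% and below 33%)
# are treated as case-by-case discussion, which is an assumption.
def triage(agreement_pct: float) -> str:
    if agreement_pct >= 67:
        return "place under top category"   # 67-100%: no further discussion
    if 33 <= agreement_pct <= 50:
        return "review with PM and UX"      # 33-50%: more careful analysis
    return "discuss case by case"           # not specified in the process

# Hypothetical agreement scores, not real study data.
for card, pct in {"Alert queue": 92, "Plans": 40, "Reports": 58}.items():
    print(f"{card}: {triage(pct)}")
```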

Another interesting pattern that emerged was that one of the cards, representing a fundamental core concept in the current product architecture, was removed by participants from all the sortings, leading to a 0% placement. Most groups would convert this card into a higher-level concept representing the client’s use case (e.g., Transaction Monitoring), as shown in the image above.

Additionally, we standardized the naming of top-level categories and improved individual card names by carefully observing where unbiased users, who had little or no contact with the product, struggled most. For instance, a functionality we used to call “Plans”, a label that is very abstract, open to interpretation, and doesn’t really describe the goal of the feature, was renamed to “Data Transformations”, an area where users are able to clean, change, or map their data.

Test and iterate

Concepts were organized and structured into a model that would match how users thought and worked their way through our product journeys. Great! But we still needed validation that our research rested on the right assumptions. We made our hypothesis visual by incorporating the new IA into a low-fidelity interface, an interactive prototype whose main goal was to validate findability and understanding of the new IA.

It was during this phase, where our project was becoming more and more tangible, that the UX team was able to show product leadership how valuable this project would be for Feedzai’s growth. It gained momentum and our team gathered support to continue with this huge transformation, which would completely change how our users perceived Feedzai as a company: one product, one experience, one interface.

Say whaaaaat, 76% success rate!

Since our product, as mentioned previously, has multiple personas, we needed to make sure all of them would find the new IA intuitive and easy to navigate. Additionally, not all personas would have the same options available. For instance, a data scientist persona would need access to an area where they could train a machine learning model, but a risk analyst wouldn’t need to see any of the machine-learning-related components; these would only add to their cognitive load.

Hence why the UX team invested so much time in designing for each one of these individual needs. Additionally, we created a generalist prototype, which was considered a less common use case, but one with the potential to scale and become more needed in the future, once our product achieves a level of automation and learnability in areas that are still complex for a non-data-scientist user to perform. This would be what great looks like!

The new IA needed to be robust, lasting, and scalable. It would serve as a baseline for our team to start improving the key journeys our users take, so validation was REALLY important. We defined participant requirements to make sure they had experience with the components whose findability we wanted to test, and defined the metrics we would like to collect during the sessions:

  • Task/user/overall test success rate
  • Users’ subjective satisfaction (from 1–7)
  • Users’ subjective difficulty perception (problem value metrics: from 1–7)
  • Users’ expectations (value discovery metrics)

We conducted usability tests with 19 users in total: nine risk analysts and ten data scientist user personas. As a team, we defined 60% as our baseline target for success, since users would be interacting with a complex product without any prior contextualization (onboarding tools) or training. Tests were very successful, with an overall success rate of 76.30%. Additionally, users rated the proposed solution 5.46 (on a 7-point rating scale, ranging from “Does not improve at all” to “Drastically improves”) when asked how much it would improve their current workflow.
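
For transparency on how numbers like the overall success rate and the mean satisfaction score can be derived from the sessions, here is a minimal sketch. The records, field names, and values below are illustrative, not the actual study data; the real study had 19 participants.

```python
# Minimal sketch of aggregating usability-test metrics per participant.
# The records below are illustrative placeholders, not the real sessions.
sessions = [
    {"persona": "risk analyst",   "tasks_passed": 7, "tasks_total": 9, "satisfaction": 6},
    {"persona": "data scientist", "tasks_passed": 6, "tasks_total": 9, "satisfaction": 5},
]

passed = sum(s["tasks_passed"] for s in sessions)
total = sum(s["tasks_total"] for s in sessions)
overall_success = 100 * passed / total  # compared against the 60% baseline target

mean_satisfaction = sum(s["satisfaction"] for s in sessions) / len(sessions)

print(f"Overall task success rate: {overall_success:.2f}%")
print(f"Mean satisfaction (1-7):   {mean_satisfaction:.2f}")
```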

We were able to exceed our own expectations, and that felt amazing! Besides that, these results were also important to boost leadership’s confidence in the work we were delivering.

Won’t settle for less than a pixel-perfect design

After incorporating the design changes identified during usability tests, our team started to grow. Isabel Pinto joined the team with her amazing UI skills to really finalize and polish our design, which up until now had not been the focus and needed some urgent love.

Together we planned how to move forward and decided there were multiple components that needed to be reevaluated and designed in detail, for example, a component that enables users to switch between use cases (Fraud, Anti-Money Laundering, Transaction Monitoring, Transaction Screening, or Customer Due Diligence, just to name a few).

We wanted to be thorough and choose a design that would be as easy to understand as possible. Another important goal was to assess which colors users associated most meaningfully with the two different product areas (Investigation vs. Strategy). We were aware that we were dealing with a complex IA, so we decided to design two different approaches and run an A/B test to assess which of the two options scored better. While planning the test, we listed the pros and cons of each solution so we could compare the results to our initial assumptions.

OPTION A:

Option A design

Pros:

  • Allows users to easily recognize which primary and secondary category they’re in.
  • Distinguishes the tertiary category from the other categories and places it on a different vertical level, which makes scanning categories easy.

Cons:

  • Depending on their user journey, users may need up to four clicks.

OPTION B:

Option B design

Pros:

  • Fewer clicks — depending on their user journey, users need at most three clicks. Fewer cursor movements, since all menus sit on the same level.

Cons:

  • Users could spend more time navigating the menu to recognize which level they’re on.

The plan included questions that would make it possible for us to later compare the benefits and struggles users faced while interacting with the prototype, and also to validate their subjective perception and satisfaction when presented with the designs.

Find below some examples of the questions posed:

  • From 1 (does not represent well at all) to 7 (represents very accurately), how well does the color displayed represent the “Strategy/Investigation” concept?
  • From 1 (not helpful at all) to 7 (very helpful), how helpful is color to differentiate between the two different product areas?
  • Overall, from 1 (cumbersome to understand) to 7 (easy and intuitive), how intuitive did you find the proposed navigation?
  • Overall, from 1 (does not match at all) to 7 (matches and fulfills my needs), how well would this navigation match your work process and needs?
  • What did you like in this proposal?
  • What do you think should be improved?

Using elements besides color was also an option, so we decided to collect some data around that assumption too.

After A/B testing with 10 users (data scientists and analysts), results clearly indicated that option A was the better option, with a 71.67% task success rate, compared to option B, which scored about 37% lower in relative terms (a 45% success rate). We decided to use most of navigation A’s design, but included some attributes from navigation B that users considered positive. We believed this would be the best way to ensure our design was rock solid!

Top navigation design end result

Done and done! We had our navigation defined, and the improvements identified during the sessions were added to the mockups. Of course, the whole process was carefully planned and discussed with the engineering team to make sure effort and feasibility were accounted for. I’d like to add that I’m summarizing and purposely leaving some steps out regarding this part of the process, since I was only acting as a supporter. If you’d like more detail on how it flowed, you can contact Isabel Pinto, and I’m sure she would be happy to provide more information.

Product design with new IA incorporated

Product Vision Workshop, what might the future hold?

As a last step, we wanted this new product structure to last, so we dedicated some time to conducting a workshop aimed at solidifying the vision our teams had for our product. The goal was to bring key stakeholders and decision-makers together and jointly understand in which areas they believed our product could be improved:

  • What could be removed?
  • What could be added to increase value?
  • What could be changed or transformed in a way that would better suit our IA?

Product vision workshop canvas

After collecting the participants’ ideas, we distributed votes in order to prioritize the topics for discussion. The top-voted themes were the ones we discussed and brainstormed about first. This exercise turned out to be important for several reasons. First of all, we could understand on which topics multiple people were aligned and where they still needed to work towards alignment. Secondly, it opened people’s minds and helped them reach beyond their day-to-day responsibilities and the scoped features they were delivering. It also helped them see how strong we were when acting as one team, one product, one goal: to serve and to delight our users with the best solutions, while still having fun, growing our product, and constantly improving ourselves.
