USER EXPERIENCE TEST: MOMO MOBILE APPLICATION CASE STUDY

Ozge Zaugg
HubX
9 min read · Nov 11, 2021

User Experience (UX) describes how a user interacts with a product, service, or system and experiences the process as a whole. The term also covers the usability, memorability, efficiency, and user perception of the product.

During the pre-development research phase of Momo, our currently in-development product, we spent most of our time asking, "How do we maximize the user experience?" With this in mind, we conducted a detailed user experience research study. Before getting into the details of the research, let's briefly talk about Momo.

What is Momo?

Momo is a high-tech AI product that lets users edit their photos easily. It is a heavy-lift product in terms of product management, UI/UX, and technical complexity. Technologies such as machine learning, image processing, and artificial intelligence are used extensively in the background. We cooperate with PulpoAR on the technologies used in the project.

The screenshots above were taken directly from Momo. The other images in this article are examples, since our product is not live yet. Actual photos of our user experience testing process are not included for information security reasons.

You can find the details of our product, which we have started to develop right after our user experience design process, on our website.

The Details of the User Experience Research Study:

Our research study consists of 3 main stages and 12 sub-stages in total, as follows:

  1. Test Preparation

a. Competitor analysis and benchmarking

b. Netnography and affinity mapping

c. Deciding on the feature list

d. Creating wireframes containing selected features

e. Creating application color palette and converting wireframes to UI

f. Creating user test scenarios and viable UI prototypes

2. Selecting the Participants to be Tested

a. Creating five different personas

b. Conducting a large-scale survey to find five people for each persona

c. Identifying suitable people and making appointments for tests

3. Application of the Test

a. Conducting user interviews and prototype testing with selected people

b. Analyzing and reporting the results of prototype tests

c. Finalizing the UI with the results

1. TEST PREPARATION

The first step: We started with competitor analysis. In addition, we conducted benchmark studies based on the UIs, feature icons, and feature lists of competitor apps.

Tools: SensorTower, AppAnnie, Miro, Airtable, Figma

Competitor Analysis:

  1. We analyzed 50 different mobile photo-editing applications using SensorTower and AppAnnie. The data reviewed included:
  • In which countries these applications are used,
  • Country-based revenue-per-downloads (RPDs),
  • Initial release dates,
  • Update dates,
  • Day-1, Day-7, and Day-30 retentions,
  • Number of separate and total users on Android/iOS platforms,
  • Advertising budgets used for applications,
  • Total revenues

2. We selected the 30 applications that appeared to have the highest potential and eliminated the remaining 20.

  • During this elimination, we assigned a weight to each of the metrics listed above and computed a weighted score per application.
  • We removed the applications with the lowest scores from the list.

3. We reviewed the selected applications on a module basis. We took screenshots and videos of each module and its submodules.

  • We reported all these captured images on Miro to display them at a single glance with maximum efficiency.

4. We marked the UI samples we liked and disliked and added notes explaining our preferences.

5. Photo-editing applications differ from other applications in that they have many sub-functions and, as a result, a large number of icon types.

  • We carried out a separate study for icons.
  • We reviewed the icons in each application, took screenshots, and scored each application (based on its icons) out of 5.

6. As the final stage of the benchmarking work, we created a list in Airtable for the features in each application.

  • For example: “Able to shoot with the camera — Competitor1 has it”.
  • We created a Sunburst Chart using Figma.
Source: https://dribbble.com/shots/2680022-e-Commerce-Bechmark-Sunburst-Chart#
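The weighted-scoring elimination described in step 2 could be sketched as follows. This is only an illustration: the metric names, weights, and normalization scheme are assumptions, not the actual scoring we used.

```python
# Hypothetical sketch of the weighted scoring used to cut 50 apps down to 30.
# Metric names, weights, and sample values are invented for illustration.

METRIC_WEIGHTS = {
    "day1_retention": 0.3,
    "day30_retention": 0.3,
    "revenue_per_download": 0.25,
    "total_users": 0.15,
}

def normalize(values):
    """Scale a list of raw metric values to the 0..1 range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def weighted_scores(apps):
    """apps: {name: {metric: raw_value}} -> {name: weighted score}."""
    names = list(apps)
    scores = {name: 0.0 for name in names}
    for metric, weight in METRIC_WEIGHTS.items():
        normalized = normalize([apps[n][metric] for n in names])
        for name, norm in zip(names, normalized):
            scores[name] += weight * norm
    return scores

def keep_top(apps, n):
    """Return the n highest-scoring app names, best first."""
    scores = weighted_scores(apps)
    return sorted(scores, key=scores.get, reverse=True)[:n]
```

In practice the weights came out of team discussion rather than any formula; the point is simply that every metric contributes to a single comparable score per application.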

The second step: We conducted netnography and affinity mapping studies.

Tools: Figma

  1. Netnography can be defined as the study of human communities, behaviors, and attitudes in online environments. Within the scope of this study, we applied it by examining the comments made about our competitors in various online environments.
  • We analyzed the App Store and Play Store reviews of the 30 selected applications from the last six months.
  • We analyzed the comments made on these applications' Instagram accounts and blogs.
  • We noted each point that attracted attention during the analysis on post-its (in Figma).
Source: https://www.centercentre.com/blog/page/3/

2. Affinity mapping can be defined as the organization, grouping, and analysis of the ideas and data generated from the research.

  • We grouped all similar notes taken during the netnography study.
  • We named the groups and documented summary notes, highlighting recurring points (frequently complained about or endorsed by different users).
Source: Photo courtesy team Platypus, GA, NYC.
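As a rough illustration of the affinity grouping step (our actual grouping was done by hand on Figma post-its), similar notes could be bucketed by theme keywords. The themes, keywords, and notes here are invented examples:

```python
# Toy sketch of affinity grouping: bucket review notes under the first
# theme whose keyword they contain. Our real grouping was manual work
# in Figma; themes and keywords below are invented examples.

THEMES = {
    "pricing": ["price", "subscription", "expensive"],
    "performance": ["slow", "crash", "lag"],
    "filters": ["filter", "preset", "effect"],
}

def affinity_group(notes):
    """Assign each note to the first matching theme, else 'ungrouped'."""
    groups = {theme: [] for theme in THEMES}
    groups["ungrouped"] = []
    for note in notes:
        lowered = note.lower()
        for theme, keywords in THEMES.items():
            if any(keyword in lowered for keyword in keywords):
                groups[theme].append(note)
                break
        else:
            groups["ungrouped"].append(note)
    return groups
```

A keyword match is far cruder than human judgment, of course; this only conveys the idea of collapsing hundreds of notes into named groups.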

The third step: We decided on the feature list.

Tools: SensorTower and Notion

When deciding:

  1. Using SensorTower, we checked which features in these products were released in which versions and whether a release caused a notable change in revenue or download rate.
  2. We checked which features in these applications had been advertised and, through SensorTower, how long the advertising campaigns remained active. Assuming that a feature whose ads stay active for a long time has a high user conversion rate, we added such features to the list. We also noted features whose ads ran only briefly before being pulled, and we observed and reported the quality of the ads.
  3. We combined and reported these two pieces of information.
  4. The summary notes taken in the first step (competitor analysis and benchmarking) and the second step (netnography and affinity mapping) were also used as input for the feature list.
  5. Finally, we finalized the list and created a candidate "feature list".
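The ad-duration heuristic from step 2 could be sketched like this. The threshold, feature names, and day counts are hypothetical assumptions, not values taken from SensorTower:

```python
# Hypothetical sketch of the ad-duration heuristic: a feature whose ads
# stayed active longer than a threshold is assumed to convert well and
# is kept on the candidate feature list. All values are invented.

AD_ACTIVE_THRESHOLD_DAYS = 30  # assumed cutoff, not a SensorTower figure

def select_features(ad_durations):
    """ad_durations: {feature: days the ad stayed active}.
    Returns (kept, dropped) lists of feature names."""
    kept = [f for f, days in ad_durations.items()
            if days >= AD_ACTIVE_THRESHOLD_DAYS]
    dropped = [f for f in ad_durations if f not in kept]
    return kept, dropped
```

The dropped list is still worth keeping, as noted above: ads that were pulled quickly are a signal in their own right.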

The fourth step: We created wireframes containing selected features.

Tools: Figma

  1. We documented the pages and flows within the applications, considering which of the features in the final feature list would be on the same pages and which would be on different pages. We detailed not only the happy path but also the error cases.
  2. We started the wireframe design of the pages for these flows. We used no coloring during wireframing.
  3. We added links between pages using the Prototype menu in Figma (marking which page will open when a specific button is clicked).
Source: https://www.lapa.ninja/freebies/wireframes-mobile-free-ui-kit-design-for-figma/

The fifth step: We created the application color palette and converted the wireframes to UI.

Tools: Figma

  1. We transformed the wireframes finalized in the previous step into UI designs using the selected color palette.
  2. We studied two to three different versions of UI designs for menus. We prepared alternative pages for user testing.

The sixth step: We prepared user test scenarios and created a UI prototype suitable for these scenarios.

Tools: Notion, Figma, Figma Mirror

During the user test, a prototype of the application would run on a phone we provided to the user, and we would ask them to complete a set of tasks on this prototype.

  1. We created a task list for the users who would participate in the test.
  • The UX researcher used this list to guide users through the test.
  • A total of 20 tasks were added to this list.
  • Examples of these tasks are:

i) Open the camera menu in the app. Flip the front camera, set a 3-second timer, and take your photo.

ii) Thicken the eyebrows of the person in the photo you uploaded.

2. We created separate prototypes for each task.

3. After installing Figma Mirror on the phones to be used during the test, we checked the prototypes. The screen selected in Figma was mirrored to the phone via Figma Mirror.

4. We fixed the deficiencies encountered during these checks and turned the prototypes into a seamless flow.

2. SELECTING THE PARTICIPANTS TO BE TESTED

The first step: We created five different personas.

Tools: Miro

  1. In user-centered design, a persona can be defined as a fictional character created to represent a group of users who use a product in a similar way.
  • We created our personas to represent our target customer base.
  • In total, we created five different personas, one for each target audience.
  • We documented the details in Miro.

The second step: We conducted a large-scale survey to find five people from each persona.

Tools: Google Forms

  1. To select five people for each persona correctly, we created specific survey questions to determine whether a respondent would fit the target audience.
  2. After ensuring that the documented survey questions cover all personas, we moved the questions to Google Forms.
  3. We shared the links of the finalized form on our social media accounts, ensuring the survey reached a broad audience.
  4. The questionnaire was closed to new entries once a sufficient number of responses had been collected.

The third step: We selected eligible people and made appointments with them.

Tools: Airtable

  1. We exported the form results to Excel and selected suitable people for each persona using appropriate filters.
  2. We contacted the selected people via their preferred communication channel (e-mail or telephone) and scheduled 30-minute interview slots.
  3. User candidates were given the following information beforehand:
  • Since the test would be conducted on a mobile device and the product was not yet live, the tests would be carried out face-to-face in our office,
  • The test was intended to evaluate the product, not the user,
  • Audio and video recording would be taken during the test,
  • Details of the reward to be given to the user at the end of the test.
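The filtering in step 1 above, selecting suitable people per persona from the exported results, could be sketched with simple predicate filters. The respondent fields and persona criteria below are invented for illustration:

```python
# Toy sketch of matching survey respondents to personas with predicate
# filters. Field names, persona names, and criteria are invented examples.

PERSONA_FILTERS = {
    "selfie_enthusiast": lambda r: r["age"] <= 30 and r["edits_per_week"] >= 5,
    "casual_editor": lambda r: r["edits_per_week"] < 5,
}

def match_persona(respondents, persona, limit=5):
    """Return up to `limit` respondents matching the persona's filter."""
    predicate = PERSONA_FILTERS[persona]
    return [r for r in respondents if predicate(r)][:limit]
```

In our case the same filtering happened through spreadsheet filters rather than code, with the cap of five people per persona applied by hand.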

3. PERFORMING THE TEST AND REPORTING THE RESULTS

The first step: We conducted prototype testing on selected people.

Tools: Figma, Figma Mirror

  1. We reminded users who visited our company for testing of the information previously provided.
  2. We turned on audio recordings and two cameras to see the user’s face and where they clicked/tried to click in the mobile application.
  3. We started the test with the first task.
  4. All tasks were completed in order, with the completion time of each task noted.
  5. We asked each user about their usage habits, thoughts about competitors, additional notes, and suggestions, and documented their answers.
  6. We presented the end-of-test reward to the user, and the process continued with the next user.
  7. The same procedure was followed for all 25 users.

The second step: We analyzed and reported the results of prototype tests.

Tools: Notion

  1. We transcribed the audio recordings of the interviews.
  2. By watching the video recordings and reviewing the notes, we analyzed the steps that users had difficulty completing, as well as the steps they passed quickly even though they were expected to struggle. This analysis was supported by the recorded completion times of the tasks.
  3. The test results were grouped by persona, and we determined in which modules specific personas had difficulties.
  4. We evaluated the determined results together with the UI, UX, and product management team.
  5. The necessary changes were documented and planned.
  6. Some examples of these changes:
  • Relocating menus that most users struggled to find and making them more noticeable,
  • Reordering menus within the menu tree,
  • Selecting the most liked one among the alternative color themes shown to the users.
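The completion-time analysis in steps 2 and 3 could be sketched as grouping recorded task times by persona and module, then flagging averages that exceed expectations. All names, times, and the threshold factor are illustrative assumptions:

```python
# Hypothetical sketch of the completion-time analysis: average task
# times per (persona, module) pair and flag pairs whose average exceeds
# an expected time by a margin. All numbers are invented.

from collections import defaultdict

def difficult_modules(records, expected, factor=1.5):
    """records: [(persona, module, seconds)], expected: {module: seconds}.
    Returns {(persona, module): avg_seconds} for pairs whose average
    exceeds factor * expected[module]."""
    times = defaultdict(list)
    for persona, module, seconds in records:
        times[(persona, module)].append(seconds)
    flagged = {}
    for (persona, module), values in times.items():
        avg = sum(values) / len(values)
        if avg > factor * expected[module]:
            flagged[(persona, module)] = avg
    return flagged
```

A per-persona breakdown like this is what let us say which modules troubled which personas, rather than only which modules were slow overall.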

The third step: We finalized the UI with the results.

Tools: Figma, Zoom

  1. We determined the changes to be made in the UI.
  2. We reached out to the users interviewed earlier, showed them the new screens remotely (by screen sharing via Zoom), and collected and documented their opinions. In addition, the UI/UX and product management teams gathered again and examined the final version of the application.
  3. At this step, the product's UX and UI design process was completed, and the product was ready for development.

With its UX and UI design finalized, we started developing Momo in 2-week Scrum sprints. The first version will go live in the first weeks of 2022. Stay tuned!
