Mobile onboarding evolution: Part 2. Contentful era

Published in Flo Health UK · Sep 3, 2024
By Sasha Zinchuk, Product Manager, and Aliaksei Talankou, Software Engineer

In the first part of the series, we described the problems of app-embedded onboarding and shared our experience with transferring it to the server. Although we had solved the main problem of slow onboarding updates, their creation remained complex and time-consuming.

To democratize experimentation in onboarding flows and other in-app surveys, we identified transitioning from Git JSONs to a codeless online flow editor as the most impactful next step.

Choosing a new onboarding editor

Third-party turnkey solutions

To enhance the onboarding creation experience and enable a no/low-code approach, our team initially explored third-party options, such as Onboarding.online, Pendo, and Chameleon.

Pros: Intuitive visual flow and screen editors.

Cons: Not every platform is supported, there are additional integration costs and new-vendor onboarding efforts, and we would pay extra for a paywall builder and an analytics system Flo already had in-house.

Figma or Miro plug-ins

We also considered creating Miro or Figma plug-ins to turn diagrams into executable flows. These tools are perfect for online collaboration and displaying large graphs. Still, they are not the best onboarding source because they don’t have built-in concepts of environments, localizable fields, backups, content types, publish statuses, and validation.

Keeping all onboardings on one Miro board can be messy

That doesn’t mean you cannot use Miro or Figma as your onboarding editor — of course, they are viable options. You just need to adapt your processes and build custom validation and export tools, which can end up costly and still fragile.

Contentful CMS

After comparing all the options, we chose Contentful CMS for storing and editing Flo onboardings because of these advantages:

  • Localization support and integration with Phrase, Smartcat, and other TMSs
  • Automatic asset management and distribution through CDN
  • Most Flo services already stored content in Contentful
  • Ability to enhance functionality by writing custom apps (UI and backend plug-ins)

Here is a high-level overview of the Contentful-based solution our team came up with.

Main idea: minimize time from creation to delivery and reduce errors

The right part — “Using survey” — was already in place after migration to server-driven onboarding, so we needed to build the left part — “Creating survey” — by setting up the Contentful model, building a flow editor, and finally importing existing onboardings into Contentful.

The onboarding creation flow was planned to have the following steps:

  1. Set up an experiment using our internal A/B testing tool, Unified Experiment Service.
  2. Create a survey in Contentful and include it in the A/B test.
  3. Send a survey for localization and check translations using the Localization Matrix app.
  4. Publish to production: Survey Engine service reads and validates surveys stored in S3.

Delivery to the end user remained the same: the Flo app downloads server onboardings during the initial animation. This article focuses on survey creation and the new tooling around it because all other systems were already in place.

Creating a data model in Contentful

How do you start creating onboardings in Contentful? First, of course, describe their structure and relations, and only then create actual content.

We started by setting up a data model compatible with onboarding JSON configurations from the first article. The Survey became an aggregated content type that consisted of multiple Transitions. Each Transition connects two Steps and may have a Condition.

“Lost in transitions”

The first model version was a little complicated, and later you’ll see its weak points. For now, let’s dive deeper into the main elements of the Survey.

Step types

The most challenging task was to decide what to do with the 25+ onboarding step types inherited from previous app-embedded onboardings. Our goal was to give users rich editing forms where they could control every aspect of the onboarding steps. To achieve this, we analyzed step usage and layout change frequency, which led us to split the step types into two groups: the most popular and all others.

Our team created separate content types for the three most popular step types, allowing detailed customization with built-in validation.

Media and Feature Card could have been one content type

For all other screens with practically unchangeable designs — such as calendars, login pages, symptom pickers, and weight/height entries — we reserved a Generic Step with a JSON field to house their contents.

These screens don’t change for years, until stakeholders come up with a bright new hypothesis

We also agreed to use this Generic Step for all new use cases because creating a content type for each new design was a dead end and led to maintenance hell. The goal was to eventually create a flexible unified step consisting of reusable building blocks: buttons, texts, and images.

Translation management systems cannot operate effectively with raw JSON because it’s hard to precisely distinguish metainformation from localizable text. To resolve this problem and explicitly mark localizable content, we introduced Text and Asset Placeholders. Placeholder keys are wrapped in the ${} construction in JSON, for example, ${title_text}. The Survey Engine backend replaces placeholder keys with their values after content publishing.
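
Here is a minimal TypeScript sketch of the substitution idea (all names and shapes are illustrative, not Flo’s actual implementation):

type Placeholders = Record<string, string>;

function resolvePlaceholders(rawStepJson: string, values: Placeholders): string {
  // Replace every ${key} occurrence with its localized value;
  // unknown keys are left as-is so validation can flag them later.
  return rawStepJson.replace(/\$\{(\w+)\}/g, (match, key) =>
    key in values ? values[key] : match,
  );
}

// Example: a Generic Step body with a localizable title.
const step = '{"type":"generic","title":"${title_text}"}';
console.log(resolvePlaceholders(step, { title_text: "Welcome to Flo" }));
// -> {"type":"generic","title":"Welcome to Flo"}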

Generic Step with placeholders and static preview in Contentful

Editing raw JSON can be challenging and error-prone, so we enhanced the editor with validation and auto-completion based on a JSON schema. This approach allowed us to move faster and postpone UX improvements until the migration to Contentful was done.
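
As a sketch of what such editor-side validation might look like (assuming a much simplified schema; Ajv is one common JSON-schema validator):

import Ajv from "ajv";

// A simplified schema for a hypothetical Generic Step body.
const genericStepSchema = {
  type: "object",
  required: ["type", "title"],
  properties: {
    type: { type: "string" },
    title: { type: "string" }, // may contain a ${...} placeholder
  },
};

const ajv = new Ajv();
const validateStep = ajv.compile(genericStepSchema);

const stepJson = '{"type":"generic","title":"${title_text}"}';
if (!validateStep(JSON.parse(stepJson))) {
  console.error(validateStep.errors); // surfaced to the editor as inline hints
}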

After setting up step types, it was time to gather them into an executable flow (survey).

Survey types

We also enriched the initial onboarding model and distinguished three types of Surveys:

  • Root Survey: an aggregated survey available for downloading from the server
  • Sub Survey: a reusable group of steps inside a Root survey, which hides the cognitive complexity of long onboardings by expanding/collapsing its content
Root survey with three Sub Surveys
  • Experimental Survey: a survey branch intended for a particular user segment and connected to any place in the Root survey flow. The backend service attaches an Experiment to its Root before sending the whole survey, but only if the current user meets all required criteria (for example, age > 18 and membership in the test group of some experiment).
Root survey with two Experimental surveys attached

Such an experiment concept allowed us to do isolated parallel editing without merge conflicts and to avoid content duplication. When an experiment fails (approximately 9 cases out of 10), LiveOps just archives it without any changes to the Root survey. If an experiment succeeds, its branch is simply merged into the Root.
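
In code, the attachment logic could look roughly like this (a hedged sketch with hypothetical types; the real service operates on the full transition graph):

interface UserContext {
  age: number;
  experimentGroups: Record<string, string>;
}

interface ExperimentalSurvey {
  attachAfterStepId: string; // where the branch plugs into the Root flow
  matches: (user: UserContext) => boolean;
  transitions: object[];
}

function assembleSurvey(
  rootTransitions: object[],
  experiments: ExperimentalSurvey[],
  user: UserContext,
): object[] {
  // Attach only the branches whose criteria the current user meets.
  const applicable = experiments.filter((exp) => exp.matches(user));
  // The real merge rewires transitions at each attachment point;
  // concatenation here is just for brevity.
  return [...rootTransitions, ...applicable.flatMap((exp) => exp.transitions)];
}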

All these survey types were not just our guesses — they were based on actual use cases. Onboarding product managers combined similar workflows and branches in Miro while designing experiments. We tried to keep the same experience but in Contentful so that these diagrams could be created, validated, localized, and published to mobile devices from one source.

Our team drew surveys on a whiteboard during the concept brainstorming and dreamed of having something like this for actual Contentful surveys…

Adding flow visualization using a custom app

The standard Contentful list interface would be sufficient if Flo’s onboarding had a linear flow without any branching logic based on user answers. But even with a single fork, the list of transitions becomes hard for a human to read.

Which visualization looks clearer?

In reality, our onboarding has a lot of forks: user goal, age, experiment, system version, and so on. Displaying such a complex onboarding as a list is not usable at all. So, we added a new type of visualization that Contentful doesn’t offer out of the box — a custom Survey Graph application.

Graph view is understandable without any prior training

We identified the main functions of the app and added them one by one:

  • Visualizing entries as a graph with steps, transitions, and conditions
  • Creating steps or reusing existing ones
  • Searching steps and conditions (a lifesaver for huge graphs)
  • Expanding/collapsing of sub-surveys
  • Previewing all experiment branches connected to a root survey
  • Survey validation (previously done only on the backend after publishing)
  • Merging experiment to its root

At first, the Graph App was just a read-only graph visualizer built with the React Flow library; from there, we kept adding capabilities and custom views to speed up survey creation and reduce the number of human errors.
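
For a flavor of that starting point, here is a minimal read-only graph in React Flow (v11 API; the node and edge data are illustrative, not Flo’s actual model):

import ReactFlow, { Background } from "reactflow";
import "reactflow/dist/style.css";

const nodes = [
  { id: "goal", position: { x: 0, y: 0 }, data: { label: "Goal question" } },
  { id: "age", position: { x: 0, y: 120 }, data: { label: "Age question" } },
];
const edges = [
  { id: "goal-age", source: "goal", target: "age", label: "goal = 'track_cycle'" },
];

export function SurveyGraph() {
  return (
    <div style={{ height: 400 }}>
      <ReactFlow nodes={nodes} edges={edges} fitView>
        <Background />
      </ReactFlow>
    </div>
  );
}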

The visual tool, equipped with drag-and-drop functionality, enabled LiveOps (citizen developers, content ops) to implement onboarding flows autonomously. Developers could finally concentrate on innovative user experiences.

Survey selection mechanism

With a visual workflow creation tool, our users could create many surveys for onboarding and other app features. But how could we distinguish onboardings and show them only to relevant users?

Flo teams use a special filter to match the right piece of content with a particular group of users. This filter works on SQL-like expressions whose final result can be either true or false. Our LiveOps set these expressions to choose the audience they want to reach with a specific survey. This approach doesn’t require any programming knowledge, just a basic understanding of how SQL WHERE statements work, and is widely used within the company.

platform = 'iOS' AND app_version >= 9.5.1
AND lang IN ('de', 'es', 'fr')
AND (my_experiment = 'test1' OR my_experiment = 'test2')

Moreover, experiments utilize the same logic but have one more filtering layer — they should be attached to the Root survey.

Thus, each user may receive a unique, tailored onboarding!

Surveys for different audiences

New testing tools and process

So our users had a straightforward mechanism to pick the relevant onboardings for different user segments, and in production these onboardings don’t interfere with each other — some target the Android Spanish locale, others iOS Vietnamese. But what about the Test environment? There are always draft experiments or fixes in progress. How can a QA or LiveOps person test a particular survey without interfering with other team members’ experiments? The solution is a special onboarding debug menu.

Launching any onboarding from the debug menu

This menu significantly reduced onboarding testing efforts by:

  • Testing a specific survey or an experiment without matching its filter conditions (like a cheat code in a game)
  • Logging out of the current user account and resetting all app data (to emulate a fresh app install)
  • Teleporting to any survey step (start on step #100 and save tons of clicks and nerves)
  • Displaying client errors in the UI (fewer requests from LiveOps to engineers)

However, users were still typing in all the debug parameters, which could get very long. So, we added the possibility of opening a survey via a deeplink.
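
For illustration, such a deeplink might look like this (the scheme and parameters below are hypothetical, not Flo’s actual format):

// Encodes the survey ID and debug options so QA can jump straight
// into a specific flow, for example, from a QR code.
const debugLink = "floapp://debug/survey?id=ob_main_v2&step=42&resetUser=true";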

Scan & go!

Migration from Bitbucket to Contentful

Finally, after designing the Contentful data model, building the flow editor, and introducing new testing tools, it was time to migrate existing onboardings from repository-based JSONs to Contentful. Our final goal was to use Contentful as the only source for all onboardings, with LiveOps as our target users.

Migration

To quickly transfer the existing mobile onboardings from Git to Contentful, we decided to automate the process with a script built on the rich Contentful Management API. The script collects JSON configurations from the iOS and Android app repositories, transforms them into the Contentful entry format, and saves and publishes them. Our team had to run, test, and fix it many times, and as a result, it saved us many days of repetitive manual work.
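
The gist of the script, sketched with the Contentful Management API JavaScript SDK (the content type and field names are illustrative, not Flo’s actual model):

import contentfulManagement from "contentful-management";

async function importStep(stepJson: { id: string; title: string }) {
  const client = contentfulManagement.createClient({
    accessToken: process.env.CMA_TOKEN!,
  });
  const space = await client.getSpace(process.env.SPACE_ID!);
  const environment = await space.getEnvironment("master");

  // Create and publish an entry of a hypothetical "genericStep" content type.
  // Field values are keyed by locale, as the Management API requires.
  const entry = await environment.createEntry("genericStep", {
    fields: {
      stepId: { "en-US": stepJson.id },
      body: { "en-US": stepJson },
    },
  });
  await entry.publish();
}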

Because the source of configurations shifted from the mobile clients to Contentful, we had to ensure that the Flo application always has a local onboarding backup to use when there is no internet connection (up to 10% of all app installs!). Therefore, the platform team set up a pipeline for the mobile repositories that copies Contentful onboarding JSONs and assets from the backend into the app bundle.
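
A minimal sketch of that pipeline step (the URL and file path are hypothetical): a CI job downloads the latest compiled surveys and commits them into the app’s resources as the offline fallback.

import { writeFile } from "node:fs/promises";

async function refreshBundledSurveys() {
  const response = await fetch("https://api.example.com/surveys/compiled");
  if (!response.ok) throw new Error(`Backup fetch failed: ${response.status}`);
  await writeFile("app/src/main/assets/onboarding_backup.json", await response.text());
}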

Can a backup be broken? No, our QAs test it offline before each release

After completing all manual and automated checks, we started migrating onboardings platform by platform, tier by tier. Each onboarding flow was rolled out under an experiment in which analysts compared the tech and business metrics of Git-based onboarding vs. Contentful-based onboarding. Next, it was time to analyze the first results.

Facing the results

Unlike most product experiments that aim for positive change, migration experiments succeed when analysts can’t prove a significant difference in any of the metrics. This means there’s no statistically significant impact on the measured aspects after the migration. Here are some examples of conducted experiments.

If an experiment fails, just run it again, just to be sure

One of the experiments failed because it showed −3.74% in subscription conversion, which was statistically significant! We had no idea why it happened because all manual and automatic checks indicated that all user groups got the same content on their devices. There were only two hypotheses left:

  • Either the 5% significance level let a false positive through (a random coincidence).
  • Some other teams used onboarding step IDs (which were modified in Contentful to guarantee uniqueness) in their experiments.

There were no active experiments that could use onboarding step IDs, so our analyst proposed rerunning the experiment on 20% of users and, to be on the safe side, holding it for more than two weeks.

A picture you can hear

And it helped — the new experiment had no statistically significant differences in metrics!

What is the conclusion in this case? Trust your experiment data, but always remember that A/B testing operates only with probabilities.

Hope to see the same chart after Flo goes public!

All migrations, from the first server-driven onboarding experiment to the last Contentful-as-a-source one, took us around seven months.

The biggest challenge was not to interrupt the ongoing onboarding releases and experiments, as onboarding is the biggest revenue channel.

Monitoring enhancements

Our internal users immediately noticed the fragility of Contentful-based surveys: carelessly unpublishing a transition or an asset could break the whole survey. Although the backend doesn’t replace valid content with an invalid updated version, a simple service restart could still break content serving. LiveOps needed to see the current state of all surveys in real time, so we built an advanced Grafana dashboard showing the different survey states. Guess which colors mean what?

The on-call engineer’s life

Having nice green squares and end-user traffic for surveys in place was half the deal, but our operations team wanted to react quickly whenever there was a risk of not serving onboardings to users, so we configured alerts that fire when surveys suddenly break.

LiveOops, I did it again…

Later, our engineers added one more layer of safety: after each publish from Contentful, surveys are saved to S3, together with a built-in rollback mechanism.
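
A minimal sketch of the publish step, assuming a versioned S3 bucket (names are hypothetical): with versioning on, a rollback means restoring the previous object version.

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

async function publishSurvey(surveyId: string, compiledJson: string) {
  // Each publish overwrites the object; S3 versioning keeps the history,
  // so a rollback is just promoting an earlier version of the same key.
  await s3.send(
    new PutObjectCommand({
      Bucket: "onboarding-surveys", // hypothetical bucket name
      Key: `surveys/${surveyId}.json`,
      Body: compiledJson,
      ContentType: "application/json",
    }),
  );
}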

Lessons Learned

Our team successfully avoided incidents or downtimes during and after migration to Contentful. However, not all our decisions were optimal. Let’s review the most interesting issues and their resolutions.

Separating the Graph structure from the content

Our decision to store graph transitions and conditions as separate content types turned out to be ineffective because huge production surveys immediately hit Contentful API limits:

  • The References view became practically unusable.
  • The Localization connector refused to export a large number of linked entries with deep nesting.
  • The export from Contentful to S3 was also unstable due to the number of entries and the excessive bundle size.
A survey with 1516 references doesn’t want to display all its structure

To address this issue, we modified the data model by relocating Transitions and Conditions from separate content types to a new JSON field inside the Survey.
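
The consolidated shape looks roughly like this (a sketch with illustrative field names): the Survey keeps links to Step entries, while the whole graph structure lives in one JSON field.

interface SurveyGraphField {
  transitions: Array<{
    from: string; // source Step entry ID
    to: string; // target Step entry ID
    condition?: string; // optional, e.g. "goal = 'track_cycle'"
  }>;
}

// Before: one Contentful entry per Transition and per Condition
// (thousands of linked entries for a big survey).
// After: a single Survey entry with step links plus one JSON field
// matching SurveyGraphField, so there are far fewer entries to fetch and export.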

Fewer content types, fewer problems

As a result, we achieved faster survey data exporting, boosted the Graph App’s performance, bypassed API limits for the localization connector, and enhanced the user experience when editing transitions.

Do not use Contentful purely as a relational database — separate content from its metadata.

Nuances with remote images

Before migration, all onboarding images were stored within the app and instantly displayed at each step. The new approach involves storing the images in Contentful and distributing them via our CDN. Upon receiving the onboarding configuration, the client app initiates the background download of remote images and animations as the user progresses through the onboarding process.

The first impression can be spoiled quite easily

Remote assets at the very start of the flow may have low download success rates (as low as 50% in some cases). However, this issue can easily be mitigated by placing them after a couple of textual questions. There’s also a wide range of optimizations available here (see the sketch after this list):

  • Prioritizing the assets downloading queue
  • Embedding the first images into the app build
  • Showing placeholder images or low-resolution copies
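
Here’s a sketch of the first option, a prioritized download queue (hypothetical API; a real client would persist to a disk cache):

async function prefetchAssets(
  assets: Array<{ url: string; step: number }>,
  currentStep: number,
) {
  // Download assets for the nearest upcoming steps first.
  const queue = [...assets].sort(
    (a, b) => Math.abs(a.step - currentStep) - Math.abs(b.step - currentStep),
  );
  for (const asset of queue) {
    try {
      await fetch(asset.url); // warms the HTTP cache in this sketch
    } catch {
      // Ignore failures; the step renderer falls back to a placeholder image.
    }
  }
}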

Find the balance between local and remote assets, as not all users have good internet.

Static Step Preview is not enough

In the beginning, users uploaded step screenshots from Figma or real devices, which was an effective and cheap way to improve flow readability. However, everyone on the team quickly realized that only Live Preview can guarantee good UX and development speed through live content updates and Inspector mode.

No need to publish and test on a real device

In the long term, there is a plan to integrate Contentful Studio or develop a separate WYSIWYG Step Editor with a drag-and-drop element builder, conditional fields, etc. But this is another story for a separate article.

Immediate content preview is crucial for time to market and quality.

Conclusion

It took us a couple of sprints to teach the onboarding team to use the new tools and to adapt their testing, localization, and experimentation processes. We received tons of helpful feedback and started polishing our solution to reduce human errors and automate common use cases.

The onboarding product team has significantly increased its autonomy, with QAs and LiveOps now independently resolving 80% of issues rather than relying solely on the platform team. Communication and collaboration have improved as the team effectively uses the Survey Graph app, reducing the need for direct involvement of the backend team except in critical cases.

Our gains so far:

  • One LiveOps person can now do the work that previously required two mobile engineers.
  • Concurrent experiments: 45 now (and constantly growing) vs. 10 before.
  • Time to detect an error in an experiment: 15 minutes vs. one day before.

👐 P.S.

Did you know you can clap up to 50 times on each Medium article or comment?

Waiting for your feedback and questions!
