From 0 to 0.9

Alexander Markevitch
6 min read · Apr 26, 2019


Here is the story of how we created a new localization experience at Flo.

(Image: Flo Health Inc.)

Flo is more than just a period tracker. It also serves as a health coach for women around the world, helping its users track their periods, follow their pregnancies, and stay healthy throughout their lives.

At this very moment, we feature 22 app interface languages and ten languages for our Health Insights: English, Spanish, German, French, Italian, Portuguese (Brazil), Polish, Japanese, Chinese, and Russian.

When I joined Flo as a localization team lead, I faced a task that seemed daunting, to put it mildly.

New team

First, we had no permanent pool of translators, editors, or proofreaders. So, we decided to build one from scratch by posting ads on ProZ. In the first 24 hours, we received almost 1,500 résumés. Screening them all looked nearly impossible, and testing every translator to select a handful seemed even harder. So, we implemented a new selection process.

We had several criteria for our selection:

  • Owning CAT tools, or work experience that involved using them

As this was an urgent matter, we decided not to spend time explaining to our new translators what a CAT tool is and how to work with it.

  • Rates per word/per hour

Considering that health-related topics are quite sensitive, we couldn’t ignore market rates and expect to pay less while getting more.

  • Relevant work experience, i.e., healthcare or women’s health (ObGyn)

I believe it goes without saying why relevant work experience was essential to us. Health care is a demanding topic that requires the best in-field experience.

  • References

A professional translator should have several references available. It’s better to ask for both direct and indirect references.

When we applied these basic selection criteria and shortlisted several résumés, the next question was how to pick the candidates who best fit our needs. The problem was simple to state: “How do we assess sample translations?” It was especially vexing for languages our in-house team of localizers didn’t cover.

Language is not the same as math. But, at the same time, it is. Even if a translator has tamed grammar and spelling, that doesn’t necessarily mean the translation will be perfect or read like an original text. Translators have to make decisions every single minute, and some of them ARE very difficult.

Unfortunately (or fortunately), no one can win a discussion about style, nor offer any truly objective pro and con arguments. It comes down to “I like it” or “I don’t like it.”

That is why, first of all, we concentrated on grammar and spelling. All sample translations were cross-checked by other translators who had responded to our ads (not for free, mind you; these review tasks have to be paid). That is:

  • The test piece by Translator 1 was sent for review to Translator 2 and Translator 3.
  • The test piece by Translator 2 was sent for review to Translator 3 and Translator 4, etc.

It’s essential to explain to all the reviewers that they should concentrate ONLY on spelling and grammar and leave detailed comments with proof.
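If you’re curious, the rotation itself is trivial to express in code. Here’s a minimal sketch in Python; the names and the function are illustrative only, not our actual tooling:

```python
# Rotating cross-review: each candidate's test piece goes to the
# next N candidates in the list, wrapping around at the end.
def assign_reviews(translators, reviewers_per_piece=2):
    """Map each translator's test piece to the next N translators."""
    assignments = {}
    n = len(translators)
    for i, author in enumerate(translators):
        assignments[author] = [
            translators[(i + k) % n] for k in range(1, reviewers_per_piece + 1)
        ]
    return assignments

pool = ["Translator 1", "Translator 2", "Translator 3", "Translator 4"]
for author, reviewers in assign_reviews(pool).items():
    print(f"{author} -> reviewed by {', '.join(reviewers)}")
# Translator 1 -> reviewed by Translator 2, Translator 3
# Translator 2 -> reviewed by Translator 3, Translator 4
# ... and so on, wrapping around to Translator 1.
```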

So, that’s basically how it works. We ended up with several translators and editors who passed all stages and were recognized as the best ones to work with.

However, even though we trust our selected localizers, we still check random samples of translated texts by sending them to third parties. That way, we can be sure quality doesn’t slip over time.

New glossaries and terminology management process

While selecting the new pool of translators, we moved on to creating glossaries and term bases with the fixed expressions we use within our app. For that purpose, we decided to use a free project: Tilde. It helps collect terms from text corpora and translate them, drawing translations from different sources.

We’ve also tried various open-source projects, such as the Okapi Framework, because they also help automate collecting terms and creating glossaries. However, the biggest problem with tools of this kind is that they sometimes insert non-terms into the list.
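To see why that happens, here’s a toy sketch of frequency-based term extraction. It’s a deliberate oversimplification of what tools like Tilde or Okapi do, and the corpus is made up, but it shows how generic words sneak into the candidate list:

```python
# Naive candidate-term extraction: frequent n-grams minus stopwords.
# High frequency does not guarantee termhood, hence the noise.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "your"}

def candidate_terms(corpus, max_ngram=2, min_freq=2):
    words = re.findall(r"[a-z]+", corpus.lower())
    counts = Counter()
    for n in range(1, max_ngram + 1):
        for i in range(len(words) - n + 1):
            gram = words[i:i + n]
            # Skip n-grams that start or end with a stopword.
            if gram[0] in STOPWORDS or gram[-1] in STOPWORDS:
                continue
            counts[" ".join(gram)] += 1
    return [(t, c) for t, c in counts.most_common() if c >= min_freq]

corpus = (
    "Track your cycle and log your symptoms. "
    "Cycle length varies; luteal phase and follicular phase are parts of the cycle. "
    "Log symptoms daily to track your cycle length."
)
print(candidate_terms(corpus))
```

In the toy output, real terms like “cycle length” sit right next to generic verbs like “log” and “track” — exactly the kind of noise we had to weed out by hand.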

When creating the glossary, we also checked existing translations for consistency and managed to find and eliminate all discrepancies and fear-inducing expressions.

Style guide

While we had rudimentary style guides for different languages, it became evident that we had to re-create them from scratch to define the most crucial points to address.

So, we created a “skeleton” style guide that the localization team then filled in together with our outsourcers. At the moment, it helps us resolve the questions that arise during the localization process.

Translation memories and CAT

In the beginning, there was nothing. And someone said, “Let there be CAT.” And there was CAT. There was still nothing but the right kind of nothing.

To harness the power of modern technology, we decided to switch to CAT tools. The process we used before (translating directly in Word) was good enough at the beginning, but the scope of our work was growing exponentially. The only way to avoid bottlenecks and cut costs was to implement a CAT tool that would save us time and money (10 to 15%, considering that most of our texts are unique, with no prior full or fuzzy matches).
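To make the savings concrete, here’s a back-of-the-envelope calculation with a typical fuzzy-match discount grid. The bands, rates, and word counts below are invented for illustration; they are not our actual agreement:

```python
# Words in each match band are billed at a fraction of the full
# per-word rate. All figures here are illustrative.
RATE_PER_WORD = 0.12  # USD, made up for the example

DISCOUNT_GRID = {
    "no_match": 1.00,
    "fuzzy_75_94": 0.60,
    "fuzzy_95_99": 0.30,
    "exact_or_repetition": 0.10,
}

def job_cost(word_counts):
    return sum(RATE_PER_WORD * DISCOUNT_GRID[band] * words
               for band, words in word_counts.items())

# Mostly unique text, as in our case: only a small share of matches.
job = {"no_match": 1200, "fuzzy_75_94": 150,
       "fuzzy_95_99": 90, "exact_or_repetition": 60}
full = RATE_PER_WORD * sum(job.values())
print(f"without TM: ${full:.2f}, with TM: ${job_cost(job):.2f} "
      f"({1 - job_cost(job) / full:.0%} saved)")
# -> without TM: $180.00, with TM: $158.76 (12% saved)
```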

So, we took the same corpora we had used for the glossaries and created translation memories for different language pairs. The biggest challenge was cleaning them up and correcting misalignments; technology facilitated this process.
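Under the hood, a translation memory is just aligned segment pairs in an exchange format. Here’s a minimal sketch of turning such pairs into a TMX file with nothing but the Python standard library (the segments are illustrative):

```python
# Build a bare-bones TMX 1.4 translation memory from aligned pairs.
import xml.etree.ElementTree as ET

def build_tmx(pairs, src="en", tgt="es"):
    tmx = ET.Element("tmx", version="1.4")
    ET.SubElement(tmx, "header", {
        "creationtool": "demo", "creationtoolversion": "0.1",
        "segtype": "sentence", "o-tmf": "none", "adminlang": "en",
        "srclang": src, "datatype": "plaintext",
    })
    body = ET.SubElement(tmx, "body")
    for source, target in pairs:
        tu = ET.SubElement(body, "tu")  # one translation unit per pair
        for lang, text in ((src, source), (tgt, target)):
            tuv = ET.SubElement(tu, "tuv", {"xml:lang": lang})
            ET.SubElement(tuv, "seg").text = text
    return ET.ElementTree(tmx)

pairs = [("Log your symptoms daily.", "Registra tus síntomas a diario.")]
build_tmx(pairs).write("memory.tmx", encoding="utf-8", xml_declaration=True)
```

The hard part in practice isn’t the format — it’s making sure segment N in the source really corresponds to segment N in the target.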

The selected CAT tool also helps us localize the in-app interface in a totally new way. Earlier, we used Google Sheets, Python scripts, and Git to create, convert, and upload new strings. Our CAT tool has a connector that runs as a cron job to push new strings into the translation project and pull the translations back. This helped us decrease the delivery time from a couple of days to one hour (under extreme conditions; don’t do this to your localizers!).

Thanks to our Growth team and Serge (an open-source tool developed by Evernote), we managed to automate string localization by converting our strings to PO files, sending them to the localization platform, and putting the translations back into the repository.
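The shape of that conversion step looks roughly like this. The sketch below uses the polib library (`pip install polib`) and made-up string IDs; it only mimics what Serge automates for us, it isn’t Serge itself:

```python
# Strings keyed by ID become gettext entries a translator can work on.
import polib

app_strings = {  # would come from the repository in the real pipeline
    "onboarding.title": "Welcome to Flo",
    "cycle.log_button": "Log your symptoms",
}

po = polib.POFile()
po.metadata = {"Content-Type": "text/plain; charset=UTF-8", "Language": "es"}
for key, source in app_strings.items():
    po.append(polib.POEntry(
        msgctxt=key,   # keep the string ID so the round trip is unambiguous
        msgid=source,  # English source text
        msgstr="",     # filled in on the localization platform
    ))
po.save("strings.es.po")
```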

LQA

We also developed an LQA process to ensure the quality of texts we produce.

Even though we checked new strings within the test build of the app along with new content, there was still room for improvement.

We created a table of text expansion and contraction rates as a reference for our designers. This way, we had a reliable source of truth to use when localizing new strings for our app.
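Such a table can be derived from translations you already have. Here’s a sketch of the idea in Python, with made-up sample segments:

```python
# Expansion/contraction as the average target/source length ratio
# over existing translated segment pairs.
def expansion_ratio(pairs):
    """Average len(target) / len(source) over segment pairs."""
    return sum(len(t) / len(s) for s, t in pairs) / len(pairs)

en_de = [
    ("Log your symptoms", "Protokolliere deine Symptome"),
    ("Cycle length", "Zykluslänge"),
]
print(f"EN->DE expansion: x{expansion_ratio(en_de):.2f}")
```

With enough segments per language pair, these ratios tell a designer how much room to reserve for, say, German buttons before a single string is translated.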

We also switched to an offline QA app to check our texts against language standards, such as quotes, spaces, and digits.
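To give a flavor of what such checks look like, here’s a simplified sketch of three of them; a real QA profile is far more extensive:

```python
# Mechanical QA checks: spacing, locale-appropriate quotes, and
# digit consistency between source and target. Simplified examples.
import re

def qa_check(source, target, lang):
    issues = []
    if "  " in target:
        issues.append("double space in target")
    if lang == "de" and '"' in target:
        issues.append("straight quotes; German expects „…“")
    if re.findall(r"\d+", source) != re.findall(r"\d+", target):
        issues.append("digits differ between source and target")
    return issues

print(qa_check("Take 2 pills", 'Nimm  zwei "Pillen"', "de"))
# flags all three issues: the double space, the straight quotes,
# and the digit "2" spelled out as a word in the target
```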

What we achieved so far

The new process has helped us to drastically decrease the delivery time to all teams within Flo, as well as the number of complaints regarding the quality of our localization.

Here are some numbers:

  • A 150-word article: from 10 hours down to 1–1.5 hours;
  • A 1,500-word piece: from 4 days down to 4–6 hours, and so on.

As you can see, these are exceptional results, but we are still working on a new process that will help us create more localized content even faster.

Why “From 0 to 0.9”?

We started at zero and built a whole new localization process, but we haven’t reached 1 yet, as we still have a lot of technical tasks and processes to improve. The big idea, though, was to create a process that doesn’t require deep technical skills to dive into.

Thanks to the team, implementing new processes doesn’t take up a lot of time. And I must say, “Thanks for the super great job and dedication!”

Here, at Flo, we always strive for improvement and perfection.
