Alexis Yelton
Sep 27

At Indeed we help people get jobs, which means understanding resumes and making them discoverable by the right employers. Understanding massive amounts of text is a tricky problem by itself. With source text as varied as resumes, the problem is even more challenging.

Everyone writes their resume differently, and there are some wild job titles out there. If we want to correctly label resumes for software engineers, we have to consider that developer wizard, java engineer, software engineer, and software eng. may all be the same job title. In fact, there may be thousands of ways to describe a job title in our more than 150 million resumes. Human labeling of all of those resumes — as well as new ones created every day — is an impossible task.

So what is your job, really?

To better understand what a job actually is, we apply a process called normalization to the job title. Normalization is the process of finding synonyms (or equivalence classes) for terms. It allows us to classify resumes in a meaningful way so that employers can find job seekers with relevant experience for their job listings.

For example, if we determine that software engineer and software developer are equivalent titles, then we can show employers searching for software engineers additional resumes with the title software developer. This is particularly useful in regions with fewer resumes for a job title the employer wants to fill.

Normalizing job titles, certifications, company names, etc. also helps us use resume information in machine learning models as features and labels. We want to know if biology on a resume has the same meaning as bio or even a common misspelling like boilogy. If we want to predict whether a job seeker has a nursing license, we have to correctly label resumes with RN and registered nurse.

How do we normalize text?

There are many ways to normalize text. For a quick initial model, we can measure how similar strings are to one another using two common string distance measures: Levenshtein distance over characters (to capture misspellings) and Jaccard distance over words (so we can group cell biology major and cell biology together).

Step 1: Preprocessing

As with most text-related models, we must first clean the text data. This preprocessing step removes punctuation from terms, replaces known acronyms and abbreviations with full names, replaces synonyms with more common variants, and stems the words, e.g., removing suffixes such as ing from verbs.
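As a rough sketch, the cleanup might look like the following Python, where the acronym and synonym dictionaries and the NLTK stemmer are illustrative stand-ins for curated internal mappings:

```python
import re
from nltk.stem.snowball import SnowballStemmer

# Illustrative stand-ins for curated dictionaries of acronyms and synonyms.
ACRONYMS = {"rn": "registered nurse", "eng": "engineer"}
SYNONYMS = {"bio": "biology"}

stemmer = SnowballStemmer("english")

def preprocess(text: str) -> str:
    # Lowercase and strip punctuation.
    text = re.sub(r"[^\w\s]", " ", text.lower())
    tokens = text.split()
    # Expand known acronyms/abbreviations, then swap in common synonyms.
    tokens = [ACRONYMS.get(t, t) for t in tokens]
    tokens = [SYNONYMS.get(t, t) for t in tokens]
    # Stem to strip suffixes such as "ing" and "er".
    return " ".join(stemmer.stem(t) for t in tokens)

print(preprocess("Software Eng."))  # stems to something like "softwar engin"
```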

Step 2: Term frequency

After that, we define a term frequency threshold. If a string falls below this threshold, we do not consider it as a potential normalized value.
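In Python this filter is a single pass over the corpus; the threshold below is a placeholder, not Indeed's actual cutoff:

```python
from collections import Counter

def candidate_values(strings, min_count=1000):
    """Keep only strings frequent enough to be potential normalized values.

    min_count is a placeholder threshold; the real value is tuned per field.
    """
    counts = Counter(strings)
    return {s: c for s, c in counts.items() if c >= min_count}
```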

Step 3: Minhash

Once we remove low-count strings, we have to classify the terms into groups. The most common technique for this kind of grouping involves determining the distance between terms. How different is boilogy from biology?

To prepare, we need to address a computational power problem. We often have millions of unique strings coming from resumes for each field, e.g., for company names. Finding the distances between all pairs of strings is slow and inefficient, since the number of comparisons needed is

n(n − 1) / 2

where n is the number of values. For one million different strings, we would need about 500 billion comparisons. We have to reduce the number of pairwise comparisons to make string distance computation feasible.

To address this challenge, we use locality sensitive hashing. This set of algorithms hashes similar items into the same buckets and can approximate string distance. In particular, the minhash algorithm approximates Jaccard distance, which is 1 minus the Jaccard similarity: the size of the intersection of two sets divided by the size of their union.

Approximating Jaccard distance with minhash is an easy way to measure string distances defined by the words they contain. Using minhash vastly reduces the number of comparisons that we need by only comparing the strings that are in the same minhash bucket.
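The open-source datasketch library implements both pieces. Here is a sketch of how the bucketing might look with word-level minhash signatures; the titles and the 0.3 similarity threshold (i.e., a Jaccard distance of 0.7) are illustrative:

```python
from datasketch import MinHash, MinHashLSH

def minhash_of(text: str, num_perm: int = 128) -> MinHash:
    """Build a minhash signature from the words of a cleaned string."""
    m = MinHash(num_perm=num_perm)
    for word in text.split():
        m.update(word.encode("utf8"))
    return m

titles = ["java develop ii", "java develop", "barista", "night shift janitor"]
signatures = {t: minhash_of(t) for t in titles}

# LSH buckets strings whose estimated Jaccard similarity clears the threshold,
# so only strings sharing a bucket ever get compared pairwise.
lsh = MinHashLSH(threshold=0.3, num_perm=128)  # illustrative threshold
for title, sig in signatures.items():
    lsh.insert(title, sig)

print(lsh.query(signatures["java develop ii"]))
# e.g. ['java develop ii', 'java develop']
```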

Once we carry out minhash and remove a large number of the comparisons we have to make, we calculate a normalized version of the Levenshtein distance to get a character-based distance metric.

Step 4: Levenshtein distance

We then remove pairs with very high Levenshtein distances. Ultimately we are left with groups of pairs that are quite similar, like cell biology and cell biology major.
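A minimal sketch of a normalized Levenshtein distance follows. The exact normalization is not spelled out here; dividing the edit distance by the longer string's length is one common choice that keeps the value in [0, 1], so a single cutoff works for short and long strings alike:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def levenshtein_ratio(a: str, b: str) -> float:
    """Edit distance scaled to [0, 1] by the longer string's length."""
    if not a and not b:
        return 0.0
    return levenshtein(a, b) / max(len(a), len(b))

print(levenshtein_ratio("biology", "boilogy"))  # 2 / 7 ≈ 0.29
```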

Step 5: L2 norm

If similar strings are grouped together, it makes sense to choose the normalized value from within the group. But which one? Which value in a given group should we designate as the standard (normalized) value?

To determine this without outside information such as labels, we look at the frequency of strings in our corpus of resumes. Frequently occurring strings are more likely to be the standard variants.

However, we do not want to rely solely on frequency to choose our normalized value. The most frequent value could be a good standard for most strings in that group, but not all of them. A group could have pairs that contain French, French language, and French language and economics. In this case, we might want to normalize the first two strings together, but not the third.

To address this problem, we create a vector of features for each pair. This vector contains the two distance measures and the weighted inverse of the frequency of the more common term (w/f, where w is the weight and f is the frequency of the term in the corpus). We use an inverse so that the output is lower for higher-frequency strings, which is consistent with string distances being lower when similarity is higher.

We then normalize strings to the term with the lowest vector magnitude (L2 norm) based on those three features. This results in better normalization accuracy as determined by human labelers.
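A sketch of the selection step, assuming each candidate in a group carries its pair distances and corpus count; the function name and default weight are illustrative:

```python
import math

def pick_normalized(candidates, counts, w=50.0):
    """candidates: {string: (jaccard_distance, levenshtein_ratio)} for one group.
    counts: {string: corpus frequency}.
    Returns the string whose feature vector has the smallest L2 norm."""
    def magnitude(s):
        jaccard, lev = candidates[s]
        inv_freq = w / counts[s]  # weighted inverse frequency, w/f
        return math.sqrt(jaccard**2 + lev**2 + inv_freq**2)
    return min(candidates, key=magnitude)
```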

A worked example

Here is how this normalization works in practice. Consider the following job titles, which need to be normalized; each is compared against the first job title, Java developer II:

  • Java developer II
  • Rockstar java developer
  • Developer
  • Barista
  • Night shift janitor

We apply the following steps:

In step 1, during preprocessing, we remove extraneous words such as rockstar and stem the remaining words, removing endings like er.

In step 2 we determine which job titles occur often enough to be potential normalized job titles, based on a threshold of 1,000 occurrences. Rockstar java developer does not make the cut.

In step 3 we use the minhash algorithm to group the titles by Jaccard distance and discard from the group any job titles with a distance greater than 0.7. Barista and Night shift janitor are discarded.

In step 4 we calculate the Levenshtein ratio and discard job titles with a ratio greater than 0.3. Developer is discarded.

And lastly, in step 5 we select the standard value by building, for each remaining title, the vector of the Jaccard distance, the Levenshtein ratio, and w/count, and choosing the title with the shortest vector (the smallest L2 norm). Since this is a group of two strings, the distances are the same and only the count feature differs. Here we use a weight of 50. The vectors are:

  • Java developer II [0.33, 0.15, 0.005]
  • Rockstar java developer [0.33, 0.15, 0.5]

The normalized value becomes Java developer II, since the L2 norm of the first vector is 0.36, less than that of the second vector, 0.62.
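A quick arithmetic check of those two magnitudes:

```python
import math

def l2(v):
    return math.sqrt(sum(x * x for x in v))

print(l2([0.33, 0.15, 0.005]))  # 0.3625... -> 0.36
print(l2([0.33, 0.15, 0.5]))    # 0.6176... -> 0.62
```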

Is this the best way to approach normalization?

Many other techniques can normalize text and take into account distant synonyms by considering context around the terms of interest. In fact, we are currently working on including phrase embeddings in this framework. In the meantime, our current approach works for us by greatly reducing the amount of time needed to come up with a new normalization for any field in structured text. With a little tuning, this model can work well for many of the 28 languages found in Indeed resumes.

This method also works for different types of data sets. It applies to job descriptions and even Indeed Questions, the questions that employers use to screen applicants. Normalization does not remove the need for expert human judgment, but it helps scale the work of those experts across a large international product.

Normalization is the bread and butter of understanding text. It might not be as exciting as text generation or deep learning classifiers, but it is just as important. Normalization helps search engines by finding synonyms. It aids in creating features and labels for machine learning models, and makes analysis of data many times easier. Models like the one described here can speed up the normalization process so we can expand to new countries without years of work. These models can also adapt to new data easily so we can update our normalization to a changing lexicon.

With mathematical models for normalizing text, Indeed can better understand job seekers and employers and adapt to changes, ultimately helping us help people get the jobs they want.


Alexis Yelton is a Data Science manager at Indeed, where she works on understanding and improving resumes.