Automatically generating goal-progress interfaces from natural language goal statements, using dependency parsing

This is how we’re thinking about using little bits of cleverness to make teams’ lives easier

Sandy Rogers
Saberr Blog
3 min read · Mar 13, 2018


OKRs are a type of measurable goal

OKRs are a popular way for companies and teams to set measurable goals that will naturally lead to progress. The basic idea is that you work to achieve a few (well thought through) measurable Key Results (KRs). By hitting those you’re more likely to achieve your broader Objective (O) than if you just stated the Objective and tried to be busy.

There are lots of OKR tools around to help. These are particularly useful when a whole company is using OKRs across its many different teams. Together, the teams’ OKRs roll up into the company’s OKRs, forming a hierarchy.

At Saberr, we’re building CoachBot, an app that helps teams with things like Goals and collective Purpose and everything else that makes teams work well together. With CoachBot, we’re trying to make it very easy for a team to set some OKR-like goals — even if they’ve never done them before and even if their company isn’t (yet?) using them more broadly.

Goal progress tools can be a pain, sometimes

We were looking for ways to make the process of setting and reviewing OKRs fast and straightforward. Team goals are supposed to be useful, not a burden.

But this is usually how a team has to input their progress on a key result:

Who wants to fiddle with dropdowns?

Key Results are usually a measurable thing — but not always, especially when people are new to them. Often people end up writing tasks, or boolean-type results (“Make the new website”, “Launch the website”).

We can use machine learning to remove some pain

To save people a bit of time, let’s try and automatically work out what kind of Key Result we’re dealing with, and, if it’s numerical, what the units are.

This is a nice task for Dependency Parsing: a kind of natural language processing that predicts the roles and relations of all words in a sentence.

We can do this quite quickly with TensorFlow’s SyntaxNet and its pre-trained models.
Load up the tutorial, type in our Key Result, and we see a parse tree like this:

Dependency parse tree for a key result: “We will have 1000 meetings”

“Meetings” is an object of the sentence, modified by “1000”, which is its nummod, or numeric modifier. So it looks like our job will be quite easy! We just need to look for objects with nummods.
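A minimal sketch of that check might look like this. Rather than calling the parser directly, the parse here is hand-written as (word, dependency label, head index) triples standing in for real parser output, so the function and data are illustrative assumptions, not SyntaxNet’s actual API:

```python
def find_numeric_target(parse):
    """Return (number, object_word) for the first nummod found, else None.

    `parse` is a list of (word, dependency_label, head_index) triples,
    a simplified stand-in for a dependency parser's output.
    """
    for word, dep, head in parse:
        if dep == "nummod":
            # The word a nummod points at is the thing being counted.
            return word, parse[head][0]
    return None

# Hand-written parse of "We will have 1000 meetings":
parse = [
    ("We",       "nsubj",  2),
    ("will",     "aux",    2),
    ("have",     "root",  -1),
    ("1000",     "nummod", 4),
    ("meetings", "dobj",   2),
]
print(find_numeric_target(parse))  # ('1000', 'meetings')
```

With that in hand, “1000” becomes the target value and “meetings” the unit for the progress interface.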

Obviously there are some edge cases and gotchas. A few lines of code are needed to deal with KRs like “one thousand meetings”, and for complex sentences we have to keep track of the most likely true object of the sentence, by checking which candidate attaches most closely to the root of the sentence (“have”).
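Handling spelled-out numbers like “one thousand” just needs a small normalisation step before the nummod search. This is a hedged sketch covering only simple phrases; the word table and function are illustrative, and real KR text would need many more cases:

```python
# Illustrative word-to-value table; a real version would cover more numbers.
NUMBER_WORDS = {
    "one": 1, "two": 2, "three": 3, "ten": 10, "fifty": 50,
    "hundred": 100, "thousand": 1000,
}

def words_to_number(tokens):
    """Convert a simple spelled-out number to an int, e.g. ['one', 'thousand'] -> 1000."""
    total, current = 0, 0
    for t in tokens:
        value = NUMBER_WORDS.get(t.lower())
        if value is None:
            return None  # not a number phrase we recognise
        if value >= 100:
            # Multipliers scale whatever has accumulated so far.
            current = max(current, 1) * value
            total += current
            current = 0
        else:
            current += value
    return total + current

print(words_to_number(["one", "thousand"]))  # 1000
print(words_to_number(["two", "hundred"]))   # 200
```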

Handily, the root of the sentence is usually the verb when the KR is written as a boolean outcome or task.

For example, if you’re in a rush and make a KR of “Ship it”, we’d parse it as:

Dependency parse tree for key result: “Ship it”

That’s handy because the action that finishes this Key Result is the root of the sentence (“ship”).

This means we can usually fall back on the sentence root as the action for non-numeric goals:

Automatically detecting numeric goals, their units and range, or boolean goals
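Putting the two rules together gives a small classifier: a nummod means a numeric goal, otherwise the sentence root is treated as the completing action for a boolean goal. As above, the parse triples are hand-written stand-ins for parser output, and the function is a sketch of the approach rather than CoachBot’s actual code:

```python
def classify_key_result(parse):
    """Classify a parsed KR as ('numeric', value, unit) or ('boolean', action).

    `parse` is a list of (word, dependency_label, head_index) triples,
    a simplified stand-in for a dependency parser's output.
    """
    # Rule 1: an object with a numeric modifier means a measurable goal.
    for word, dep, head in parse:
        if dep == "nummod":
            return ("numeric", word, parse[head][0])
    # Rule 2: otherwise fall back on the sentence root as the action.
    root = next(word for word, dep, head in parse if dep == "root")
    return ("boolean", root)

# "Ship it" parsed as a root verb with an object:
ship_it = [("Ship", "root", -1), ("it", "dobj", 0)]
print(classify_key_result(ship_it))  # ('boolean', 'Ship')

# "We will have 1000 meetings":
meetings = [
    ("We",       "nsubj",  2),
    ("will",     "aux",    2),
    ("have",     "root",  -1),
    ("1000",     "nummod", 4),
    ("meetings", "dobj",   2),
]
print(classify_key_result(meetings))  # ('numeric', '1000', 'meetings')
```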

This is a nice example of how pre-trained models can be used fairly immediately to solve little problems.

A version of this story was also published in Towards Data Science.

Find out more about Saberr at www.saberr.com


Reformed astrophysics researcher, recovering marathon runner, and recalcitrant data wrangler @SaberrUK.