Cooperative Intelligence II: The Humans Behind Clara

Emily Pitts
From the Desk of Clara
5 min read · Jul 11, 2016

Clara is an intelligent, human-like agent that schedules meetings for people, saving them hours a week of complicated and tedious email back-and-forth. To achieve this, Clara is powered by a hybrid of human and machine that we like to call Cooperative Intelligence: humans alone don’t scale, and machine learning alone can’t provide a highly reliable service, but together they give Clara both scale and reliability.

On the human side of our cooperative intelligence, Clara Labs moderates a sophisticated knowledge work distribution platform. Here’s what we’ve learned in the process of scaling and supporting the community of human Clara Remote Assistants (CRAs) behind the platform:

Building for a Distributed Workforce Is Hard

Humans make mistakes
Scheduling for others is hard work — just consider all of the things you need to keep track of: Who needs to be on the invite? Is someone on the thread coordinating for a third party? Where will the meeting take place? Does the customer’s schedule allow for travel time on that day? While a remote assistant that handles two or three clients might be able to keep all of this straight, it doesn’t scale for a CRA that might support thousands of different customers and perform hundreds of scheduling tasks each day. There’s too much room for error.
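To make that concrete, here is a rough sketch (in Python, with made-up field names rather than Clara’s actual data model) of the state a single scheduling thread can require a CRA to track:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative only -- not Clara's actual data model.
@dataclass
class SchedulingThreadState:
    customer_id: str
    participants: List[str]                  # everyone who needs to be on the invite
    coordinating_for: Optional[str] = None   # set when someone on the thread schedules for a third party
    location: Optional[str] = None           # where the meeting will take place
    travel_buffer_minutes: int = 0           # padding the customer's calendar needs that day
    proposed_times: List[str] = field(default_factory=list)
    confirmed_time: Optional[str] = None
```

Multiply that by hundreds of concurrent threads and the case for smaller, simpler units of work makes itself.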

In order to operate at this larger scale with minimal error, we’ve broken the scheduling work down into simple tasks, keeping a couple of principles in mind:

  • Minimize decision fatigue. Instead of asking a CRA to answer “What should Clara do?” we ask them to identify “What did the participant say?” and let our back-end code decide what Clara should do based on the answer and the customer’s preferences. Asking simple identification questions with a limited answer space minimizes the likelihood of mistakes and makes the work easier to complete quickly.
  • Build feedback loops into the workflow. We try to present work in a way that either a human or a computer could complete it. As the previous point suggests, ML is bad at answering nuanced questions but, like a human, good at identifying and labeling things in text. Because the work is structured this way, the feedback loop is built right into the workflow. Every time a human completes a task, the ML gets more confident about completing the same task in the future. As the ML automates away more and more complex work, the human’s job becomes simpler and simpler. (A rough sketch of both principles follows this list.)
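As a rough sketch of how these two principles fit together (using assumed names and thresholds, not Clara’s internal API), a single task might look like this:

```python
from typing import Dict, List, Tuple

# Hypothetical sketch: a constrained identification question plus a
# confidence-gated feedback loop. All names and numbers are assumptions.

ANSWER_SPACE = ["accepted_time", "proposed_new_time", "declined", "asked_question"]
CONFIDENCE_THRESHOLD = 0.9                     # assumed value
training_examples: List[Tuple[str, str]] = []  # human answers become training data

def classify_reply(email_text: str) -> Tuple[str, float]:
    """Stand-in for the ML model: returns (label, confidence)."""
    return "accepted_time", 0.42               # pretend the model is unsure

def ask_cra(email_text: str, answer_space: List[str]) -> str:
    """Stand-in for routing the question to a human CRA."""
    return "proposed_new_time"

def decide_next_action(label: str, prefs: Dict[str, str]) -> str:
    """Back-end code, not the worker, decides what Clara does next."""
    return {
        "accepted_time": "send_invite",
        "proposed_new_time": "check_calendar_and_reply",
        "declined": "notify_customer",
        "asked_question": "escalate_to_customer",
    }[label]

def complete_task(email_text: str, prefs: Dict[str, str]) -> str:
    label, confidence = classify_reply(email_text)
    if confidence < CONFIDENCE_THRESHOLD:
        label = ask_cra(email_text, ANSWER_SPACE)      # human answers the simple question
        training_examples.append((email_text, label))  # the feedback loop, built right in
    return decide_next_action(label, prefs)
```

The CRA only ever answers “What did the participant say?”; everything after that is deterministic.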

Finding the right incentive structure is key
Before we had clearly defined tasks in our system, when emails were answered mostly by hand, CRAs were paid at an hourly rate. When we moved to a per-task payment scheme, we saw a 70% increase in efficiency across the system within weeks. This was shocking at first, but the reason is clear: CRAs weren’t doing a bad job before; we just hadn’t built a system in which they needed to work quickly to succeed.


Once you’ve incentivized speed, you also have to incentivize correctness, because the two may be in direct conflict (the fastest way to “work” is by “completing” tasks as quickly as possible but not necessarily correctly). At Clara, we do this by using accuracy — the percentage of tasks completed without mistakes — as a gateway to higher earning potential. To calculate accuracy, we use an anonymous peer mistake-reporting system with the goal of helping CRAs learn from their own and others’ mistakes (thereby increasing Clara’s overall accuracy). Clara uses a micro-task system to enable peer-reporting: every scheduling thread is broken down into smaller tasks (usually corresponding to an email sent from Clara) that are each handled by a different CRA. This means that previous tasks are checked by every CRA that completes a task later on in the same thread.
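As a hedged illustration (the formula and thresholds here are invented, not Clara’s actual numbers), an accuracy gate can be computed directly from peer reports and used to unlock higher per-task rates:

```python
# Illustrative only: example thresholds, not Clara's real pay tiers.

def accuracy(tasks_completed: int, mistakes_reported: int) -> float:
    """Share of tasks completed without a peer-reported mistake."""
    if tasks_completed == 0:
        return 0.0
    return 1.0 - mistakes_reported / tasks_completed

def pay_tier(acc: float) -> str:
    # Accuracy gates the per-task rate, so pure speed never pays off.
    if acc >= 0.99:
        return "top_rate"
    if acc >= 0.95:
        return "standard_rate"
    return "training_rate"

print(pay_tier(accuracy(tasks_completed=400, mistakes_reported=6)))  # standard_rate
```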

There are many alternate schemes to consider here for building efficiency and accuracy into the system, like using redundancy and controlling for worker quality (Ipeirotis et al. 2012) or assigning higher value to more complex tasks (Shah et al. 2014).
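For instance, a quality-weighted majority vote in the spirit of Ipeirotis et al. might look roughly like this (worker weights and labels invented for illustration):

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def weighted_vote(answers: List[Tuple[str, str]], quality: Dict[str, float]) -> str:
    """answers: (worker_id, label) pairs; quality: worker_id -> weight in [0, 1]."""
    scores: Dict[str, float] = defaultdict(float)
    for worker_id, label in answers:
        scores[label] += quality.get(worker_id, 0.5)  # unknown workers get a neutral weight
    return max(scores, key=scores.get)

answers = [("cra_1", "accepted_time"), ("cra_2", "declined"), ("cra_3", "accepted_time")]
quality = {"cra_1": 0.98, "cra_2": 0.80, "cra_3": 0.92}
print(weighted_vote(answers, quality))  # accepted_time
```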

Knowledge work distribution is complicated because humans are complicated
Because every human has a different set of experiences, talents, and work styles, distributing work in a way that minimizes the unpredictability of the system is essential. With this in mind, we’ve decided to source our CRAs directly and build the tools needed to allocate and complete their work in-house. No existing distributed work platform or outsourced solution allows us to support the real-time, consistent quality of service our customers expect from Clara. Our custom-built system allows for the following:

  • We are selective about who we work with. Every CRA goes through a rigorous screening and certification process to make sure they excel at this kind of work. While over time we expect that our platform will allow a broader range of people to be CRAs, working with experienced CRAs early on lets us iterate on new features quickly without sacrificing quality.
  • We constantly experiment with the way we distribute and incentivize work. We can match a CRA’s skills to the type of work they handle and make sure we have the right balance of resources to match incoming volume at all times (a rough sketch of this kind of routing follows this list). Because it’s all baked into a system we control, we can build and ship changes fast.
  • We start supporting features early, even before we’ve codified them, because we know and trust our CRAs well enough to do quality work even when it isn’t perfectly defined.
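Here is a rough sketch of that kind of skill- and load-aware routing (assumed names and illustrative data, not our production scheduler):

```python
from typing import Dict, List

def route(task_type: str,
          online: List[str],
          certifications: Dict[str, List[str]],
          open_tasks: Dict[str, int]) -> str:
    """Pick the least-loaded online CRA certified for this task type."""
    eligible = [cra for cra in online if task_type in certifications.get(cra, [])]
    if not eligible:
        raise RuntimeError(f"no certified CRA online for {task_type}")
    return min(eligible, key=lambda cra: open_tasks.get(cra, 0))

certifications = {"cra_1": ["labeling", "calendar"], "cra_2": ["labeling"]}
open_tasks = {"cra_1": 7, "cra_2": 2}
print(route("labeling", ["cra_1", "cra_2"], certifications, open_tasks))  # cra_2
```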

Building for a Distributed Workforce Is Really Great

We can move fast
One of the unique things about Clara is that we build for two users: our paying customers and our CRA community. Because we pay CRAs to use the tools we build, we have more freedom to experiment than we would with a customer-facing app. We can run A/B tests on the internal tool’s interface and iterate on new features quickly without worrying about major downtime for our paying customers. And if we get something wrong, we hear about it fast, because we always have a direct line of communication open with the community.

We get to provide opportunities for an amazing community
Clara’s CRAs are stay-at-home parents, students finishing degrees, and others for whom the opportunity to do flexible work remotely is incredibly important. Alongside hearing success stories from our customers, hearing success stories from our CRAs and building this community is the most rewarding part of working at Clara.

They understand scheduling
Clara’s CRAs really understand the problem we are trying to solve. They come from assistant-like backgrounds and they know that scheduling is complicated. Having this experienced community provides us with an excellent feedback loop into what is or isn’t working with the product. They see customers’ pain points and help us come up with ideas to fix them.

And, of course, having humans in the loop means that Clara gets it right more often than it would if it were all-computer: two minds are always better than one.

Clara Is Growing

We’re excited to grow our platform and team. Find out more about open roles here: https://jobs.lever.co/claralabs

Learn more about our CEO Maran Nelson here: https://www.forbes.com/sites/clareoconnor/2017/04/18/clara-labs-wants-to-save-your-from-your-inbox-with-cyborg-assistants/

And read about our recent $7M Series A announcement here: https://techcrunch.com/2017/07/19/claraseriesa/
