Controlled Regression: Quantifying the Impact of Course Quality on Learner Retention

This is Part I of our Causal Impact @ Coursera series.

Vinod Bakthavachalam
Coursera Engineering
5 min read · Nov 8, 2018


At Coursera we use data to power strategic decision making, leveraging a variety of causal inference techniques to inform our product and business roadmaps. In this causal inference series, we will show how we use a range of techniques to understand the stories in our data, including the following:

(1) controlled regression

(2) instrumental variables

(3) regression discontinuity

(4) difference in differences

This first post covers an application of controlled regression to measure causal relationships in observational (i.e., non-experimental) data.

Intuitively, we believe that course quality is important at Coursera — for many reasons, ranging from ensuring learners are actually developing valuable skills in the courses they take to helping build Coursera’s brand as a platform for high-quality learning.

But defining and measuring course quality is inherently difficult, and estimating the true causal effect of course quality on learner outcomes is even harder. A/B testing is also off the table, as we would not randomize learners into course versions known ex ante to be of varying quality.

To circumvent this, we defined a proxy for course quality — in particular, the content’s Net Promoter Score (NPS) — and used controlled regression on observational data to estimate the effect of course quality on week-to-week retention. Do learners consuming higher-quality content, all else equal, retain better?

To measure first-week NPS, we ask all enrolled learners who complete the first week of a course to rate on an 11-point scale how likely they are to recommend the course to a friend, with 10 being “Extremely Likely” and 0 being “Not at all Likely.” Generally, NPS is measured as the share of promoters minus the share of detractors. For this analysis, however, we use the raw numerical ratings as our measure of course quality to preserve individual-level enrollment data.
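As a quick illustration of the distinction, here is a minimal Python sketch on made-up ratings: the classic aggregate score nets promoters against detractors, while our regressions keep the raw 0–10 rating for each enrollment.

```python
import pandas as pd

# Made-up first-week ratings, one per enrollment.
ratings = pd.Series([10, 9, 9, 8, 7, 6, 3, 10, 2, 9])

# Classic NPS: share of promoters (9-10) minus share of detractors (0-6),
# typically reported on a -100 to +100 scale.
promoter_share = (ratings >= 9).mean()
detractor_share = (ratings <= 6).mean()
print(f"Aggregate NPS: {100 * (promoter_share - detractor_share):.0f}")

# For the analysis below we instead keep each learner's raw rating,
# so every enrollment contributes an individual observation.
raw_quality = ratings
```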

We chose to use first-week Net Promoter Score (NPS) as opposed to end-of-course Net Promoter Score as a proxy for course quality for a couple reasons. First, it’s far enough through the course that the learner has an informed opinion of the content’s quality. Second, it’s early enough in the course that it’s not subject to too much selection insofar as learners who dislike content drop (and therefore never get to the NPS question).

Below we plot the relationship between a learner’s NPS rating and her retention — both as the proportion of learners who gave each NPS rating (0–10) and started the second week, and as the proportion of learners who gave each NPS rating and completed the second week.
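For concreteness, here is a rough sketch of how such per-rating retention curves and fitted lines could be produced, with synthetic data and hypothetical column names standing in for our actual enrollments table (the completed-week panel would be computed analogously):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Synthetic stand-in for the enrollments table: one row per enrollment
# with a 0-10 first-week rating and a binary week-two outcome.
rng = np.random.default_rng(0)
df = pd.DataFrame({"nps_rating": rng.integers(0, 11, 5_000)})
df["started_week_2"] = (
    rng.random(5_000) < 0.4 + 0.02 * df["nps_rating"]
).astype(int)

# Proportion of learners at each rating who started the second week.
retention = df.groupby("nps_rating")["started_week_2"].mean()

# Overlay a simple least-squares line, analogous to the orange fit.
slope, intercept = np.polyfit(retention.index, retention.values, 1)

plt.scatter(retention.index, retention.values)
plt.plot(retention.index, intercept + slope * retention.index, color="orange")
plt.xlabel("First-week NPS rating")
plt.ylabel("Share starting week 2")
plt.show()
```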

Regardless of whether we look at starting the second week (on the left) or completing it (on the right), the likelihood of a learner retaining increases with her NPS rating, as indicated by the orange regression lines. More precisely, the likelihood of retaining is fairly flat in the range of 0 to 5, increases over the range of 6 to 7, and then flattens out again over the range of 8 to 10. That is, roughly speaking, most of the increase is associated with moving from a detractor (NPS of 6 or below) to a promoter (NPS of 9 or above).

While suggestive, we cannot interpret a causal relationship between course quality and retention from the plot above because of potential confounders. For example, learners might rate courses with more assignments more harshly because of the increased requirements and also be less likely to retain past the first week as the course ramps up in difficulty. Similarly, learners who choose to purchase a course certificate might provide lower ratings than free learners because they paid for the experience, and they may retain at higher rates because they are more committed to completing the material for the certificate.

If these omitted variables are correlated with both first-week NPS and course retention, they may bias our estimate of the relationship with course quality. We need to check whether the estimated effect of course quality on retention is robust to including these learner and course features in a regression model.

To estimate this, we model the likelihood of starting and completing the second week as a function of first-week NPS. We run one regression with just first-week NPS and another with first-week NPS plus controls to capture key potential confounders such as those mentioned above. If adding controls meaningfully increases the R-squared of the regression (the fraction of variance in starting or completing the second week that our model can explain) while the estimated coefficient on first-week NPS remains largely unchanged, we can be more confident that the coefficient we are estimating is the true causal effect of course quality on retention. (See this paper by Emily Oster for more detail on the theory behind controlled regression.)
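A simplified sketch of this comparison, using a linear probability model on synthetic data (the column names are hypothetical, not our actual schema):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the real enrollment table.
rng = np.random.default_rng(42)
n = 10_000
df = pd.DataFrame({
    "nps_rating": rng.integers(0, 11, n),
    "paid_learner": rng.integers(0, 2, n),
    "num_assignments": rng.integers(1, 10, n),
})
p = (0.3 + 0.006 * df["nps_rating"] + 0.1 * df["paid_learner"]
     - 0.01 * df["num_assignments"])
df["started_week_2"] = (rng.random(n) < p).astype(int)

# Linear probability model without and with controls.
base = smf.ols("started_week_2 ~ nps_rating", data=df).fit()
controlled = smf.ols(
    "started_week_2 ~ nps_rating + paid_learner + num_assignments", data=df
).fit()

# Oster-style robustness check: the coefficient on NPS should stay
# roughly stable while R-squared rises as controls are added.
for name, model in [("uncontrolled", base), ("controlled", controlled)]:
    print(f"{name:>12}: coef on NPS = {model.params['nps_rating']:.4f}, "
          f"R^2 = {model.rsquared:.4f}")
```

A linear probability model is used here purely for simplicity; a logistic regression would be a natural alternative for a binary retention outcome.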

Whether we define our outcome of interest as starting or completing the second week, we see a significant relationship between the outcome and first-week NPS, both in the uncontrolled regression and when controlling for key confounders. In both cases the R-squared value increases substantially when controls are added, while the coefficient on first-week NPS remains largely unchanged. However, a large amount of variation is still left unexplained, meaning there are other factors affecting week-to-week retention that our model does not capture, so it remains plausible that some of these are correlated with first-week NPS and bias our causal estimates.

There is also a separate concern of nonresponse bias: we don’t know the ratings of learners who do not answer the NPS question. We attempt to address this by weighting the responses of those who do respond to match the overall demographics of the Coursera learner population. The results with and without weighting for nonresponse are similar, so we report only the unweighted results.
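A rough sketch of this kind of post-stratification reweighting, collapsing demographics to a single hypothetical cell (region) for brevity:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic population and respondent pools whose demographic mixes differ.
rng = np.random.default_rng(7)
population = pd.DataFrame({
    "region": rng.choice(["A", "B", "C"], 50_000, p=[0.5, 0.3, 0.2])
})
respondents = pd.DataFrame({
    "region": rng.choice(["A", "B", "C"], 4_000, p=[0.6, 0.3, 0.1]),
    "nps_rating": rng.integers(0, 11, 4_000),
})
respondents["started_week_2"] = (
    rng.random(4_000) < 0.35 + 0.006 * respondents["nps_rating"]
).astype(int)

# Weight each respondent by (population share / respondent share) of their cell,
# so over-represented cells are down-weighted and vice versa.
pop_share = population["region"].value_counts(normalize=True)
resp_share = respondents["region"].value_counts(normalize=True)
respondents["weight"] = respondents["region"].map(pop_share / resp_share)

# Weighted least squares using the post-stratification weights.
weighted = smf.wls("started_week_2 ~ nps_rating", data=respondents,
                   weights=respondents["weight"]).fit()
print(weighted.params)
```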

Taking our estimates as given, we see that a one-point increase in a learner’s first-week NPS rating would increase their likelihood of both starting and completing the second week of a course by about 0.6 percentage points, all else equal. If we focus on moving learners from mid-range detractors (NPS of ~3) to mid-range promoters (NPS of ~9), which would effectively be moving them from low-quality to high-quality content, this suggests an impact on week-to-week retention of around 3–4 percentage points. Regressing second-week retention on NPS category in an ordinal regression, where the categories are detractor (NPS of 6 or below), neutral (NPS of 7 or 8), and promoter (NPS of 9 or 10), produces similar results.
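For intuition, here is the implied back-of-envelope lift, plus a simplified version of that robustness check (dummy-coded categories via OLS rather than the ordinal model, again on synthetic data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Back-of-envelope: ~0.6 pp per rating point times a six-point move
# (from ~3 to ~9) is roughly 3.6 percentage points of retention.
print(f"Implied lift: {6 * 0.6:.1f} pp")

# Synthetic enrollments (hypothetical schema, as above).
rng = np.random.default_rng(1)
n = 10_000
df = pd.DataFrame({"nps_rating": rng.integers(0, 11, n)})
df["started_week_2"] = (rng.random(n) < 0.3 + 0.006 * df["nps_rating"]).astype(int)

# Bucket ratings into the standard NPS categories
# (detractor <= 6, neutral 7-8, promoter >= 9) and regress on the bucket.
df["nps_category"] = pd.cut(df["nps_rating"], bins=[-1, 6, 8, 10],
                            labels=["detractor", "neutral", "promoter"])
cat_model = smf.ols("started_week_2 ~ C(nps_category)", data=df).fit()
print(cat_model.params)
```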

With stronger suggestive evidence to corroborate our intuition that course quality has a big impact on learner outcomes like retention, we are working to expose learners to more information on course ratings and reviews in new and compelling ways when they visit course pages, helping them figure out whether the content is right for them. We also employ a rigorous beta testing framework before content launches to ensure it meets our high bar for quality.

Interested in Data Science @ Coursera? Check out available roles here.
