Piloting Self-Serve Analytics as a Service
This post is Part III of the “Building Data Services” series (Part II here).
Last month, we shared how we employ embedded analytics to power Coursera’s In-Platform Dashboards, serving ready-made descriptive and advanced analytics to partner institutions and enterprise customers. In this post, we summarize another application we piloted last year to support more custom analytical explorations for our partners: Self-Serve Analytics-as-a-Service.
The target customers we had in mind were stakeholders at partner institutions looking to implement and share custom analyses. Examples include the analyst who wants to build their own dashboards and reports for a broader audience within the university, or the program administrator who wants to explore the data for strategic decision making. This is an audience hungry for data beyond the ready-made analytics already provided in-platform, and an audience we hypothesized would be comfortable using self-serve analytical tools to get the additional data they need.
As we shared in our Intro to Building Data Services, the goal of the data engineering team at Coursera is to democratize data in service to our company’s mission of transforming lives through learning while ensuring strict data privacy. In-Platform Dashboards were a huge step in this direction, especially when tools like Looker can fulfill our partners’ data needs with quick time to market. So, could we take a similar approach to empowering them to slice and dice more data, explore pre-built and calculated metrics across a large number of dimensions, and build their own dashboards and reports directly? If so, this might further accelerate pedagogical improvements and learner outcomes on Coursera through deeper data insights.
We started small, with a limited set of partners, and again used our internal reporting tool, Looker, as the self-serve interface. On the backend, we leaned on our existing core data models and pipelines, which allowed us to spin up the pilot quickly and with high-fidelity data.
Several of our pilot partners were excited: power users were building their own dashboards and reports, performing advanced analyses, and sharing their analyses with colleagues. Plus they were accomplishing all this with minimal technical effort — for example, without ingesting or curating any raw data from our platform, and without spinning up new analysis tools or packages. Some stakeholders even used advanced features in Looker, like defining new calculated metrics to fulfill their needs and customizing visualizations. Many also adopted the scheduling functionality, for example to send daily dashboards to their leadership teams.
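As a concrete illustration, "defining a calculated metric" in Looker means writing a measure in LookML that combines existing fields. The sketch below shows what such a metric might look like; the view, table, and field names here are purely hypothetical and are not part of Coursera's actual data model.

```lookml
# Hypothetical LookML sketch of a partner-defined calculated metric.
# All names (view, table, fields) are illustrative only.
view: course_enrollments {
  sql_table_name: analytics.course_enrollments ;;

  dimension: course_id {
    type: string
    sql: ${TABLE}.course_id ;;
  }

  dimension: completed {
    type: yesno
    sql: ${TABLE}.completed_at IS NOT NULL ;;
  }

  measure: enrollments {
    type: count
  }

  measure: completions {
    type: count
    filters: [completed: "yes"]
  }

  # A calculated metric: completion rate, sliceable by any dimension
  measure: completion_rate {
    type: number
    sql: 1.0 * ${completions} / NULLIF(${enrollments}, 0) ;;
    value_format_name: percent_1
  }
}
```

A power user could then drop `completion_rate` into an Explore alongside any dimension — course, cohort, enrollment month — and pin the result to a dashboard or schedule it for delivery, without touching raw data.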
But in other ways, the offering fell short of our expectations. In particular, we had two key learnings:
- Our partners are not as familiar with Looker as we had initially assumed. A key step in making a project like this successful would be deeper investment in training and support to fully empower end users to take advantage of the functionality available via the tool.
- It’s easy to misinterpret the data. In order to set our partners up for success, we must empower them with detailed documentation and training to master the available data sets and how best to interpret the results.
All in all, we decided to roll back the pilot until we can invest more deeply in the requisite training and support to fully unleash the offering's value for partners. In the meantime, the pilot gave us valuable quantitative measures of the kinds of data partner institutions are truly interested in engaging with, and the kinds of dashboards and analyses they wanted built. It also provided a window into how they applied these insights to their daily workflows, and allowed us to uncover patterns across partners. These learnings have become a key input to the new dashboards and reports we are incorporating into In-Platform Analytics to benefit the full partner community.
Check back soon for Part IV: Data exports.
Interested in Data Engineering @ Coursera? We’re hiring!