Developing Tools for News

How the Shorenstein Center Is Building an Email Newsletter Benchmarking Tool

William Henry Hakim
The Single Subject News Project
6 min read · Dec 18, 2018


Hi there! My name is Will, and I’m a software engineer at the Shorenstein Center. I’ve been working with Hong, Emily, Elizabeth, and the Single Subject News cohort for almost a year now as the primary developer of our Email Benchmarking Tool (open source on GitHub). The tech team at the Shorenstein Center focuses on studying and building technology that helps newsrooms work toward sustainable business models.

Last month, Hong announced the public launch of our Email Benchmarking Tool. Any newsroom can apply to use the tool, enter a MailChimp API key, and receive a “report card” with a few email metrics benchmarked against peers.
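
For the curious, here’s roughly what that looks like under the hood: a few lines of Python that pull recent campaign reports from MailChimp’s API and average the open and click rates. The endpoint and field names come from MailChimp’s Marketing API v3; the function itself is a simplified illustration, not the tool’s actual code.

```python
import requests

def fetch_average_rates(api_key, count=50):
    """Average open and click rates across recent campaigns."""
    # The datacenter (e.g. "us6") is the suffix after the dash in the API key.
    dc = api_key.rsplit("-", 1)[-1]
    resp = requests.get(
        f"https://{dc}.api.mailchimp.com/3.0/reports",
        auth=("anystring", api_key),  # any username; the key is the password
        params={"count": count},
    )
    resp.raise_for_status()
    reports = resp.json()["reports"]
    open_rates = [r["opens"]["open_rate"] for r in reports]
    click_rates = [r["clicks"]["click_rate"] for r in reports]
    return sum(open_rates) / len(open_rates), sum(click_rates) / len(click_rates)
```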

We’ve received a lot of awesome feedback from our initial users, and we’re now in the process of incorporating additional features based on their comments. In the meantime, here’s a short compilation of the most common questions we’re hearing about the tool from newsrooms, tech teams, and research institutions alike.

What motivated the creation of the Email Benchmarking Tool?

Last year, the Shorenstein Center released a white paper on email newsletters, accompanied by a series of Jupyter Notebooks allowing organizations to analyze their own data. While these notebooks provide a fantastic range of benchmarks, they also present a few obstacles for our research.

To start, the notebooks require a certain level of technical knowledge to set up, which renders them inconvenient for small organizations unlikely to have dedicated engineers or data scientists on staff. Second, the notebook format can prove troublesome for analyzing large data sets: API requests often take an unreasonably long time, and the resulting data’s memory footprint is very large. Finally, the notebooks do not provide context in terms of industry-wide best practices and benchmarks. The Email Benchmarking Tool attempts to solve these problems by a) layering an easy-to-use interface on top; b) abstracting away the data requests and number-crunching, and simply providing the results; and c) giving us the opportunity to (with users’ permission!) anonymously aggregate participating newsrooms’ newsletter performance data, which we use to provide industry-wide averages as well as customized statistical insight and actionable recommendations.

How do/did you choose which technologies to build with?

Our tech stack consists of Flask, RabbitMQ, Celery, and Postgres running on AWS. A Python backend made sense, as it’s the de facto standard for data science work, reasonably modern for a web application, and enabled us to build on the open source code we published in the Jupyter Notebooks. Other technologies (such as using a distributed task queue to do the number crunching) were chosen with an eye toward scalability as the user base grows.
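
Here’s a bare-bones sketch of how those pieces fit together: a Flask route accepts a request, hands the heavy lifting to a Celery task fed by RabbitMQ, and returns immediately. The route and task names are illustrative, not the tool’s actual code.

```python
from celery import Celery
from flask import Flask, jsonify, request

app = Flask(__name__)
celery = Celery(__name__, broker="amqp://localhost")  # RabbitMQ as the broker

@celery.task
def run_analysis(api_key):
    # Pull campaign data from MailChimp and crunch the benchmark numbers here;
    # results would be written to Postgres for the report page to read later.
    ...

@app.route("/report", methods=["POST"])
def report():
    task = run_analysis.delay(request.form["api_key"])  # enqueue, don't block
    return jsonify({"task_id": task.id}), 202
```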

What kind of feedback have you received so far, and how has the tool changed since its inception?

We’ve gotten a ton of feedback since we soft-launched the tool to a few partners last spring. The changes we’ve made mostly fall into three categories:

  1. UX design enhancements: We’ve tried to make the tool more accessible by doing a better job of explaining how it works and what you’ll get out of it. This category of changes ranges from explaining exactly how we’ll use your data, to creating a more seamless UI flow, to adding tooltips to confusing terminology across the site.
  2. Feature improvements: We’ve reworked our core functionality in a number of places. Examples include switching the library we use to visualize data (we started with Pygal, which was easy to use but quite bare-bones, and now use the much more fully featured Plotly; see the first sketch after this list) and collecting data about the organizations using the tool (we now ask applicants to provide information such as their organization’s size, budget, coverage scope, etc.).
  3. Under-the-hood updates: We spend a significant portion of our time on code that is invisible to the end user. This includes speeding up the way we import data (we’re knee-deep in Python’s async functionality; see the second sketch after this list), keeping our security practices up to date, and writing tests to ensure that our application works properly. While these changes may not be the most exciting, they are a critical component of the software development lifecycle.
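
To give a flavor of that library switch: the snippet below draws the kind of grouped benchmark chart Plotly makes easy. The numbers are entirely made up, and the tool’s real charts are more involved; this is just a minimal illustration of the library.

```python
import plotly.graph_objects as go

# Invented numbers, purely to show the shape of a benchmark comparison chart.
fig = go.Figure([
    go.Bar(name="Your newsletter", x=["Open rate", "Click rate"], y=[0.31, 0.06]),
    go.Bar(name="Peer average", x=["Open rate", "Click rate"], y=[0.24, 0.04]),
])
fig.update_layout(barmode="group", yaxis_tickformat=".0%")
fig.show()
```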
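And to make the async work a little more concrete, here’s a simplified sketch of the pattern: rather than fetching campaign reports one at a time, the HTTP requests run concurrently via asyncio and aiohttp. The URL follows MailChimp’s Marketing API v3; the function names are illustrative, and real import code would also handle paging, rate limits, and errors.

```python
import asyncio
import aiohttp

async def fetch_report(session, dc, campaign_id):
    # One campaign's report from MailChimp's Marketing API v3.
    url = f"https://{dc}.api.mailchimp.com/3.0/reports/{campaign_id}"
    async with session.get(url) as resp:
        return await resp.json()

async def fetch_all_reports(api_key, campaign_ids):
    dc = api_key.rsplit("-", 1)[-1]
    auth = aiohttp.BasicAuth("anystring", api_key)
    async with aiohttp.ClientSession(auth=auth) as session:
        tasks = [fetch_report(session, dc, cid) for cid in campaign_ids]
        return await asyncio.gather(*tasks)  # requests run concurrently

# reports = asyncio.run(fetch_all_reports(api_key, campaign_ids))
```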

What does the process of implementing a new feature look like?

We tend to have a number of features in the pipeline at any given time, and we’ll set priorities and due dates using a kanban-like methodology. We use Asana as our project-management board, and each feature request or tweak is captured in a task with a description, an assignee, a due date, and a discussion thread. A feature lifecycle might go as follows:

  1. Discuss pending features as a team and add new ones as necessary. Assign highest-priority features to individuals and set tentative deadlines for them.
  2. Begin feature development. We use a slightly modified version of the git-flow model. Due dates are malleable when unexpected obstacles crop up. Sometimes a feature gets deprioritized, and it’ll sit half-baked on a feature branch until we pick it back up.
  3. When a feature is done, we’ll do QA on a development server to iron out any last-minute bugs and changes.
  4. Roll out the feature publicly.
  5. Track the usage and adoption quantitatively and get qualitative feedback through one-on-one user feedback interviews. Based on these data points, we brainstorm new feature ideas and iterate through this process all over again.

We spec features such that they take, at maximum, a few weeks to implement. Furthermore, we subdivide each feature into smaller milestones in order to keep each chunk manageable. For example, a new series of input fields on the frontend might be broken down into mocking out the UI; changing our database models; writing a relevant controller route; updating elements of our codebase which interact with the new functionality; and filling out unit/integration tests.
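
To make that breakdown concrete, here’s roughly what the database-model and controller-route milestones might touch in a Flask app. Note that the ORM (Flask-SQLAlchemy) and every name here are assumptions for the sake of illustration, not a copy of the tool’s actual schema or routes.

```python
from flask import Flask, redirect, request
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql://localhost/benchmarks"  # placeholder
db = SQLAlchemy(app)

class Organization(db.Model):  # milestone: change the database models
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(120), nullable=False)
    budget = db.Column(db.String(40))          # hypothetical new input field
    coverage_scope = db.Column(db.String(40))  # hypothetical new input field

@app.route("/apply", methods=["POST"])  # milestone: the controller route
def apply():
    org = Organization(
        name=request.form["name"],
        budget=request.form.get("budget"),
        coverage_scope=request.form.get("coverage_scope"),
    )
    db.session.add(org)
    db.session.commit()
    return redirect("/thanks")
```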

What kinds of features can we expect to see in the future?

Thanks to feedback from our incredibly helpful cohort of single-subject newsrooms and steadily expanding user base, we’ve got a lot of ideas for the future! Since we’re constantly adapting to the needs of our users, I’d consider the following list a brainstorming exercise: we’re not committing to these features just yet, and we certainly don’t know what order we would implement them in. We hope you will chime in with your thoughts and guidance!

  1. Support for email services beyond MailChimp
  2. Reporting of additional metrics (including some more of those from the notebooks)
  3. Benchmarks tailored to organizations “like yours”
  4. Ability to analyze subsets of your mailing list, including through the use of segments, interest categories, and merge tags (see the sketch after this list)
  5. Web dashboards which break down the aggregate data we collect
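
As one example of the groundwork that already exists for item 4: MailChimp’s Marketing API v3 exposes a list’s saved segments, so a future version of the tool could fetch them and benchmark each subset separately. A rough sketch of the fetching half (the benchmarking itself is hand-waved):

```python
import requests

def list_segments(api_key, list_id):
    # Saved segments for a given list, via MailChimp's Marketing API v3.
    dc = api_key.rsplit("-", 1)[-1]
    resp = requests.get(
        f"https://{dc}.api.mailchimp.com/3.0/lists/{list_id}/segments",
        auth=("anystring", api_key),
    )
    resp.raise_for_status()
    return [(s["id"], s["name"], s["member_count"])
            for s in resp.json()["segments"]]
```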

What have you learned while working on this tool?

A lot! As an engineer, I’ve never been closer to the end user while working on a project. I’ve gotten much better at processing and iterating on the amazing amount of feedback we’ve received. The result has been a piece of software that a lot of people will (hopefully) actually want to use, and that will save journalism along the way!

What’s your long-term plan with the data you’re collecting from the tool?

Our long-term goal lines up with that of the Shorenstein Center more broadly: to assist media organizations in facing some big economic challenges. To that end, we’re still thinking about how best to present all the metrics in a meaningful and statistically rigorous way. One idea is publishing a more formal industry benchmark report once we have a meaningful sample size.

Whether you’re a journalist, software engineer, data scientist, or simply interested in our project, we welcome your help brainstorming! Please feel free to sound off in the comments, get in touch via email, or open an issue or pull request on GitHub. We’re ultimately looking to become an important resource for data-driven, actionable insights into email newsletter best practices.

We hope this answered some of your questions about our new tool. As always, feel free to reach out to Emily Roseman (Emily_Roseman@hks.harvard.edu) or to me (via GitHub or in the comments on this article).
