Markus Barmettler
Oct 18, 2018 · 6 min read

Today at Neue Zürcher Zeitung, a Swiss publisher of high quality journalism for a German speaking audience, we are launching a new smart, personal newsletter. “My NZZ” is an experiment that automatically curates a personalized list of articles missed throughout the week.

You can sign up here — it’s free.

One small newsletter, a much larger vision

Our goal with this experiment is to create a smarter, more personal news experience for our subscribers. Neue Zürcher Zeitung publishes up to 200 pieces of content every day, from video to audio to written and visual journalism. The issue is: Our subscribers live busy lives, seeing only 8 of those pieces of content per day on average. We want to make sure they don’t miss the ones most relevant to them.

Around a year ago we started this effort by launching a new section in our app and web experiences with personalized lists of recommendations. (If you want to know more about how exactly we got there, you can find some interesting insights here.) There are currently 3 personalized news products available:

  • Catch-up: Articles you might have missed since your last visit
  • Evening Reads: Top stories of the day that you haven’t read yet
  • Weekend Reads: Longer reads that you might have missed or did not have time to read throughout the week

In close collaboration with our Data and Product teams, we learned that our catch-up features are the most useful. We also learned that our users would like to receive personalized catch-up lists not only on our website, but also via email. This is why we're now doubling down and expanding our personalization to other parts of our product and platforms, like this new email product.

One thing will never change: Our editors are in the driver’s seat

Although we have gone through intensive testing and optimization of the underlying algorithms since we first launched our data products, two core factors still drive these personalized recommendations:

  • Editorial Score — the importance of a content piece as curated by our editorial team, derived from its position and time on our homepage
  • Personal Score — based on personal interest and behavior

We mix in other factors, like what we call the “Crowd Score”, which is more dominant if you’ve just created your free account. Over time, the “Personal Score” will evolve and gain in importance. If you would like to better understand our problem definition, algorithm design strategy and underlying tech stack, you may find additional details here.
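To make the interplay of the three scores concrete, here is a minimal, purely illustrative Python sketch. The weights, decay formula, and function names are my own assumptions for this example; they are not NZZ’s actual algorithm. The one property the sketch does mirror from the description above is that the Crowd Score dominates for brand-new accounts and the Personal Score gains importance over time.

```python
def blend_scores(editorial: float, personal: float, crowd: float,
                 interactions: int) -> float:
    """Blend the three scores for one article and one user.

    As a user interacts more, the crowd weight decays and the
    personal weight grows, mirroring the behavior described above.
    The decay rate (0.1) is an arbitrary illustrative choice.
    """
    # Crowd score dominates for brand-new accounts, then fades.
    crowd_weight = 1.0 / (1.0 + 0.1 * interactions)
    personal_weight = 1.0 - crowd_weight
    editorial_weight = 1.0  # editorial curation always matters

    total = (editorial_weight * editorial
             + personal_weight * personal
             + crowd_weight * crowd)
    return total / (editorial_weight + personal_weight + crowd_weight)

# A brand-new user (0 interactions) leans on the crowd;
# a long-time user leans on their personal score.
new_user = blend_scores(0.8, 0.2, 0.9, interactions=0)
power_user = blend_scores(0.8, 0.95, 0.3, interactions=200)
```

The key design point is that all three signals stay in the mix at all times; only their relative weights shift as the system learns more about a user.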

A smarter, more personal news experience — across channels

Creating great, personalized experiences in email can be tricky. This will get a bit technical, but bear with me. In our process we identified our email outbound system as the technological bottleneck in our effort to offer a personalized experience to our subscribers. Sending one email to many subscribers is a far less resource-intensive task than sending one unique email to every subscriber.

In addition to that: news changes rapidly. Compared to eCommerce, where content is relatively static, news content typically expires quickly. That’s why with every email distribution we have to refresh content so that it can be properly displayed in an email. With hundreds of thousands of newsletter subscribers, we were a bit worried about the runtime of a newsletter send. With this in mind, we chose to start with a weekly email powered by a general-purpose RESTful API.

Emails are scheduled to be sent once a week (Friday afternoons), giving us some time to analyze and tune performance. The actual send time is not that critical: the newsletter can arrive anytime between 5pm and 10pm, giving us a sufficient window to process all emails.

Aware that most of our users read newsletter emails on their mobile devices, we reduced the number of articles displayed to 5, in contrast to the 10 articles shown on the web. We expect to test the optimal number of articles further down the road. And by the way, did I mention that it is a good idea to have every API endpoint accept a limit parameter? Thanks to a clever developer with foresight, we were able to address this requirement quickly.
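To show why that limit parameter pays off, here is a tiny hypothetical sketch of a recommendation endpoint handler. The function name, the in-memory ranking data, and the clamp value are invented for illustration; the real NZZ API is not shown here.

```python
# Fake per-user ranking store standing in for the recommender backend.
RANKINGS = {
    "user-42": ["a9", "a3", "a7", "a1", "a5", "a2", "a8", "a4", "a6", "a0"],
}

def recommendations(user_id: str, limit: int = 10) -> list:
    """Return at most `limit` ranked article IDs for a user.

    Clamping the limit server-side lets one endpoint serve both the
    web (10 articles) and the mobile-first email (5 articles).
    """
    limit = max(1, min(limit, 50))  # guard against abusive values
    return RANKINGS.get(user_id, [])[:limit]

web_list = recommendations("user-42")             # full list for the web
email_list = recommendations("user-42", limit=5)  # shorter list for email
```

Because the limit is just a query-time parameter, adding the 5-article email variant required no new endpoint at all.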

Prototyping the very first iteration: an MVP

Now that we are ready to launch the very first iteration of this email, we know: it is definitely a good idea to start with the final email design in mind and work backwards from it. That helped a lot when putting together the data requirements needed to automatically populate each email.

On the website as well as in the native app, our recommender API serves only the article ranking for each user. It is then enriched with article metadata (title, author, publication date, teaser and other info) before being sent to the client. On web and app we only have one consuming client at a time, so all info has to be available at any time. The situation is different when populating emails: everything is consumed at once by one single endpoint.

We had to decide whether to use one enriched feed for every user, or to separate the data into two feeds: one containing metadata for each article, and one containing just the article ranking for each user. Although the “one-feed” option was beautifully simple, it had the disadvantage that article metadata would be highly redundant and stored multiple times. It’s important to know that we are using a cloud-based email provider with limitations on data upload speeds and storage. We therefore chose the “two-feed” option and provided an additional API endpoint serving the distinct list of article IDs that occurred across all user recommendations for weekend reads. This additional feed was then routed via our metadata enrichment service. With the final design in mind, we were also able to reduce the metadata to exactly what was needed to populate the email.
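The “two-feed” idea can be sketched in a few lines. Everything here is invented for illustration (user names, article IDs, the metadata fields): the point is only that per-user rankings stay lean (IDs only) while each article’s metadata is stored and uploaded exactly once.

```python
from typing import Dict, List

# Feed 1: article ranking per user (IDs only, highly compact).
user_rankings: Dict[str, List[str]] = {
    "alice": ["a1", "a3", "a2"],
    "bob":   ["a2", "a1", "a4"],
}

def distinct_article_ids(rankings: Dict[str, List[str]]) -> List[str]:
    """The extra endpoint: every article that occurs in any ranking, once."""
    seen = set()
    out = []
    for ranking in rankings.values():
        for article_id in ranking:
            if article_id not in seen:
                seen.add(article_id)
                out.append(article_id)
    return out

def build_metadata_feed(article_ids: List[str]) -> Dict[str, dict]:
    """Feed 2: metadata for exactly those articles (in reality this
    would go through the metadata enrichment service)."""
    return {aid: {"title": f"Title {aid}", "teaser": f"Teaser {aid}"}
            for aid in article_ids}

ids = distinct_article_ids(user_rankings)  # 4 distinct IDs
metadata = build_metadata_feed(ids)        # 4 entries instead of 6
```

With the single enriched feed, the two articles shared by alice and bob would carry their metadata twice; deduplicating first keeps the upload to the storage-constrained email provider minimal.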

Having all endpoints in place, we were able to start integrating them and composing the email. The entire email send job is made up of 4 steps:

  1. Refreshing article metadata: Truncating table, calling API endpoint and storing data physically in a table within our email provider system
  2. Refreshing article ranking per user: Truncating table, calling API endpoint and storing data physically in a table within our email provider system
  3. Populating Email (dynamic content) for each user: Looping through every user on the send list, reading each user’s individual article ranking, and looking up metadata for each article on the list. With all underlying data available in the email provider system, we expect sufficient runtimes
  4. Email sending: Well, no real magic happening here
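The four steps above can be sketched end to end, with the provider tables stubbed out as in-memory dicts. All names, data, and the provider hand-off are illustrative assumptions; the real integration with our cloud email provider is of course more involved.

```python
metadata_table = {}  # stands in for the table in the email provider system
ranking_table = {}

def refresh_metadata(fetch):
    """Step 1: truncate the table, call the API, store fresh metadata."""
    metadata_table.clear()
    metadata_table.update(fetch())

def refresh_rankings(fetch):
    """Step 2: the same truncate-and-reload pattern for per-user rankings."""
    ranking_table.clear()
    ranking_table.update(fetch())

def populate_emails(send_list):
    """Step 3: for each user, join their ranking with the local metadata."""
    emails = {}
    for user in send_list:
        titles = [metadata_table[aid]["title"] for aid in ranking_table[user]]
        emails[user] = "\n".join(titles)
    return emails

def send_emails(emails, send):
    """Step 4: hand each rendered email to the provider."""
    for user, body in emails.items():
        send(user, body)

# Wiring the steps together with fake data:
refresh_metadata(lambda: {"a1": {"title": "Story one"},
                          "a2": {"title": "Story two"}})
refresh_rankings(lambda: {"alice": ["a2", "a1"]})
outbox = []
send_emails(populate_emails(["alice"]), lambda u, b: outbox.append((u, b)))
```

Because steps 1 and 2 land all data locally before step 3 begins, the per-user loop never has to call back to our APIs, which is what keeps the runtime manageable at hundreds of thousands of subscribers.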

What we learned and what is coming next

Having smartly designed recommendation and personalization algorithms in place, and even serving them via sophisticated APIs, is one side of the story. Actually delivering a data product at the point of impact is another. Enabling personalization where our digital subscribers need it most is a complex task, and one we are currently experimenting with heavily. Having the right MVP and strong collaboration across many disciplines and departments is key. A few tests that we are running or trying to run:

  • Creating “Trust Features” which will give our subscribers more transparency and more control regarding how our news personalization works
  • Finding clever ways to integrate personalized experiences into the core of our news products — with editorial integrity in mind

But first, we’re monitoring closely how our new email product is performing. We will be watching how our subscribers like it, how much value it brings them, how our systems perform during email distribution, and what we still need to tune. Once the technical backend is stable, we will start testing our various hypotheses on the weekend-reads email newsletter in order to fine-tune layout and size. In this process we want to be led by data and our subscribers, acting fast and collaboratively. We’ll listen closely to your feedback, so please reach out with your thoughts!

Sign up for the email for free here.

NZZ Produktentwicklung

What we learn while building the «Neue Zürcher Zeitung» of the future.

Thanks to Anna Wiederkehr

Written by Markus Barmettler, head of data, analytics and market research @NZZ

