Creating a user event pipeline for analytics

Tim Meeuwissen
Jumbo Tech Campus

--

We are all about serving our customers to the best of our ability. But you can only serve your customers well if you know what they like and what they don’t like. We therefore, like many other companies, track customer behaviour on our website. Do they like the menu? Can they find what they are looking for? How easily is our new feature discovered, and can we remove some old features?

There’s nothing new here. Any company, small or big, does this in some form.

It starts to become interesting though, when you work with many teams on different aspects of the same solution at the same time.

How hard can it be… Right?

Event logging is easy, right? Developers create functionality for the users. Then a great and gifted team of analytical experts looks at the website and applies some scripts to it. So now every time a button gets pushed, an event is constructed and sent to the server. Done!

Well, although this might be fine to start with, some issues will arise:

  • when the solution changes, tracking will fall behind
  • the responsibility for changing the button is separated from the responsibility for tracking it. This also means that changes in the data, or the absence of data, will take time to be understood and fixed
  • transformational logic will start to arise in this middle layer that does the logging. Websites are ever changing, and separate code for logging is not tightly bound to the code that brings the functionality. Over time, a lot of dangling ‘dead’ code will be left in the logging codebase, and no one will be able to say which code is actually operational and which isn’t
  • code that attaches itself after the fact creates an unforeseen liability for page loading time and user experience (even functionally), and can even break operations
  • because of the disconnect between cause and effect, a developer isn’t aware that a change affects logging. As a result, you’ll only notice problems with logging when metrics are affected strongly (e.g. not coming in at all, or coming in at double the rate). But what about changes that affect only one in five of the use cases?
  • you should be able to radically A/B test, and understand how the tests have affected the measurements when you present your data to the person interpreting it. However, when you always look from the outside in, chances are you’ll miss this relevant data.

And we haven’t even spoken about the data collection itself:

  • because it involves a lot of manual work, the data itself isn’t collected in a structured and predictable way
  • new events have to be identified and added manually, rather than automatically being absorbed by your data platform
  • there is a constant translation needed to backport to the old way of collecting data for homogeneous data processing, in effect destroying valuable historical data
  • as soon as your website goes multilingual or starts addressing more than one target audience, you’ll have a big job on your hands to ensure compatibility
  • A/B testing is hard to do and to interpret. You’ll have to know which experiment was running for which market segment, at which quantities, in which contextual situation and with which versions of your code in order for the metrics to make sense. In other words, you need really structured data, otherwise this will fail.

Setting up the foundation

You might have noticed that we think it’s time to do it in a better and more structured way. Well, you are right.

In order to fully understand why we chose this direction, it might make sense to first read the article: Why we need a front-end rendering platform. We are transitioning to a service-oriented architecture, and creating a rendering platform for our front-end. Because we move to a platform, we inherently rid it of all functionality and content, and migrate those pieces to their respective components, services or backend systems dedicated to one task.

What’s a component?

It’s good to mention that we treat each isolatable function as a component. That is, there is nothing special about something as fine-grained as a button versus something as coarse and complex as a checkout flow. Both are components that live in the same component library.

There’s a risk in grouping components based on granularity. We all know it, and each system that I’ve ever worked on has fallen for it and got it corrected once it matured. I’m talking about ‘core’ folders and the like (atoms, molecules, organisms, that kind of thing). Because what is core? If you think about it long enough, you’ll become entangled in your own explanation. So instead, we’ve removed the complexity of classifying our components with vague terms, and simply introduced rules that all components should adhere to.

Structure

The basics behind this are really, really simple. We want a (business) user to be able to click a page together without requiring a developer to build it for them. This becomes increasingly easy if all components interface the same way, and more complex components are simply combinations of other components, exposed as an entity in their own right. Each component exposes properties, and those properties can be set through an interface by the user creating the page. This constructs a set of data. That dataset is then sent to the front-end rendering platform, which simply renders this structure with all components at the right spot, executed with the right arguments. Unmistakably, undeniably simple.

All moving parts are easily identifiable and work in predictable ways, which forms the ideal basis to work from when it comes to data.
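
To make this concrete, here’s a minimal sketch, in TypeScript, of what such a dataset could look like. The ComponentNode interface and all component names, versions and properties below are illustrative assumptions, not our actual page schema.

```typescript
// A sketch of a page "clicked together" by a business user: a tree of components,
// each with the properties that were set through the page-builder interface.

interface ComponentNode {
  component: string;               // name of the component in the library
  version: string;                 // version of the component that will render it
  props: Record<string, unknown>;  // properties set by the user creating the page
  children?: ComponentNode[];      // complex components are just combinations of components
}

// This dataset is sent to the front-end rendering platform, which renders it as-is:
const page: ComponentNode = {
  component: 'page',
  version: '2.0.0',
  props: { slug: '/checkout' },
  children: [
    {
      component: 'checkout',
      version: '1.4.2',
      props: { steps: 3 },
      children: [
        { component: 'button', version: '1.11.111', props: { label: 'Volgende stap' } },
      ],
    },
  ],
};
```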

Event Structure

Following some thoughts from the Domain-Driven Design methodology, each component has the responsibility to emit its own events that are logical within its own context. What this means is that you don’t hand over data because some downstream component would like to have it. No, you only emit things that make sense to your own component.

Example: [button component] got clicked.

In a lot of cases this click means something greater in a downstream context. Where the button doesn’t care what happens afterwards, the higher-level component takes this click and now knows that it should do something.

Example: [checkout component] next step

But as you can see, there is no relation to be found between the user going to the next step and the user clicking the button, even though there’s causality. In order not to lose this data, we provide a link to the original event, causing that event to bubble while providing some extra context to it.

Example: [checkout component] next step — because — [button component] got clicked
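
A minimal sketch of how that causal link could be modelled. The ComponentEvent interface and its field names are assumptions made for illustration; they are not our production event schema.

```typescript
// Each component emits events in its own vocabulary; a parent component can wrap
// the event it received in an event of its own, keeping a link to the original cause.

interface ComponentEvent {
  component: string;                 // the component that emits this event
  name: string;                      // what happened, in the component's own context
  payload?: Record<string, unknown>;
  cause?: ComponentEvent;            // the event this one bubbled up from, if any
}

// The button only knows it got clicked:
const clicked: ComponentEvent = { component: 'button', name: 'got clicked' };

// The checkout component interprets that click in its own context, without losing causality:
const nextStep: ComponentEvent = {
  component: 'checkout',
  name: 'next step',
  payload: { step: 2 },
  cause: clicked,                    // links "next step" back to "got clicked"
};
```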

Distribution

Creating the events is one thing, but how do we get them to where they need to be?

Pipeline

Blegh, what a buzzword, right? So let’s first deconstruct what we are talking about. A pipeline is a way to transport something in a standardised, structured and easy way.

Simply put, we need a way to take all these events from our components, and bring them somewhere to do something with them. Now let’s see what that would mean.

  • each event caught leaving a component will be shipped down the pipeline
  • its ancestry is added (all the places it bubbled up from)
  • information about its component is added (version, component name, config hash)
  • global context is added (information about user segmentation, traffic percentages, A/B tests, stuff like that)
  • timings like loading and execution are added

And it is transported to a part of the application responsible for processing.

Example: [checkout component] next step — because — [button component] got clicked by user A on page B; it took 10 ms to process this click and the user successfully stepped into the next context. We show this exact button to 10% of our traffic from the province of Utrecht in the Netherlands. This button is button version 1.11.111 and its config says its label shows “Volgende stap”

As you can see, these events already convey a lot of information in a structured way, without the developer being actively involved. Less involvement means the odds are higher that it gets done, and that it gets done right. Only when developers change something drastically should they be made aware that it breaks logging; for the rest, they have more time to invest in improving the component itself.
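
To illustrate, here’s a sketch of what an enriched message could look like, reusing the ComponentEvent interface from the sketch above. The PipelineMessage shape and all values are illustrative assumptions that mirror the example, not our actual format.

```typescript
// The pipeline amends each event with component info, global context and timings,
// without the developer having to do anything.

interface PipelineMessage {
  event: ComponentEvent;             // the event, including its causal ancestry
  component: {
    name: string;
    version: string;
    configHash: string;              // hash of the configuration it was rendered with
  };
  context: {
    userSegment: string;             // e.g. market segment or region
    trafficPercentage: number;       // share of traffic that sees this variant
    abTests: string[];               // experiments active for this user
  };
  timings: {
    loadedAt: number;                // when the component was loaded
    processedInMs: number;           // how long handling the event took
  };
}

// The example above, expressed as a pipeline message:
const message: PipelineMessage = {
  event: nextStep,                                   // from the earlier sketch
  component: { name: 'button', version: '1.11.111', configHash: 'd41d8cd9' },
  context: { userSegment: 'NL/Utrecht', trafficPercentage: 10, abTests: ['next-step-label'] },
  timings: { loadedAt: 1250, processedInMs: 10 },
};
```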

Shipping

This pipeline formats and carries the message in a convenient format. But where to? Like with many distribution models, we have to introduce a depot: a spot where the messages accumulate before they are shipped off to their final destination. This depot is specialised in these kinds of messages and accommodates all the common things you’d like to do with them. Here, too, we rely on a well-known mechanism called pub/sub.
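
A bare-bones sketch of such a depot as a pub/sub bus, reusing the PipelineMessage type from the previous sketch. This is only an illustration of the mechanism, not our actual module.

```typescript
// A minimal pub/sub depot: messages are published onto the bus,
// and any number of subscribers pick up the ones they care about.

type Subscriber = (message: PipelineMessage) => void;

class EventDepot {
  private subscribers: Subscriber[] = [];

  subscribe(subscriber: Subscriber): () => void {
    this.subscribers.push(subscriber);
    // Return an unsubscribe function so observers can detach again.
    return () => {
      this.subscribers = this.subscribers.filter((s) => s !== subscriber);
    };
  }

  publish(message: PipelineMessage): void {
    for (const subscriber of this.subscribers) {
      subscriber(message);
    }
  }
}

const depot = new EventDepot();
```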

The module that wants to send events to e.g. Google Analytics or another analytics platform subscribes to the bus, filters out (by blacklisting) the events that it isn’t interested in, and ships the rest off over the internet. An advantage you create here is that when the user’s connection is troubled, you can wait a bit with sending (e.g. by having a service worker send it as soon as the connection is restored).
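
A sketch of what such a shipping module could look like, assuming the depot instance from the sketch above. The blacklist entry, the endpoint and the buffering strategy are illustrative assumptions.

```typescript
// Subscribe to the depot, drop blacklisted events, and buffer the rest
// until the connection allows shipping them to the analytics platform.

const blacklist = new Set(['button:got clicked']);       // raw clicks: we only ship their meaning
const buffer: PipelineMessage[] = [];

depot.subscribe((message) => {
  const key = `${message.event.component}:${message.event.name}`;
  if (blacklist.has(key)) return;                        // not interesting for this platform
  buffer.push(message);
  flush();
});

function flush(): void {
  if (!navigator.onLine || buffer.length === 0) return;  // wait until we are back online
  const batch = buffer.splice(0, buffer.length);
  navigator.sendBeacon('/analytics/collect', JSON.stringify(batch)); // illustrative endpoint
}

// Retry once connectivity is restored; a service worker could do this even more robustly.
window.addEventListener('online', flush);
```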

But there are also client-side observers that can take an interest in an event happening. Remember, not all events bubble all the way up through their ancestry, but some events need to trigger global actions nonetheless. This mechanism is ideal for making that happen.

Logging State

There’s a real pitfall to be discovered here. What we emit is an event: not the state change, nor the state. We propagate something that the user has communicated to us.

We often want to log state to our analytics platforms nonetheless.

In that scenario you can choose to observe, obtain state, and publish and/or ship that information in a separate asynchronous step. This way you can modularise all the custom computational requirements for your analytics platform while maintaining a clear relation between the metric, the code, and the origin.
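
For example, here’s a sketch of such an asynchronous step, again assuming the depot from the earlier sketches. The ‘product added’ event and the getBasketState helper are hypothetical and only illustrate the pattern.

```typescript
// Observe an event, obtain the state it refers to, and publish a derived message,
// keeping a clear relation between the metric, the code and the origin.

async function getBasketState(): Promise<{ items: unknown[]; total: number }> {
  // Hypothetical state accessor; in reality this could query a basket service or store.
  const response = await fetch('/api/basket');
  return response.json();
}

depot.subscribe(async (message) => {
  if (message.event.name !== 'product added') return;

  const basket = await getBasketState();

  depot.publish({
    ...message,
    event: {
      component: 'basket',
      name: 'state snapshot',
      payload: { items: basket.items.length, total: basket.total },
      cause: message.event,            // the snapshot stays linked to the event that triggered it
    },
  });
});
```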

To conclude

Running your analytics observers next to your website surely works while your business is in the start-up or scale-up phase. After that, setting up a user event pipeline for analytics is nothing more than thinking logically and applying a lot of patterns we also see in the backend world. It’s key to protect your data and its validity, because you use it to base your next move on. Hope you’ve enjoyed this (long) read, thank you so much for reading and please let me know what you think!

--

Tim Meeuwissen
Jumbo Tech Campus

Seriously passionate about understanding how stuff works