
Re-Thinking Engagement at AJ+

AbderRahman Sobh
Published in AJ+ Platforms
May 11, 2017 · 5 min read


Why are Metrics and Content So Inseparable?

Within the world of social media, one aspect dominates the lives of content producers and analysts alike as they leverage various platforms to publish content in the best ways possible. That aspect is, of course, data!

Data is the hallmark of a new era of digital production, ushered in by Moore’s Law and by computer scientists excitedly pushing the boundaries of what is physically possible with a few electrical signals skittering across printed circuit boards. In particular, one of the most pervasive ways data factors into a typical Executive Producer’s decision-making is the metric success of a given production.

Metric success on social and digital media platforms has an added dimension of complexity that traditional media scarcely keeps track of: the specific interactions of each and every person who views, rates, and generally engages with a story form a sphere of interaction that envelops the evaluation process.

While journalists have a vision of the topics they wish to cover, many other variables are at play when it comes to actually putting the prepared content in front of the ears, eyes, and minds at the other end of the pipeline. By measuring user behaviors under different conditions, such as time of day or format of publication, it is possible to gauge a piece’s ability to reach its intended audience.

Distributed Publishing Platforms’ Metrics Overload

Digging another level deeper, platforms such as Facebook report dozens of unique metrics on videos which relate to Engagement Value:

A subset of the metrics that Facebook reports which relate to Engagement Value. Image Credit: Skyler Rodriguez

Keeping track of even this small subset of features, all in one place and at the same time, is no small feat. Making them easily understandable often requires real skill in Data Visualization and/or Data Transformation, as well as deep contextual knowledge of the media being analyzed.

These characteristics, however, have at least one thing in common: they all measure the aggregate ability of an audience to interact via that specific method, i.e. each measures a different type of engagement success.

Crafting Your Own Locally-Sourced Engagement Measuring Stick

Imagine five different dashboards, each with its own unique combination of colors and wingdings, definitively reduced to one cold, hard line of digits! (destined to be read in a dashboard of its own, of course…) We let a bit of statistics and math do all the cross-feature comparison for us, leaving behind a much simpler inference interface for our colleagues outside the Data Team.

The general premise of the project here is to define a single “Engagement Score” metric which cumulatively evaluates all available data on ways audiences can participate.

The metric ultimately needs to capture the level of audience participation for a piece by comparing that participation to a benchmark model.

Piecing Together the Benchmark Model

  1. Feature Selection: pick out which metrics are most relevant to your intended scoring.
  2. Data Prep: bin the data by type (at AJ+ we produce videos with different standardized lengths: 30-second videos, 1–2 minute videos, and so on).
  3. Data Cleaning: check that the metrics are normally distributed and remove outliers.
  4. Model Picking: most people use the mean or median of each metric over a given range of time, though there is plenty of room for fancier Data Science here. (A rough sketch of steps 2–4 follows below.)
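
As a loose illustration (not our production pipeline), here is a minimal Python/pandas sketch of steps 2–4. The column names, bin edges, and the two-standard-deviation outlier rule are all assumptions for the example:

```python
import pandas as pd

def build_benchmark(posts: pd.DataFrame, metrics=("likes", "shares")) -> pd.DataFrame:
    """Build per-length-bin benchmark values for each engagement metric."""
    # Step 2 - Data Prep: bin videos by their standardized lengths.
    # Bin edges and column names are assumptions for illustration.
    bins = [0, 45, 150, float("inf")]
    labels = ["~30s", "1-2 min", "longer"]
    posts = posts.assign(
        length_bin=pd.cut(posts["duration_sec"], bins=bins, labels=labels)
    )

    def trimmed_median(s: pd.Series) -> float:
        # Step 3 - Data Cleaning: assume rough normality and drop outliers
        # beyond two standard deviations from the mean.
        trimmed = s[(s - s.mean()).abs() <= 2 * s.std()]
        # Step 4 - Model Picking: use the median as the benchmark value.
        return trimmed.median()

    return pd.DataFrame(
        {m: posts.groupby("length_bin", observed=True)[m].apply(trimmed_median)
         for m in metrics}
    )
```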

Using this historically-based benchmark model inherently yields a ratio comparison of the metrics to each other. Assume the following example is true for a certain data set:

Average Likes = 10,000

Average Shares = 1,000

Likes : Shares ratio = 10 : 1
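
The weights used later are simply the inverse of these ratios: likes occur ten times as often as shares, so a single share counts for ten likes. A minimal sketch, using the example averages above:

```python
# Benchmark averages from the example above.
benchmark_avgs = {"likes": 10_000, "shares": 1_000}

# Weight each metric relative to the most common one: the rarer an
# interaction is, the more points it earns.
reference = max(benchmark_avgs.values())
weights = {metric: reference / avg for metric, avg in benchmark_avgs.items()}
# -> {'likes': 1.0, 'shares': 10.0}
```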

Both a blessing and a curse, the benchmark model dispels the cloud of vagueness around comparing metrics while at the same time preventing us from generalizing these ratios to any other set of data (i.e. another channel or platform). The whole model-selection process has to be redone for each data set so that it properly represents the metric ratios for that context.

Powering the Model with a Data Source

By applying the metric ratio values we can determine the “Engagement Value” of a post or video as a single metric. Take a sample case for a post that had 3,499 Likes and 150 Shares:

(3,499 Likes * 1) + (150 Shares * 10) = 4,999 Engagement Points
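
In code, this is just a weighted sum over whatever metrics are available; a hypothetical helper assuming the weights derived earlier:

```python
def engagement_points(counts: dict[str, int], weights: dict[str, float]) -> float:
    # Weighted sum: each interaction count is scaled by its metric's weight.
    return sum(count * weights[metric] for metric, count in counts.items())

points = engagement_points({"likes": 3_499, "shares": 150},
                           {"likes": 1.0, "shares": 10.0})
# (3,499 * 1) + (150 * 10) = 4,999.0
```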

A major caveat remains in using this value on its own: posts and videos reach audiences of different sizes. One final transformation is required: the Engagement Points need to be scaled by an indicator of audience size.

When using data from Facebook, we can use a metric known as “impressions”, which counts every time a post or video surfaces to a user, regardless of whether they interact with it. By dividing the “overall worth” by the number of impressions, we are left with a statistic that properly conveys a sense of value while accounting for audience size at the same time.
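
A final sketch of that normalization step; the impression count here is invented purely for illustration:

```python
def engagement_score(points: float, impressions: int) -> float:
    # Normalize by audience size so posts with very different reach
    # can be compared on the same scale.
    return points / impressions

# E.g., 4,999 points over a hypothetical 100,000 impressions:
score = engagement_score(4_999, 100_000)  # ~0.05 points per impression
```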

One Man’s Like is Another Man’s Share

It was not an entirely novel idea for our team at AJ+ to want a concrete comparison between the different metrics measuring user interaction. Third-party tools for tracking social media impact often make use of metric weighting and an in-house proprietary formula to provide a statistic indicating performance quality.

The necessary mystery surrounding such formulae, as well as the typical restrictions on what clients can do to extend beyond the cookie-cutter reporting options, were both clues that we needed to start building something of our own. By doing so, we free ourselves from the assumptions being made with regard to how we choose to value metrics internally.

Additionally, we retain full control over the ability to adapt when major events change the platforms themselves. This inevitably happens, whether it is the release of a new format (e.g., Facebook Live) or new types of data being exposed via an API. By developing our own solution for defining an Engagement Score statistic, we have created an agile pipeline that can respond in a timeframe we define.
