Published in Promoted

What Metrics Promoted.ai Logs and Why

Promoted.ai works by deciding what to show in marketplace apps’ search and feed, measuring the user response, and then learning what decisions produce what user behavior. Promoted attributes user actions back to specific display decisions, called insertions. That way, Promoted can learn what to show. This high-volume attributed logging system is how companies like Facebook power their self-service performance ads and feed recommendations. It is the engine that powers Promoted.ai Metrics.

Promoted.ai logs the entire user path, from insertion to impression to click to conversion.

This type of logging powers reactive, real-time optimization, even without user identity or item metadata, solving cold-start and new or anonymous user personalization.

Typical product analytics systems log disconnected user events, usually only the highest-interest ones like sales.

This is unlike typical product analytics products like Amplitude or Segment, which are designed to log lower-volume, user-attributed events like page views (sometimes called “page impressions”) and purchases. While it’s possible to send item impressions to Segment, the volume cost is huge, and without a system designed to use that data, the value is low.

Across web, mobile, different screens, and different apps, user engagement types vary considerably. To standardize optimizations, we map raw user events to standard events. This mapping may happen (1) directly at logging time in our SDK, or (2) in Promoted’s Metrics backend as a combination of signals.
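As a minimal sketch of the mapping approach in option (1), a logging SDK can translate app-specific raw event names into standard events at logging time. The raw event names and the dictionary-based mapping below are illustrative assumptions, not Promoted's actual SDK schema.

```python
from typing import Optional

# Hypothetical app-specific event names mapped to standard events.
STANDARD_EVENT_MAP = {
    "listing_tapped": "click",
    "card_viewed_1s": "impression",
    "checkout_completed": "conversion",
    "listing_hearted": "like",
}

def to_standard_event(raw_event_name: str) -> Optional[str]:
    """Return the standard event name, or None if the raw event is unmapped."""
    return STANDARD_EVENT_MAP.get(raw_event_name)
```

Unmapped events can still be logged raw and combined into standard events later in the backend, per option (2).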

Promoted then predicts the probability of every standard action per every insertion to rank search and feed. Downstream optimizers use these predictions as features to maximize long-term objectives like “user booking 14 days after search.”
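One simple way a downstream optimizer can use per-action predictions is a weighted combination into a single ranking score. The weights and event names below are illustrative assumptions, not Promoted's actual objective function.

```python
# Sketch: combine predicted action probabilities for one insertion into a score.
def ranking_score(predictions: dict, weights: dict) -> float:
    """Weighted sum of predicted action probabilities for one insertion."""
    return sum(weights[a] * p for a, p in predictions.items() if a in weights)

# Illustrative: weight the long-term objective (conversion) heavily vs. click.
weights = {"click": 1.0, "long_click": 2.0, "conversion": 20.0}
preds = {"click": 0.03, "long_click": 0.01, "conversion": 0.001}
score = ranking_score(preds, weights)  # 0.03 + 0.02 + 0.02 = 0.07
```

Items are then sorted by this score; in practice the combination can be learned rather than hand-weighted.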

Despite logging thousands of times the data of typical analytics services like Mixpanel, Amplitude, and Segment, Promoted provides this service at no additional cost as part of our search and feed optimization service. Make money, not metrics.

Promoted.ai Necessary Standard Events

We require these events to work.

Insertion: A decision to show an item. You may have thousands of insertions for every page view, and each insertion may have thousands of ML features used to decide what to show. This logging is done server-side and within Promoted’s servers. This is the highest volume and largest event, but it does not involve client logging.
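To make the shape of this event concrete, here is a sketch of a server-side insertion record: one per item the ranker decided to show, carrying its position and the feature values used for the decision. The field names are illustrative, not Promoted's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Insertion:
    insertion_id: str
    request_id: str   # the search or feed request this decision belongs to
    item_id: str
    position: int     # rank at which the item was inserted
    # ML feature values used to decide what to show (may number in the thousands).
    features: dict = field(default_factory=dict)
```

Because this record is created and logged entirely server-side, no client instrumentation is needed for insertions.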

Impression: Did a user see the insertion? We use IAB online advertising standards: >50% visibility for at least 1 continuous second. Impressions without engagement are “negative” examples in learning, and they are immediate feedback that people are seeing an item and that Delivery is working. We use p(impression|insertion) to determine “insertion rank.”
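The IAB-style rule above (>50% visible for at least 1 continuous second) can be sketched as a check over visibility samples from the client. The `(timestamp, visible_fraction)` sample format is a hypothetical input shape, not a real SDK API.

```python
# Sketch: does a stream of visibility samples qualify as an impression?
# visibility_samples: list of (timestamp_seconds, visible_fraction) tuples.
def is_impression(visibility_samples, min_fraction=0.5, min_seconds=1.0):
    start = None  # timestamp when the current continuous-visibility run began
    for t, frac in visibility_samples:
        if frac > min_fraction:
            if start is None:
                start = t
            if t - start >= min_seconds:
                return True
        else:
            start = None  # visibility dropped; the continuous run is broken
    return False
```

Note the run must be continuous: a dip below 50% visibility resets the timer.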

Click: Did the user select this insertion? “Click” is a legacy concept from cost-per-click web advertising that still maps well to mobile and video as the first signal of user intent, or “performance.” It’s the highest volume type of engagement, but about 1–5% of impressions in volume. If we model only one engagement, we model click first. Clicks are a first step in the path towards a conversion and, because of their high volume, they are the easiest to model with limited data. We also use clicks to model “presentation bias:” the tendency for users to engage with whatever is shown first. We account for this bias so that whatever is shown first doesn’t have a “winner take all” feedback loop. Clicks must have a mapping back to a specific impression that was “clicked.” By convention, many product analytics configurations only log the destination view, not the click action that created that transition.
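The key structural requirement above is that every click record carries a reference to the impression it came from, so engagement joins back to a display decision. A minimal sketch, with illustrative field names rather than Promoted's actual schema:

```python
from dataclasses import dataclass

@dataclass
class ClickEvent:
    click_id: str
    impression_id: str   # joins back to the specific impression that was clicked
    user_id: str
    timestamp_ms: int

click = ClickEvent("c-1", "imp-42", "u-7", 1700000000000)
```

Logging only the destination page view, as many default analytics setups do, loses the `impression_id` link and makes this attribution impossible.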

Conversion: A sale. The final objective. Conversions may happen minutes, hours, or even days after an insertion. Conversions may have a numeric value, like revenue, which can also be used in optimizations like maximizing total revenue. There may not be a direct path from insertion to engagement to conversion, and conversions may happen outside of your app. Further, because conversions can take a long time to happen, while historic conversions can personalize results for returning users, they don’t help power reactive recommendations for new users or within a session. We have a variety of modeling techniques to handle these cases.
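One common way to connect a delayed conversion back to earlier engagement is last-touch attribution within a time window. The sketch below assumes a hypothetical 14-day window and simple dict-shaped records; it is one illustrative technique, not Promoted's actual attribution model.

```python
# Sketch: credit a conversion to the most recent prior click on the same item
# within an attribution window (last-touch attribution).
def attribute_conversion(conversion, clicks, window_s=14 * 24 * 3600):
    """Return the click credited with this conversion, or None if no match."""
    candidates = [
        c for c in clicks
        if c["item_id"] == conversion["item_id"]
        and 0 <= conversion["t"] - c["t"] <= window_s
    ]
    # Last touch: the latest qualifying click wins the credit.
    return max(candidates, key=lambda c: c["t"], default=None)
```

Conversions that happen off-app, or with no qualifying click, simply attribute to nothing here; handling those cases is where the modeling techniques mentioned above come in.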

Supplementary Standard Events

These events are not necessary, but including them improves system optimization.

Like or Save: Many apps have a “like” feature. Likes are a rare but strong signal of positive intent that is sufficiently different from clicks that modeling them separately can notably improve optimization performance.

Hide: A negative engagement. Hides are important to offset “greedy” engagement optimizations and give users a feeling of control over recommendations. Content can be annoying, distracting in irrelevant ways, or repetitive, which can increase short-term clicks (clickbait) but hurts real user satisfaction. Predicted hides offset these tendencies in recommendation engines.

Report: A rare but critical engagement that your marketplace needs to react to immediately. Something about this content needs human review before it should be shown further, like content safety (pornography) or an obvious error (missing image or language mismatch). We stream reports through Metrics so that we can immediately delist such content automatically.

Long Click: After a user clicks into an insertion, they can engage with the item on some destination view. This can include a large collection of different engagements depending on the app, item, and surface, including reading and scrolling through text, swiping through images, navigating to subviews about the item, writing comments, and participating in a live chat. We aggregate all these signals together into a “sufficient landing view dwell time” threshold called a “long click.” Long clicks correlate with conversions, but happen earlier and at higher volume, so they can be important optimization targets for maximizing sales.

Add To Cart: Like Long Click, these are high intent actions useful for recommendations modeling. Many marketplaces already log these actions for re-marketing activities.

Bounce (short click): A “negative” click, when the user quickly exits the click destination view. Like “Hide,” bounces are a correction to annoying content with inflated engagement. Clicks with a short destination dwell time, under 2 seconds, are considered “bounces.”
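The two dwell-based click outcomes above can be sketched as a single classifier on destination dwell time. The 2-second bounce cutoff comes from the article; the 10-second long-click threshold is an illustrative assumption, since the exact “sufficient dwell time” cutoff isn't specified.

```python
# Sketch: classify a click by how long the user dwelled on the destination view.
def classify_click(dwell_seconds, long_threshold_s=10.0, bounce_threshold_s=2.0):
    if dwell_seconds < bounce_threshold_s:
        return "bounce"       # quick exit: a "negative" click signal
    if dwell_seconds >= long_threshold_s:
        return "long_click"   # sustained engagement on the destination
    return "click"            # ordinary click, neither bounce nor long click
```

This gives three distinct labels for modeling: bounces as negatives, long clicks as strong positives, and plain clicks in between.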

Engaged Dwell: Similar to Long Click, but for engagement on the listing before the click, like image carousel swipes. We prefer engaged dwell (length of pre-click engagement sequence) over pure “impression time dwell” for quality modeling, although we log the latter, too.

Share and Follow: Difficult to model due to low volume, but extremely valuable to increase because they generate organic user growth.

Video Events

Videos have their own characteristics in engagement modeling.

MRC Video View: Like impression, but for video. 2 seconds of continuous play, at least 50% visible, includes auto-play in listings and plays without audio. Does not require a close-up view.

95% or 10s view Listing: 95% of the total video played, or 10 seconds of continuous play. This indicates that the video was watched, but the user may not have interacted with the video or expressed intent. Using both a percentage and a fixed number of seconds corrects for different video lengths: with only the 95% criterion, very long videos would be penalized, and with only the 10-second criterion, videos shorter than 10 seconds could never trigger the event.
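The “95% or 10s” rule is a simple disjunction over played time and completion fraction. A minimal sketch, assuming `played_s` measures continuous play as described above:

```python
# Sketch: does a playback qualify for the "95% or 10s" view event?
def qualifies_95_or_10s(played_s: float, duration_s: float) -> bool:
    if played_s >= 10.0:                  # 10 continuous seconds of play
        return True
    if duration_s > 0 and played_s / duration_s >= 0.95:  # 95% of total length
        return True
    return False
```

Note how the two criteria complement each other: a 5-second video qualifies by completing 95% of its length, while a long video qualifies after 10 seconds of play.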

95% or 10s view Close Up: Like the listing version, but the user deliberately watched the video with their full attention. Most apps and websites have a full-screen view for videos.


