Cookieless Contextual Targeting

The digital advertising world is going through a transformation: browsers like Safari and Firefox have limited the lifespan of third-party cookies, and Chrome has announced plans to deprecate third-party cookies over user privacy concerns. While third-party cookies will be blocked by browsers, publishers will still be able to use first-party cookies to gather and analyze user-level information.

Let's first understand what first-party and third-party cookies are:

First-party cookies are set by the domain the user is visiting and are used by the website to store and analyze user information to provide a better user experience. They cannot be used to track a user's activity on another website.

Third-party cookies are set by third parties such as DSPs or other ad tech vendors and are used to track user activity across multiple domains for digital advertising. Additional data points, such as IP address (which can provide location), device, and other user information for ad targeting and retargeting, can also be connected to third-party cookies.

Cookieless contextual targeting can be a substitute for cookie-based targeting: the content of the website, rather than the user's cookie ID, is used to target ads. The idea is that the advertisement matches the content of the website. For example, a travel blog showing flight ticket ads.

(Image source: wordstream.com)

At MiQ we are exploring contextual alternatives to cookie-based targeting:
How can we run a predictive model for ad targeting given just an advertiser's brand name?

Since the targeting here is cookieless and contextual, the best approach is to target publisher domains whose content is similar to that of the advertiser.

We explored various data sources that provide additional contextual information about a page, and used data including publisher domains (page URLs) and the keywords extracted from each page.

Model Building

We treated this problem as an NLP document-similarity problem. Two documents are said to be similar if they are semantically similar. To calculate the similarity between two documents, we need a mathematical measure and a quantifiable representation of the text that a machine can compute. We used the word2vec word-embedding method to represent the website keywords in a quantifiable form and calculated similarity scores using the cosine similarity function.

Cosine Similarity is the cosine of the angle between two vectors and determines whether two vectors are pointing in roughly the same direction.
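As an illustration, cosine similarity takes only a few lines of NumPy (a generic sketch, not MiQ's production code):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between vectors a and b."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Vectors pointing the same way score close to 1.0;
# orthogonal vectors score close to 0.0.
print(cosine_similarity(np.array([1.0, 2.0]), np.array([2.0, 4.0])))
print(cosine_similarity(np.array([1.0, 0.0]), np.array([0.0, 1.0])))
```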

Embeddings are the vector representations of text where words or sentences with similar meanings or contexts have similar representations.

The keywords from the website dataset were cleaned of stop words and special characters before creating the word embeddings using word2vec. The advertiser's keywords were scraped from the meta information of the advertiser's website, and the similarity between the publisher and advertiser word embeddings was calculated using the cosine similarity method.
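A minimal sketch of this pipeline is below. The tiny embedding table and stop-word list are hypothetical stand-ins for a trained word2vec model and a full stop-word list, and documents are represented by averaging their keyword vectors (one common choice; the original post does not specify the aggregation):

```python
import numpy as np

# Hypothetical word vectors; in practice these come from a trained word2vec model.
WORD_VECTORS = {
    "travel":  np.array([0.9, 0.1, 0.0]),
    "flight":  np.array([0.8, 0.2, 0.1]),
    "ticket":  np.array([0.7, 0.3, 0.0]),
    "finance": np.array([0.0, 0.2, 0.9]),
    "loan":    np.array([0.1, 0.1, 0.8]),
}
STOP_WORDS = {"the", "a", "and", "of"}  # illustrative subset

def clean(keywords):
    """Lower-case, drop stop words and tokens containing special characters."""
    return [w.lower() for w in keywords
            if w.lower() not in STOP_WORDS and w.isalpha()]

def doc_vector(keywords):
    """Represent a document as the mean of its keyword vectors."""
    vecs = [WORD_VECTORS[w] for w in clean(keywords) if w in WORD_VECTORS]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

advertiser  = doc_vector(["Travel", "flight", "the", "ticket"])
publisher_a = doc_vector(["flight", "travel"])   # travel site: should score high
publisher_b = doc_vector(["finance", "loan"])    # unrelated site: should score low
print(cosine(advertiser, publisher_a))
print(cosine(advertiser, publisher_b))
```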

Model Results

The cosine similarity for two documents (domains, in this case) will be between 0 and 1. If the similarity score is close to 1, the documents are considered highly semantically similar. In our use case, we were able to fetch relevant publisher domains for the advertiser by applying a threshold and selecting the domains with a high similarity score.
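Applying the threshold could look like the following sketch (the domain names, scores, and the 0.7 cutoff are made up for illustration; the post does not state the actual threshold):

```python
# Hypothetical similarity scores per publisher domain.
scores = {
    "travelblog.example": 0.91,
    "flightdeals.example": 0.84,
    "cookingtips.example": 0.35,
}
THRESHOLD = 0.7  # illustrative cutoff

# Keep only domains at or above the threshold, highest score first.
targeted = sorted(
    (d for d, s in scores.items() if s >= THRESHOLD),
    key=scores.get, reverse=True,
)
print(targeted)
```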

(Figure: results of the word2vec model)

In a live ad campaign for the advertiser, the top similar domain fetched a higher CTR than the average.

Endnotes

The document-similarity approach can be used to compute the similarity between an ad and a webpage. This method is effective for selecting similar webpages/domains to target in branding campaigns. As the digital advertising space changes due to browser policies and privacy laws, more research and exploration is needed into machine learning approaches for contextual targeting, which can serve as a strong alternative to cookie-based targeting.


MiQ Tech and Analytics Blog

Asim Abinash
