ON CLOSING THE LOOP WITH FIRST PARTY DATA
26 March 2021 | Advertising, AI and Machine Learning, Audience, Causality, Consumer journey, Customers, Data, Data Science, First-party data
For those responsible for online content of any kind, editorial or advertising, one of the great tools to use — to find out what works best by determining cause and effect — is A/B testing. Unsurprisingly, use of A/B testing is growing fast as publishers and advertisers seek to raise levels of user engagement with their pages and ads.
A/B tests are controlled experiments involving an existing content layout (editorial/advertising) that is shown to a Control group of users, and a variant layout that is shown to a Test group. Users are randomly split (persistently) between the two groups. The process requires decisions on what to test (the hypothesis), the user sample size needed for a robust result (within a confidence level), the test period required to achieve the sample, and the outcome metric for the test.
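To make the "sample size needed for a robust result (within a confidence level)" decision concrete, here is a minimal sketch of the standard two-proportion calculation, assuming a baseline conversion rate and a minimum detectable lift. The numbers are illustrative, not taken from any real campaign:

```python
# Minimal sketch: sample size per group for a two-proportion A/B test.
# Baseline rate, target rate, alpha and power below are illustrative assumptions.
from math import ceil
from scipy.stats import norm

def sample_size_per_group(p_baseline, p_variant, alpha=0.05, power=0.8):
    """Approximate users needed in each group (two-sided z-test on proportions)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the confidence level
    z_power = norm.ppf(power)           # critical value for the desired power
    variance = p_baseline * (1 - p_baseline) + p_variant * (1 - p_variant)
    effect = p_variant - p_baseline
    return ceil((z_alpha + z_power) ** 2 * variance / effect ** 2)

# e.g. to detect a lift from a 2.0% to a 2.4% conversion rate
n = sample_size_per_group(0.020, 0.024)
print(f"~{n} users per group")   # on the order of 21,000 per group
```

The required sample, together with expected traffic, then determines the test period needed to reach it.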
A/B testing can be used to determine the effect of a wide range of content changes, such as:
- Editorial: copy, length, headlines, fonts, colours, layout, image, video, links etc.
- Advertising: ad copy, size, format, colour, font, image, video etc.
- Ecommerce: page layout, navigation, copy, offers, checkout, images, call-to-action etc.
An issue with A/B testing is that it can take a long time to discover content changes that make a significant, predictive difference (signal rather than noise). Many tests are cul-de-sacs, with small or no discernible effects. When tests are run sequentially, the discovery process can be slow and laborious.
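As an illustration of separating signal from noise, the sketch below runs a pooled two-proportion z-test on hypothetical Control/Test counts; a p-value below the chosen threshold suggests the observed lift is unlikely to be chance:

```python
# Minimal sketch: is the difference between Control (A) and Test (B) signal or noise?
# Uses a pooled two-proportion z-test; the counts below are illustrative only.
from scipy.stats import norm

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return z, 2 * norm.sf(abs(z))

z, p_value = two_proportion_ztest(conv_a=420, n_a=21000, conv_b=505, n_b=21000)
print(f"z = {z:.2f}, p = {p_value:.4f}")   # p < 0.05 here: likely signal, not noise
```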
A potential solution is to deploy machine learning that automates iterative and concurrent testing of content changes. For example, a font/colour change or copy/format change can be tested using a matrix of possible combinations. The automated system could iterate test combinations, and concurrently evaluate multiple tests through automated selection of user samples. This much faster approach can discover content change combinations that have significant predictive effects.
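The post does not name a specific algorithm, but one common way to automate iterative, concurrent testing over a matrix of combinations is a multi-armed bandit such as Thompson sampling, which keeps showing every combination but steadily shifts traffic towards the ones that appear to perform best. The sketch below simulates this over hypothetical font/colour/copy variants with made-up conversion rates:

```python
# Minimal sketch of automating concurrent tests of content combinations with
# Thompson sampling. Variant names and "true" rates are hypothetical and exist
# only to simulate user traffic.
import itertools
import random

fonts   = ["serif", "sans"]
colours = ["blue", "green", "red"]
copies  = ["short", "long"]
variants = list(itertools.product(fonts, colours, copies))     # 12 combinations

true_rate = {v: random.uniform(0.01, 0.03) for v in variants}  # unknown in practice
alpha = {v: 1.0 for v in variants}   # Beta prior: conversions + 1
beta  = {v: 1.0 for v in variants}   # Beta prior: non-conversions + 1

for _ in range(100_000):             # each iteration = one user impression
    # Sample a plausible rate for each variant and show the best-looking one.
    sampled = {v: random.betavariate(alpha[v], beta[v]) for v in variants}
    chosen = max(sampled, key=sampled.get)
    converted = random.random() < true_rate[chosen]    # simulated user response
    alpha[chosen] += converted
    beta[chosen]  += 1 - converted

best = max(variants, key=lambda v: alpha[v] / (alpha[v] + beta[v]))
print("leading combination after 100k impressions:", best)
```

In this set-up, weak combinations are pruned automatically as traffic flows away from them, rather than each being run as a separate, full-length sequential test.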
From an advertiser perspective, the best outcome metric for an A/B test is sales. Advertisers want to know the effect of editorial and advertising changes on their sales. However, this requires combining editorial + advertising + ecommerce into a single ‘closed loop’ system to determine the effect of context/ad changes on sales. To close the loop, the system needs visibility of users across editorial, advertising, and ecommerce environments.
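As a rough illustration of what closing the loop could look like in practice, the sketch below joins hypothetical ad-exposure logs to ecommerce orders on a first-party login ID and compares sales outcomes by variant. All table and column names (user_id, variant, order_value) are assumptions for the example, not part of any real system described in the post:

```python
# Minimal sketch of "closing the loop": join editorial/ad exposure logs with
# ecommerce sales on a first-party login ID, then compare sales by variant.
import pandas as pd

exposures = pd.DataFrame({          # which logged-in user saw which variant
    "user_id": [1, 2, 3, 4, 5, 6],
    "variant": ["A", "A", "A", "B", "B", "B"],
})
orders = pd.DataFrame({             # ecommerce orders from the same login identities
    "user_id": [2, 4, 5],
    "order_value": [40.0, 55.0, 30.0],
})

closed_loop = exposures.merge(orders, on="user_id", how="left")
summary = closed_loop.groupby("variant").agg(
    users=("user_id", "nunique"),
    buyers=("order_value", "count"),
    revenue=("order_value", "sum"),
)
summary["conversion_rate"] = summary["buyers"] / summary["users"]
print(summary)   # per-variant sales outcomes, the advertiser's preferred metric
```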
As third-party cookies come to an end, look out for the companies best placed to use their own first-party data (customers with login identities) to maximise the effect of combined editorial, advertising and ecommerce environments. Machine learning may soon discover new rules for how brands grow.