Effective tests: a statistical approach to testing offers
Working with Facebook is a bit like trying to find a black cat in a dark room while blindfolded. But if we have no light, we can walk towards the sound. The noises coming from the other end of the room suggest that there is no exact answer to how much time and budget must be spent on testing an offer, but there is a checklist that makes these tests easier and more intuitive. Let's turn to statistics to deal with this issue.
The 3-day and 50-conversion rule
Facebook wants to collect 50 conversions before fully optimizing an ad, and affiliate bloggers advise not touching recently launched ads for at least three days.
Three days of watching the budget bleed while waiting for Facebook to complete its learning phase sounds depressing, doesn't it? Those 50 conversions will cost $150 even at an ideal lead price of $3, and who doesn't dream of such a price with a $20 reward? The maths gets even worse once you allow for the possibility that leads never arrive at all. That is a lot of money just to learn that the ad is not working.
Ads that nothing you do will save
An effective ad is noticeable almost immediately. If it performs badly from the very beginning, no miracle will happen after 3 days either, and 50 conversions will not suddenly arrive.
To understand whether an ad is effective, you need to determine the critical values of cost per click, cost per lead, and CTR for a particular geo, reward, average approve, and conversion rate, and then compare them with the values you actually obtain.
Allowable cost per lead
The standard ROI formula is the following:
ROI = (income from investments for a period - amount of investments for the period) / amount of investments for the period.
Decomposing each part of the formula into its components, we obtain:
ROI = (reward * number of leads * approve - lead price * number of leads) / (lead price * number of leads).
In other words, after cancelling the number of leads, we get:
ROI = (reward * approve - lead price) / lead price.
Here is the universal formula for the acceptable lead price, into which you can substitute the desired ROI and the other variables:
Lead price = reward * approve (as a decimal) / (ROI (as a decimal) + 1)
Let's say the reward is $20 and the average approve for the offer is 55%. Next, set the ROI level you expect from the tests. This can be 0%, 10%, 50%, or even -50% if, for example, you want to gather information about the offer's target audience to use later in the main campaign.
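The formula above is easy to turn into a small helper. Here is a minimal Python sketch of it, using the figures from the example ($20 reward, 55% approve):

```python
def allowable_cpl(reward, approve, target_roi):
    """Maximum cost per lead that still yields the target ROI.

    reward: payout per approved lead, in $
    approve: approval rate as a decimal (e.g. 0.55 for 55%)
    target_roi: desired ROI as a decimal (0.0 = break even)
    """
    return reward * approve / (target_roi + 1)

# Break even: a lead may cost up to $11
print(allowable_cpl(20, 0.55, 0.0))
# +50% ROI: the lead must be cheaper, about $7.33
print(allowable_cpl(20, 0.55, 0.5))
# -50% ROI accepted to buy audience data: up to $22 per lead
print(allowable_cpl(20, 0.55, -0.5))
```

The same function reproduces any row of the table discussed below, so you can sanity-check the spreadsheet with it.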
Calculation of accepted values with the table
We have prepared a table with all the calculations so you don't have to recalculate these figures manually every time. You can follow the link or download the table template with the calculation of accepted values (attached to the post). We will return to it more than once.
This is what the table looks like:
Point 1 sets the offer's reward.
Point 2 is the desired ROI value; the table sets it to 50% by default.
Point 3 is the expected approve for the offer.
Point 4 is the expected conversion rate.
Point 5 is the average CPM for the geo.
The obtained data will help you orient yourself in the initial results, i.e. reveal the specific limit values below which you will not reach an ROI of 0%, 50%, and so on.
The calculation of accepted values illustrated by an example
For example, to break even on the Flawless Brows - ES offer with a 42% approve and a $15 reward, a lead should cost no more than $6.30. The average CPM for a similar offer in Spain is $7.
Then we get the following critical values:
Green highlights the critical values of lead and click cost for different conversion rates and CTRs.
The cost of a click and CTR can also be calculated manually:
Accepted cost per click = lead price * average CR.
For example, if the average CR of the offer is 3%, the acceptable price per click is $0.19.
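That step can be sketched in one line of Python, plugging in the critical lead price from the Flawless Brows example:

```python
def critical_cpc(cpl, cr):
    """Maximum cost per click: critical lead price times conversion rate."""
    return cpl * cr

# Critical CPL $6.30 at a 3% conversion rate -> about $0.19 per click
print(round(critical_cpc(6.3, 0.03), 2))
```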
Knowing these figures makes it easier to tell whether a campaign went reasonably well or the results are disastrous.
Cost per click = CPM / (1000 * CTR).
This formula ties the cost per click to CPM and CTR. Yes, CPM is never constant, but the average cost per thousand impressions can be estimated from past campaigns.
With a $7 average CPM for the selected audience, the critical CTR is 3.7%.
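Rearranging the formula above gives the minimum CTR directly. A quick check with the $7 CPM and $0.19 critical click price from the example:

```python
def critical_ctr(cpm, cpc):
    """Minimum CTR at which a click still costs no more than cpc.

    Derived from CPC = CPM / (1000 * CTR)  =>  CTR = CPM / (1000 * CPC).
    """
    return cpm / (1000 * cpc)

# $7 CPM against a $0.19 click -> roughly 3.7% critical CTR
print(round(critical_ctr(7, 0.19) * 100, 1))
```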
And with that, the first stage of predicting test performance is complete!
A lead for $1 from 2 clicks
Or: how many clicks and leads do you need to understand that an ad has to be switched off?
It is too early to uncork the champagne if clicks cost 1 RUB, or if a lead arrived from the second click and cost $1. Such data is not statistically significant, and after a few hundred more rubles are drained, a 30-ruble lead can turn into a 300- or 500-ruble one.
Statistical significance should first be checked against the critical values. Then you will know exactly when an ad has to be switched off.
How long do you need to test an offer to get reliable data?
When you receive the first data on clicks and leads, you need to evaluate the reliability of the results.
Checking the statistical significance of the click-through rate
Does the number of clicks bought so far affect CTR? How many clicks does it take to be sure the observed CTR is stable?
We analyzed data from more than 15 campaigns in different GEOs and concluded that 15–20 clicks are enough to get a reliable CTR. The average click-through rate can still change after 20 clicks, but not by much.
If the initial data is several times worse than the critical value, for example $5 clicks in a cheap GEO, there is no point in waiting for 20 clicks.
In ordinary cases, we check the significance of the click-through rate with the table.
The figure to be verified (1) is the CTR value being checked.
The required number of events (2) shows how many clicks you need to receive for validation; the default is 20.
The number of impressions (3) shows how many impressions the campaign has already served.
The sample size (4) shows how many impressions are needed to prove the accuracy of the data.
Significance (5) shows the ratio of the number of impressions to the sample size: the closer to 1, the higher the significance.
That is, if the CTR in your campaign is 1% but you have served only 100 impressions and received 1 click, the data is unreliable. You need about 2,000 impressions to get a stable indicator that can serve as a basis for further analysis.
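The table's logic reduces to two divisions. A minimal sketch of that calculation, using the 1% CTR example:

```python
def required_impressions(ctr, events=20):
    """Impressions needed to observe `events` clicks at the given CTR."""
    return events / ctr

def significance(impressions, ctr, events=20):
    """Ratio of impressions served to the required sample size, capped at 1."""
    return min(1.0, impressions / required_impressions(ctr, events))

# At 1% CTR, 20 clicks require 2,000 impressions...
print(round(required_impressions(0.01)))
# ...so 100 impressions give a significance of only 0.05
print(significance(100, 0.01))
```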
The problem with determining the significance of indicators is that they all change with every impression, and it is impossible to trace which event influenced the overall picture at a particular moment. In practice, CPM may double tomorrow, or FB may start serving an irrelevant audience because of a glitch in the auction. That is why all values have to be monitored together.
It is also important to check which segment of the selected audience actually came from Facebook. If you run on a broad audience, look at how many actions each audience category performed, and only then draw general conclusions.
Checking the statistical significance of the conversion rate
Leads are a harder issue: they are more expensive than clicks, and the test budget is limited. So first we define the critical conversion rate that needs to be checked; anything below it produces a loss.
To do this, we turn to the table called the critical conversion rate calculator.
Here CPC (1) is the obtained click cost,
CPL (2) is a critical lead cost, which we have determined at the initial stage,
CR (3) is the minimum conversion rate that will allow you to earn a profit with these values.
The last step is checking the accuracy of the conversion rate.
Everything is the same as in the CTR check, except the required number of events is reduced to three. The sample size is the number of clicks you need in order to check the conversion rate.
That is, if you have bought more than 116 clicks and still have no lead, then even leads on the 117th and 118th clicks would not bring you to the required 3 events. The conversion rate will therefore be below the critical value, and the ad will run at a loss.
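The same two divisions drive this check as well. A sketch under the assumption of a $0.19 click and a $6.30 critical lead price (the figures from the earlier example; your own numbers will differ):

```python
def critical_cr(cpc, cpl):
    """Minimum conversion rate that keeps the lead at or under the critical CPL."""
    return cpc / cpl

def required_clicks(cr, events=3):
    """Clicks needed to observe `events` leads at the given conversion rate."""
    return events / cr

cr = critical_cr(0.19, 6.3)
# Critical CR of about 3.02%...
print(round(cr * 100, 2))
# ...so roughly 99 clicks must yield 3 leads, or the ad runs at a loss
print(round(required_clicks(cr)))
```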
Important: if the clickability rate and conversion rate are reliable, then you can consider the click and lead cost reliable as well.
Reliability of results with split-testing
The difference between two ads is not always obvious, so you can use a dedicated calculator when choosing a winner from the tests. For example, ABTestGuide has this one:
Take the two ads whose results you want to compare, fill in the data on visitors and conversions (or impressions and clicks), and click “Apply changes”. Green means the results are significant. It is nice that this calculator provides a mathematical justification for its conclusions. In this case, the graphs show that variant B's results are statistically significantly better than A's.
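Under the hood, calculators like this typically run a two-proportion z-test. Here is a minimal sketch of that test (the visitor and conversion numbers are hypothetical, not from the post):

```python
from math import erf, sqrt

def split_test(visitors_a, conv_a, visitors_b, conv_b, alpha=0.05):
    """One-sided two-proportion z-test: is B's rate significantly above A's?"""
    p_a, p_b = conv_a / visitors_a, conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # upper tail of the normal CDF
    return z, p_value, p_value < alpha

# Hypothetical split test: A converts 50/1000, B converts 80/1000
z, p, significant = split_test(1000, 50, 1000, 80)
print(round(z, 2), round(p, 4), significant)
```

If `significant` comes back False, keep both variants running rather than crowning a winner on noise.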
Insights on effective tests:
1. Define the critical values; everything below them should be switched off.
2. Optimization does not save very bad ads; it is better to disable them immediately or re-upload them.
3. Check the reliability of the obtained values.
4. When checking reliability, focus on the number of events received, not on the budget spent.
5. Watch how each value changes during testing.
6. In split testing, the difference in results may not be obvious, so it is better to use a calculator.
Money spent on unprofitable tests does not vanish in vain. It tells you which ads work and which don't, which audiences to exclude, and so on. In any case, don't miss the chance to make your tests a little more effective, and statistics will help you do it.
About the author: Nice to e-meet you 💚💜
We are Leadrock, the no. 1 network for those who are eager to move forward and want to work in the CPA market on their own terms. We have more than 200 whitehat in-house products for European, Asian, and Arab countries, as well as top CPA-market offers with approve rates of up to 100% in some GEOs. Our exclusive offers blew up the local market and have now gone beyond its borders.