A Safari Intelligent Tracking Prevention Risk Analysis
Picture, if you will…
You hear stories of the impending doom that is Safari’s Intelligent Tracking Prevention. It sounds scary. You’re not technical, and all the information you can find is clearly written for developers, referencing things like cookies, APIs and the like.
How do you figure out what is going to happen, and how it applies to you? Should you even be worried, and how can you tell?
What’s a worst case scenario?
I mean, really, how bad can it be? Let’s look at a super unlikely scenario.
Most of your traffic is Safari, and all of it is running the more recent versions of the browser with Intelligent Tracking Prevention enabled. Your site runs analytics software, a client-side tag manager — such as Google Tag Manager — a survey tool, and maybe a client-based A/B testing platform for good measure. You’re partnered with affiliates, and have a longer sales window of eight days.
Note: A lot of this has the same outcome as if someone manually deleted their cookies. However, in the case of Safari this is automatic and happens without any user interaction, presenting several scenarios which can affect analysis.
Let’s see what could happen!
First up — the analytics system
The two most widely known systems, Adobe Analytics and Google Analytics, tend to suffer from the same mechanics unless steps are taken to mitigate them. At a super high level this breaks down to:
- Your retention buyer segment is likely to appear smaller than it actually is. Safari limits the ability to persist state, and when the underlying cookie is deleted, the user, upon return, is considered ‘New’. This can occur in as little as 7 days under ITP 2.1, and as little as 24 hours in ITP 2.2. This can play out in cohort reporting as a more severe drop-off than what may actually be happening.
- Your ‘new’ buyer segment is likely to be inflated over what it actually is, due to Safari removing the cookies that previously identified the visiting browser. As such, over a long enough time period, the same browser may be counted as ‘new’ several times over.
- Depending on how you’re attributing campaigns between first interaction and conversion, this loss of persistence could result in a shift from the specific campaign/acquisition channel to ‘Direct’. For example, Firefox traffic could be tied back to the original display ad for 30 days, but Safari may only be tied back for 24 hours. The net result is that your Safari traffic would likely seem to contribute less often than it actually does as the lookback window increases.
- If you are tying the user together by some other mechanism, such as a user ID, this could play out as follows: as the browser deletes its state, on a return visit it would look like the user is using more browsers / devices than they actually are.
- Any event tracking that fires on the presence (or absence) of a cookie may fire either more often or less often for Safari, depending on which state you are checking for.
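To make the ‘returning user becomes new’ mechanic concrete, here is a minimal sketch in plain JavaScript. It simulates a visitor ID cookie whose lifetime is capped at seven days, roughly the way ITP 2.1 caps client-set cookies. All names here are illustrative, not from any real analytics SDK, and the cookie jar is just an object standing in for the browser.

```javascript
// Simulate ITP 2.1 capping a client-set visitor cookie at 7 days.
// Illustrative sketch only; names are not from any real SDK.
const ITP_CAP_MS = 7 * 24 * 60 * 60 * 1000;

function visitAt(store, timestampMs) {
  const record = store.visitorId;
  // The cookie survives only if the last write was within the ITP cap.
  const isReturning =
    record !== undefined && timestampMs - record.setAt < ITP_CAP_MS;
  // (Re)write the cookie, as analytics tags do on every page view.
  store.visitorId = {
    id: isReturning ? record.id : `v-${timestampMs}`,
    setAt: timestampMs,
  };
  return isReturning ? "returning" : "new";
}

const day = 24 * 60 * 60 * 1000;
const jar = {};
console.log(visitAt(jar, 0));        // first ever visit -> "new"
console.log(visitAt(jar, 3 * day));  // back within 7 days -> "returning"
console.log(visitAt(jar, 12 * day)); // 9-day gap, cookie gone -> "new" again
```

The same human shows up as two ‘new’ users and one ‘returning’ user, which is exactly the inflation and drop-off pattern described above.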
The danger here is in not understanding what may be happening and altering spend patterns to address concerns which don’t actually exist. For example, you back off acquisition to focus on retention, or you back off targeting Safari users for ads because you believe they convert less often than they actually do.
The Tag Management System
Depending on what tags you are specifically loading this can play out a few different ways.
- If you are running specific kinds of affiliate tracking, say to calculate commission, the loss of persistence as the time window increases could result in a commission calculation lower than what you would actually owe.
- As above, but for ad vendors this could manifest as the ads reporting as less effective than they are, due to the attribution issue discussed above.
- If you are conditionally loading tags based on cookies — those may load less often for Safari users.
- Other tags, such as those that power Customer Data Platforms, may be affected depending on their implementation. This would play out as the ‘new’ vs ‘retention’ scenario described above. A special note here, however: if the state is persisted server side, you could wind up with a divergent analysis when compared to the client-side analytics system. Both systems would be ‘right’ based on their respective understanding of the world — but if you needed to pick one, the one with server-side persistence is the one to go with.
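The conditional-loading case is worth making concrete: a tag gated on a cookie simply stops firing once Safari removes that cookie. Here is a small sketch, with hypothetical cookie and function names, of the kind of check a tag manager rule performs.

```javascript
// Sketch of conditional tag loading: a visitor-ID cookie gates whether
// a tag fires. Cookie and function names are illustrative.
function parseCookies(cookieHeader) {
  // Turn "a=1; b=2" into { a: "1", b: "2" }.
  return Object.fromEntries(
    cookieHeader
      .split(";")
      .map((p) => p.trim())
      .filter(Boolean)
      .map((p) => {
        const i = p.indexOf("=");
        return [p.slice(0, i), p.slice(i + 1)];
      })
  );
}

function shouldLoadRetargetingTag(cookieHeader) {
  const cookies = parseCookies(cookieHeader);
  // The tag only fires for visitors we have already identified.
  return cookies.visitor_id !== undefined;
}

console.log(shouldLoadRetargetingTag("visitor_id=abc123; session=xyz")); // true
console.log(shouldLoadRetargetingTag("session=xyz")); // false: cookie was wiped
```

For Safari users past the ITP window, the second case becomes the norm, so the tag quietly under-fires relative to other browsers.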
The A/B Testing Platform
Client-side A/B testing tools are exceptionally popular, and a critical aspect of how they work is persisting state to ‘lock’ you into a specific experience for the duration of the test. Due to Safari’s stance on tracking, however, this can play out as follows.
- Safari traffic may lose its segmentation, and upon re-entering the test wind up in the alternate cell.
- Testing platforms that persist any other state via cookies would also lose that data, resulting in a user experience that may not be ideal. Users may be prompted more than intended, or they may have to re-enter information more often.
You would want to see how long your test is running and compare that against your estimated retention for Safari over that time block to gauge how likely this is to affect your data, which could impact your final analysis.
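The bucket-flipping mechanic can be sketched in a few lines. The assignment below is randomized once and then persisted, which is roughly how client-side testing tools behave; the names and the plain-object ‘cookie jar’ are illustrative.

```javascript
// Sketch: a client-side test assigns a bucket once and persists it in a
// cookie. If Safari deletes the cookie, the visitor is re-randomized and
// may land in the other cell mid-test. Names are illustrative.
function getBucket(store, randomFn) {
  if (store.ab_bucket === undefined) {
    store.ab_bucket = randomFn() < 0.5 ? "control" : "variant";
  }
  return store.ab_bucket;
}

let cookies = {};
const first = getBucket(cookies, () => 0.9);  // assigned "variant"
cookies = {};                                 // ITP wipes state after 7 days
const second = getBucket(cookies, () => 0.1); // re-assigned "control"
console.log(first, second); // same person, two different experiences
```

Real platforms often use a deterministic hash of the stored visitor ID rather than a raw random roll, but since the ID itself lives in the deleted cookie, the outcome is the same: the lock is only as durable as the cookie.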
Assessing your level of Risk
OK, wow. So a lot of things could break, or you could be reviewing data that doesn’t reflect reality as closely as you may like. But how do you assess your level of risk? Where do you begin?
Ideally a few things are already in place.
- You have a working analytics system.
- You have had a working analytics system in place for some period of time (the longer, the better).
- You have an engineering team who will work with you to deliver their own part of the analysis.
- And finally, as a nice-to-have: vendors who will respond to your support requests.
The Working Analytics system
Here you’d find out how your system identifies browser versions. You are specifically interested in the following.
- How many visits / sessions you get in total.
- What percentage of those visits / sessions are using Safari.
- What the browser version numbers are for all of the visits / sessions.
This is where the time-box comes in — roll back as far as you can and see if you can find a pattern in your users’ Safari upgrade rate. In some cases Safari is upgraded automatically, and in some cases Safari is only upgraded when the user buys a new device. What you’re looking for here is a rough ratio of how long it takes for people to upgrade. You are specifically interested in the fall time frame, when Apple tends to release major Safari version updates.
So with this knowledge you can determine the following.
- The Safari usage rate becomes the rough baseline for your possible level of risk. Should you do nothing, over time, this is a rough estimate of the maximum impact (assuming no large external market shifts) that your traffic may be subject to.
- The actual versions in use which are running the versions of Intelligent Tracking Prevention you care about represent your immediate level of risk. Should you do nothing, this is the percentage of traffic already being affected.
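Those two numbers fall straight out of the session breakdown. Here is a sketch of the arithmetic with made-up session counts; the only real fact assumed is that ITP 2.1 shipped with Safari 12.1, so that version is used as the threshold for ‘already affected’.

```javascript
// Sketch: estimate baseline and immediate ITP risk from session counts
// by browser/version. The session numbers are invented for illustration;
// Safari 12.1 is used as the first version shipping ITP 2.1.
const sessions = [
  { browser: "Safari", version: 12.1, count: 3000 },
  { browser: "Safari", version: 11.0, count: 1000 },
  { browser: "Chrome", version: 74, count: 5000 },
  { browser: "Firefox", version: 66, count: 1000 },
];

const total = sessions.reduce((sum, s) => sum + s.count, 0);
const safari = sessions.filter((s) => s.browser === "Safari");

// Baseline risk: all Safari traffic (exposure once everyone upgrades).
const baselineRisk = safari.reduce((sum, s) => sum + s.count, 0) / total;

// Immediate risk: Safari versions already running ITP 2.1 or later.
const immediateRisk =
  safari.filter((s) => s.version >= 12.1).reduce((sum, s) => sum + s.count, 0) /
  total;

console.log(`baseline: ${(baselineRisk * 100).toFixed(0)}%`);   // 40%
console.log(`immediate: ${(immediateRisk * 100).toFixed(0)}%`); // 30%
```

In this invented example, 30% of traffic is already subject to ITP today, trending toward 40% as the remaining Safari users upgrade.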
The Engineering team
You should have a meeting with your engineering team. Specifically you would like them to do the following.
- Scan the code base for references to ‘document.cookie’
- When a reference is found, get a description of what functionality that is powering and what happens when the cookie doesn’t exist.
Ideally, this isn’t a large lift for them, and you get the following information back.
- A listing of all functionality that may be affected that your development team has direct control over.
- A listing of what happens when the cookies are deleted for that functionality.
You can use this to figure out if they should consider refactoring that logic to use a different state persistence mechanism, or what the expected behaviour on Safari should look like when compared to other sources of traffic. Basically, this allows you to have a better understanding of whether the functionality is behaving correctly / as you would intend.
Vendors — the black box
As we discussed above, the tag management system (or hard coded tags) represent a group of vendors who may set and care about state. We covered a bit of what could go wrong when that state is removed by Safari’s cookie handling.
Should it not be clear, or should you have questions on specific scenarios, it’s worth reaching out to your account managers (or general support) to find out what happens when the cookies are not found as expected. Does the vendor recreate them from, say, localStorage? Does the vendor default to treating the visitor as a ‘new’ user? Does this matter? It may, depending on whether the vendor needs some concept of history, such as that found in, say, recommendation systems. You’d want to understand what will break and when, and whether anything can be done to mitigate the impact / risk.
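To illustrate the ‘recreate from localStorage’ answer a vendor might give, here is a sketch of that fallback pattern. Everything here is hypothetical (names, the plain objects standing in for the cookie jar and localStorage), and note that Safari may apply similar lifetime limits to other script-writable storage, so this is a partial mitigation at best.

```javascript
// Sketch of a vendor-style fallback: if the ID cookie is gone, try to
// recover it from a backup copy in localStorage before minting a new
// one. Names are illustrative; plain objects stand in for real storage.
function resolveVisitorId(cookieJar, localStore, mintId) {
  if (cookieJar.visitor_id) return cookieJar.visitor_id;  // happy path
  if (localStore.visitor_id_backup) {
    cookieJar.visitor_id = localStore.visitor_id_backup;  // restore
    return cookieJar.visitor_id;
  }
  const fresh = mintId();                                 // truly new
  cookieJar.visitor_id = fresh;
  localStore.visitor_id_backup = fresh;
  return fresh;
}

const jar = {};                             // Safari wiped the cookie…
const ls = { visitor_id_backup: "v-42" };   // …but the backup survived
console.log(resolveVisitorId(jar, ls, () => "v-new")); // "v-42"
```

Whether a given vendor does anything like this is exactly the support question worth asking.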
Pulling it together
The above sections help you frame your estimated level of risk, and establish what will break and how. Understanding this is important for discussions with management as nearly every mitigation technique involves developer work.
It’s hard to put a return on investment on this, but it comes down to “How much would you pay for more accurate data?” and “Does having more accurate data allow you to reduce costs and / or increase return on investment?”
Still, with that in mind, this at least allows you to have an honest discussion, supported by likely scenarios, about whether you need to take action, and to run an effective Cost / Benefit Analysis.
Possible Roadblocks and Limitations
A few things come to mind to be aware of when discussing mitigation activities.
Access to Technical Teams
- You may have a technical team, but they are otherwise allocated and can’t support this.
- You may have a technical team, but they may not have the skill set to address it.
- You don’t actually have a technical team, and would need to hire someone.
- Speed of work: there is a very good chance (unless your company is exceptionally agile) that things will break due to the automatic browser upgrades before your team addresses the mitigation. This may mean the relevant analysts have to be briefed that their data may look different or ‘shift’.
Access to the Platform
- Most of the ways to address this require setting state on the server — which means you have a high likelihood of having to modify server code.
- Certain web hosts do not allow you access to server logic. In these conditions — the mitigation of the impact of ITP could actually result in needing to do a server migration to a new host.
- Do vendors have existing mitigation practices you can leverage?
- If not and it’s critical — do you need to consider different vendors?
- Depending on the proposed mitigation — is there a heavy developer lift?
The above post is a good example of just how much work can go into supporting accurate data collection for just a segment of your traffic. The privacy wars rage on — and between things like the General Data Protection Regulation and California Consumer Privacy Act on the legal front and the different handling of tracking tech between Firefox, Chrome and Safari — it could very easily be one or more people’s full time job to keep a company in both a state of working data collection and a state of legal compliance.
As such, it may be a good idea to have an honest discussion with stakeholders as to how important accurate data capture and storage is for your company, and what the administrative overhead (analyst / technical) would look like to keep everything running.
This year (2019) has already seen two major changes for Safari, and we’re also looking at planned changes for Chrome and Firefox. This is no longer ‘implement and you’re done’; now it’s a constant process to ensure browsers don’t break your collection in unforeseen ways. This is the new normal, and companies should be ready to spend according to their needs.