The Ultimate Guide to using Google Analytics for Cross Device Optimisation

Craig Sullivan

Part 3: Desktop Browser Analysis

This is Part 3 of a 7-part guide. Parts 5–7 are coming soon. All the articles can be found in the Article Index.

(1) Data Crunch: Desktop Browsers

Navigate to the Browser and OS report:

Apply the ‘Desktop Device Category Only’ segment and turn off the ‘All Sessions’ segment.

You’ll see a report like this:

Initial Analysis

Firstly, what this report is showing you is the browser ‘brand’ across all operating systems. At this stage, it’s just interesting to see what the percentages are.

In this example, Chrome, IE, Safari and Firefox account for 98% of sessions — wow, if we can cover some big chunks of that, we’ll keep people very happy!

The new desktop browser that ships with Windows 10 (Edge) is growing in my client base (I can see this over time) and I expect it to get a lot higher.

Expand the report to show the whole list and take a look at everything — it’s always interesting to see what is trending or on your future radar, as much as what dominates your present reality.

You’re probably also wondering whether I look at the conversion rates, bounce rates or other stuff here? Yup — I’m looking for low and high performers, so that I can segment and find out the versions that are driving these topline variances.

But for this exercise, I wanted to establish a foundation of traffic knowledge before we add the empathy and money bit. Optimising the opportunities will be covered in part 6, when we return to this dataset and flow diagram you’ve been secretly building.

(2) Desktop Split by What?

Now I’d like you to pick the Secondary dimension of ‘Operating System’ and apply this to the report:

You’ll then see something like this:

Now we can start to look at what platforms our browser ‘brands’ are running on. The first insights we have are:

Chrome, Internet Explorer, Firefox and Edge on Windows — account for 70% of ALL desktop browsers.

Safari, Chrome and Firefox on Mac — account for 27% of all desktop browsers.

So — going back to that original diagram of the flows, we can redraw that map a little like this:

As you can see, we’ve already mapped out the Tablet part of this diagram and are beginning to fill in the Desktop part.

Your notes should have session counts for Tablet:Apple, Tablet:Android and Tablet:Windows (or the other splits you found under your tablet devices).

Let’s add the figures for the Windows and Mac split for Desktop Browsers.

Choose ‘Operating System’ as your primary dimension (removing any secondary dimension) — as follows:

You’ll then see this:

Make a note of the following figures:

Desktop:Windows

Desktop:Macintosh

Desktop:Linux

Desktop:Chrome OS

You may have other platforms on your list so collect anything with significant traffic — games platforms are always cropping up somewhere on this list.

Now it should be as simple as looking at the browsers used by each Operating System, right? Just like we did on Tablets?

Gah. Arf.

If only it were that simple <sigh>

On Tablets, we didn’t really care about the browser type. Why? Because we accepted that the default installed browser is the best candidate.

However, on desktop — that’s simply not true. We have to find out the browser versions actually in use, because that’s what drives big differences in bugs and behaviour. Knowing that you have a high percentage of ‘Google Chrome’ visitors is useless without knowing the performance of all the subversions.

The problem we have in ‘splitting’ our report by browser version numbers is one of noise! On the Internets, browsers get upgraded — some manually, some automatically — at many different rates. This means you end up with hugely fragmented version numbers in your data.

When we’re dealing with Internet Explorer, it’s very simple — there are a limited number of versions ‘in the wild’ and the builds are predictable, if not standards compliant <grin>.

When you look at a browser like Chrome, you’ll find a huge fragmentation of sub-version numbers, which really skews the reporting, especially if you just look at the ‘top’ browser versions. Google Analytics really should offer a ‘Major Browser Version’ dimension, which can broadly be deduced, as I have shown on some large datasets.

My apologies — I’m noodling over the data. If you’re trying to figure out which versions to test, we need to simplify the dataset and find ‘clusters’ for you to pick. No tools required — just a copy of Excel or Google Sheets.

(3) Data Crunch: Windows Desktop Browsers

Let’s deal with the Windows browsers first — as lumping our Apple and Windows browser data together simply doesn’t make sense.

One big mistake I always made when looking at browser data was forgetting that the version numbers are also spread across different PLATFORMS. This is why I’ve chosen an OS split for my model.

In theory, you could do this a different way, but in practice this is the best approach for desktop browsers — when you do the testing, it becomes much easier!

So. You should have the Browser and OS report still open. Choose a primary dimension of Browser and a Secondary dimension of Browser Version.

You should see something like this:

Make sure you have only the ‘Desktop Device Category Only’ segment applied.

Now I want you to make a copy of the segment, like this:

Now give it a name like “Desktop Device Category — Windows Only” and go to the Technology section.

Select ‘Operating System’ exactly matches ‘Windows’, as follows:

Make another copy of this segment and call it “Desktop Device Category — Mac Only”. Filter to ‘Operating System’ exactly matches ‘Macintosh’.

If I were being thorough, I’d also take a look at the Linux or Chrome OS splits here. You may have more OS segments than I have, but this lot are the biggies on desktop for most sites.

Data Pull

Now apply the “Desktop Device Category — Windows Only” segment (and turn off all others) — make sure you have the secondary dimension set to ‘Browser Version’.

You should see a list like this:

Now we have the raw data we need for Excel. Expand the report to show the maximum number of rows in the entire dataset.

One small gripe here for Google — Chrome (the browser) seems to be pretty unstable running large Excel exports from GA. I’m not sure why it’s flaky, but on GA Premium I don’t seem to have the same issues. If you run into this and get stuck, you’re going to need to try a different browser. If you *still* get stuck, use the API. Never.Give.Up.
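If you do end up on the API route, here’s a minimal Python sketch using the (Universal Analytics) Reporting API v4. It’s a sketch, not gospel — the key file, view ID and date range are placeholders you’d swap for your own, and the filtersExpression stands in for the Windows-only desktop segment we built above:

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/analytics.readonly"]
KEY_FILE = "service-account.json"  # placeholder — your service account key
VIEW_ID = "12345678"               # placeholder — your GA view ID

creds = service_account.Credentials.from_service_account_file(
    KEY_FILE, scopes=SCOPES)
analytics = build("analyticsreporting", "v4", credentials=creds)

# Roughly equivalent to the 'Desktop Device Category — Windows Only'
# segment: desktop sessions on Windows, split by browser and version.
response = analytics.reports().batchGet(body={
    "reportRequests": [{
        "viewId": VIEW_ID,
        "dateRanges": [{"startDate": "90daysAgo", "endDate": "today"}],
        "metrics": [{"expression": "ga:sessions"}],
        "dimensions": [{"name": "ga:browser"},
                       {"name": "ga:browserVersion"}],
        "filtersExpression":
            "ga:deviceCategory==desktop;ga:operatingSystem==Windows",
        "pageSize": 10000,
    }]
}).execute()

# Print one line per browser/version combination with its session count.
for row in response["reports"][0]["data"]["rows"]:
    browser, version = row["dimensions"]
    print(browser, version, row["metrics"][0]["values"][0])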

Download (export) the entire dataset from GA to Excel, where we will do some massaging:

Note that you will need to repeat this exercise for each OS split you’ve decided to include here. In my example, I’m just after the Mac and Windows browsers.

(4) Data Crunch: Excel Clustering

REPEAT this section for Mac, Windows, Linux or any other splits.

Once you have the Excel Datasheet open it should look like this one:

Delete columns D onwards — we’re just doing traffic splits right now (we’ll return to this stuff later). Tidy the columns up a bit and it should look like this:

Create a new column next to the Browser Version column

Insert a formula in the first empty field in column D (see below)

The formula should read:

=IFERROR(LEFT(B2,FIND(".",B2)-1),B2)

This grabs everything to the left of the first ‘.’ — so ‘46.0.2490.86’ becomes ‘46’ — and falls back to the raw value for any version with no dot in it.

like this:

Now grab the little right hand bottom corner of the box — when you see a plus sign, drag it all the way down the spreadsheet:

When you get to the bottom, delete the summary line here:

Delete the ‘Summary’ and ‘Dataset 2’ tabs on the Spreadsheet.

Rename the remaining tab to “Desktop Windows Browsers”

Add a header of ‘Major Version Number’ to the new column.

Now we’re going to subtotal the figures.

Select the entire data reporting area in Excel and choose Sort:

Sort the data by column A, Browser, in ascending order. Then add a second sorting level: column C, ‘Major Version Number’, in ascending order.

It should look like this:

Apply the sort.

You should see a list sorted from A–Z, with ascending major version numbers under each browser.

Now we need to subtotal this spreadsheet. Select the data area and choose the ‘Subtotal’ function from the menu. Now we’ll do a subtotal of sessions by major version number of the browser — with me so far?

In the subtotals, you’ll see the word ‘Total’ against every single total row. We’re going to replace that with the actual browser name.

Choose Level 3 on the subtotal outline.

Now for each major browser (e.g. Chrome) — I want you to do a search and replace, like this:

VERY IMPORTANT — please note the space at the start of the search term and the replaced term.

Now go through and replace all the Chrome rows with the updated text I’ve given you. Make sure you only update the Chrome subtotals, then do the Internet Explorer versions, and so on. It’s a bit fiddly but you should get the idea.

Now choose Level 2 on the subtotal outline, so you see this:

I don’t really want all the subversions now, so one handy tip is to take this subtotalled view, copy and paste it into Word and then back again into Excel.

This strips out all the subtotalled lines, giving you just Major version and Sessions, like this:

Now add an AutoSum total at the end by highlighting the first empty cell at the bottom of the Sessions column:

Now we’ll add a formula — make a note of the row of the subtotal (e.g. 191):

=B2/$B$191

like this:

We’re basically dividing each row’s sessions by the total shown, to get a rough percentage. The $ signs lock the formula to the total cell, so the reference stays put when you drag it down.

Drag the formula down the entire spreadsheet to the bottom by holding the bottom right corner of your newly created field.

You should have something that looks like this:

Now format the cells as a percentage, like this:

Now sort the data, in descending order, by the percentage of the total, so you see this:

Now we can see that if we test:

Chrome 44, 45, 46 = 48%

Internet Explorer 8–11 = 35%

Firefox = 10%

Essentially this is heading for 95+ percent of all Windows browsers!
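By the way — if you’d rather script this whole clustering exercise than click through Excel, here’s a rough Python/pandas sketch of the same steps. It assumes your export is saved as a CSV with ‘Browser’, ‘Browser Version’ and ‘Sessions’ columns (yours may be named slightly differently):

import pandas as pd

df = pd.read_csv("windows_desktop_browsers.csv")  # your GA export

# Major version = everything before the first '.' — the same trick as
# =IFERROR(LEFT(B2,FIND(".",B2)-1),B2) above.
df["Major Version"] = df["Browser Version"].astype(str).str.split(".").str[0]

# Subtotal sessions by browser and major version (the Excel Subtotal step).
clusters = df.groupby(["Browser", "Major Version"],
                      as_index=False)["Sessions"].sum()

# Percentage of all sessions, sorted descending — the =B2/$B$191 step.
clusters["% of total"] = clusters["Sessions"] / clusters["Sessions"].sum()
clusters = clusters.sort_values("% of total", ascending=False)

print(clusters.head(15).to_string(
    formatters={"% of total": "{:.1%}".format}))

Same clusters, no fiddly search-and-replace — and it’s trivial to re-run when you repeat this exercise for Mac, Linux or Chrome OS.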

PLEASE don’t try to be perfect — as long as you hit the major browsers and get into the eighties in raw percentages, you are likely to be covering much more than that, due to how similarly some of the smaller clusters perform.

Now for some good news — we don’t need to test ALL those versions of Chrome, Firefox or Safari. WHAT? HURRAH!

[Game of Thrones style cut sequence where groups of downtrodden serfs throw leather caps in the air and lustily cheer.]

You don’t have to test everything, because a lot of these browsers are on auto-upgrade curves. Chrome, Firefox and many other (er, more modern) browsers have auto-update functionality. I always see these gradually changing as they get updated — if I move my dataset forward 3 months from this article, the biggest Chrome version is now 47, and so on. For some browser versions, you *can* predict the future.

So, musing over this — in terms of ongoing testing, I should really be aiming for Chrome 47, even though 46 has a higher percentage right now. From years of testing work, I know they ‘respond’ similarly (although sometimes that may be untrue). Since you are always running retrospective data for your browser reports, it’s a trailing-edge trend. Try to weigh up whether the browsers are static or auto-updating!

For Firefox, I’d probably plump for testing 42 or 43 — as this is where people will be moving. We have a lot of 40 and 41 here, but that’s not where it will end up.

When it comes to Internet Explorer, this is a much more static group of browsers — so you have to test the specific versions that your customers actually use. You should also be testing Edge (the new Windows 10 browser) as it is going to grow in share.

Unless you have some really important reason to include Internet Explorer 7 — some very good and sound business reason balanced with the cost of making that decision — simply take this off the list, for-ev-ah.

[Downtrodden serfs now really party and set the town on fire]

IE8 is also hard to get working optimally, but at least I can make a decent business case for it on some sites, as the reward is often much higher than the effort. Don’t think of browser support — think of cost vs. opportunity.

[Serfs grumble with tradeoff between whip and gruel quality]

So — my final test list for Windows clients would be:

Firefox (recent / growing version)

Chrome (recent / growing version)

Internet Explorer 8, 9, 10, 11

Edge 12

Safari is simply too small on Windows to bother with for this client.

That’s several thousand rows of browser data crunched — and the client now has a list of 7 core Windows desktop browsers that should be tested for major releases, functional changes, funnel updates, AB tests — you name it!

As long as you repeat this exercise regularly, you’ll keep the knowledge within the team current about what people are actually using.

(5) Data Crunch: Macintosh Browsers

I repeated my example for Mac browsers and got this result:

Safari 8, 9 = 41%

Chrome 42–47 = 39%

Firefox 38–42 = 8%

So — my final test list for Macintosh clients would be:

Safari 8,9

Chrome (recent / growing version = 46)

Firefox (recent / growing version = 42)

(6) Data Crunch: Windows and Mac Combined

Firefox and Chrome work very predictably across platform, so I’d combine the Mac and Windows list like this:

Windows — Firefox 42

Windows — Internet Explorer 8, 9, 10, 11

Windows — Edge 12

Mac — Chrome 46

Mac — Safari 8,9

That now gives me 9 browsers to test, which covers a huge percentage of the desktop browser experiences, across platforms, that use this site.

If I even test a subset of this list (concentrating on, say, the 5 or 6 key browsers) — I will STILL be hitting more real customer experiences than any list you would have drawn up WITHOUT the data.

Tip! It’s worth mining the screen resolutions for Mac and Windows browsers — so you can make sure that when you’re testing, you use MORE than one screen size, picking sizes that represent big chunks of users (and not your personal machine setup or preference).

You’d be surprised how many desktop and laptop screens in use leave fewer than ~750 pixels to play with in terms of the browser viewport. PLEASE PLEASE stop testing your stuff only on huge monitors with large viewports — at least try to mix and match, so if you test stuff on some large screens, get someone else to test at a smaller resolution (e.g. 1366x768).

In this particular site, I opted to test two resolutions across the mix of browsers — 1366x768 and 1280x1024. These represented the best mixture of resolutions that act as a proxy for what people have. I’d rather be testing at a lower resolution (where I spot issues that impact conversion) than at a nice high resolution where I’ll miss stuff like this.
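If you went the scripted route earlier, the same pandas trick works on the Screen Resolution report too — again, the CSV filename and column names here are assumptions based on a typical GA export:

import pandas as pd

res = pd.read_csv("desktop_screen_resolutions.csv")  # assumed export
res["% of total"] = res["Sessions"] / res["Sessions"].sum()

# Split '1366x768' into numeric width and height so you can spot how
# much viewport room your visitors actually have. errors="coerce"
# handles rows like '(not set)' gracefully.
dims = res["Screen Resolution"].str.split("x", expand=True)
res["Width"] = pd.to_numeric(dims[0], errors="coerce")
res["Height"] = pd.to_numeric(dims[1], errors="coerce")

print(res.sort_values("% of total", ascending=False).head(10))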

(7) Yawn, why bother, whatever man…

This is LOST productivity — it’s like designing a door, a car or a piece of plumbing so that people can’t use it. It’s a form of discrimination to be so neglectful that you unintentionally harm your sales, reach or brand perception through s*** not working.

Browser and Device neglect is also discrimination.

You’d get fired for this kind of neglect in retail but in e-commerce, it’s common to waste two things:

(1) The time you spend on supporting and testing stuff with things that people do NOT use.

(2) The huge pile of money you lose by not working with stuff that people DO use.

The first slows down all your development work for no real purpose (developers can smell this and hate it) and the second loses the company more money than the entire IT budget for the year.

That enough?

Final part now — where we draw up a nice diagram from all the data.

Read Part 4 — “Mobile Device Analysis”
