The Endless Suck of Best Practice and Optimisation Experts
So what’s this week about? Unicorns, Useful Best Practice and Optimisation Experts — all rather rare and mystical beasts.
And this quote encapsulates everything I’m going to talk about today:
“Stop copying your competitors — they may not know what the f*** they are doing either” Peep Laja, ConversionXL
So what’s the problem? The uselessness of ‘best practice’ as a tool for correctly predicting the outcome of a test or change.
So let me explore this using a UX situation. Let’s say you’re a UX designer and you’re testing a process that involves persuasion, elements of friction (a form), lots of data entry and interaction points inside each page.
We all know the inspection methods you’re likely to use — right? Usability testing, interviews, diary studies, session replay, voice of customer — imagine these are all in the mix. You’ve observed lots of people struggling and found the interaction points in the process that really hit the exit rates for this process. High five everyone!
But wait a minute — we haven’t solved these problems yet. We’ve maybe quantified them (a rarity for most UX designers) using sources like analytics and we’ve got the qualitative feedback from the user testing and voice of customer. Fixing these should be a piece of cake eh?
And that’s where the problem comes in. My failing as a UX practitioner when I first started out was to fall into this trap and to think my guessing was expertise.
“Because you’ve seen lots of problems, it doesn’t mean you know the optimal response or solution that alleviates them.”
Are we all guessing then?
There is a phenomenal number of people just guessing out there. Marketers, CEOs, business owners, small companies, big corporates, international brands, hip startups: it’s an equal-opportunity game.
So surely they hire CRO or UX experts (or other types of experts) to save them from guessing? Usually it’s for another reason but that’s a longer story about the psychology of organisations and people!
For those experts that actually have good experience, surely all that counts for something? Surely thousands of hours observing, fixing and testing products makes you better at finding solutions? And yes — of course it does. But not in the way some people think.
So let me illustrate three problems I found on a form:
(1) When people enter their postcodes, they sometimes don’t enter a space (for example, “SE136DH” instead of “SE13 6DH”). The form rejects this.
(2) People click the continue button on the form, rather than the ‘search postcode’ box — nothing happens. They think they’re doing a search.
(3) People miss how to fill out a delivery address that differs from their billing address, so their payment could get rejected.
For this third part, there was no option to tell the website whether this was the billing address or not. Later in the form, people simply got asked for their billing address — but without being able to say ‘yeah, the same one I put in earlier, dumb website’.
In the first example, where the postcode failed validation because it expected a space, this is just a really stupid moment. We simply fix and test the postcode validation before making it live. I looked at all the retailers who were called out on this simple practice here (https://econsultancy.com/blog/10959-are-online-retailers-being-tripped-up-by-postcode-entry/) and they’ve all since fixed the problem. However, I’m constantly amazed by how people implementing new interfaces on the web fail to leverage the good work done before them; companies keep making this mistake all the time.
We might measure the impact of fixing this kind of validation issue but mostly, we’ll just accept we’re doing something dumb and fix this as a bug or BAU (business as usual) change. The solution is completely straightforward and implementable with little or no discussion required.
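The article doesn’t show any retailer’s actual fix, but the lenient approach is well established: normalise what the user typed before validating, rather than rejecting it outright. Here’s a minimal Python sketch of that idea; the function name and the simplified regex are mine, not taken from any site mentioned above, and the pattern covers common UK formats rather than every edge case of the real specification.

```python
import re
from typing import Optional

# UK postcodes are an outward code (e.g. "SE13") followed by a
# 3-character inward code (e.g. "6DH"). Rather than rejecting
# "SE136DH", normalise the input first. Illustrative sketch only.
UK_POSTCODE = re.compile(r"^[A-Z]{1,2}\d[A-Z\d]?\d[A-Z]{2}$")

def normalise_postcode(raw: str) -> Optional[str]:
    """Return a canonical 'SE13 6DH'-style postcode, or None if invalid."""
    cleaned = re.sub(r"\s+", "", raw).upper()  # strip ALL whitespace first
    if not UK_POSTCODE.match(cleaned):
        return None
    # The inward code is always the final three characters.
    return f"{cleaned[:-3]} {cleaned[-3:]}"

print(normalise_postcode("se136dh"))   # SE13 6DH
print(normalise_postcode("SE13 6DH"))  # SE13 6DH
print(normalise_postcode("banana"))    # None
```

The design choice is the point: the form accepts what the user meant and canonicalises it, instead of making the user guess the format the database wants.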
In the second case, my experience tells me that they’re not seeing the button to perform the postcode search, because there are two call-to-action buttons: one that says ‘Find postcode’ and a second that says ‘Continue to next step’.
Two potential solutions — I could either emphasise the search postcode button or I could remove the continue button — only showing it once they picked their address. I have a high degree of confidence in one of these working, probably the latter, but I’m not 100% sure. I need to see evidence that this is working or I need to run a test. I’m quietly confident but I can’t be completely sure.
With the billing address problem, this involves working across multiple pages to solve. Should we simply ask if the delivery and billing address are the same? Where should we ask them? How does that impact further screens people see?
This is a complex multi-step problem and I not only don’t have a solution — I know that whatever design hypothesis I come up with, it WILL need validation and measurement.
Experience Does Count
This example illustrates where experience actually counts. It helps me to:
(1) Spot soluble and straightforward defects, and know what is and ISN’T in this category.
(2) Find stuff for which I’m very confident I have a pattern, solution or approach.
(3) Understand which things I need to iterate, test, improve and keep optimising.
(4) Know what I don’t have a bloody clue how to solve.
It does NOT let me know what solution will work — only the tools, the method, the journey I need to go on to get closer to whatever that solution may be. It’s not about the AB testing — it’s the journey that counts.
If I were an Expert Car Mechanic, my skills would allow me to discover and query the symptoms or the car’s setup to isolate, understand and diagnose the problem. I may see something like a flat battery and know immediately that this is probably the entire issue. Or I may face something really complicated, where I need to inspect the engine, trace the electrical signals or plug in a diagnostic system.
And I’ll experience a complete range of problems, from the bonehead obvious all the way to the head-scratching end of the scale. And that experience gained working on cars helps you work out the *approach* to iterating your way to the solution, finding the fastest path and picking the right tools at the right point in that process.
And hey, replacing the battery is a nice pattern to spot. But sadly, it was the alternator which was dead. The battery was flat from the failure of another component. You won’t find any good mechanics saying they can fix any problem or improve any engine from just looking — they’ll more likely be able to tell you how your problem and their skills line up.
You won’t find any mechanic saying “Yup — found it — it’s the battery. We’re done here.” when they haven’t checked and validated that solution.
“The skills an expert builds up are the means by which they correctly diagnose, fix and validate solutions or change — not the means by which they can predict the freaking future.”
And so we can take UX experts and CRO experts who are good at spotting problems and say one thing confidently: for a wide range of changes, they don’t know what the hell will happen once those changes are implemented.
The range of confidence varies depending on your experience and the approach but a large amount of the ‘solutions’ that any CRO or UX person presents will be an ‘informed guess’ or ‘hypothesis’. It’s not a guarantee, a solution, a dead-cert or an easy win — it’s a *potential* solution waiting to be tested and validated.
So there’s a kind of arrogance here that I developed: because I knew the problem domains intimately, I felt I had to know the solutions too (and I’m biased, lol). And a bad sign in any UX or CRO expert is unshakeable confidence in a solution, pattern or wireframe, especially when that confidence is unfounded or based on limited evidence or data. I was that idiot a few years ago, but plenty of testing, humble pie and seeing my predictions shattered has cured me of this disease.
Knowing the extent and boundary of your knowledge about a problem (or system), and therefore the confidence you can have about potential changes, is one of the best things I ever learned from testing, observing and measuring people using my designs. The more inspection methods I used (diagnostic tools for my work), the better and faster my ability to achieve behavioural shifts and move clients forward, in the same way that good tools help a mechanic solve problems with your engine.
So what the hell does this have to do with Best Practice?
Well, UX researchers and CRO people will often cite these as examples of what to test or what to do. You’ll be presented with an example or a screenshot of a site and in some cases, an exhortation to try or test them.
And they can be useful, if presented with a pinch of salt. They’re example patterns of something that might have been used elsewhere, but to be honest, the UX expert doesn’t KNOW if that nice checkout payment pattern that Stripe implements actually works or not. They might be running a test and it might totally suck.
The CRO researcher doesn’t know either. They might be suggesting you run a test like some other company, but they’re up against another barrier: the site you’re optimising doesn’t share the context, customers, background, data or traffic of the test you’re looking at!
Context: When you look at AB test results, they often don’t tell you about the customers, the flow, the intent, the traffic source, the target audience, the company brand or credibility, the barriers, frustrations and worries. You’re looking at one page in a stream, and so see the smaller, not the bigger, picture. You have a tiny window onto the site and data.
Customers: I laugh when people say ‘Oh, this worked on X site and this would be great to try on yours’, firstly when it’s not a ‘confident pattern’ in my book, and secondly when the site it was tested on is NOTHING LIKE your business.
An ‘Adult Shop’ may hand out lubricated prophylactics to customers as they enter the store. This might work wonderfully for their business and increase sales and customer happiness but would it work in your store? If you read about this test in a newspaper article, would you decide to start handing these out in your shoe store without looking for evidence it might work?
So if you copy things (your competitor, an AB test pattern), it’s very easy to do so without any understanding of what you’re copying, which bits worked, or how it compares in any meaningful way with YOUR customer base. You don’t know why it worked (the tester might, but may not tell you), so without that context it’s of limited value.
Background: You have no idea whether people saw an awful landing page, were paid leads acquired at great expense, or arrived as organic traffic. You don’t know if the page was responsive, how many mobile/tablet/desktop visitors saw it, what browsers they used, or anything else that might make copying this test almost useless. Even if you were running an identical business, would your traffic (and its response to any test) be likely to be the same? Unlikely, unless it’s just a dumbass fix you know how to make anyway.
Data: Most AB test results pages are kinda like stats porn. You may look at what you see, but trying this at home might not work out for you. They also bias our heads, making us think that if only we could ‘do stuff like this’, our form would convert better.
Data on the test result helps, but you often have no traffic composition, cost data, sources or idea of upstream traffic; most published results show almost nothing of the journey. Some AB tests don’t even show sample sizes, confidence levels or error bars, and omitting the sample sizes and error bars in particular is criminal.
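To make the sample-size and error-bar point concrete, here’s a minimal Python sketch using the normal approximation for a binomial proportion. The visitor and conversion numbers are invented purely for illustration; they are not from any real test.

```python
import math

def conversion_interval(conversions: int, visitors: int, z: float = 1.96):
    """Conversion rate with a ~95% normal-approximation confidence interval."""
    p = conversions / visitors
    margin = z * math.sqrt(p * (1 - p) / visitors)  # standard error * z
    return p, p - margin, p + margin

# Invented numbers: a headline '20% uplift' that an error bar deflates.
for name, conv, n in [("Control", 30, 1000), ("Variant", 36, 1000)]:
    p, lo, hi = conversion_interval(conv, n)
    print(f"{name}: {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

With these made-up figures, the variant’s jump from 3.0% to 3.6% looks like a 20% improvement, yet the two confidence intervals overlap heavily — exactly the nuance a bare ‘winner’ screenshot hides.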
Traffic: The traffic on my customers’ sites varies all the time. Unless I can be sure that the sample I’m testing is somehow similar and representative of the sample THEY used in their AB test, how could I expect the response to be the same? Sure, if it’s a bonehead fix that works at the fundamental level of core usability, clarity or persuasion, it may well transfer. But maybe not.
So even if you knew everything, had seen every test and had access to all this data that’s missing from AB test examples, you STILL couldn’t guarantee a similar response. Knowledge of this is both ego crushing (a good thing) and also your salvation.
Do I cook like Gordon Ramsay at home? In my head, of course but not in real life. I might not even follow his method or his recipe — he’s informed me about the right way to cook a dish and serve it — but my target audience is very different. He’s provided me with a suggestion or a recipe I might try — but that response is the precious thing. No matter how nice it looks on telly, my wife and daughter may not like it. Probably my cooking but also their taste <grin>
Best Practice and Experts who tell you confidently that they know what to do are just a confusing smokescreen for the real truth. It’s about the site, the product, the service, the customers, their context, their fears, worries, barriers, motivations, emotions and responses to your product, not someone else’s. Start loving and understanding them, rather than chasing the illusory value of slavish or dogmatic copying, whatever the seeming promise of the source.
A disclaimer: I’m not denigrating the great UX and CRO people that I know, just the bullshitters out there who fall victim to the conceit, the arrogance, of thinking that they actually know the solution. The best experts are those who shrug and say “I don’t know, but we sure can find out. Here’s how.” They have confidence not in beliefs about what they don’t yet know, but in their ability to get that information and to really know.
Great UX and CRO people will be using constant and iterative feedback and testing to shape and improve products towards user task & goal outcomes and some business goals too. They won’t profess to know the answers but can explain the likely cause of problems and a range of potential solutions or tools to move away from where you are and towards where you need to be.
“There is no such thing as best practice for me. There are only users, the boundary layer between their minds and my product — and the tools that I can use to understand what’s happening there.”
My experience in observing and fixing things, and the patterns it has taught me, makes me a better diagnostician, but those patterns don’t function as truths: they guide and inform my work without providing guarantees.
So next time you hire a UX or CRO practitioner, go for the one who shows humility: who may not know what’s wrong from one glance at your company’s engine, but who will roll up their sleeves and find out, using every tool at their disposal to reach truth, solutions and the desired outcome.
And lastly, please stop copying slavishly. If you copy without knowing why it worked, you may not get the same result at all and it may not teach you anything useful about *your* customers. It’s a race to the bottom in business terms too, because by the time you’ve seen any test or pattern — the company who did this will have moved on.
So you want to go back 18 months in time to copy something someone tried that long ago, which is only being presented to you today? Most of your competitors would love it if you were wasting time trying their old stuff or something random you saw: better for their business, all day long.