Good Copy / Bad Copy (Tests)
Kate Harris
When I meet new people and the question “So, what do you do?” inevitably arises, my answer gets the same reaction nearly every time: “Oh. You write product copy… That’s your whole job?” This is usually followed by them asking skeptically if there’s really THAT much copy in the actual product to worry about other than a few headers and a handful of error messages. I’ve experimented with a few different responses—everything from “WELL, let me tell you about my day,” to “Actually, I just play a lot of foosball.” But in all honesty, you’d be astonished at the amount of copy there actually is, most of which would otherwise be written last-minute and late at night by a ship-hungry engineer. More importantly, it might also surprise you how one simple header can measurably change the way people interact with an entire feature on Yammer. And how do we know that? Because we A/B tested it.
Copy can impact A/B test results just as much as design can.
We know this now. Of course, like many other tech companies, at Yammer we A/B test the feature changes we make to our product to make sure our hypotheses hold up. But since we started testing copy strategically as an isolated design variable, we’ve seen huge deltas in results that startle even our most seasoned product managers.
Okay, so we test copy. But should we test ALL the copy?
My goodness, absolutely not.
Let me run you through a familiar scenario at Yammer:
Person 1: “I think notify is better.”
Person 2: “I think alert is better.”
Person 3: “I don’t think it will affect engagement either way.”
Person 4: “Let’s test it!”
Hold your horses, Person 4. There are a couple of boxes we need to check before we get to copy tests.
Remember that A/B testing is expensive in terms of time, manpower, and future work. You might assume that copy is easy to change and therefore cheap to test, but there’s also QA, results analysis, and code complexity to consider. So if you’re testing copy, treat it like a feature test: don’t muddy your codebase with it unless you have a healthy hypothesis that can survive a good beating in product council.
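To make the “code complexity” point concrete, here’s a minimal sketch in Python of what even a tiny copy test looks like once it lands in a codebase. The experiment name, variant strings, and helpers (`assign_variant`, `invite_header`) are hypothetical, not Yammer’s actual framework; the point is that a “four words” test is still a real branch that has to be assigned, logged, QA’d, analyzed, and eventually cleaned up.

```python
# Hypothetical sketch: even a small copy test means variant assignment,
# exposure logging, and cleanup once the test ends.
import hashlib

COPY_VARIANTS = {
    "control":   "Notify your coworkers",
    "treatment": "Alert your coworkers",
}

def assign_variant(user_id: str, experiment: str = "invite_header_copy") -> str:
    """Deterministically bucket a user into a 50/50 split for this experiment."""
    bucket = int(hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest(), 16) % 100
    return "treatment" if bucket < 50 else "control"

def invite_header(user_id: str) -> str:
    """Return the header string this user should see."""
    variant = assign_variant(user_id)
    # In a real system you'd also log the exposure here so the results can be analyzed later.
    return COPY_VARIANTS[variant]

print(invite_header("user-123"))
```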
So when SHOULD you run a copy test?
- If the test validates a hypothesis that can inform future product decisions, regardless of a win or a loss. Test copy concepts, not copy strings. A concept can apply to more than one design; a particular word combination can’t (i.e. test different user motivations, not different sentence structures). Imagine all the possible results you could get from the test and figure out what you might learn from each. Will it apply to future features and not just this one? If all you’re going to learn is that “Start here” performs better than “Get started”, it’s probably not worth a test… UNLESS:
- If the feature or interaction is heavily dependent on copy to drive user action. If the copy is the main design element helping the user understand what your product is for and why they should use it, you probably want to test it. Think about it in terms of user motivations: if you’re depending on the copy to get users to take an action, you’ll want to know which message works best.
When should you NOT run a copy test?
(You’ll encounter far more occasions that fall into this category, so be prepared to sniff them out.)
- If you are doing it because the team has been having a never-ending argument about four words. This is by far the most common BAD reason to run a copy test. A/B testing should help inform future product decisions, not settle arguments. Copy is the canary in the coal mine for the user experience as a whole: it’s the last line of UX defense, so it usually takes the heat for the weak parts of an interaction. Instead of defaulting to “test it,” go back to your UX designers; nine times out of ten you’ve got some thin spots in the ice of your design. If something is hard to describe, the design probably isn’t carrying the user through the flow, and the weight of the explanation lands on the copy. Watch out for long, complicated explanations and ask what bit of faulty UX they’re covering for.
- If you can only measure click-through and open rates rather than long-term product engagement (the two aren’t correlated). I’m looking at you, email. A/B testing in product has a different goal than A/B testing in email marketing: it needs to tell you how a change influences the value a user gets from the product, not whether they’ll click on a button you put in front of them (see the sketch after this list).
- If the results won’t tell you anything relevant to future features. Will these results get shoved somewhere on a shared drive where nobody will ever be able to make use of them again? Then it’s not worth even an engineer’s coffee break.
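If it helps to see what “long-term product engagement” looks like in practice, here’s a minimal sketch with made-up numbers: instead of comparing click-through rates, compare the share of users in each copy variant who are still engaged weeks after exposure (a hypothetical “came back and posted within 28 days” metric), using a simple two-proportion z-test. None of the counts or metric definitions below come from Yammer; they’re purely illustrative.

```python
# Illustrative only: compare long-term engagement between copy variants,
# not click-through. The counts and the 28-day metric are made up.
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(successes_a: int, n_a: int, successes_b: int, n_b: int):
    """Return (z, two-sided p-value) for the difference between two proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Users per variant who came back and posted within 28 days of seeing the copy.
z, p = two_proportion_ztest(successes_a=1180, n_a=10_000, successes_b=1255, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
# A variant can "win" on clicks and still show no real difference on a metric like this.
```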
So there you have it. The do’s (please please do’s) and don’ts of copy testing.
Now if you’ll excuse me, I’ve got a foosball game to get back to.
Kate Harris is a User Experience Writer at Yammer. She says she is a vegetarian but we all know that sometimes she goes to In-N-Out.