SEO needs and deserves better testing

Mark Traphagen · Published in Scale · 5 min read · Oct 27, 2016
How my boss reacts whenever I’m in our SEO lab

Search engine optimization (SEO) is both an art and a science.

It’s an art because, despite all its technical aspects, it is a marketing discipline, and thus involves a certain amount of creativity.

But it’s also a science. How so? Let’s think about what scientists do.

Into the mystery

Science exists because we humans are innately curious. We aren’t happy with merely existing. We want to know why. Why are things the way they are? Why aren’t they some other way? Why does the world work the way it does?

Thanks to science, we have answers to more of those questions with every passing year. But as the saying goes, “the more we know, the more we know we don’t know.”

The job of scientists is to attempt to roll back the darkness, to push further and further into the “things we don’t know (yet).” They do that by setting up careful tests and experiments.

Experiments start with a hypothesis: an educated guess about what might be the answer to a particular mystery. An experiment is set up, with carefully controlled conditions, to test the hypothesis, to see whether it is a model that makes accurate predictions, that is, whether the model comports well with what actually happens in the case being tested.

The process doesn’t end, of course, with an experiment that seems to affirm the hypothesis. Other scientists then attempt to replicate the experiment, to see if they get the same results. Through this process, over time, hypotheses are refined.

A hypothesis that has been so extensively tested and refined that it is recognized as a highly accurate predictor of reality becomes a theory (the science word most misunderstood by the general public).

SEO in the lab

Science works because we live in a universe that appears to operate according to fixed laws. Therefore, even though there is still much we don’t know about that universe, we have a real opportunity to uncover it. Through the process described above, we can experiment and test our way into discovering the as-yet unknown laws.

SEO operates on a much smaller universe: the search engines. Even though the world of search is much, much smaller than the cosmos, it shares some similar properties:

  1. It is reasonable to assume that search works according to certain laws (search ranking factors and the algorithms that implement them — although unlike our universe, the “gods” of the search engines do at times alter those laws!)
  2. We know some of the laws (ranking factors) but much remains as yet unknown, or only vaguely known.
  3. Through experimentation and testing, we can develop useful hypotheses about what more of the actual ranking factors are and how the search algorithms act on them. Over time and repeated testing, some of these hypotheses will gain enough confirmation to become useful SEO theories.

So yes, it is possible to do SEO science. But unfortunately, that doesn’t guarantee that it’s always done well.

Counting hypothetical chickens before they’re hatched

Just as there is junk science in the “real world” (“Scientists demonstrate fusion reactor generating unlimited energy!”), so there is junk science in SEO.

And just as in real-world science, the junk pollutes our knowledge through two channels (which sometimes work together):

  1. Poorly-designed or implemented experiments.
  2. Premature or inaccurate reporting of experimental results.

Experiments gone bad. The former happens when someone who doesn’t clearly understand the rules of creating valid experiments sets up and runs a test. Some of the common deficiencies include:

  • Undefined variables. In an experiment, a variable is something that is altered within the test to observe its effect. For example, in an SEO experiment to see if content length predicts ranking power, the variable would be the number of words on the page. If you haven’t carefully defined your variable, you don’t really know what you’re testing.
  • Too many variables. Generally, you should try to have just one variable per experiment. The more variables you introduce, the less sure you can be about which (if any) actually influenced your results.
  • Unknown or unaccounted-for variables. The experimenter must do her best to identify variables that might be acting on the test data but are not part of the experiment. For example, if your data shows a strong correlation between click-through rate (CTR) and ranking, are there other variables out there that might account for the correlation? (By the way, I think the answer to that particular question is “yes.”)
  • Lack of controls. A control is a set of data derived from a group of test subjects to which the variable of the experiment has not been applied. If what happens in the experimental group (the group to which the variable has been applied) does not differ significantly from what happens in the control group, that’s a good sign that the hypothesis about the variable is wrong. (See the sketch just after this list for what such a test-versus-control comparison might look like.)
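To make the single-variable-plus-control idea concrete, here is a minimal sketch in Python (my illustration, not part of any study mentioned in this post). It assumes hypothetical rank-change numbers for a test group of pages where content was lengthened and a control group of comparable pages left untouched, and it uses a standard statistical test to ask whether the difference could simply be noise.

```python
# Minimal sketch of a controlled SEO test (hypothetical data, for illustration only).
# One variable is changed on the test group (content length); the control group is left alone.
from scipy.stats import mannwhitneyu

# Change in average ranking position over the test period (negative = moved up).
# In a real experiment these numbers would come from rank tracking before and after the change.
test_group_rank_change = [-3.1, -1.4, -2.7, 0.2, -1.9, -0.8, -2.2, -1.1]   # pages with lengthened content
control_group_rank_change = [-0.4, 0.6, -0.2, 0.1, -0.9, 0.3, -0.5, 0.0]   # comparable pages, unchanged

# Non-parametric test: is the test group's movement meaningfully different from the control's,
# or could the difference plausibly be chance?
stat, p_value = mannwhitneyu(test_group_rank_change,
                             control_group_rank_change,
                             alternative="two-sided")

print(f"U statistic: {stat:.1f}, p-value: {p_value:.3f}")
if p_value < 0.05:
    print("The test group moved differently from the control group; the hypothesis gains support.")
else:
    print("No significant difference from the control group; the hypothesis is not supported here.")
```

Even a toy comparison like this forces the discipline the list above describes: one clearly defined variable, a control group that the variable never touches, and a check on whether the observed difference is large enough to mean anything.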

Don’t bother me with the facts! The second way in which SEO experiments turn out to be unhelpful is in how they are reported.

In some cases, it is the SEO scientist himself who is the problem. In his desire to have something significant to say, the experimenter may leave out critical data, fail to explain his methodology clearly (or at all), or gloss over aspects of the experiment that are less favorable to his conclusion.

Or sometimes the problem is created by search marketing writers who report the results of someone else’s experiment inaccurately, with exaggeration, or prematurely (before anyone else has tested or vetted them).

Why does SEO deserve better experiments?

Very simply, because publicly shared SEO tests can have a significant effect on the bottom lines, and even the survival, of businesses.

Businesses with an online presence need to build a good flow of organic traffic from search. Search remains the number one place where people go when they are researching what to buy or are ready to buy.

Since so much of how search engines rank our sites and content remains in the “black box,” we are dependent upon the findings of search experiments to give us guidance in what we should do to improve those rankings and bring more traffic to our businesses.

Therefore, it is imperative that we in the SEO industry strive to do better and more valid testing and experimenting, and that we are careful and ethical about how we report our findings. These principles have always been at the forefront of all the industry-acclaimed studies we do at my agency.

Finally, here’s a brief and (I hope) entertaining video where The Art of SEO lead author Eric Enge and I explain the principles I’ve covered in this post.

If this made you think, tap the heart and share it with others too. Thanks!
