Coincidence & Causality

In the late, cool evening of April 14, 1912, lookouts aboard the RMS Titanic spotted an iceberg off the starboard bow. Alarms were rung, orders were issued, and engines were reversed. We know the rest of the story. However, you may be surprised to learn about a remarkable coincidence, and what it shares with software design.

Among the Titanic’s survivors was a stewardess named Violet Jessop. In her twenty-four years, she had already endured much before the ship’s sinking, including what must have seemed like a warm-up act to her Titanic voyage. Through a stroke of bad luck, she had been a crewmember just seven months earlier on the RMS Olympic, which nearly sank off the Isle of Wight after a collision with the warship HMS Hawke. One maritime disaster is harrowing. Two are unusual. But a third is remarkable. How familiar Violet Jessop must have felt to find herself once again aboard an ill-fated vessel when the hospital ship Britannic struck a mine and sank into the Aegean Sea.

One maritime disaster is harrowing. Two are unusual. But a third is remarkable.

You might think Violet would reconsider her choice of profession after being aboard three sinking ships. However, she continued to work for cruise and shipping companies throughout her career. Despite the amazing coincidence, Violet Jessop had no bearing on any of the three events; she merely had a hapless employment history. Violet did not cause a single shipwreck, let alone all three.

Our adventures in digital project work are not nearly as hazardous, but we do witness coincidences on a regular basis. Sales briefly increase. Page views temporarily decline. “Likes” momentarily stagnate. These behaviors are noticeable, but are they notable? Coincidences during a project can often mislead us into making reactionary and shortsighted decisions. We will discuss three common hazards that hide under the surface of your projects and sink good ideas.

Texas Sharpshooter Fallacy

Imagine for a moment a gritty cowboy, the type of fella who might have a mouth full of chewing tobacco and hips flanked by two Colt .45 revolvers. A western sun sets off his dusty silhouette as tumbleweeds blow in the distance. Our cowboy stands motionless, guns at the ready, staring with focused attention at an old, wooden barn sitting several yards away. He spits, raises his revolvers, and quickly fires 12 shots.

As the dust clears, we see bullet holes scattered across the barn’s wooden wall in no apparent order or pattern. A few shots hit near the center of the wall. Some hit near the roof. Others hit near the foundation. The cowboy walks up to the barn, pulls out a piece of chalk from his pocket, and draws a single, continuous line around all the bullet holes. His drawing forms a large, weirdly shaped outline. Upon its completion, the cowboy exclaims, “Well, look’y here. All my shots hit the target!”

“Well, look’y here. All my shots hit the target!”

We can all be Texas sharpshooters if we do not carefully evaluate the entirety of available data. Simply looking for clusters that align with our biases may lead us to incorrect conclusions.

For example, a website’s analytics are an excellent resource for evaluating past performance. However, we can use analytics to predict future performance only if the website stays the same, devoid of any design or technical changes. To do otherwise would be like trying to count old bullet holes on a new barn. Once you introduce changes to an experience, analytic information becomes purely historical. Until you accumulate a sufficient mass of new information, analytics are irrelevant. New barns only show new bullet holes. Even then, you might draw the wrong target.

Pick your gun and draw your target before you evaluate data. This sounds simple, but even experienced pros sometimes misunderstand this concept. Consider the following scenario:

Acme Company changes their website’s home page and wants to evaluate its aesthetic merits. They measure the number of visits. After making the change to the home page, fewer visitors view the page. Therefore, Acme Company believes the new design is less successful than the previous one.

In the above example, Acme Company counts bullets (the number of visits) on the target (the site’s home page). Outside of search engine optimization, the number of visits rarely has anything to do with a page’s visual design. After all, a visitor could view the page, say, “I think this home page looks horrible,” and then leave. However, analytics software still counts his or her visit. A page visit is an ineffective way to evaluate visual design; visits indicate market awareness and supporting media efforts, not the quality of the design. Acme counted bullets but chose the wrong gun.
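To make that ordering concrete, here is a minimal, hypothetical sketch in Python; the metric name, baseline, and figures are all invented for illustration. The only point is that the target (a metric and a goal) is declared before the redesign’s data is examined, so the evaluation cannot be redrawn afterward around whatever numbers happen to cluster.

```python
# Hypothetical sketch only: the metric, baseline and figures below are invented.

# 1. Choose your gun and paint your target *before* the redesign launches.
target = {
    "metric": "task_completion_rate",  # what we agree to count
    "goal": 0.70,                      # what "success" means, decided up front
}
baseline = 0.62                        # measured on the old design

# 2. Only afterward, count the bullet holes.
post_launch = {
    "visits": 48_000,                  # reflects awareness and media, not design quality
    "task_completion_rate": 0.71,      # the metric we committed to
}

observed = post_launch[target["metric"]]
verdict = "hit" if observed >= target["goal"] else "missed"
print(f"{target['metric']}: {baseline:.0%} -> {observed:.0%} "
      f"(goal {target['goal']:.0%}, target {verdict})")
```

Had visits been the only number checked, the same launch could have been scored a failure for reasons that have nothing to do with the design.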

Choose your gun, paint your target, and then count the bullet holes. You will be a sharpshooter in no time.

Procrustean Bed

If the Texas Sharpshooter Fallacy exudes a country charm, the story of the Procrustean bed should scare the hell out of you. According to Greek mythology, an old ironsmith named Procrustes would offer shelter to weary travelers along the road to Athens. While they slept, Procrustes would strap the travelers to their beds and stretch their bodies to fit the bed frame. Short people got off easy. The tall ones truly suffered. Procrustes chopped off their feet, ankles and shins until the travelers fit neatly into their beds.

You find Procrustean solutions frequently in digital work. Data is stretched and truncated to fit a chosen solution. Business objectives are overplayed; user needs are downplayed. Device requirements are overplayed; affordability is downplayed. Gesture controls are overplayed; the aging population is downplayed. Stretch. Chop. Enhance. Remove. We become data sadists.

We become data sadists.

We also affect data while collecting it. Selection bias stretches and pulls data by altering whom or what we select as the data’s source. Research practices such as “get out of the building” (GOOB) can be a powerful way to solicit feedback from users: we leave our offices, visit a public setting, find users, and show them an app or website, observing how they respond. However, like Procrustes sizing up his guests on the road to Athens, we may inadvertently — or intentionally — select users based on non-representative criteria. We unconsciously select people who look friendly, relaxed and outgoing. On-the-street interviews, retail intercepts, and all face-to-face interactions carry the possibility that we may reach only those people who are willing to talk to us. Are they representative of your audience — or are they representative only of the people willing to talk to an inquisitive stranger holding an iPad?

Are they representative of your audience — or are they representative only of the people willing to talk to an inquisitive stranger holding an iPad?
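As a back-of-the-envelope illustration, here is a toy simulation of that skew; every figure in it is invented. It imagines an audience in which only a minority will stop for an intercept interview, and in this made-up world the people who do stop happen to be more enthusiastic, so the intercepted sample paints a rosier picture than the audience as a whole.

```python
# Toy simulation of selection bias in intercept research.
# Every population figure and satisfaction score here is invented.
import random

random.seed(42)

population = []
for _ in range(10_000):
    # In this invented audience, only ~15% will stop for a stranger with an iPad,
    # and those who do skew more enthusiastic about the product.
    willing_to_talk = random.random() < 0.15
    satisfaction = random.gauss(7.5 if willing_to_talk else 5.5, 1.5)
    population.append((willing_to_talk, satisfaction))

everyone = [score for _, score in population]
intercepted = [score for willing, score in population if willing]

print(f"True average satisfaction:           {sum(everyone) / len(everyone):.1f} out of 10")
print(f"Average among those willing to talk: {sum(intercepted) / len(intercepted):.1f} out of 10")
```

The gap between those two numbers is the Procrustean stretch: nothing about the product changed, only whom the sample happened to reach.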

Keep a vigilant eye on data that fits a bit too neatly into recommendations — even your own. A realistic assessment of data may occasionally clip your wings, but it will help you avoid getting cut off at the knees.

Hobson’s Choice

Livery stables were the 17th-century equivalent of today’s car rental companies. Riders chose a horse, rode it, and then returned it. Thomas Hobson ran a livery stable outside of Cambridge, England. He realized riders chose the good horses far more often than the bad, resulting in overuse of some horses and underuse of others. Like automobiles, horses accrue mileage. Hobson decided to remove the rider’s choice. He gave prospective riders a single option: ride the horse he chose for them or do not ride at all — in short, “take it or leave it.”

We often face a Hobson’s Choice when crafting digital solutions. We accept a bad solution, rather than do without one. A feature falls short of expectations; an experience feels awkward; an app’s performance trots rather than gallops. However, your team employs the solution anyway. Short schedules and insufficient budgets often take the blame.

We accept a bad solution, rather than do without one.

If a user experience is bad, you would be well served not to take it out of the stable, so to speak. Yet this decision takes courage, because even a bad solution stands as evidence of our work to our peers. It is proof of activity. In today’s world of rapid iteration, we sometimes accept a Hobson’s Choice solution in the hope that it will eventually be replaced. We emphasize the now over the good at our peril. As the saying goes, “The joy of an early release lasts but a short time. The bitterness of a bad release can last for years.”

A good idea does not always succeed, but logic certainly helps improve its chances. Logic cleans the livery stables of our own messy minds. Brainstorming, designing, developing, scheduling, budgeting and managing generate a lot of horseshit. You need to find a way to stomp through it and reach the road leading to your audience. Avoid the hazards along the way: recognize coincidence, pick your targets, and always be wary of strange, old men offering help — including me.

In summary…

  • Noticeable data is not necessarily notable data.
  • Choose what you will measure before measuring.
  • Be skeptical of research findings that contain no exceptions.
  • A discarded solution is often better than an unsupported one.

Originally published at www.edwardstull.com.