Twitter, Moneyball, and the Vietnam War
For a while, as US Secretary of Defense, Robert McNamara was convinced the US was winning the war in Vietnam. All the stats told him so: materiel in and body count out. The US was putting in more resources and killing more people, so by McNamara’s reckoning that meant it was winning the war in the long run.
The dissonance between McNamara’s view and the reality of the war led to his resignation and widespread recognition of the McNamara Fallacy:
“The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily really isn’t important. This is blindness. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.”
There were many variables McNamara was missing, but public opinion is worth focusing on because it’s particularly interesting. In the US, extensive public opinion polling was carried out throughout the war. The longer the war dragged on and the more American casualties there were, the more Americans opposed continuing the war.
There wasn’t good polling in Vietnam. It’s easy to postulate in hindsight that the reverse trend held for the National Liberation Front, or Viet Cong. Since they saw America as an occupying force, the more the US committed to the fight and the more Vietnamese were killed in the conflict, the greater their resolve grew. But that’s guesswork. There wasn’t good polling, and thus McNamara was blind to it.
Why does this matter right now? I worry in the world of business (and social organisations) we’re forgetting this lesson. We’re looking exclusively at data to make decisions and forgetting the variables we can’t measure very well. And we’re doing it because it’s trendy.
Moneyball & The Lean Startup
McNamara was neither the first nor the last person to use data to inform decision making. As the Information Age came into full bloom, more and more organisations had data on customer behaviour and could use that data to gain insights and guide business decisions.
Then in 2011, the perfect combination of Moneyball and The Lean Startup created a movement. The Data-Driven Decision Making Movement.
Eric Ries’s The Lean Startup codified much of the thinking that defines Silicon Valley and the current tech boom. One of its key tenets is “validated learning” — creating a goal, choosing a metric, performing an experiment, analysing the results and making adjustments based on those results. It is essentially the scientific method applied to business. When done well, it means you’re precisely measuring how you can improve your business and making decisions based on hard numbers.
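To make the loop concrete, here is a minimal sketch (my own illustration, not from the book) of a single validated-learning step; the metric name and numbers are hypothetical:

```python
def validated_learning_step(metric_name, control, variant):
    """One pass of the build-measure-learn loop: compare a chosen
    metric between the old version (control) and the new version
    (variant), then decide what to do with the change."""
    lift = (variant - control) / control
    decision = "keep the change" if lift > 0 else "revert and iterate"
    print(f"{metric_name}: lift {lift:+.1%} -> {decision}")
    return lift, decision

# Hypothetical numbers: share of sessions in which users take the
# metric action, before and after an experimental change.
lift, decision = validated_learning_step("feature use rate", 0.100, 0.106)
```

The point of the sketch is only that each step is driven by one pre-chosen, easily measured number — which is exactly where the McNamara Fallacy can creep in.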
The book has become something of a sacred text to anyone starting an organisation. Yet, it might not have had so expansive an impact if it weren’t for a little boost from Hollywood.
At the same time The Lean Startup came out, the film Moneyball gave us all a concrete example of the potential of data-driven decision making. The film tells the story of the making of the 2002 Oakland A’s. With a smaller payroll than almost every other team, the general manager had to reject traditional means of determining a player’s value to the team, and instead used complex data analysis to identify potentially valuable players. Unsurprisingly, the team is hugely successful as a result, and we are given a modern-day fable of the power of data-driven decision making.
I don’t have data (I know) to show a causal relationship between Moneyball and The Lean Startup and the huge surge of interest in the concept of data-driven decision making starting in 2011. But anecdotal evidence is legion: people look to these two pieces of work when they want to tell a story of using data to make decisions.
But a problem arises when organisations look exclusively at data to make all decisions. Whenever good data is available, it should certainly inform decisions. However, there will be many instances where there simply isn’t good data, or the right data, to guide a decision.
Twitter
Twitter is still a large and influential company, but there’s a growing feeling amongst the chattering classes that there’s something deeply wrong at the heart of the platform. umair haque summed up the problem perfectly in one word: abuse.
“a social web which is infected with the abuse will inevitably see a decline in usage. I can put that in economist-ese if you like: network effects power social technologies, but abuse is a kind of anti-network effect, not a positive one, but a negative one: I don’t benefit from you being on the network, I suffer.”
Twitter has to be aware of this problem. There’s the seemingly repeat headline of celebs quitting the platform because of abuse, Jimmy Kimmel’s repeated “Celebrities Read Mean Tweets” segment, and of course Twitter is periodically taken to task by the media for letting abuse slide.
But this is all anecdotal. If you demand hard data to back up every decision, as Twitter is reported to do, then there’s a McNamara-sized fallacy waiting…
“The third step is to presume that what can’t be measured easily really isn’t important. This is blindness. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.”
The data on positive interactions is easy: retweets, likes, time on platform, and content views can all be taken as positive actions. Twitter acts on this data, following Lean Startup principles to incrementally improve the product. For example, the switch from ‘favourite’ to ‘like’ increased use of the feature by 6%, and I would wager the change was made in an effort to improve the ratio of positive to negative interactions on the platform. Where there’s actionable data, it’s easy for Twitter to look at it and improve the platform.
Negative interactions are much harder to measure. Abuse reports are certainly one source of data, but ignoring an abusive tweet shows up in the data just like ignoring a boring tweet. And negative interactions can sometimes look just like positive ones: a sarcastic reply looks no different than a conversation between friends.
And hardest to measure is the impact of abuse on people who don’t receive it. I’m certainly dissuaded from using Twitter every time I hear about the preponderance of vile abuse on the platform. Even if the abuse isn’t directed toward me, it poisons the space for us all when such threats are pervasive.
In mid-2013 Twitter was being roundly criticised for not dealing with abuse, and by late 2015 people were calling it a dying platform because of abuse. It obviously has a significant problem and has been unable to address it for years. It’s just a hypothesis, but it wouldn’t surprise me if the organisational barrier is a reliance on data for every decision, leaving a massive blind spot to the damage being done by poorly measured but widely documented abuse.
None of this is to decry the trend of data informing decision making. When done well, insisting on good data to guide decisions can be hugely beneficial. But making decisions based only on the data that happens to be available can be damaging at best and catastrophic at worst. It’s important that we always know what we can’t measure, and use judgement to fit the data and our blind spots together to plot the best course of action.