Understanding Abundance, part 1: It’s Caused by Consumers

Alex Danco
Published in Social Capital · Feb 5, 2017

Hello! If you’re coming here for the first time, thanks for checking out my writing on Medium. I don’t publish much here anymore — I’ve switched over to publishing entirely on my own website, alexdanco.com. I also write a weekly newsletter, which comes out on Sundays; you can sign up at danco.substack.com. I write a lot, and I don’t want you to miss it! So please head over there and subscribe.

Welcome to our four-part series on abundance. If you’d like, you can start with our introduction to the series, which you can find here.

What do we really mean when we talk about “Abundance”?

Let’s start this series by answering that question, because it’s one of the more important concepts we’ll have to grapple with over the coming decades.

There are a number of different ways to conceptualize abundance, but I favour one in particular. To me, abundance is not really a quality of technology, products or the supply side at all. It’s actually a statement about the consumer:

Abundance is the condition reached as the friction involved in consumption decisions approaches zero.

To unpack this statement, let’s first examine the opposite state: scarcity. We’re used to thinking about our free market economy in terms of scarcity and friction, because they’re what drive return on invested capital. In an environment of scarcity, the friction, switching cost, and deliberation involved in a consumer purchase are high, and there are many factors to consider before a decision can be made. That is how we usually imagine customers making decisions! When a consumer buys a TV, we picture them asking: Will it fit in my living room? Do I like how it looks? Do I need a new TV at all? When a business signs a contract with a supplier, it makes its decision based on many factors: What is your reputation? Do you meet our requirements? Do you comply with regulations? Do we have a prior relationship? Is the price fair?

The outcomes of decisions in environments of scarcity often follow a normally distributed pattern. There’s a pricing exercise, sometimes taught in business school, that illustrates this idea. It goes something like this:

The professor announces, “I have two plane tickets to Hawaii, for spring break, that I will sell to someone in the class. Everyone must write down on a piece of paper the maximum amount of money they would offer for these tickets, and then pass the paper forward.” One aim of the exercise is to demonstrate that across the group, a wide range of numbers may appear. Each student has to ask herself a number of questions in order to come up with an offer: How much money do I have? Do I like Hawaii? Do I have anything else going on during spring break? Could I resell them?

If you survey a large number of students, the distribution of offers will look roughly like a bell curve.

It may not look exactly like that: it may be skewed in one direction or the other; there may be some outliers. But in general, you’d expect a range of outcomes distributed about a mean in a fairly Gaussian manner. Most people’s interest in these plane tickets is not a binary Yes or No. It’s Maybe, at the right price.

There’s a reason for this: Gaussian distributions occur frequently in nature whenever some outcome variable is an additive function of many different input variables. The genetically governed height of human beings is distributed in much the same way as the roll of two dice while playing Settlers of Catan: lumpy in the middle; predictably sparse on either side. In an environment of scarcity, we consider many factors; the more equally-weighted factors involved in the decision, the more we would expect these Maybe-dominated outcomes, at least in theory. Enterprise purchases, for instance, often involve hundreds of variables that are considered and negotiated over months. Buyers must weigh those variables very carefully, because switching costs are likely to be high. Furthermore, their selection is not only influenced by past events but will also likely influence future ones: compatibility between systems, standards, and performance specs are all important elements to consider.
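
To make the additive-factors point concrete, here’s a minimal sketch in Python (my own illustration; the number of factors and the dollar amounts are invented): each simulated student’s offer is the sum of ten independent considerations, and the offers cluster around a mean rather than splitting into Yes and No.

    import random

    # A minimal sketch: when a decision is the sum of many independent factors,
    # outcomes pile up around a mean in a roughly Gaussian way.

    def willingness_to_pay(n_factors=10):
        # Each hypothetical factor (budget, interest in Hawaii, other plans...)
        # contributes an independent amount, in dollars, between 0 and 100.
        return sum(random.uniform(0, 100) for _ in range(n_factors))

    offers = [willingness_to_pay() for _ in range(10_000)]

    # Bucket the offers into $100-wide bins and print a crude histogram.
    buckets = [0] * 10
    for offer in offers:
        buckets[min(int(offer // 100), 9)] += 1

    for i, count in enumerate(buckets):
        print(f"${i * 100:4d}-{(i + 1) * 100:4d}: " + "#" * (count // 100))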

This is all because of friction. The more scarcity and friction involved in a transaction or decision, the more factors will likely be taken into consideration, the more carefully we will deliberate before consuming, and the more Maybes we contend with.

But what if there is no friction?

An increasing number of our actions, decisions and consumption choices in the modern era are not ‘Maybe’ kinds of decisions. They are no — unless it’s exactly what I want, in which case yes. I don’t deliberate very hard when I open the tap for a glass of water, turn on the lights, click on a gif, or follow someone on Twitter. It’s not a multifactorial decision. It’s a binary decision. Your answer is no, unless it’s yes.

When consumer choices are low-friction, we don’t see normal distributions at all. We see bifurcated distributions, often split between differentiated if/then choices on the one hand and a default else option on the other:

-I have 10 minutes to browse the web. Either I’m going to go to a destination site that’s exactly what I want, or I’m going to go on Facebook and scroll idly.

-I have to get a new phone. I’m either going to get exactly what I want (probably an iPhone) or default to whatever phone comes free with my plan.

-I’m going to shop for Christmas gifts. I might shop from a highly specialized boutique store, or else default to Amazon.

As friction is removed, it opens the gates for customers to consume without hesitating, treating the object of their consumption as abundantly available. And when there’s no local friction, something curious happens. We often mistakenly assume that, presented with an abundance of friction-free options, our choices will be widely distributed and reflect that variety. In fact, removing friction doesn’t create a level playing field; it creates the opposite: a split between hyper-targeted “if” options and default “else” options.
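
Here’s a hypothetical sketch of that split in Python (the option names and the 30 percent match rate are invented for illustration): with friction removed, every simulated consumer either has one niche option that matches them exactly or falls straight through to a single default, and the resulting distribution of choices is bifurcated rather than bell-shaped.

    import random
    from collections import Counter

    # A hypothetical sketch of frictionless if/else choice. Option names and
    # the match rate are invented; the point is the shape of the outcome.

    NICHE_OPTIONS = [f"boutique_{i}" for i in range(20)]
    DEFAULT = "default_megastore"

    def choose(exact_match):
        # No multifactor deliberation: either something is exactly what I want,
        # or I fall through to the default.
        return exact_match if exact_match is not None else DEFAULT

    choices = Counter()
    for _ in range(10_000):
        # Assume roughly 30% of consumers have one specific niche option in mind.
        exact_match = random.choice(NICHE_OPTIONS) if random.random() < 0.3 else None
        choices[choose(exact_match)] += 1

    # The default takes most of the volume; the niches split the rest thinly.
    for option, count in choices.most_common(5):
        print(f"{option:20s} {count}")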

Zipf’s Law, bundling, and if/else decision-making

As friction goes away, compounding can proceed with fewer obstacles. The 20th century linguist George Zipf gave the world a very handy anecdote to explain what’s going on here. His question was: out of the tens of thousands of words in the English language, and even the thousands of words familiar to anyone with basic fluency, how come the great majority of our speaking and writing consists of only the top several hundred words or so? The reason is that the more we use a word, the less effort it takes to retrieve that word and use it a second time. There’s no external incentive or disincentive pushing us toward one word over another; it’s just what we naturally do.
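
A toy simulation of that reuse effect (my own sketch in Python; the probabilities are invented) shows the pattern: if each new ‘utterance’ mostly reuses words in proportion to how often they have already been used, with only an occasional brand-new word, the top-ranked words end up accounting for most of the usage.

    import random
    from collections import Counter

    # A toy sketch of Zipf-like reuse (probabilities invented for illustration):
    # each utterance usually reuses a word in proportion to past use, because
    # frequently used words are the cheapest to retrieve again.

    random.seed(0)
    history = ["word_0"]
    next_word = 1

    for _ in range(50_000):
        if random.random() < 0.02:                  # occasionally coin a new word
            history.append(f"word_{next_word}")
            next_word += 1
        else:                                       # otherwise reuse an old one,
            history.append(random.choice(history))  # weighted by past frequency

    # A handful of words dominate; the long tail barely gets used.
    for rank, (word, count) in enumerate(Counter(history).most_common(10), start=1):
        print(f"rank {rank:2d}: {word:10s} {count}")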

The Zipf-like effect going on here is the following: the less friction present in a system, the more you see people default to an if/else decision profile. You get winner-take-all effects, or at least winner-take-most, in the ‘else’ category. This is why on a street with three coffee shops, Starbucks will be crowded; on a street with ten coffee shops, Starbucks will be even more crowded. But the other quirky independent espresso shops are probably doing okay too! They may have their own little dedicated customer base that chooses them preferentially.

If [I am on Queen Street West] and [find an espresso shop that’s exactly what I want]

then [Go there]

else [Go to Starbucks].

Viewed from the outside, this process is what we recognize as a kind of “Consumerization”: what happens when friction is eroded and eliminated, and we take fewer factors into account when making a consumption decision. And it doesn’t only involve simple one-off consumer purchases like coffee, either. The “Consumerization of IT” is an example. Our transition from mainframes to iPhones; from Siebel, Oracle and SAP to Salesforce, Workday, Slack and Dropbox; all are representative of the consumerization process — away from friction and measured deliberation; towards automatic and frictionless consumption of default options that compound over time.

Consumerization in context

When establishing frameworks in tech, we should start with the Granddaddy of them all: The Innovator’s Dilemma. Disruption Theory astutely describes a particular kind of market entry we see very often with software and the Internet. Start off with a product that is not taken seriously, yet has a powerful technical advantage; then expand upmarket, outwards, or both on the basis of that technical strength. Chris Dixon’s contemporary corollary, “The next big thing will start out looking like a toy”, illustrates this concept well. The driving force for this market transformation is neither product prowess nor market positioning, but rather the customer’s job-to-be-done.

In my Emergent Layers series last year, I introduced my own views on this process with the Overserved / Underserved concept. Many of the great growth stories in modern tech (Intel, Sun, Oracle, Google, Uber, just to name a few) were companies that discovered a market of customers with a particularly explosive set of properties. They were simultaneously overserved by a current solution, in the Innovator’s Dilemma sense — they were being served something too expensive, too complex, too much — yet underserved for another important aspect of their job-to-be-done, giving them a compelling reason to adopt a new solution beyond just cost. Oracle’s prospective market, for instance, was greatly overserved by the specs, features and exposed complexity of IBM’s Information Management System, yet underserved in their need for a relational database. iPhone buyers were overserved by many of the capabilities of a PC, yet underserved along many other dimensions — portability, the camera, the GPS, and more. Uber’s customers are greatly overserved by their car, which spends 90% of its day parked, yet underserved in their desire to be chauffeured around while playing on their phones; to not need to park, nor find a designated driver.

We can appreciate now that both Disruption Theory and our Overserved / Underserved framework help explain consumerization: how we got from the centrally-procured IBM Mainframe to departmentally-scrounged modular PCs, and eventually “Bring-Your-Own-iPhone.” Consumerization, Disruption, and power law outcomes are all part of the same cycle of fewer factors being considered in the consumption process. They push us towards abundance: the state when the friction involved in consumption decisions approaches zero, and we then consume accordingly.

This is a virtuous cycle of consumerization and abundance that we’re starting to see, and so far in our series we’ve covered the first half of it: how lower switching costs, masked complexity, and cheaper options remove the friction from consumer deliberation, lead to single-variable consumption decisions, and then create bifurcated, compounding outcomes. But what we haven’t talked about yet is why we don’t see this in every industry. What’s special about modern tech, about software and the Internet, that lets this cycle of abundance flow freely, unlike in many other environments? And in these situations, who does well and who does poorly?

In frictionless environments of abundance where the consumer is choosing among competing differentiated options, we face an interesting dilemma on the supply side. Abundance allows for compounding to proceed unimpeded, but it also destroys pricing power and threatens ROIC.

As we’ll see, if you want to compete in a low-friction environment, you essentially have two options:

  1. Be as differentiated as possible and serve the customer exactly what they want.
  2. Power everything. Don’t pick winners; have the winners all pick you.

As it turns out, the modern tech industry became the perfect supply-side invention for this dichotomy, and for stoking this cycle and feeding consumers’ growing demand for abundance. We’ll get to it — and a lot more — in part 2.

You can read Part Two here.
