Imagine you’re searching for an apartment in San Francisco — arguably the most harrowing American city in which to do so. The booming tech sector and tight zoning laws limiting new construction have conspired to make the city just as expensive as New York, and by many accounts more competitive. New listings go up and come down within minutes, open houses are mobbed, and often the keys end up in the hands of whoever can physically foist a deposit check on the landlord first.
Such a savage market leaves little room for the kind of fact-finding and deliberation that is theoretically supposed to characterize the doings of the rational consumer. Unlike, say, a mall patron or an online shopper, who can compare options before making a decision, the would-be San Franciscan has to decide instantly either way: you can take the apartment you are currently looking at, forsaking all others, or you can walk away, never to return.
Let’s assume for a moment, for the sake of simplicity, that you care only about maximizing your chance of getting the very best apartment available. Your goal is reducing the twin, Scylla-and-Charybdis regrets of the “one that got away” and the “stone left unturned” to the absolute minimum. You run into a dilemma right off the bat: How are you to know that an apartment is indeed the best unless you have a baseline to judge it by? And how are you to establish that baseline unless you look at (and lose) a number of apartments? The more information you gather, the better you’ll know the right opportunity when you see it — but the more likely you are to have already passed it by.
So what do you do? How do you make an informed decision when the very act of informing it jeopardizes the outcome? It’s a cruel situation, bordering on paradox. The crucial dilemma is not which option to pick, but how many options to even consider.
When presented with this kind of problem, most people will intuitively say something to the effect that it requires some sort of balance between looking and leaping — that you must look at enough apartments to establish a standard, then take whatever satisfies the standard you’ve established. This notion of balance is, in fact, precisely correct. What most people don’t say with any certainty is what that balance is. Fortunately, there’s an answer.
If you want the best odds of getting the best apartment, spend 37% of your apartment hunt (eleven days, if you’ve given yourself a month for the search) noncommittally exploring options. Leave the checkbook at home; you’re just calibrating. But after that point, be prepared to immediately commit — deposit and all — to the very first place you see that beats whatever you’ve already seen. This is not merely an intuitively satisfying compromise between looking and leaping. It is the provably optimal solution.
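The claim is easy to check empirically. Here is a quick Monte Carlo sketch of the look-then-leap rule under the classic assumptions (options arrive in random order, and you can only compare them against one another); the function and parameter names are my own:

```python
import random

def secretary_success_rate(n=100, cutoff_frac=0.37, trials=20_000):
    """Estimate how often the look-then-leap rule lands the single best
    of n options, when the options arrive in random order."""
    cutoff = int(n * cutoff_frac)
    wins = 0
    for _ in range(trials):
        ranks = random.sample(range(n), n)  # rank 0 is the best option
        best_seen = min(ranks[:cutoff])     # calibration phase: look, don't leap
        chosen = ranks[-1]                  # if nothing ever beats the calibration
                                            # phase, you're stuck with the last option
        for r in ranks[cutoff:]:
            if r < best_seen:               # first option to beat everything so far
                chosen = r
                break
        if chosen == 0:
            wins += 1
    return wins / trials
```

Run with n = 100, the estimated success rate hovers around 0.37, the same number as the cutoff; that coincidence is in fact a theorem, with both quantities tending to 1/e as the pool grows.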
We know this because finding an apartment belongs to a class of mathematical problems known as “optimal stopping” problems. And as it turns out, apartment hunting is just one of the ways that optimal stopping rears its head in daily life. Committing to or forgoing a succession of options is a structure that appears in life again and again, in slightly different incarnations. How many times to circle the block before pulling into a parking space? How far to push your luck with a risky business venture before cashing out? How long to hold out for a better offer on that house or car?
The same challenge also appears in an even more fraught setting: dating. Optimal stopping is the science of serial monogamy.
Before he became a professor of operations research at Carnegie Mellon, Michael Trick was a graduate student, looking for love. Suddenly, it dawned on him: dating was an optimal stopping problem! And so he ran the numbers. Assuming that his search would run from ages eighteen to forty, the 37% rule gave age 26.1 years as the point at which to switch from looking to leaping. A number that, as it happened, was exactly Trick’s age at the time. So when he found a woman who was a better match than all those he had dated so far, he knew exactly what to do. He leapt. “I didn’t know if she was Perfect (the assumptions of the model don’t allow me to determine that), but there was no doubt that she met the qualifications for this step of the algorithm. So I proposed,” he writes.
“And she turned me down.”
Mathematicians have been having trouble with love since at least the seventeenth century. The legendary astronomer Johannes Kepler is today perhaps best remembered for discovering that planetary orbits are elliptical and for being a crucial part of the “Copernican Revolution” that included Galileo and Newton and upended humanity’s sense of its place in the heavens. But Kepler had terrestrial concerns, too. After the death of his first wife in 1611, Kepler embarked on a long and arduous quest to remarry, ultimately courting a total of eleven women. Of the first four, Kepler liked the fourth the best (“because of her tall build and athletic body,” he wrote in a letter to an unknown nobleman) but did not cease his search. “It would have been settled,” Kepler wrote, “had not both love and reason forced a fifth woman on me. This one won me over with love, humble loyalty, economy of household, diligence, and the love she gave the stepchildren.”
“However,” he wrote, “I continued.”
Kepler’s friends and relations went on making introductions for him, and he kept on looking, but halfheartedly. His thoughts remained with number 5. After eleven courtships in total, he decided he would search no further. “While preparing to travel to Regensburg, I returned to the fifth woman, declared myself, and was accepted.” Kepler and Susanna Reuttinger were wed and had six children together, along with the children from Kepler’s first marriage. Biographies describe the rest of Kepler’s domestic life as a particularly peaceful and joyous time.
Both Kepler and Trick — in opposite ways — experienced firsthand some of the ways that the 37% rule oversimplifies the search for love. In the classic version of the problem, offers are always accepted, preventing the rejection experienced by Trick. And they cannot be “recalled” once passed over, contrary to the strategy followed by Kepler.
In the decades since the 37% rule was first discovered, a wide range of variants on the underlying problem have been studied, with strategies for optimal stopping worked out under a number of different conditions. The possibility of rejection, for instance, has a straightforward mathematical solution: propose early, and often. If you have, say, a 50/50 chance of being rejected, then the same kind of mathematical analysis that yielded the 37% rule says you should start making offers after just a quarter of your search. If turned down, keep making offers to every best-yet person you see until somebody accepts. With such a strategy, your chance of overall success — that is, proposing and being accepted by the best person in the pool — will also be 25%. Not such terrible odds, perhaps, for a scenario that combines the obstacle of rejection with the general difficulty of establishing one’s standards in the first place.
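The propose-early-and-often strategy can be sketched with a small change to the classic simulation: after the calibration phase, you propose to every best-yet option, and each proposal is accepted only with some probability. This is an illustrative simulation, not the original analysis; names and parameters are my own:

```python
import random

def success_with_rejection(n=100, cutoff_frac=0.25, p_accept=0.5, trials=20_000):
    """Look-then-leap when any proposal may be rejected: start proposing
    after a quarter of the search, and keep proposing to every best-yet
    option until somebody accepts."""
    cutoff = int(n * cutoff_frac)
    wins = 0
    for _ in range(trials):
        ranks = random.sample(range(n), n)      # rank 0 is the best option
        best_seen = min(ranks[:cutoff])
        chosen = None
        for r in ranks[cutoff:]:
            if r < best_seen:                   # a new best-yet: propose
                best_seen = r
                if random.random() < p_accept:  # they might turn you down
                    chosen = r
                    break
        if chosen == 0:
            wins += 1
    return wins / trials
```

With a 50/50 acceptance rate and a 25% calibration phase, the simulated chance of ending up with the very best comes out near 25%, matching the symmetry described above.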
Kepler, for his part, decried the “restlessness and doubtfulness” that pushed him to keep on searching. “Was there no other way for my uneasy heart to be content with its fate,” he bemoaned in a letter to a confidante, “than by realizing the impossibility of the fulfillment of so many other desires?” Here, again, optimal stopping theory provides some measure of consolation. Rather than being signs of moral or psychological degeneracy, restlessness and doubtfulness actually turn out to be part of the best strategy for scenarios where second chances are possible. If you can recall previous options, the optimal algorithm puts a twist on the familiar mix of looking and leaping: a longer noncommittal period, and a fallback plan.
For example, assume an immediate proposal is a sure thing but belated proposals are rejected half the time. Then the math says you should keep looking noncommittally until you’ve seen 61% of the possibilities, and then only leap if someone in the remaining 39% of the pool proves to be the best-yet. If you’re still single after considering all the possibilities — as Kepler was — then go back to the best one that got away. The symmetry between strategy and outcome holds in this case once again, with your chances of ending up with the best person under this second-chances-allowed scenario also being 61%.
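The fallback-plan strategy simulates just as easily. In this sketch (again, an illustration under the stated assumptions, with names of my own choosing), an immediate proposal always succeeds, and the belated return to the best one that got away succeeds half the time:

```python
import random

def success_with_recall(n=100, cutoff_frac=0.61, p_late=0.5, trials=20_000):
    """Look-then-leap when second chances are possible: immediate
    proposals are a sure thing, but a belated proposal to someone
    passed over is accepted only with probability p_late."""
    cutoff = int(n * cutoff_frac)
    wins = 0
    for _ in range(trials):
        ranks = random.sample(range(n), n)  # rank 0 is the best option
        best_seen = min(ranks[:cutoff])
        chosen = None
        for r in ranks[cutoff:]:
            if r < best_seen:                # leap: an immediate proposal succeeds
                chosen = r
                break
        if chosen is None and random.random() < p_late:
            chosen = min(ranks)              # fall back on the best one that got away
        if chosen == 0:
            wins += 1
    return wins / trials
```

The simulated success rate lands near 61%, again mirroring the size of the noncommittal phase.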
For Kepler, the story had a happy ending. In fact, things worked out well for Trick, too. After the rejection, he completed his degree and took a job in Germany. There, he “walked into a bar, fell in love with a beautiful woman, moved in together three weeks later, [and] invited her to live in the United States ‘for a while.’ ” She agreed — and six years later, they were wed.
In optimal stopping’s highest-stakes incarnations — real estate and romance — we ideally don’t have to solve the problem more than once. But in another domain, optimal stopping haunts us multiple times a day without relief: parking. Do we take the space in front of us, and possibly end up with a long walk past other closer spots? Or do we drive on in the hopes of a better berth, but risk needing to backtrack — and the chance that this particular space will be taken by the time we return? And here again, the field of optimal stopping has us covered.
Assume you’re on a single long road heading toward your destination, and your goal is to minimize the distance you end up walking. The optimally stopping driver should pass up all vacant spots occurring more than a certain distance from the destination and then take the first space that appears thereafter. And the distance at which to switch from looking to leaping depends on the proportion of spots that are likely to be filled — the occupancy rate.
If this area has a big-city occupancy rate of 99%, with just 1% of spots vacant, then you should take the first spot you see starting at almost 70 spots — more than a quarter mile — from your destination. But if occupancy rates drop to just 90%, you don’t need to start seriously looking until you’re 7 spots — a block — away.
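Both numbers are consistent with a simple rule of thumb, which I offer as a reconstruction rather than the authors' exact derivation: switch from looking to leaping at the distance where the odds that at least one open spot remains between you and the door are still fifty-fifty, assuming each spot is occupied independently.

```python
import math

def switch_distance(occupancy: float) -> int:
    """Distance from the destination, in parking spots, at which the
    chance of at least one vacancy between here and the door is still
    at least 50/50 (spots assumed occupied independently)."""
    # P(no vacancy in the next k spots) = occupancy ** k;
    # solve occupancy ** k = 0.5 for k and round up.
    return math.ceil(math.log(0.5) / math.log(occupancy))
```

Under this rule, `switch_distance(0.99)` gives 69 spots ("almost 70"), and `switch_distance(0.90)` gives 7.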
Here the science of optimal stopping offers us not just the ability to make better and more confident decisions behind the wheel, but at a broader level gives us a new perspective on urban planning. A neighborhood’s public parking going from 90% to 99% occupancy on average accommodates only 10% more cars, but multiplies the length of everyone’s walk tenfold. As a result — argues UCLA’s Donald Shoup, considered the leading voice of urban parking reform — planners are woefully mistaken to think about parking as a simple question of maximizing utilization. Shoup’s reforms, in places like downtown San Francisco, have resulted in deliberately higher prices at the meter — and dramatically less circling the block.
We asked Shoup if his research allows him to optimize his own commute, through the Los Angeles traffic to his office at UCLA. Does arguably the world’s expert on parking have some kind of secret weapon?
He does: “I ride my bike.”
Having looked at the solutions for a number of the optimal stopping problems we face in our everyday lives, the irresistible question is whether — by evolution or education or intuition — we actually do stop correctly.
At first glance, the answer is no. About a dozen studies have produced the same result: people tend to stop early, leaving better options unseen. To get a better sense for these findings, we talked to UC Riverside’s Amnon Rapoport, who has been running optimal stopping experiments in the laboratory for more than forty years.
In the 1990s Rapoport and his collaborator Darryl Seale led participants through a number of repetitions of the classic, apartment-hunt-style optimal stopping problem. Most people acted in a way that was consistent with the idea of looking, then leaping — but they leapt sooner than they should have more than four-fifths of the time.
Rapoport told us that he keeps this in mind when solving optimal stopping problems in his own life. In searching for an apartment, for instance, he fights his own urge to commit quickly. “Despite the fact that by nature I am very impatient and I want to take the first apartment, I try to control myself!”
But that tendency to stop early suggests another consideration that isn’t taken into account in the classic version of the problem: the role of time. After all, the whole time you’re searching for an apartment, a partner, or a parking space, you don’t have one. What’s more, you’re spending your time and effort conducting the search instead of either enjoying the fruits of your decision, or simply doing whatever else you might have done.
This type of cost offers a potential explanation for why people stop early in the lab. Seale and Rapoport showed that if the cost of seeing each option is imagined to be, for instance, 1% of the value of finding the very best, then the optimal strategy would perfectly align with where people actually switched from looking to leaping in their experiment.
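The mechanism is easy to see in simulation. The sketch below uses made-up parameters rather than Seale and Rapoport's actual experimental setup: a pool of 20 options, a viewing cost of 1% of the value of landing the very best, and a sweep over cutoffs to find the one with the highest average payoff.

```python
import random

def best_cutoff_with_cost(n=20, cost=0.01, trials=40_000):
    """Sweep look-then-leap cutoffs when each option viewed costs `cost`
    (as a fraction of the value of the very best option), and return the
    cutoff with the highest average payoff."""
    cutoffs = range(2, 11)                  # pass up the first k options, k = 2..10
    payoff = {k: 0.0 for k in cutoffs}
    for _ in range(trials):
        ranks = random.sample(range(n), n)  # rank 0 is the best option
        for k in cutoffs:                   # evaluate every cutoff on the same ordering
            best_seen = min(ranks[:k])
            seen, chosen = n, ranks[-1]     # default: searched everyone, took the last
            for i, r in enumerate(ranks[k:], start=k + 1):
                if r < best_seen:
                    seen, chosen = i, r
                    break
            payoff[k] += (1.0 if chosen == 0 else 0.0) - cost * seen
    return max(cutoffs, key=lambda k: payoff[k])
```

Without any cost, the best cutoff for a pool of 20 is to pass up about the first 7 (roughly 37%); with the cost included, the payoff-maximizing cutoff shifts earlier, toward leaping sooner, which is the direction of the behavior observed in the lab.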
The mystery is that in Seale and Rapoport’s study, there wasn’t a cost for search. So why might people in the laboratory be acting like there was one?
Because for people there’s always a time cost. It doesn’t come from the design of the experiment. It comes from people’s lives.
The “endogenous” time costs of searching, which aren’t usually captured by optimal stopping models, might thus provide an explanation for why human decision-making routinely diverges from the prescriptions of those models. As optimal stopping researcher Neil Bearden puts it, “After searching for a while, we humans just tend to get bored. It’s not irrational to get bored, but it’s hard to model that rigorously.”
But this doesn’t make optimal stopping problems less important; it actually makes them more important — because the flow of time turns all decision-making into optimal stopping.
“The theory of optimal stopping is concerned with the problem of choosing a time to take a given action,” opens the definitive textbook on optimal stopping, and it’s hard to think of a more concise description of the human condition. We decide the right time to buy stocks and the right time to sell them, sure; but also the right time to open the bottle of wine we’ve been keeping around for a special occasion, the right moment to interrupt someone, the right moment to kiss them.
Viewed this way, optimal stopping’s most fundamental yet most unbelievable assumption — its strict seriality, its inexorable one-way march — is revealed to be the nature of time itself. As such, the explicit premise of the optimal stopping problem is the implicit premise of what it is to be alive. It’s this that forces us to decide based on possibilities we’ve not yet seen, this that forces us to embrace high rates of failure even when acting optimally. No choice recurs. We may get similar choices again, but never that exact one. Hesitation — inaction — is just as irrevocable as action. What the motorist, locked on the one-way road, is to space, we are to the fourth dimension: we truly pass this way but once.
Intuitively, we think that rational decision-making means exhaustively enumerating our options, weighing each carefully, and then selecting the best. In practice, when the clock — or the ticker — is ticking, few aspects of decision-making, or of thinking more generally, are so important as one: when to stop.
Excerpted from Algorithms to Live By, by Brian Christian and Tom Griffiths. Copyright © 2016 by Brian Christian and Tom Griffiths. Used by permission of Henry Holt and Company. All rights reserved.