A Plan For Humanity

Bryan Johnson
Published in Future Literacy
May 15, 2018


We need future literacy to survive.

I don’t go to pools. I don’t drink margaritas.

Somehow here I am, sitting at a pool with a margarita in hand, editing my 82nd draft of a plan for the future of the human race.

My girlfriend is worried people might think I’m crazy.

“Baby,” she posits, “most people just want to make it through the day. Who really cares whether humans survive in 200 years?”

She’s kinda right. Fortunately, tequila takes the edge off.

For the past decade, I’ve been obsessed with solving a problem that too few people seem to care about. And that, in my estimation, is the biggest problem of all. But who am I to say?

Perhaps humans are destined to fail. Like so many other species before us, perhaps we’re simply meant to approach a natural, unavoidable end. Maybe we deserve it.

But what if… what if we adjusted a few things, turned a few knobs, and set off in new directions? What if we could not only survive, but create an existence more exquisite than we can even imagine?! What if.

These are the ideas that consume me. I can’t stop thinking about them. Dreaming about them. I burn inside with an ambition that feels out of place in the early 21st century.

We plan everything else in life: when we wake up, when we start work, meetings, social events, marriage, kids, career progression, financial goals, vacations, purchases, war, retirement, death — everything.

Why don’t we have a plan for our collective survival?

We are racing into a wildly complex, unknown, and accelerated future, and we are unprepared.

And so, in the absence of seeing any viable plans for the future of the human race, I’ve decided to write one. Love it, hate it, or just find me crazy — it’s a place to start. I’d love to hear your thoughts.


Prior to getting into the plan, a few quick notes:

  1. Throughout this document, I’ve traded comprehensive analysis for brevity. This 5,000-word treatment is meant to be a conversation starter. Just something to keep in mind :)
  2. Many of you are experts or have better ideas than I have in these areas, so know that I’m eager to hear your thoughts. I hold nothing sacred. I welcome any and all viewpoints in response.
  3. This plan, and my reasoning behind it, is of course, riddled with my biases, blind spots and other cognitive shortcomings. Please help me see them through your biases, blind spots and other cognitive shortcomings!

Becoming Future Literate

A 7-Step Plan For Humanity

In 1820, only 12% of the world’s population could read and write. Imagine what our daily lives would look like right now if we hadn’t achieved an 83% basic literacy rate over the past two centuries. I imagine that life would be significantly less prosperous, healthy, and interesting.

Now, imagine that only 12% of us are “Future Literate,” in other words, highly skilled at approximating what’s to come and capable of preparing accordingly. We’d be in very big trouble.

But it’s actually worse than that. My guess is that the rate of future literacy is less than 1%. As a society, we’re future illiterate.

We obsess over the short-term. We fly by the seat of our pants. We ignore risks until they become a crisis. We can’t imagine beyond the familiar. This cognitive disposition creates excessive risk for us as a species.

The most effective way to deal with our circumstances is to recognize the underlying, foundational reasons why we naturally behave the way we do, so that we can get to work on these root causes.

Future literacy is the ability to forecast approximate milestones and create the capacity to reach them, regardless of contextual change. It’s the act of creating mental models for an emerging future while living experimentally and adventurously.

If enough of us become future literate, we stand a chance of surviving ourselves and creating an exciting future.


Our Brains are Dangerously Flawed

In a sample of 600+ US residents, more than 85% believed they were less biased than the average American. Only one participant believed that he or she was more biased than the average American. Known as the Bias Blind Spot, the study demonstrated that most people are less likely to detect bias in themselves than in others.

To help us navigate a complex world, our brains have developed hundreds of cognitive biases. The challenge is that we can’t recognize them easily. We each create our own unique, distorted reality.

Research has repeatedly shown that humans are irrational and illogical, have blind spots, process information to support our existing beliefs, and are vulnerable to manipulation. We clutter our brains with unnecessary information, get confused by statistics and probabilities, have poor recollection when truth matters, and are generally terrible at predicting the future. And yet, our brains hide this reality from us.

Even with the desire to recognize the problem and the stamina to try to fix cognitive errors, there are no tools powerful enough to get the job done. (See my attempt at achieving Cognitive “Perfection” here, inspired by Benjamin Franklin’s attempt at moral “perfection”.)

So Step 1 begins like any therapy group might — by admitting we have a problem.

I’ll go first: “Hi, my name is Bryan Johnson, and I have a problem. I am riddled with cognitive biases that I can’t detect, define, or fix. My brain and my five limited senses put me in a myopic frame of reference where only I appear at the center. My belief systems force me to believe untrue things, but I still hold onto them with a death grip. My brain spends heaps of its precious energy trying to figure out the present and dwelling on the past, but is terrible at planning for the future. I am irrational, illogical, hypercritical, and find most pleasant the information that best conforms to my worldview. My memory is faulty, and I don’t even know where the gaps are. I fill in the rest with made-up stories, like our eyes do with their tiny blind spots. Of the hundreds of cognitive biases and default errors I am aware of, I can’t stay focused on any single one long enough to overcome it. I only imagine in terms of things I’m familiar with. And worst of all, despite these admissions, I believe everyone else’s reality is even more distorted than mine, that these flaws apply less to me than to others.”

For now, you and I are both stuck with flawed cognition, along with nearly 8 billion others we share the planet with.

For those of you unaware of the severity of your cognitive shortcomings, or who think you are the exception to the rule (85% of people do!), I’d encourage you to become familiar with the literature. It is a useful, high-value form of enlightenment that will help you in every aspect of your life. (A suggested reading list is below, plus a visual representation of your 188 cognitive biases.¹)

I find my daily practice of working to improve my cognition, trying to lessen my biases, blind spots, and distorted reality, to be of higher value than other practices I’ve had over the years (e.g., meditation).


Radically Improve Ourselves

After recognizing our flawed cognition, the next step is to improve it. After all, if we can improve our cognition, we can also improve everything downstream from it: ourselves, relationships, health, environment, religion/beliefs, politics, economics, education, security, and…our individual and collective futures.

This is really important to understand: our political systems, our economic systems, businesses, war, relationships, scientific discovery — everything — all lives downstream from our minds.

To be clear, my suggestion of up-leveling cognition is not merely about doing what we already do better, faster. This is not about taking a performance enhancer or trying to climb the IQ charts.

Meditation, supplements, exercise, education, self-help programs, and therapy are all useful starting points for self-improvement. The next level of improvement needs to be enabled by substantially better tools that allow us to systematically improve ourselves on known fronts and on new frontiers.

If our cognitive potential were represented on a scale from 1–10, imagine that we are a 3 right now and can barely catch a glimpse of 4. What levels do we need to achieve in order to solve the problems we face?

It may be the case that becoming perfectly logical and rational, and eliminating all blind spots, is one level up; beyond that, we may need to enhance our minds by extending our sensory experiences and capabilities. To go even further, maybe we build AI to assume logic- and reason-based responsibilities so we can focus on higher-order creativity, emotion, and complexity.

The evolutionary potential I am suggesting sits beyond our current ability to conceptualize. Just because we can’t currently see it doesn’t mean it doesn’t exist. We are limited by our imaginations.

When we think about AI, we imagine it as capable of becoming and doing anything; yet when it comes to our own potential, we’re woefully unambitious and unimaginative.

There is no reason we can’t be as ambitious about our own potential as we are about the potential development of AI.

When I’ve spoken about this idea, people often get confused and think that their abilities are the same as their technology’s, that their smartphone’s capabilities are their own. It’s easy for that to happen.

Upon further examination, we do not get a system upgrade every few weeks. We do not increase our native abilities by orders of magnitude every decade. (This is a form of group attribution error, a bias in which we mistake the traits of an individual, or in this case a technology, for traits of the group as a whole.)

Independent of whether you love or fear AI, intelligence is the most powerful and precious resource in existence. As the most generally intelligent species on planet Earth, humans rule, and we do so ruthlessly.

We decide who we eat, who we have as pets, who goes extinct and who we save. I am not suggesting that AI will behave like us — at least, I certainly hope not. And I frankly don’t think we have the ability to discern how AI will evolve over the coming decades.

Absent being able to predict the future, I am suggesting that it’s urgent that we begin radically improving ourselves so that we can catch the wave of co-evolving with our digital intelligence.

There is a window of time, which is right now, in which co-evolution needs to begin in earnest. Otherwise, we risk leaving ourselves behind, by our own design, and being at the mercy of whatever AI evolutionary path lies ahead of us.

Radically improving ourselves is also the correct response to the coming unemployment crisis.

Right now, we’re doing the exact opposite. Not only are we not urgently working to radically improve ourselves and thoughtfully co-evolve with our digital intelligence, we have built the perfect economic system to race ourselves to irrelevance. Oops. More on this in Step 3.

Recognizing this need to radically improve ourselves was a primary reason why I started Kernel, personally seeding it with $100M. We are building next generation, non-invasive brain interfaces to accelerate our evolutionary advance.


Make Humans Economically Viable

The future of the human race can be forecasted through a single metric: return on investment (ROI) of intelligence. Our current economic incentives (the engine that drives the world) are perfectly designed to put humans out of business and make us irrelevant as fast as possible.

Take this example: if 100 investors were each given the option of investing $1M into improving employee skill sets at their company OR investing the $1M into a group of employees building digital intelligence (aka artificial intelligence), the vast majority would invest in digital intelligence. This is highly rational.

The investment returns, in a large percentage of scenarios, are higher for digital. After all, it takes roughly 33 years to produce a single human PhD! Digital intelligence, generally speaking, simply produces higher ROI than investing in humans in our current economic system, and the delta is growing daily.

Regardless of whether you believe the rate of progress of digital intelligence follows an exponential curve, punctuated equilibrium, or a straight line — that line is moving up and to the right. Economic returns follow. Humans, comparatively, are flatlined.
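To make the divergence concrete, here is a toy compound-growth sketch of the investor thought experiment above. The growth rates are invented purely for illustration, not empirical estimates:

```python
# Toy model: the $1M investor choice from the thought experiment above.
# The capability-growth rates below are hypothetical assumptions.

def future_value(principal: float, annual_growth: float, years: int) -> float:
    """Compound a one-time investment at a fixed annual growth rate."""
    return principal * (1 + annual_growth) ** years

INVESTMENT = 1_000_000
YEARS = 10

HUMAN_GROWTH = 0.01    # assumed: human capability improves slowly
DIGITAL_GROWTH = 0.30  # assumed: digital intelligence compounds quickly

human = future_value(INVESTMENT, HUMAN_GROWTH, YEARS)
digital = future_value(INVESTMENT, DIGITAL_GROWTH, YEARS)

print(f"Human track after {YEARS} years:   ${human:,.0f}")
print(f"Digital track after {YEARS} years: ${digital:,.0f}")
print(f"Digital / human ratio: {digital / human:.1f}x")
```

Under these assumed rates, the digital track ends up more than an order of magnitude ahead after a decade; that gap is the arbitrage the hypothetical investors are responding to, and it widens every year the rates stay apart.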

Yes, we are capable of living in a more complex society than previous generations. We have better technology at our fingertips than any previous era. We can do things that previous generations couldn’t do and we understand more about the world than any time in history.

Still, the rate of human improvement is de minimis relative to the dynamic progress of digital intelligence.

This is not an argument that AI is going to take over the world. I am arguing that our own human-built capitalist systems are designed to invest money in whatever creates the highest returns, which increasingly favors digital intelligence. That means, soon, there will be little economic incentive to invest in humans.

Let’s take a closer look at the economic system that some of the most powerful, capable, and successful companies in the world have designed. For example purposes, I’ll create a fictitious company called… Fakebook.

The Economic Cycle For Human Irrelevance

1. Fakebook mines our digital “selves,” i.e. data that represents our wants, preferences, habits, proclivities, knowledge, friends, personality, etc.

2. Fakebook sells this data to advertisers, politicians, nation-states — whoever will pay the highest price.

3. Yay! Fakebook makes piles of money and hires the best people in the world, offering them so much money that they are de-incentivized to work on solving other problems in the world (climate change, fake news, you name it).

4. These brilliant new hires from the best universities build the most advanced digital tools in the world with two goals: 1) get us to use Fakebook more and 2) mine more information about us. This makes Fakebook more money, so it can hire more smart people to keep us using it more.

5. Now that people are literally addicted to Fakebook, Fakebook can direct their attention any way it wants. People waste their precious time, become increasingly depressed, and get psychologically manipulated.

6. Return on investment for digital intelligence goes up, while return on investment for humans goes down.

Rinse, repeat, and the cycle continues.

So how do we create new incentives that align with human improvement? We have to change the current systems, for example:

  • Businesses need to be incentivized to improve our cognition, not to manipulate us at any cost to make money.
  • Political systems need to be designed to make methodical, data-driven decisions, not to do whatever it takes to get someone elected or to pander to the wealthiest or best-organized special interest.
  • The rate of human improvement needs to justify direct investment capital.

There are many ways we might shift the incentive structures from the top down, and plenty of smart people are currently thinking about this (e.g., blockchain technologies, impact funds), but another method to consider is from the ground up.

In taking a ground-up approach, one solution would be to reclaim ownership of our own digital data, which can then be leveraged to improve ourselves and increase our value. This way, we become an asset to be improved, not an oil well to be extracted from.

Fakebook’s business model, and market cap, assumes that a person’s digital data is “ownerless”: whoever can scrape and mine the most data about each of us (from whatever source) can do whatever they want with it. We have no say or control, can’t use it to improve ourselves, and don’t financially benefit from its use.

If there were a change in public tolerance or legal status around our digital data being “ownerless,” a Data Crisis could ensue, possibly more consequential than the 2007–2008 Subprime Mortgage Crisis. The second-, third-, and fourth-order consequences of a shift in data ownership to individuals could scramble economic and political power, concentrations of engineers, revenue models, social norms, and incentive structures.

In order to encourage investment in human improvement, people need to own their digital data. Here’s what that cycle of development might look like:

The Economic System For Radical Human Improvement

1. We digitally mine ourselves, converting everything knowable about us into 0s and 1s. Think of it as your digital operating system. This is your property, like a house or a car. It is your single most valuable asset as a human, as it’s the key to your self-improvement and future relevance.

2. We leverage this digital representation of ourselves to begin radical cognitive improvement.

3. Employers pay us more for improved abilities.

4. Other businesses make money when we improve (versus making us worse versions of ourselves). Here is how this ecosystem might emerge: Changing Our Minds One Attebyte at a Time.

5. With our profits, we build better tools to mine and improve ourselves.

6. Better tools = faster self and community improvement.

7. Human improvement ROI increases.

In previous eras, improving oneself was a luxury. Now, it’s a necessity for our survival and relevance.


It’s Time To Give AI A Hug

Like it, love it, or fear it, AI is here and here to stay.

While there are plenty of well-founded conversations about how to proceed in thoughtfully building AI, it’s also our best co-evolutionary partner. Leveraging AI is simply the only shot we have at successfully managing complex systems such as geopolitical cooperation and conflict, resource management, and our environment. Without AI, we will collapse under the complexity of our emerging world.

More importantly, we need AI to offer us “strategically unprecedented moves”.

Looking one step further, AI is also essential to enabling our radical up-leveling and evolution, a necessity for our future relevance and survival. AI creates a new opportunity for us to decide what is or is not worth our scarce cognitive resources.

Imagine that instead of spending years in school learning things that will be outdated by the time we graduate, or that we will never use in life, we focus on exploring new and increasingly higher-order cognitive functions while AI takes care of the more functional aspects of society (e.g., self-driving cars, a simple entry point to imagine: why would we ever waste cognitive energy driving ourselves from one place to another?).

In Changing Our Minds One Attebyte at a Time, I explain how this new neuroeconomy might develop. If we can’t get our act together and still manage to survive ourselves, it will be because AI somehow dragged us along with it.

AI might just be the best thing since sliced bread.

How we build and use AI lives downstream from human minds, at least for now. The importance of this co-evolutionary necessity, to simultaneously up-level ourselves while building AI for a thriving future, cannot be overstated.


We Need to Neutralize Threats

Over the past few decades, society has built robust, swarm-like intelligence and infrastructure in digital intelligence (software and hardware). If a problem, opportunity, or market need arises in the world that can be addressed with digital intelligence, literally millions of talented software and hardware engineers, with cheap and powerful machines at their fingertips, can leverage a vast ecosystem of tools, providers, knowledge, and capital to address it almost instantly. We’ve built the beginnings of a global digital intelligence immune system.

This is incredibly valuable for humanity.

However, we do not yet have these same capabilities in biology, genetics, chemistry, or materials.

Since you and I are built from biology, and we live on a big round ball of biology floating in space, that’s problematic.

We need robust capabilities to respond in real time to the threats that pose the greatest risk to our continued existence.

I’ve been trying to help build this global biological immune system with my venture fund, OS Fund. In 2013, after selling my company Braintree (Venmo) to PayPal, I seeded the fund with $100M and focused on investing in the hardest of sciences: biology, materials, chemistry, and genomics — technologies critical to our collective thriving and survival, yet underfunded globally relative to their importance and potential.

Four years in, the fund’s performance is in the top decile among U.S. firms, and of the 28 investments, we have 4 unicorns, 26 up valuations, and 2 exits.

For more detailed explanations of what these companies are tangibly doing, plus some imagination exercises in how we might respond to certain risks (e.g., environmental), see OS Fund: Building A Global Biological Immune System.


Work on Game Over Problems, not Horseshit Problems

You may have heard the “Parable of Horseshit.” In New York in the early 1900s, one hundred thousand horses produced 2.5 million pounds of horse manure per day. Horses were the primary mechanism for transportation and industry, but the smell and the public health implications of being surrounded by horse manure were untenable.

New Yorkers were terrified. They demanded change.

Elections were won and lost on the issue; the greatest minds of the time were focused on figuring out a solution.

Then, in 1908, Ford rolled the Model T off the assembly line, and within a couple of decades horse manure was no longer a problem. It went from being a Level-5 hurricane to a footnote of history.

“Horseshit problems” are the kind of problems that will likely be solved in the normal course of human innovation, science, and grit.

For example, antibiotic resistance (ABR). I think ABR is a “horseshit problem”, not because I don’t think it’s serious or real (it is), but because I think we’ll eventually figure out a technical solution that addresses over-farming and over-prescribing.

The economic incentives will drive the solution to ABR — not any political, collective decision making.

Our politics currently rely on poorly informed, irrational, illogical, reality-distorted citizens with myriad competing demands (I include myself in this category) to make collective decisions about where to focus our resources. We cannot rely on political frameworks to identify Horseshit Problems — it’s not their job.

“Game Over” problems are different. They’re the kinds of problems that end the human species, or perhaps less dramatically — and likely first — make the world inhospitable. These are problems like climate change, nuclear annihilation, and mass human economic irrelevance — problems that no technical tweak can fix because they live at the core of our basic operating systems.

Our political and economic systems are currently geared toward solving horseshit problems by rewarding short term solutions and smaller problems at the expense of addressing Game Over problems.

Future literacy demands that we develop systems to identify Game Over problems and direct collective resources toward them, guided by methodical, data-driven analysis that prioritizes the gravest and most immediate threats to humanity.

Factfulness is a useful guide to drawing data-driven conclusions about the world instead of persisting in our self-created distorted realities.


It’s Time to Update Our Belief Systems, Now.

Science progresses one funeral at a time.
Max Planck

We are stuck in our ancient brains — hardware/software hundreds of thousands of years old — and our belief systems aren’t much better. In many ways, we are still stuck in the Cognitive Paleolithic.

Belief systems are the gears of society, determining speed and torque. They’re the most powerful technology to facilitate mass human cooperation and also one of our biggest liabilities.

Belief systems are also serious business. Arguably, more people have died from conflict over belief systems than from any other cause in history.

I know firsthand how hard it is to update a personal belief system. I was raised in a devout Mormon family and community, and it was my singular reality for more than 30 years. When I finally concluded, after much anguish, that I no longer believed what the church taught, leaving the organization and regaining my footing elsewhere was the hardest thing I’ve ever done.

We struggle to update our belief systems even when we have clear economic incentives to do so. For example, only twelve percent of the original Fortune 500 companies still exist. The other 88% couldn’t update their belief systems fast enough to see, adapt to, and build the next thing. They’re now extinct.

History has repeatedly proven that updating our belief systems is vital. Humans used to attribute illness to mystical influences; now that we understand the atomic, chemical, and biological nature of disease, we have updated our belief systems to compensate. Had we not progressed beyond leeching and prayer, we would not have the lifesaving medicines and procedures many of us have access to today.

We now need more inclusive belief systems that recognize the interconnected nature of our fates, versus those that just opt to take care of a singular tribe at the expense of everyone else. We share a single planet, and whether we like it or not, we’re in this together.

Caring for one tribe to the exclusion of others, and even, ultimately, being responsible only for yourself, is an unintended consequence of many religions.

Take Judeo-Christianity as an example. A person’s “salvation” is not inextricably tied to anyone else’s salvation. Service, kindness, forgiveness, and the like are all encouraged, but in the end, each adherent’s quest for the ultimate goal of “salvation” is theirs alone. While they may have commandments to be zealous in converting others to the religion, ultimately: you obey the commandments, you get heaven, and you win the game. In this case, religion is a single-player sport.

This is not how we survive as a species.

For humans to be successful in the future, we must play a multi-billion person sport. Any desirable future we can contemplate hinges upon the cooperation of a large percentage of our roughly 8 billion people. We are all interconnected, our fates intertwined.

Our ability to build a future we love depends on our ability to adapt our belief systems to reflect the realities we live in. This means being more flexible and nimble in our response to external circumstances. Entrenchment in old beliefs will prevent us from leveraging the exceptional possibilities we have before us.

The faster we can create new belief systems to encourage mass cooperation towards our collective well-being, in this life, the higher our probability that we’ll each get the things we most care about.


I was recently in the Middle Eastern desert where a business associate was telling me about the 2030 plans for his country.

“Planning for 2030 will be difficult,” I replied. “The world will change many times over between now and then.”

He gave me a skeptical glance.

“Ok, let’s play a game,” I said. “You and I need a plan to get a robot over that sand hill sitting on the horizon, and we can’t intervene once it sets off. There are two ways we can do this. First, topographically map the area and chart the path. The problem? Within minutes of departing, the sands will shift and the robot will be stuck in the sand. The other way would be to first acknowledge that we can’t predict how and when the sands will shift, so let’s identify the end point and then give the robot the tools it will need to adapt on a second by second basis.”

Drawing maps in the sand won’t be a winning strategy.
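The contrast between the two strategies in the parable can be sketched as a toy simulation. Everything here is invented for illustration — a one-dimensional track and a deterministic stand-in for the shifting sands — the point is only the difference between a pre-charted path and step-by-step adaptation:

```python
GOAL = 5  # cells 1..5; the robot starts at 0 and must reach cell 5

def sands(step: int) -> set[int]:
    """Deterministic stand-in for shifting sands: cell 3 is
    impassable on odd steps and clear on even steps."""
    return {3} if step % 2 == 1 else set()

def fixed_plan_robot() -> bool:
    """Strategy 1: chart the whole path up front (enter cell t at
    step t) and never deviate, no matter what the terrain does."""
    for step in range(1, GOAL + 1):
        if step in sands(step):
            return False  # the map no longer matches the terrain: stuck
    return True

def adaptive_robot(max_steps: int = 20) -> bool:
    """Strategy 2: know only the end point; sense the terrain every
    step and wait out whatever is currently blocked."""
    pos = 0
    for step in range(1, max_steps + 1):
        if pos + 1 not in sands(step):
            pos += 1          # the way ahead is clear: advance
        if pos == GOAL:
            return True       # reached the far side of the sand hill
    return False

print(fixed_plan_robot())  # → False: the pre-charted path gets stuck
print(adaptive_robot())    # → True: end point + per-step sensing wins
```

The fixed-plan robot fails the moment reality diverges from its map; the adaptive robot reaches the same goal with nothing more than a destination and the ability to re-sense on every step.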

The future is unpredictable, and becoming even more unpredictable. The only way to succeed in this environment is to prioritize future literacy and adaptability — in ourselves and society.


Never before has the distance between imagination and creation been so narrow.

In the past, we didn’t have the sophisticated tools to build the world we dream about and aspire towards. Now we do.

It’s the best opportunity any of us could ever ask for.

Like you, I’ve got stuff to do tomorrow (stuff I’m really excited about!). And I’ve got stuff going on tomorrow’s tomorrow. And tomorrow’s tomorrow’s tomorrow. In fact, tomorrows for decades to come.

If we want to ensure we all get as many tomorrows as we want, it’s time to get to work.


The size and scope of these ideas really require something like a book. That’s why writing this condensed 5,000-word version has taken me more than 82 drafts and the remarkable patience of my girlfriend. If these ideas interest you, I’ll continue to write about them, and you can receive them via my weekly(ish) emails.

Subscribe to be in touch!


1. Suggested reading: Factfulness; Influence; Predictably Irrational; Pre-Suasion; The Righteous Mind; Thinking, Fast and Slow

Credit: John Manoogian III / Buster Benson



Bryan Johnson
Founder of Blueprint, Kernel, OS Fund & Braintree (Venmo)