What the economy of tomorrow might look like and why you should care

Felix Geilert · Published in Future Vision · 16 min read · Apr 28, 2019


In this article, we will look at potential technological breakthroughs, how likely they are to occur and what impact they might have on the bottom line of the economy. We will do so by creating a definition for such junctures and then looking at different areas (including bio-tech, quantum computing, AI and mobility) where they might occur. For each of these, I will also offer my opinion on how they might influence the economy.

This is a shortened version of the article. The original version is part of a longer series about the effects on education. In this article I will focus solely on the implications of technological trends for the future of the economy.

Disclaimer: As always, I try to use reliable data sources for every fact I use and try to highlight parts that are based on my opinion. Since those sources might not always be right, I welcome any discussion. In addition to the data sources, I will provide links to books that I find relevant along the way. In general, these links will be affiliate links to Amazon, which I mark with *. If you do not want to use these, you are free to google the books yourself (the prices should be the same). My articles also contain sections marked as “Detour”, which contain additional information not essential to the general line of reasoning. Also, as I am always looking for improvement, I would love to hear your opinion on the article!

Let’s get started!

How do we measure the future?

To answer this question, we first have to get a general overview of what we want to predict and what data we might need for that. This will help us figure out where we currently are and what dimensions our problem space has (i.e. which factors we might need to influence to move in a certain direction). For this article, I would state the following goal:

Identify relevant trends and technologies that might change the structure of our society and/or economy.

This is intentionally a very broad statement. As stated above, we are specifically interested in breakthroughs that would set the economy on a new trajectory. I will use the term juncture for such an event. There can obviously be a large list of such junctures and, as they are more-or-less singular events, they are difficult to predict. From my perspective (based on articles, books and experience) I see the following junctures as relevant for the discussion (feel free to comment if you disagree or have extensions):

  • Emergence of quantum computing
  • Creation of an artificial intelligence on par with humans (AGI)
  • Widespread availability of gene editing (intelligent design)
  • Emergence of brain interfaces (brain augmentations)
  • Space population & general availability of space travel

For each of these junctures we will now look at the probability and time horizon of it happening, as well as its potential effects. I will try to explain the general concepts of the respective fields, though we will focus more on general trends and realistic developments than on the detailed state of the technology.

Quantum Computing

The power of quantum computing is not only based on the processing speed it might achieve (i.e. iterating through exponentially more possible solutions than a classical computer). It can also solve a larger set of problems in polynomial time (the so-called complexity class BQP, which is believed to be strictly larger than its classical counterpart). The point when experimental quantum computers outperform classical computers is dubbed “quantum supremacy”, leading to a new era of computing. The biggest impact will be in areas with problems that require checking a vast number of potential options (i.e. high-dimensional problem spaces). Those include pharmaceutical research, machine learning and cryptography, among others.

This begs the question: when will quantum supremacy arrive? That very much depends on the set of tasks and the degree of universality a quantum computer should provide. In general, the progress of quantum computers is measured in qubits (quantum bits), which store the relevant information, similar to classical bits. In contrast to bits, which store either 0 or 1, qubits can be in both states at once (called superposition), allowing them to explore both options. This means that a set of qubits can explore all possible states the same number of bits could store, leading to exponential growth of the explorable state space as qubits are added. For certain problems, this allows quantum computers to explore a vast array of options in parallel. There is, however, a catch: the qubits have to be kept in superposition to leverage this computing power. This requires great effort, as qubits are very sensitive to changes in their environment, which cause them to collapse into a specific state (either 0 or 1).
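To make the exponential growth tangible, here is a minimal sketch in Python (purely illustrative, using numpy): the state of n qubits is described by 2^n complex amplitudes, and a measurement collapses it into a single classical state.

```python
import numpy as np

# A classical n-bit register holds exactly one of 2**n states at a time.
# A quantum register of n qubits is described by 2**n complex amplitudes,
# all of which can be non-zero at once (superposition).

def uniform_superposition(n_qubits: int) -> np.ndarray:
    """State vector after applying a Hadamard gate to each of n qubits."""
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

state = uniform_superposition(10)
print(len(state))              # 1024 amplitudes for just 10 qubits
print(np.abs(state[0]) ** 2)   # each outcome has probability 1/1024

# Measurement collapses the superposition into a single classical state:
outcome = np.random.choice(len(state), p=np.abs(state) ** 2)
print(f"collapsed into state {outcome:010b}")
```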

For such gate-based systems, experts predict quantum supremacy to happen at around 100 qubits. Google recently announced a system with 72 qubits, though with high error rates, making it currently unsuitable for real-world computations. Other systems with 49–50 qubits have been developed by Intel and IBM. The rise in the number of qubits appears to resemble a linear trend over time (see figure below), but linear trends are in general suspicious and we only have very few data points. We should also be careful not to judge the progress of the field by the number of qubits alone: other factors, such as error correction and operating conditions, have a large influence on the usability of these systems.

Indication of a linear rise in qubits

Therefore, not all companies are strictly optimizing for more qubits. Microsoft, for example, is working toward topological qubits, which are far more robust against errors. Here, the information is encoded in the topological structure of the qubits, making it invariant to certain changes in the environment. Microsoft has yet to present a first system containing such topological qubits, but if it does, it might be able to scale much faster than existing systems.

There is also a weaker form of quantum computing, called quantum annealing, that is already offered by firms like D-Wave. These systems offer up to 2000 qubits in an industrial setting today. However, these are not equivalent to the “general purpose” qubits described above. Rather, they leverage the fact that qubits fall out of superposition into a specific state (0 or 1) to find a concrete solution to an optimization problem. A program on these machines defines the dependencies between the qubits so that they resemble the energy landscape of an optimization problem (e.g. the loss landscape of a neural network). The system then finds a value for each qubit that minimizes the energy (i.e. ends up in a local minimum). This also means that a 2000-qubit quantum annealer can only solve an optimization problem with up to 2000 parameters (in comparison, many neural networks today have millions of parameters).
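To make the idea of “programming an energy landscape” concrete, here is a small classical sketch of a QUBO problem (quadratic unconstrained binary optimization, the standard formulation used by annealers such as D-Wave's), brute-forced in Python. The matrix values are arbitrary illustrations; a real annealer explores this landscape physically instead of enumerating it.

```python
import itertools
import numpy as np

# A quantum annealer minimizes an energy function over binary variables.
# A common formulation is QUBO: E(x) = x^T Q x with x in {0, 1}^n.
# Here we brute-force a tiny instance classically to show what the
# hardware is asked to do.

Q = np.array([
    [-1.0,  2.0,  0.0],
    [ 0.0, -1.0,  2.0],
    [ 0.0,  0.0, -1.0],
])  # diagonal = per-variable bias, off-diagonal = coupling between qubits

def energy(x: np.ndarray) -> float:
    return float(x @ Q @ x)

# Enumerate all 2**3 assignments and keep the lowest-energy one.
best = min(itertools.product([0, 1], repeat=len(Q)),
           key=lambda x: energy(np.array(x)))
print(best, energy(np.array(best)))  # (1, 0, 1) with energy -2.0
```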

Of course, there is much more depth to the field, both in the scientific background and on the application side. For example, there is recent work indicating that classical security approaches can be adapted to withstand quantum attacks (i.e. breaking traditional security through the power of quantum computers). There are many resources out there for the interested reader (such as this GitHub repository).

Given the general funding interest of companies like Microsoft or Google, as well as government initiatives like the National Quantum Initiative Act and similar programs in China and the EU, the pace of research is likely to hold. It has to be noted, though, that there are also experts who doubt the general feasibility of quantum computers. Under the premise that quantum computers are generally possible, it appears likely (in my opinion) that the threshold of quantum supremacy can be reached within the next 5 years. Industry-scale adoption of this new paradigm will probably take longer, especially given that quantum supremacy only marks the lower bound for useful quantum computers and these systems still have to work outside of lab environments. (Additionally, while cloud computing might rapidly spread access to the hardware, the wider industry will also need time to re-skill workers and adapt to the new possibilities.)

Artificial General Intelligence

The creation of a general AI would have far-reaching consequences. First of all, it is widely believed that human-level AI would lead to an “intelligence explosion” (a term first popularized by I. J. Good; the resulting event is often called the singularity). This means that the AI creates ever more intelligent AIs in rapid succession, quickly widening the margin to human intelligence. (There are some very interesting books on this topic, including Superintelligence* and The Singularity is Near*.) Such an explosion might happen in a matter of hours or days. However, it requires the AI to be scalable (i.e. able to increase its intelligence through scale) or already beyond human capabilities. Such scenarios can play out in a variety of ways, not all of which would be beneficial for humankind.

So, how likely is such a singularity? The answer is unclear. Even experts in the field of AI are divided on this question, with answers ranging from decades to centuries. Some are also convinced that it cannot be done. This sparks a debate among experts on how to prepare for such an event and how many resources should be spent on topics such as safe AI.

What we can observe is that research interest and major technological advancements in AI have been increasing. The most visible achievements include IBM's Watson as well as DeepMind's AlphaGo (see also this article by the FLI) and AlphaStar. A closer look at the research progress in the AI community shows that these are only the tip of the iceberg: publications and attendance at top conferences have soared over recent years (see figure below).

Conference sizes and papers published in the field of AI (Source: AI Index, CC BY-ND 4.0)

While the number of researchers in a field does not directly relate to actual progress, it highlights a general trend in funding and focus. As explored above, today's AI has started to surpass human level on specific tasks; however, these systems still lack the general ability to transfer knowledge between distant domains. Here, too, approaches like reinforcement learning are showing promising ways to bridge the gap (e.g. with AlphaZero, the techniques from AlphaGo were transferred from Go to chess without major changes or integration of human knowledge).
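For readers unfamiliar with reinforcement learning, the following minimal sketch shows the core principle on a toy task. It uses tabular Q-learning, a far simpler relative of the methods behind systems like AlphaZero; the task and all numbers are illustrative.

```python
import numpy as np

# Bare-bones tabular Q-learning on a toy corridor task: start in state 0,
# reach state 4 for a reward of +1. The point is the principle mentioned
# above: the agent learns purely from reward, without human-provided
# knowledge. (AlphaZero adds deep networks and self-play on top of this.)

n_states, n_actions = 5, 2            # actions: 0 = step left, 1 = step right
Q = np.zeros((n_states, n_actions))   # value estimate per (state, action)
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection with random tie-breaking
        best = np.flatnonzero(Q[s] == Q[s].max())
        a = rng.integers(n_actions) if rng.random() < epsilon else int(rng.choice(best))
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1)[:-1])  # learned policy for states 0-3: always step right
```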

This kind of progress in AI might even be intertwined with progress in the area of quantum computing, as it would allow algorithms to out-scale and improve today's approaches. It should also be noted that some researchers hypothesize that thinking in the brain involves quantum processes (even though it is unclear whether such mechanisms could be integrated into quantum computers). Furthermore, researchers have started using today's AI to design the next generation of quantum computers, by learning advanced error-correction functions. For a more nuanced look at how AI might resemble the human mind, I would suggest How to Create a Mind* by Ray Kurzweil, even though, in my view, its focus lies a bit too much on one particular concept, leaving some generalization to the reader.

Given the large economic benefits, it appears likely that research in the area will further accelerate in the coming years. On the other hand, it is unclear when and if AGI can be reached, and another AI winter is certainly possible. Under the assumption that AGI is achievable, I would expect it to happen at the earliest 50 years from now. This leaves us with two generations to build (ethical) constraints for these systems. For those interested in a detailed account of the impact of AGI on our future society, I recommend Life 3.0* by Max Tegmark, though it somewhat misses a broader view of other technologies, such as gene editing.

Gene Editing / Intelligent Design

Since the discovery of its genome-editing potential in 2012, CRISPR has launched a revolution in the bio-sciences. The potential seems endless, ranging from better treatment of terminal diseases, through improved crops, to the creation of super-humans. (Many SciFi novels have also explored the vast variety of options; one particular example is This Mortal Coil*.) There are numerous application areas, including:

  • Crops & Plants
  • Modification of the Ecosystem
  • Medical Treatments (including genetic diseases)
  • Intelligent design of human traits
  • Biological weapons

By now there are many variants of the tool, each with different pros and cons. The most popular one is called CRISPR-Cas9. In general, the tool consists of a guide RNA, which docks onto the relevant part of the DNA that shall be cut, and a Cas protein, which executes the cutting process. There are also other techniques that extend what CRISPR can do. One of them is the gene drive, which accelerates the spread of a modified gene through a population: usually an edited gene would only be passed on with a chance of 50%, but a gene drive also modifies the corresponding DNA inherited from the other parent, pushing inheritance rates toward 100%. A rough simulation of this effect is sketched below.
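The arithmetic behind this can be sketched in a few lines. The following toy model (my own simplification: random mating, no fitness costs, non-overlapping generations) tracks the frequency of an edited allele with and without a drive:

```python
# Simplified model of how a gene drive spreads through a population.
# Individuals are diploid; we track the frequency of the edited allele.
# Under Mendelian inheritance a heterozygous carrier passes the edit with
# probability 0.5; a gene drive converts the partner chromosome as well,
# so every carrier transmits the edit with probability ~1.

def allele_frequency_over_time(p0: float, generations: int, drive: bool) -> list:
    p = p0                  # frequency of the edited allele
    history = [p]
    for _ in range(generations):
        if drive:
            # heterozygotes (freq 2p(1-p)) are converted to homozygotes,
            # so the new allele frequency is p^2 + 2p(1-p) = 2p - p^2
            p = 2 * p - p ** 2
        # without a drive, Mendelian inheritance leaves p unchanged on average
        history.append(p)
    return history

print(allele_frequency_over_time(0.01, 8, drive=True))   # rushes toward 1.0
print(allele_frequency_over_time(0.01, 8, drive=False))  # stays at 0.01
```

Even starting from 1% of the population, the drive variant reaches over 90% within eight generations in this model, which is exactly why researchers treat the technique with such caution.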

So, which application areas are realistic? Similar to AI and quantum computing, it is difficult to say how long it will take or how powerful these tools will actually become. Nevertheless, there is intensified research interest in the field (see figure below), accompanied by multiple large investments. Current research in the area includes treatments for diseases like cancer or sickle cell anemia. Some researchers in China have even started controversial experiments on embryos. There have also been first embryonic tests in the US, though none of the embryos were brought to term. Because such edits alter the human germline, they are inherited and might thus have far-reaching effects on the entire population, which is why experiments like these are widely considered unethical. However, it seems unlikely that these experiments can be stopped, given that using CRISPR can be as cheap as $65, enabling mass usage.

Publication rates in gene editing by topic (Source: Elsevier)

As a result, many researchers are now trying to establish global oversight to avoid a biological arms race to create super-humans. Such modifications might have far-reaching consequences, since we currently do not fully understand the inter-dependencies inside the human genome. Edits might therefore trigger lethal chain effects or render entire generations sterile. Another important aspect is the accuracy of these approaches and potential off-target effects (i.e. unwanted gene cuts). These depend very much on the type of cell modified and the amount of cutting, ranging from 0.1% to 60%, as one researcher reports.

The possibilities of CRISPR appear to live up to the promises made for it, yet in many respects we do not appear ready for the ethical fallout it will generate. As with AI, it would in my opinion make sense to start thinking about ethical guidelines and implications as early as possible, especially given the accelerating pace of research: there are already indications of new tools that might even allow shredding entire sections of DNA. There is an interesting book on the implications of these techniques by one of the discoverers, Jennifer Doudna, called A Crack in Creation*. Effectively tackling the ethical problem might also require a formal introduction to ethics during education, which is often missing today.

Brain Interfaces

Yeah, I know this one sounds a bit SciFi, but research indicates that it might actually be closer than we think. In fact, there are quite a few companies working in this very field (e.g. Neuralink, Kernel, Paradromics). The effect of human-brain interfaces on our intelligence would be tremendous. If it works, this technology could not only allow us to treat paralyzed or brain-damaged patients. It might also allow us to access any information on the internet in real time, change the way we interact with technology and even enable direct brain-to-brain communication. These developments also have the potential to profoundly change the way our societies work.

What would we require to get there? Brain-computer interfaces (BCIs) can be divided along two dimensions: they are either invasive (i.e. implanted under the skull) or non-invasive (e.g. electrodes attached to the outside of the head), and either one-directional (only reading signals coming out of the brain) or bi-directional (also sending signals back into the brain). All categories have made progress in recent years, especially through the wider availability of technologies like EEG (electroencephalography), with prices currently ranging from $100 to $25,000+, allowing even hobbyists to experiment with them. EEG records brain waves in a non-invasive manner; this data can then be interpreted to control things (e.g. wheelchairs or even games) with the power of thought, as sketched below.

Another focus of research lies on bi-directional communication. Recently, there have been experiments transferring arm-movement control between two people through brain stimulation, and there is even research on invasive brain stimulation, which might allow broader transfers in the future. However, not all of this technology is focused directly on brain control. Cochlear implants, which enable deaf people to hear, are widespread, and there have also been advancements in eye implants that transfer digital images onto the optic nerve. (In case of further interest, Brain-Computer Interfacing: An Introduction* is one of the classic textbooks in the field.)
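To give a flavor of how such EEG control works in practice, here is a heavily simplified sketch: extract the power of classic frequency bands from a (here: synthetic) signal and map it to a command. The threshold logic is purely illustrative; real BCIs use calibrated classifiers on many channels.

```python
import numpy as np

# Minimal sketch of the classic EEG processing pipeline: record a signal,
# extract band-power features, map them to a command. The signal below is
# synthetic and the "stop"/"go" mapping purely illustrative.

fs = 256                                   # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)                # two seconds of "EEG"
signal = 0.5 * np.sin(2 * np.pi * 10 * t)  # 10 Hz alpha rhythm (relaxed state)
signal += 0.2 * np.random.randn(len(t))    # measurement noise

def band_power(x: np.ndarray, fs: int, low: float, high: float) -> float:
    """Average spectral power of x within the [low, high] Hz band."""
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= low) & (freqs <= high)
    return float(power[mask].mean())

alpha = band_power(signal, fs, 8, 12)       # strong when the user relaxes
beta = band_power(signal, fs, 13, 30)
command = "stop" if alpha > beta else "go"  # toy mapping to a wheelchair command
print(alpha, beta, command)
```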

What is still missing from research is a deep understanding of the human brain. Recent approaches try to simulate the brain, as this would allow researchers to analyze thinking patterns in real time and observe the effects of manipulations. Directly measuring and mapping each neuron would be extremely time-consuming and is not feasible within decades. The focus therefore lies on understanding underlying principles and smaller parts (recent research mapped out 50 neurons inside the visual cortex) and modeling the simulated brain accordingly. Examples of such projects are the Human Brain Project in the EU (funded with $1 billion) and the BRAIN Initiative in the US (funded with $3 billion). These simulations have made progress and are generating first insights (e.g. into cognition or the visual system). This kind of research might also have large implications for AI and machine learning, as several research projects are examining. There are also other approaches, like MAPseq, that allow a more general mapping of the neurons inside a brain by using a sort of barcode to mark single cells. However, as of this writing, they are not yet applicable to the human brain and still under research.

The research and industry interest in BCIs highlights the enormous potential this technology could have. The increasing amounts of money and the tight research schedules of projects like the BRAIN Initiative also indicate that these technologies might be closer than they appear at first glance. Nonetheless, it might take decades before real bi-directional communication is possible, and even longer for such technology to become commercially available.

Space Population

Similar to the previous junctures, the advent of large-scale space travel and settlement would transform many aspects of our economy. It would allow humankind to settle on multiple planets, strongly reducing the risk of annihilation (see next section). It would also open up new possibilities for gathering resources (e.g. from asteroids) and could allow us to keep our planet cleaner by moving heavy production industries into space.

The main driver of space expansion (at least in the “near-term” sense around our planet) is the cost of bringing objects into space. These costs have been falling drastically over the last decade and are expected to fall even further (see figure below). The first commercial flight of the Falcon Heavy happened just recently (slightly ahead of the projected time), pushing the costs further down. This means it will be easier for independent companies to bring material and technology into space, opening up new commercial possibilities.

Cost of lifting one kg of cargo into space (Source: FutureTimeline.net)
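To get a feeling for what a sustained cost decline implies, here is a quick back-of-the-envelope calculation. All numbers are assumptions for illustration only, not figures taken from the chart:

```python
import math

# Back-of-the-envelope projection of launch costs under assumed,
# purely illustrative numbers: costs of $2,500/kg today, falling by
# an average of 15% per year.
cost_today = 2500.0     # USD per kg to orbit -- assumed for illustration
annual_decline = 0.15   # assumed average yearly cost reduction
target = 100.0          # USD per kg -- an often-discussed "space age" level

years = math.log(target / cost_today) / math.log(1 - annual_decline)
print(f"~{years:.0f} years until ${target:.0f}/kg at these assumptions")
# ~20 years; small changes in the decline rate shift this considerably.
```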

One of the goals of companies like SpaceX, Virgin Galactic and Blue Origin is not only to lower the cost of bringing materials into space, but also to pave the way for commercial space travel. Reduced costs might also bring more democratized access to space. In the very long run we might even see genetic modifications (see intelligent design above) that adapt humans to these environments (e.g. life on Mars), circumventing the slow process of evolution.

While an interplanetary travel system similar to the airlines of today might take some more time to take hold, we can already see the advent of a space age. The falling costs will likely open up new economic possibilities, with many established companies and startups ready to explore the options. Allowing these companies to bring innovative solutions into space is a huge opportunity, but it might also come with problems (e.g. space debris). Nonetheless, the number of jobs in space-related fields will grow in the coming years, requiring well-educated people. Many of the other trends (e.g. quantum computing for material science, AI, etc.) will also factor into the improvement of space technology.

Detour: Epochs of the World

Let's take a step back for a moment and put all the data we have analyzed into perspective. To do that, we will look at the time horizons it might take for these changes to take hold and how they could influence each other. (Sapiens* by Yuval Noah Harari and The Vital Question* by Nick Lane are excellent sources if you want to go into more detail on this topic and the previous epochs of humankind.)

Summary of the trends (in my opinion): deep blue indicates the highest likelihood

Conclusion

I know this was quite a bit to unpack in one article, but I wanted to provide a multi-faceted view of potential junctures and move away from looking at disciplines in their isolated silos. I think it is important to get a complete overview from time to time, as these disciplines will strongly influence each other in the future and we would skew our predictions if we just looked at one area. Of course, far more complex analyses can be made of these junctures and the inter-dependencies between them; however, this article is designed to be thought-provoking. We might go into more detail on single trends in future articles.

(* = affiliate link)

Felix Geilert · Future Vision

Data Scientist with a research background in AI. I read a lot and like knowledge-exchange. I write about education and technology. Always looking for ideas.