On Mariana Mazzucato’s moonshot ideas

Jakub Simek
Published in Collective Wisdom
Oct 17, 2019
After 50 years of stagnation, we need to get back to a new moonshot era. Peter Thiel says we again need an optimistic and definite view of the future.

Here are a few ideas on how to take moonshot thinking to the next level. The inspiration came while reading a great WIRED profile of Mariana Mazzucato.

Mariana Mazzucato became famous for changing the narrative around innovation and showing how almost everything in the iPhone was originally invented with state funding. Now she goes a level further and aims to promote mission-oriented policies and institutions, based on moonshot thinking and the classic example of the Apollo program, or what Peter Thiel calls an optimistic and definite future, which we haven’t seen for the last 50 years. Moonshots need to be very ambitious, risky, and imaginative, and have a clear deadline and a sense of urgency. She also advised AOC on her Green New Deal and helped to reform the EU’s Horizon 2020.

I like this type of radicalism in thinking (as opposed to, e.g., radicalism in hating opponents and othering). And moonshots are just one level of this sort of thinking. I also liked the idea of challenging cost-benefit analyses as unsuitable for evaluating moonshots on a single metric (e.g. dollars). One can go deeper still.

One important thing about moonshots is that they are highly risky: we are in the complex domain, where causality is impossible to determine and we can only say that a complex system has a disposition toward certain results. To increase the chances of success we need to engage in hit-based investing aimed at increasing optionality: placing as many small bets as possible and feasible, and conducting parallel safe-to-fail experiments. During the Apollo program there were 300 such experiments; now, thanks to exponential tech, you could run orders of magnitude more. You can also strongly encourage rapid prototyping methodologies, the same idea at the level of a startup: investing in 20–50 prototypes before settling on a product.
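The optionality argument behind many small bets can be sketched numerically. If each independent safe-to-fail experiment has some small chance of being a hit, the chance that at least one succeeds is 1 − (1 − p)^n, which climbs quickly with n. The per-experiment probability below is purely illustrative:

```python
# Sketch of the optionality argument behind hit-based investing:
# many small, independent safe-to-fail experiments make at least one
# "hit" far more likely than a few big bets. The 2% hit rate is an
# illustrative assumption, not a figure from the article.

def prob_at_least_one_hit(n_bets: int, p_hit: float) -> float:
    """Probability that at least one of n independent bets succeeds."""
    return 1 - (1 - p_hit) ** n_bets

for n in (10, 50, 300):
    print(f"{n:>3} experiments -> {prob_at_least_one_hit(n, 0.02):.1%} chance of a hit")
```

With a 2% hit rate, 300 parallel experiments (the Apollo-scale number mentioned above) already make a hit close to certain, which is the point of maximizing the number of cheap bets rather than the size of any single one.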

Then you can go a level deeper and focus not on a single metric (e.g. tons of plastic in the ocean) but on the health, nurturing, regeneration, overcompensation, etc. of whole ecosystems. The point is to go beyond the paradigm of sustainability into a paradigm of excessive regeneration and overcompensation of nature, and of increasing the health and antifragility of societies, for example by starting to build a healthy and intact information ecology. But the idea, again, is not to optimize around a single metric, because that causes lots of problems.

Another problem is that lots of innovation can be weaponized, and because of exponential tech this is ultimately self-terminating for humanity. A further problem is that a technology usually creates more problems than it solves: the combustion engine solved the “horse shit problem in London” but created the current problems in the Middle East (and accelerated climate change…). Yet another problem is the existential risks connected to AGI, nanotech, and biotech in their own right.

So you can go a level deeper again and focus on what Daniel Schmachtenberger calls generator functions of existential risk. Deep-rooted rivalry is one (zero-sum games, game theory and instrumental rationality, as well as narrow goals, are problematic in themselves). I would perhaps add a general and deep tendency to imitate others as another. This is connected to the line of thinking in Peter Thiel’s Zero to One, but originates with René Girard’s mimetic theory (as a type of conflict theory). One can ease these tendencies with tools such as the ITN framework used by Effective Altruists to prioritize causes and investments.

The other two generator functions were mentioned above: technology creating new problems, and unintended consequences. Complicated systems are not complex systems: we turn complex systems (forests) into simple ones (lumber) to produce complicated systems (houses, Boeing 747s…), and so far we are not able to create complex systems from scratch. A similar idea from Bonnita Roy is to focus on the underlying protocols that guide, e.g., collective action and help to build trust. Or Jordan Hall’s idea of focusing on increasing individual and collective sovereignty and collective wisdom. Together with a bunch of other thinkers (Bret Weinstein, Jim Rutt…), these areas and this way of thinking have been explored for some years as Game B.

You can go a level deeper still and focus on topology and ethics. Some wise men or women sitting at a reformed Horizon 2020 as investment or grant managers is still an example of a top-down topology and a third-person view. One might want to balance and integrate peer-to-peer and bottom-up efforts and first-person views as well. This is quite an abstract territory, well explored by Forrest Landry. You can also look at it as maximizing the product of symmetry (science, space, top-down, third-person view) times continuity (ethics, time, bottom-up, first-person view) to get to something truly new and emergent (a peer-to-peer view, effective collective action, collective wisdom, group flow, collective coherence…). The idea is that you cannot have perfect symmetry and perfect continuity at the same time; one needs to give up a little bit of each to maximize the product of both. The symmetry principle would be something like the golden rule (do unto others as you want to be treated yourself), and the continuity principle something like the platinum rule (do unto others as they want to be treated themselves). You need some sensible combination of both, which also means a combination of science and practical ethics.
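The "give up a little of each to maximize the product" intuition can be made concrete with a toy model. Assume, purely for illustration, that symmetry and continuity trade off against a fixed budget (s + c = 1); the product s × c then peaks in the middle, not at either pure extreme:

```python
# Toy illustration of the symmetry x continuity trade-off. The fixed
# budget s + c = 1 is a simplifying assumption for this sketch, not a
# claim from the article: it just shows that the product is maximized
# by balancing the two, never by perfecting one at the other's expense.

def product(s: float) -> float:
    # continuity is whatever budget symmetry leaves over
    return s * (1 - s)

# Scan s over [0, 1] in steps of 0.001 and pick the maximizer.
best_s = max((i / 1000 for i in range(1001)), key=product)
print(best_s, product(best_s))  # peak at s = 0.5, product = 0.25
```

The extremes (all symmetry, or all continuity) drive the product to zero, which mirrors the claim above that perfect symmetry and perfect continuity cannot be held simultaneously.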

Jakub Simek

I cofounded Sote Hub in Kenya and am interested in technological progressivism, complexity, mental models and memetic tribes.