If the world knew what the world knows

Gianni Giacomelli
8 min read · Aug 1, 2022


Photo credits: NASA

Genius and stupidity seem to coexist at an unprecedented scale in our world. As E. O. Wilson said, the interplay between our paleolithic brains, medieval institutions, and advanced technology is at the root of many of our struggles. The collective intelligence emerging from those three elements is constantly tested and often fails — from populism to social media gone awry, to pandemic unpreparedness and climate change. It often feels like we are fighting tomorrow’s challenges with yesterday’s intelligence.

But there’s one significant reason for optimism: one very large resource remains largely untapped. Our world routinely throws away or ignores the knowledge it creates. You can see it in your own daily work and in the work of your organizations: every day we reinvent wheels, and we fail to reach the right people (or organizations) at the right time to find (or remember) solutions. Our collective brain isn’t functioning as well as it could.

An infinite engine of knowledge

Thanks to the web, in the last twenty years we have wired our collective brain in previously unimaginable ways. The world creates an astonishing amount of data — and knowledge — and makes it available online. The web connects people in ways that would have felt like sci-fi at the turn of the millennium. (Hundreds of examples of organizations, movements, and building blocks that harness this power have been inventoried.)

Still, when it comes to harnessing planetary knowledge, we haven’t seen anything yet. The convergence of a few powerful vectors creates immense, untapped potential today. Consider these examples.

One of the most important innovations of the last twenty years has been the search engine, intended to “organize the world’s information”, in Google’s words. Even video and audio content is easy to search today.

AI’s natural-language models have progressed enormously in recent years, leading to astonishing tools such as GPT-3 that make language understanding and generation far easier. Beyond written language, image processing and generation models like DALL-E 2 and Stable Diffusion have also evolved by leaps and bounds (as a presage of things to come, Stable Diffusion’s text-to-image prompt and result database is now being mined by a dedicated search algorithm). This is not an unalloyed good, though, and it needs careful design and implementation, as Meta found out in their recent experiment.

Knowledge-graph technologies that establish relationships between concepts, people, and organizations (“entities”) make the world’s knowledge even easier to mine, especially when combined with large language models (LLMs). New tools use them to enable richer search (and this) and, when combined with natural language understanding (for instance, in science: this, this, this, this, this, this, and this), hold promise for the exploration of specific topics (e.g., this, this, this, this, this, and this). Knowledge graphs may soon be fed by AI natural-language technology, industrializing the organization of richer information (imagine mining the relationships between drugs, genes, and proteins, evinced from scientific texts). And the ongoing remix of everything through social media makes connections between ideas, people, and organizations explicit — some of which can be mined through publicly accessible APIs.
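
To make the drug-gene-protein idea concrete, here is a minimal sketch of what such a graph could look like, assuming the relation triples have already been extracted by an NLP pipeline (they are hard-coded below, and the networkx library stands in for a production graph store):

```python
# A minimal sketch: relation triples extracted from scientific text
# (hard-coded here; in practice they would come from an NLP pipeline)
# are assembled into a small knowledge graph that can then be queried.
import networkx as nx

triples = [
    ("aspirin", "inhibits", "PTGS2"),   # drug -> gene
    ("PTGS2", "encodes", "COX-2"),      # gene -> protein
    ("celecoxib", "inhibits", "PTGS2"),
]

graph = nx.DiGraph()
for subject, relation, obj in triples:
    graph.add_edge(subject, obj, relation=relation)

# "Which drugs act on the gene that encodes COX-2?" becomes a graph query.
gene = next(src for src, dst in graph.in_edges("COX-2"))
drugs = [src for src, dst in graph.in_edges(gene)
         if graph.edges[src, dst]["relation"] == "inhibits"]
print(drugs)  # ['aspirin', 'celecoxib']
```

Once relationships are explicit like this, questions that would take a literature review become one-line traversals — which is precisely the leverage the paragraph above describes.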

Data science, including its crowdsourced citizen-data-science form, enables the use of new and existing data sources, including the growing volume produced by the Internet of Things (IoT): public (e.g., heat measurement), private (e.g., Google’s land development tracker), and crowd-based (e.g., Arduino-based sensors).
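
A toy illustration of the crowd-based case: pooling readings from many cheap, independently owned sensors already yields useful aggregates (the data below is synthetic, standing in for a real citizen-science feed):

```python
# Crowd-based sensing in miniature: readings from many cheap sensors
# (synthetic here) are pooled and aggregated per neighborhood.
import pandas as pd

readings = pd.DataFrame({
    "sensor_id": ["a1", "a2", "b1", "b2"],
    "neighborhood": ["north", "north", "south", "south"],
    "temp_c": [31.2, 30.8, 27.5, 27.9],
})

# A simple urban-heat snapshot emerges from pooling independent sensors.
print(readings.groupby("neighborhood")["temp_c"].mean())
```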

People simply share more: thanks to self-publishing tools, and because of the importance of enterprise and personal thought leadership, the web is awash with publicly available material that would have been considered a trade secret only a couple of decades ago. Scientific knowledge is increasingly retrievable through specialized search engines (e.g., Google Scholar), portals, and networks (e.g., ResearchGate), and because of the mounting pressure to make it freely accessible.

And it is not just “asynchronous” knowledge access. Modern cloud technology and sophisticated data-compression algorithms make video and voice connectivity ubiquitous, even at low bandwidth, making synchronous knowledge retrieval and generation more frictionless than ever.

As a result, augmented collective intelligences (MIT’s Thomas Malone calls them superminds) are emerging, as the image below illustrates for enterprise innovation teams, which harness the internal and external ecosystem as an extension of their own brain.

Still nowhere near our organic counterparts

But our collective technologies and methods pale in comparison with what happens elsewhere. The world runs countless “natural” experiments (in our society as well as in nature) whose results aren’t harvested — unlike the “active inference” that our brain, and in a way the natural world, performs. Take the following examples:

Search engines’ algorithms, and their use, are still largely driven by advertising markets, not knowledge industries. Research points out that AI can give innovators superpowers by, among other things, improving search for knowledge across domains (if we equip people with the cross-disciplinary skills they need to make combinatorial innovation happen). But search engines and commonly used methods do not make truly advanced search available to most people. For instance, they don’t explicitly allow the exhaustive visualization of knowledge graphs that would let one identify both content and people (and organizations), as well as explore adjacent fields. Not all meaningful websites and content are inventoried. Much of the “new” action currently remains within enterprises, through machine-learning-based knowledge management (such as Microsoft Viva Topics), but the overall knowledge ecosystem is many orders of magnitude larger.
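
What cross-domain search could look like, in miniature: ranking documents by semantic similarity rather than keywords, so that analogous ideas from different fields surface together. This sketch assumes the open-source sentence-transformers library and its all-MiniLM-L6-v2 model; the documents are invented for illustration:

```python
# Cross-domain search via embeddings rather than keywords: documents
# from different fields are ranked by semantic similarity to a query.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Ant colonies route foragers around obstacles without central control.",
    "Packet-switched networks reroute traffic around failed links.",
    "Medieval guilds standardized apprenticeship to transmit craft skills.",
]
query = "decentralized routing strategies"

doc_emb = model.encode(docs, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_emb, doc_emb)[0]

# Results from biology and networking surface together: a small step
# toward the combinatorial, cross-domain search described above.
for score, doc in sorted(zip(scores.tolist(), docs), reverse=True):
    print(f"{score:.2f}  {doc}")
```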

Social media algorithms’ recommendations optimize for predicted engagement (e.g., likes or shares), not problem-solving or creativity. And trying to follow the right people and the right topics isn’t effortless: one can’t easily find people to follow based on the fields they’re competent in. Similarly, professional social networks such as LinkedIn are not optimized for skill-based search (“which people work in my field?”) and do not facilitate field exploration (“which subfields exist, and who works there?”) or the validation of ideas (e.g., assessing the credibility of people’s claims by checking their — or their network’s — skills).

Natural language models could proactively propose novel combinations of concepts for humans to refine — but they’re not used for that purpose yet.
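
As a hint of what that could look like, here is a sketch using the openai Python package as it existed around the time of writing (the model name, prompt, and parameters are illustrative assumptions, not a recommendation):

```python
# A sketch of "proactive concept combination" with a GPT-3-era
# completion API: the model drafts cross-domain combinations for a
# human to filter and refine.
import openai

openai.api_key = "sk-..."  # placeholder; set your own key

prompt = (
    "Combine one concept from materials science with one from logistics "
    "to propose three novel research directions, one per line."
)

response = openai.Completion.create(
    model="text-davinci-002",
    prompt=prompt,
    max_tokens=150,
    temperature=0.9,  # higher temperature favors more unusual combinations
)
print(response.choices[0].text.strip())
```

The point is not the specific model but the division of labor: the machine proposes unexpected pairings at scale, and humans do the refining.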

Data science, and the translation of science into working models, is still very much an elite job. Data crowdsourcing (e.g., through citizen science) and increasingly easy-to-use tooling (e.g., XGBoost) show that the barrier to entry can be lowered further so that more people can come on board.
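
The “lowered floor” in practice: with libraries like XGBoost, a few lines of code train a usable model. In this sketch, synthetic data stands in for a real citizen-science dataset:

```python
# A few lines of code are enough to train a gradient-boosted model.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))             # 200 samples, 4 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy labeling rule

model = XGBClassifier(n_estimators=50, max_depth=3)
model.fit(X, y)
print(f"training accuracy: {model.score(X, y):.2f}")
```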

Much knowledge, especially publicly funded research, still sits behind paywalls, preventing deep mapping and access.

Too many people are so specialized that they can’t combine knowledge from different domains and unlock combinatorial innovation — and training often focuses on specialization instead of so-called “T-shaping”.

Surprisingly, language barriers are still significant and end up siloing the world’s knowledge. Think about it: web searches mostly return same-language results. If you are in the US and look for “heat pump installation methods”, you typically won’t see (machine-translated) content from German, Japanese, or Chinese sources. And while translation engines like Google Translate have improved remarkably, they are still not used pervasively across the range of potential knowledge-sharing applications.
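
The missing plumbing is conceptually simple: translate the query, search each language’s corpus, and merge the results. In this sketch, `translate` and `search` are hypothetical placeholders for a real translation API and search backend:

```python
# Cross-language search pattern: translate the query, search each
# language's corpus, merge. Both helpers below are hypothetical
# placeholders, not real APIs.
def translate(text: str, target_lang: str) -> str:
    raise NotImplementedError("plug in a translation API here")

def search(query: str, lang: str) -> list[str]:
    raise NotImplementedError("plug in a search backend here")

def cross_language_search(query: str, langs=("de", "ja", "zh")) -> list[str]:
    results = search(query, "en")
    for lang in langs:
        localized = translate(query, lang)
        # Results could be machine-translated back before merging.
        results.extend(search(localized, lang))
    return results
```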

As a result, we collectively don’t learn enough from experiments made elsewhere. Think of “Global South” practitioners quickly learning from cost-effective climate adaptation projects in other countries, irrespective of whether they are documented in Bahasa Indonesia, Spanish, Urdu, Swahili, Hindi, or Chinese. Conversely, developed-country practitioners fail to access sources of “reverse innovation” — lower-cost ideas developed under significant budget constraints. And, generally, knowledge “backwaters” exist: users in many non-English-speaking countries keep relying on old knowledge because they don’t have real-time access to the right networks (think of old schoolbooks and non-English internet pages for technical topics).

Sadly, our organizational design practices reflect the issue: strategic knowledge creation and management isn’t a C-suite role, and the job is often fragmented across departments — domain practice groups, the CIO, sales support, and so on — which weakens the much-needed enterprise transformation. Across even broader ecosystems, incentive systems are still broken, as attested by academia’s struggle to give appropriate credit and encourage more creative exploration.

And finally, and ironically, the respective digital product ecosystem doesn’t attract as much attention and investment as others (venture capital anyone?).

What we need to do

I have argued elsewhere that in order to amplify and accelerate innovation cycles we need to build “supermind utilities” — possibly as public, or partially open-sourced, goods so that the global community can access them. They could be financed by governments, private individuals, or corporations. Over time, the return on such investments will attract more private capital, crowdsource contributions, and help develop business models that eschew advertising and make money by stimulating our prefrontal cortex, not our amygdala. (The potential promise of some web3 technology could help, as and when it gets out of its current hype and greed cycle.)

To be clear: it is very likely there’s a solid business case for commercially viable digital products that cater to a type of “knowledge super-user”. The current challenge is to show them (and their C-suites) an easy, precisely quantifiable return on investment. As is often the case, the most sophisticated users, and the companies with the most foresight, will end up leading the pack.

Over two thousand years ago, the Library of Alexandria ignited innovation across a chunk of the ancient world, and innumerable efforts have built repositories of knowledge over the centuries since. The word “university” originally meant “community”, and universities received funding to strengthen those (analog, organic) superminds — helping the respective networks and their knowledge converge. In the 21st century, augmenting the world’s collective intelligence by building such knowledge utilities sounds like a reasonable thing to do.

These superminds will generate a superior intelligence, emerging from the network of knowledge and skills that exists below today’s comparatively superficial web-based interactions. They will help us fight tomorrow’s challenges with tomorrow’s intelligence, across:

  • known-knowns: collectively remembering and learning what works, for problems whose solution already exists elsewhere,
  • known-unknowns: creating, and deciding on, solutions to problems we currently struggle with, and
  • unknown-unknowns: sensing low-signal but high-momentum trends that could quickly turn into major opportunities or threats.

Of course, lots can go wrong. For a start, we will need ways to mitigate our collective tendency to fall for unsubstantiated claims, counter rogue actors, and generally reduce trolling and abuse. But with the right incentives, methods, and capabilities, it seems plausible that we can emulate, for instance, Wikipedia and its collectively enforced quality control.

Every single hour, the Earth receives from the Sun roughly the amount of energy that the entire human civilization consumes in a year. We are getting better at harvesting that power. There is reason to believe that we are “leaving knowledge on the table” in similar proportions — and by harnessing our collective knowledge, we could harvest our collective cognitive power.

Many of our innovation challenges are treated as “design” problems: tackled by small groups of experts, with comparatively limited access to the world’s collective intelligence. Instead, we can turn them into “search” problems, drawing on the many experiments that happen in the world every day and are documented in an increasingly large and accessible knowledge corpus.

Building on today’s technologies and methods, there’s much that we can do about it. Let’s solve tomorrow’s problems with tomorrow’s intelligence. Let’s go build superminds.

This post complements the tech-driven organizational design materials at www.supermind.design and some previous blog posts on designing an AI-augmented collective intelligence. Read them on Medium or LinkedIn if you’re interested in using these techniques in your own organization, and get in touch if you want to discuss.


Gianni Giacomelli

Founder, Supermind.Design. Head of Innovation Design at MIT's Collective Intelligence Design Lab. Former Chief Innovation Officer at Genpact. Advisory boards.