Augmented Collective Intelligence — February 2023 Newsletter

Gianni Giacomelli
5 min read · Feb 6, 2023


Credit: Lexica

Here’s the sixth newsletter (previous ones here) that curates news and stories through the lens of technologically augmented collective intelligence.

Of course, January was another big month for ChatGPT, and large language models (LLMs) are now visible to everyone. I will spare you a curation of the respective articles and instead offer a small, different perspective.

LLMs are flawed in many ways (accuracy and reliability, say) and incredible in many others (e.g., flexibility, speed, human “feel”). Much of the debate is now about how to make them better, and whether they will reach Artificial General Intelligence (AGI) through some version of Moore’s law. There’s also the standard debate about “will this take my job”.

Here is where we should also look instead:

First: it is amazing how much you can get out of very large models that, at some level, mostly “autocomplete text”. They were trained on a subset of our corpus of knowledge and use semantics to make (some) sense of the world. They were also trained on software code (which helps with some abstraction and symbolic thinking) and on images. But, by and large, they use the reasoning that is implicit in language. Language, memorialized at scale on the web, is an expression of our world’s collective intelligence, of the myriad natural experiments made by biological machines (us) and our networks (organizations, social networks, economies, societies), and it seems to embed far more usable logic than we initially thought. That’s the real surprise with LLMs.

LLMs are unreliable, but they can likely be made less so, for instance by pairing them with search, or by relying on smaller models fine-tuned on specialized corpora (Med-PaLM has shown that this yields more reliable results in medicine, for instance).

And they don’t really “understand” the world. They don’t use symbolic models of the world as we do, models that represent what we see, hear, and feel. Yet we could use them for what they’re good at (providing perspectives and making knowledge more accessible, especially when paired with knowledge-graph technology), and leave the encoding of the world’s logic to human networks, which can draw on people’s symbolic and abstraction capabilities and on their collaboration.

These options point at a different, and possibly more effective, way to engineer an AI-enabled intelligent system: using AI to augment our collective intelligence, and building what some at MIT call a supermind.

In my view, the issue we should really focus on is governance: the cost of jamming our communication channels is plummeting, the content-moderation capabilities of many of our media outlets (including social media) are already stretched, and a society where one can’t trust anything that’s been said would be a very dangerous one to live in. Remember: Russia’s main media outlet for many years was the state-controlled Pravda, which means “truth” in Russian. Think what that scenario could do with indefatigable machines at its service.

Enjoy this month’s curation, from harder science to smaller brains, from exposing networks of corruption to resilient energy systems in a war, and much more. Share liberally!

Innovation

🎲 Using a Board Game to Plan for a Changing Planet A Māori community uses a board game to collectively arrive at important climate-adaptation decisions.

🧠 Bridging the Innovation Chasm with a Supermind Sociobiologist Edward O. Wilson once said that the real problem of humanity is [that] we have Paleolithic emotions, medieval institutions, and god-like technology. A positive exponential future will depend on how effectively society handles that trio. That’s why we need more than just individual brains.

🧬 Silicon Valley’s New Obsession Lots of ferment around how science is done, some of which comes from the extensive application of the collective intelligence paradigm.

💊 How Crypto Can Help Science & Medicine Science and medical research could benefit from new blockchain-based ecosystem designs. Again, a good read now that the dust is settling.

🔬 Science Is Getting Harder Science may be getting harder, and breakthroughs rarer. Is there reason to believe that our operating model for science, including collaboration and incentives, is not fit for purpose anymore?

🛠️ Tools for Systems Thinkers: The 6 Fundamental Concepts of Systems Thinking A good blog about the primitives of systems thinking: interconnectedness, synthesis, emergence, feedback loops, causality, and the need for mapping.

Future of Work

🤪 Bias Busters: When the Crowd Isn’t Necessarily Wise The role of CFOs in avoiding collective-intelligence biases and shortcomings.

💪🏾 Decentralized Society: Finding Web3’s Soul Even discounting the implosion of blockchain discussions, there is possibly something big in this paper, co-authored by Vitalik Buterin: the prospect of solving both metadata tagging and trust in networks, through the ability to add immutable, network-quality-controlled qualifier data to entities, and the possibility of composing those data algorithmically. Worth reading and digesting.

🏛️ Designing Token Economies A deep blog post on how to build token economies, a critical mechanism of ecosystem design that uses web3 technology. Economics and user experience need a thoughtful combination. A good read now that the curtain has fallen on the first generation of these technologies.

💻 The Collective Intelligence of Remote Teams Collective intelligence doesn’t suffer from remote work, as long as the “who” and the “how” are intentionally designed.

♟️ Collective Intelligence Is About to Disrupt Your Strategy: Are You Ready? What are superminds, and what can they do for organizations? An interview with MIT professor Thomas Malone, who coined the term supermind.

Society and politics

🙇🏼 Pandora Papers: How Journalists Mined Terabytes of Offshore Data to Expose the World’s Elites Hundreds of journalists used knowledge-graph technology to explore a large and fragmented leaked data set.

📈 Can Collective Intelligence Beat the Market? A great podcast on the use of sophisticated, web3-enabled collective-intelligence mechanisms to build hedge-fund data models for financial markets.

🔎 A Global Genomic Surveillance System to Thwart Pandemics Surveillance against the next pandemic will include instrumenting interspecies sensor systems, as well as strengthening international healthcare networks.

💁 Anonymous Tipsters, Angry at Russia, Help Detect Sanctions-Busters An open intelligence community helps spot the companies and individuals evading sanctions.

🌊 What Should Crisis Leadership Look Like? Crisis leadership in partisan America, through the lens of groups of people who still get their job done, collectively.

💥 Ukrainian Solar Plant Partly Resumes Operations After Bombing An interesting example of networks vs. hierarchies: the distributed renewable-energy network appears more resilient to attack. Centralized systems built around big fossil or nuclear power plants presumably wouldn’t be as resilient.

Collective Intelligence’s big picture

😶 Why Cooperation Might Have Shrunk Our Brains More on brain size and societies’ complexity. Always intriguing.

🐦 Vocally Mediated Consensus Decisions Govern Mass Departures From Jackdaw Roosts Another form of collective intelligence, mediated by a technology (voice) that we might not suspect these life forms use.

👩🏾 Stewardship of Global Collective Behavior A comprehensive review of academic literature related to collective intelligence, culminating in a plea to make stewardship of collective behavior a “crisis discipline”.

See you in a few weeks, and share this newsletter with anyone who could benefit from it! In the meantime, for more, you can visit CollectiveIntelligence.media or follow us on Twitter @Augmented_CI.

And if you want to design and build an augmented collective intelligence for your organization, visit the Supermind.Design website and especially the new version of its database, with over 900 real-life examples.

With thanks to Azeem Azhar’s Exponential View members for their contribution to spotting some of these articles.


Gianni Giacomelli

Founder, Supermind.Design. Head of Innovation Design at MIT's Collective Intelligence Design Lab. Former Chief Innovation Officer at Genpact. Advisory boards.