Why You Haven’t Heard About the Most Exciting Field in A.I. Today

I’m a permanent resident of Australia, a citizen of New Zealand, but I vote in California, where I was born and raised.

Every US election — including this current high-speed car chase (casualties everywhere) of a presidential race — the voters of California decide yea or nay on a complex litany of ballot propositions. Because it’s California, with a $2.6 trillion economy, bigger than India’s, these propositions can have a rippling effect not just across America but around the world.

Especially when it comes to technology.

Take Proposition 22: Should mobile gig workers such as Uber drivers be treated as independent contractors or as employees? If the proposition is rejected, and Uber is forced to treat its drivers as employees, it won’t be long before we feel the impact of Proposition 22 all the way across the Pacific Ocean, right here in Queensland. Namely, higher fares for Uber rides.

Or take Proposition 24, which expands data privacy laws to…well, even the most reputable data rights advocates seem unsure exactly what it would do. But the impact, yea or nay, will certainly reverberate across the Internet. A paper from Carnegie Mellon University claims it would take 76 working days to read all the privacy policies an average Internet user encounters in a year. That was 2012. Today it’s no better, probably worse.

The point is, California voters (myself included) — info-saturated, time-poor, assaulted by tendentious promo material — are making decisions which will likely impact more people outside the state’s borders than within.[1] So much about your phones, your news sources, your digital lifestyle will be determined by uninformed people like me.

The wrong people for the decision-making job.

Some of the most exciting work in A.I. today involves the programming of good decision-making; that is, A.I. that canvasses large groups of people and arrives at the best possible consensus for the best possible decisions. Known as Artificial Collective Super-Intelligence (ACSI), the field is producing results that are more cost-effective, more equitable, and more committed to eliminating bias.[2] It has the potential to solve some of the world’s most urgent and seemingly intractable problems.

Technology has always created its own borders, its own political structures, and collective decision-making — what we sometimes call “democracy” — can only exist within its tight-fitting confines. Take the US electoral college system, the current bane of America’s presidential race. It emerged at a time when a letter from Savannah, Georgia, took 20 days by stagecoach to reach Portland, Maine, and another 20 to get a reply.

Even today, when web content travels around the world in less than a second, how many people in Wagga Wagga consider it their fundamental right to elect the next UN Secretary General? To the people of Savannah in the late 18th century, the politics of Washington D.C. would have felt even more remote and superfluous. In such a situation, having a group of trusted electors represent you on faraway matters, about things you know or care little about, makes really good sense.

Until it doesn’t. Until the technology shifts and you find yourself outside the wall looking in.

Communications technology shapes democracy. As the technology changes, the democracy transforms.

“The origin story of so many technologists rests on the realisation they could use technology to project their will on the world and make common cause with other people,” explains the author Cory Doctorow, who explores the relationship between culture and tech.

Common cause is fundamental to tech innovation. New technology, just like scientific discovery, is about connecting worlds, transmitting knowledge between the old and the new, those who grasp it and those who don’t, in a common, verifiable language: Take a look at this. Now try it yourself. Do you see what I see? Once the connection is made, a community forms, a “demos,” with a common understanding.

Over 10,000 years ago, the engineers of the Wurdi Youang rock formation in modern Victoria achieved a common cause of sorts — rendering the movement of the sun in a format understood by anyone capable of reading it. The sun, the stone, the shadows were, in a way, expressing intelligence through the carefully designed algorithms of the rocks’ locations. The common cause was to find meaning in the world, hear the universe explain itself, by itself, more accurately than, and without the bias of, the human mind.

It worked; and it no doubt transformed the culture, the political structures, the “demos” of its day.

Few of those we today call “A.I. engineers” would recognise Wurdi Youang as an early incarnation of artificial intelligence. For them A.I. is far more than a rendering of the seasons by a hundred or so rocks on a starry night or a sunny day. For them A.I. is a model that can perform trillions of calculations to discover patterns in large datasets — visual, audio, text — in order to perform tasks we normally associate with human intelligence.

We don’t talk about intelligent rocks; we talk about intelligent people.

[Image: A few of the hundred or so rocks of Wurdi Youang in Victoria, Australia. Some scientists suggest it could be the oldest astronomical observatory in the world.]

Some A.I. systems, as engineers often proclaim, can even complete a specific task “better than humans.” Which is a banal accomplishment to say the least. What tool is worth discussing, after all, if at the very minimum it can’t do something better than we can?

A.I. is just an idea. In discussion, either we include every artificial augmentation of human reasoning, from Wurdi Youang to the latest smartwatch — in which case A.I. becomes a stock feature of human communication and is hardly worth a platform of its own — or we confine it to some exclusive realm adjudicated by a supreme council of philosophers, in which case it will forever dangle like a toy above a cat, just beyond reach of our swiping paw.

Today’s AlphaGo, after all, is tomorrow’s Commodore 64.

All this is to say we often talk about A.I. in the context of machine-driven tasks — smart speakers, smart cars, smart wearables, smart bots and so on — while forgetting that A.I. includes artificial engines (machines) that power real (human) intelligence. Yes, A.I. can create smarter devices, but it can also create smarter people.

Take, for example, TikTok. Okay, fair enough, it’s not creating smarter people. But it’s nourishing a human talent, a kind of souped-up, hyper-social form of digital busking that didn’t exist before. Sure, today’s TikTok will be tomorrow’s Hollywood. But by designing algorithms that display the right snippet of entertainment to the right people, TikTok has likely created the most powerful entertainment generator in human history.
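
TikTok’s actual ranking system is proprietary, but the core loop (score each clip for each viewer, serve the highest scorer, learn from the response) can be caricatured in a few lines of Python. Everything here, from the function names to the engagement numbers, is an illustrative assumption rather than TikTok’s method:

```python
# A toy "right snippet to the right person" loop. Purely illustrative:
# real recommenders learn embeddings over billions of interactions.
from collections import defaultdict

watch_history = defaultdict(float)  # this viewer's engagement signal per topic

def score(clip_topic: str) -> float:
    """Predict engagement from how long this viewer watched the topic before."""
    return watch_history[clip_topic]

def serve(clips: dict) -> str:
    """Pick the clip whose topic this viewer has engaged with most."""
    return max(clips, key=lambda clip_id: score(clips[clip_id]))

def record(clip_topic: str, watch_seconds: float) -> None:
    """Update the profile: a blend of the old signal and the newest reaction."""
    watch_history[clip_topic] = 0.8 * watch_history[clip_topic] + 0.2 * watch_seconds

# One round: the viewer lingers on a dance clip, so dance clips rise in the ranking.
clips = {"clip1": "dance", "clip2": "cooking", "clip3": "dance"}
record("dance", watch_seconds=14.0)
print(serve(clips))  # -> "clip1" (a dance clip)
```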

And if human intelligence relied on our ability to entertain strangers — which it doesn’t — then TikTok would be one of the most advanced examples of A.I. in the world today.[3]

Now imagine applying a similar method to human intelligence: a TikTok of knowledge, so to speak. Instead of a catchy, well-practiced “renegade” dance move, people could share hard-earned, deeply analysed, empirical, scientific insights. Instead of producing super-entertainment, the A.I. algorithms would identify, synthesise and elevate the most useful intelligence possible, a super-intelligence on a scale never before achieved. It could help solve some of society’s most pressing issues — food and job security, climate change, energy sustainability, the entire gamut of the U.N. Sustainable Development Goals.

Both machine learning and ACSI will have their biases. Both suffer the “who decides who decides” problem[4]. Both need to make sure they can communicate the increasingly complex “black boxes” of their algorithms (which is certainly doable). Both have the potential to transform the nature of human labour, for better or worse, and rapidly take society places we might not want to go.

There are important differences, however: First, democratic principles and the elimination of bias are central to most ACSI algorithms right from the start. Just as good science depends on blind peer review, fair debate, diverse viewpoints, objective analysis and so forth, ACSI is human-focused and only as good as its democratic principles. Inclusion, diversity, equal rights — these are baked into the algorithms. They are part of the engineer’s job.

[Image: “Renegade” dance creator Jalaiah Harmon on TikTok.]

Second, ACSI is better at addressing complex, real-world problems which don’t have clear solutions. With traditional machine learning, we use data with pre-defined tags (or answers) to create and train models. We then apply these models to other, untagged data, continually improving the model itself. “My model is getting 92% accuracy,” we might say, but this only applies to the specific task we created for it. It’s not a measure of the machine’s thinking ability.
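
To make that concrete, here is a minimal supervised-learning sketch in Python, using scikit-learn’s bundled handwritten-digits dataset. The point to notice is that the “answers” (labels) exist before training begins, and the accuracy score measures nothing beyond this one narrow task:

```python
# Minimal supervised learning: labels are known up front, and "accuracy"
# is only meaningful for this one task (classifying 8x8 digit images).
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # every image arrives pre-tagged with its digit
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))  # e.g. ~0.97, on this task alone
```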

Unlike machine learning, ACSI doesn’t require answers in advance. In fact, the measure of its super-intelligence is the quality of the answers (and of the new questions) that emerge from its consensus algorithms. For example, given the novelty of the Covid-19 outbreak, we didn’t have enough data to determine correct answers about, say, whether we should keep schools open during the pandemic. But a well-designed ACSI system would motivate the most experienced epidemiologists to share their analysis, and the algorithms would synthesise that knowledge into a recommendation.[5]
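
The consensus algorithms themselves aren’t specified here, so the sketch below is a deliberately toy illustration of one simple member of that family: a track-record-weighted consensus, where each expert submits a probability and better-calibrated voices count for more. It is a hypothetical sketch, not the author’s system; every name and number in it is invented.

```python
# A toy consensus mechanism: weight each expert's probability estimate
# by a running track record of past predictive accuracy. Illustrative only.

def weighted_consensus(judgements: dict, track_record: dict) -> float:
    """Aggregate expert probabilities, weighted by each expert's calibration."""
    total = sum(track_record[name] for name in judgements)
    return sum(prob * track_record[name] / total
               for name, prob in judgements.items())

# Hypothetical epidemiologists estimating P("keeping schools open is net-safe"),
# with made-up calibration scores from previously resolved questions.
judgements = {"expert_a": 0.80, "expert_b": 0.60, "expert_c": 0.30}
track_record = {"expert_a": 0.90, "expert_b": 0.70, "expert_c": 0.40}

print(f"consensus estimate: {weighted_consensus(judgements, track_record):.2f}")  # 0.63
```

The evaluation idea in footnote 5 slots in naturally here: hold back questions whose answers resolve later, then update each expert’s track record (and thus their weight) against those revealed answers.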

It sounds commonplace. It’s what humans do today. It’s what our bureaucracies do. A Prime Minister consults a task force, which solicits knowledge from experts and so on. But we do it poorly, slowly, with all sorts of personal and political biases. Our communications technology has dramatically advanced, but we’re still thinking like we did hundreds of years ago, using the same methods of decision-making; just as today I find myself voting on cryptic propositions drafted (or let’s be honest, paid for) by people on the other side of the world.

Which raises an interesting question: Do we have the ability to recognise the tremendous potential of ACSI and invest in its future development? Or will we continue to limit our thinking about A.I., investing almost entirely in endowing machines with human-like abilities and creating tools that make specific, mundane tasks more efficient?

Will we realise that A.I. is core to our evolution (has always been) and shows great potential for our future, including the ability to augment how we share knowledge, helping us make smarter, time-critical decisions as a group?

If so, the first step might be to reframe our understanding of A.I.’s existence and its utility. Yes, automating tasks is a useful application of A.I. But it’s also useful to remember that solving a collective problem is every bit as important as solving an individual’s problem; and that indeed, one solution often helps alleviate the other.

— — — — — — —
1) Interestingly, while Californians may disagree on the propositions themselves, few would argue the proposition system itself represents the will of the people, or that it’s benefited the state in any meaningful way. Most Californians I talk to believe the system is broken.

2) Admittedly, as a developer of ACSI, I’m biased on this point about bias. Based on comparisons between ACSI and machine learning, however, I don’t think the cost-effectiveness of ACSI is disputable.

3) Given the amount of data it analyses to make its content more sticky, it can be said that TikTok is a global leader in deep learning, which is what most computer engineers think of when they think of A.I.

4) The phrase “who decides, and who decides who decides,” is a kind of catchphrase in Shoshana Zuboff’s book, The Age of Surveillance Capitalism, and is used widely today by those who oppose what’s been termed “Big Tech.”

5) Measures of “quality answers” are themselves testable, either through control-group scenarios (was the A.I. prediction correct?) or through objective answers hidden from the system and revealed later.
