How Citizen Science Technology Can Save Us from Surveillance Capitalism
Program or be programmed. Such is the mantra of the media theorist Douglas Rushkoff, who’s been warning us for years (quite reasonably so) about the dystopian dangers of big-tech software and artificial intelligence.
Like most mantras, its very pithiness can both highlight and obscure the action it calls for. “Program,” as in learn to code? “Be programmed,” as in lose all agency? All ability to make decisions for ourselves?
One thing is clear: We’ve been here before, at many times and locations in history. Similar mantras. Similar questions and fears.
Most recently — at least in terms of its impact on humanity — the idea of “program or be programmed” spoke to leading intellectuals of late 19th century Asia. To “program” was to learn the languages and norms of what was called “Western Civilisation” — which, as Gandhi famously quipped when asked what he thought of it, “would be a good idea.”
Gandhi, who studied law in 1880s London, could be considered one of the most brilliant hackers of his day. He learned how to program in British legal code, and used that code to help spawn an independent India. But Gandhi was not one to promote the idea of “program or be programmed.”
Neither were many of the leading Asian intellectuals of the time. The Iranian-born thinker and political activist Jamal al-Din al-Afghani felt that learning Western ways only exposed people to Western rule. Even during times of revolt, such as the Indian Mutiny of 1857, one needn’t have looked far to find native people — westernised civil servants, engineers, academics — who considered their foreign system of government the most glorious thing in the world.
Nor would we need look far today to see how easy it is for the programmers themselves to be programmed.
In fact, the big tech companies that Rushkoff warns us against are quite happy to invoke the “program or be programmed” mantra as a way to recruit customers (Apple’s famous hammer-throw at the Big Brother mega-screen), to standardise coding languages, software and brands, or simply to spawn the next generation of digital masons. A recently observed signboard for one of the thousands of training centres that pepper the streets of Hyderabad, India: “Machine Learning, AI and Cloud in 6 months.”
This is hardly the sort of empowerment our digital freedom fighters would envision; and yet they’re right to sound the alarm about the loss of our humanity to big tech platforms — platforms that put shareholders above our right to privacy, above social responsibility, above making the world a better place.
But this is not something to discuss in polite conversation.
During a trip to the US, friends and family members proudly introduced me to their latest home assistants, marvelling at — and showing off — the devices’ ability to understand their voice commands.
“Alexa, what does a whale sound like? Listen to that. Isn’t that amazing?”
A source of status, of entertainment, of bonding over the wonders of technological change. I wondered, would it not be cruel for me to dispel such enchantment? Would it not be better to simply indulge my loved ones with a “yes, it’s amazing”?
Or is it better to speak the truth, something like, “pretty cool, but you know, this seemingly benign device is actually robbing us of our individual agency, stealing our private selves, threatening our democracy”?
“Yes, amazing” — and now let’s talk about the future.
If learning to program isn’t the answer, what is?
Woven into past eras of colonisation is a common thread: The bigger the data, the bigger and easier the conquest. And woven throughout the history of rebellion against oppressors — whether led by activists like the 19th century al-Afghani or the 21st century Rushkoff — are two even stronger threads:
The first is the stuff of self-determination — autonomy, freedom, equal rights, democracy, liberty, collective decision making.
The second, critical to the Enlightenment Age, is scientific truth.
Let’s not forget, the Internet didn’t need to be the way it is now. There was a fork in the road — a binary of fate, if you will — in the early 2000s. That was when Eric Schmidt, Hal Varian and others at Google began using the Internet to monetise the behavioural data generated by their company’s search engine. The decision ushered in what Shoshana Zuboff, emeritus professor at Harvard Business School, calls the “Age of Surveillance Capitalism.” Not long afterward came the social graph, the who-you-know networks of Facebook, LinkedIn, Twitter and many others.
Eyeballs. Keywords. Cost per click. Thumbs ups. Likes. Followers. And so the Internet became the popularity network it is today.
It could have been different; and just as capitalist-driven.
Instead of a who-you-know network, the Internet could just as easily have become a what-you-know network. Rather than by clicks and eyeballs, the value of information could be determined more deeply, by using metrics that measure its fitness for use (many of the home- and ride-sharing apps do this to some degree). Sure, information isn’t physical. You don’t purchase, say, the diagnosis of an abdominal ailment in the same way you buy a place to live. But information theorists have long known that, using internet technology and a well-designed platform, expertise can be priced and sold on the open market.
And if it saves your life, an accurate medical diagnosis may be more valuable than the most expensive home in Manhattan.
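The fitness-for-use pricing described above can be sketched in a few lines of code. This is a toy illustration only: the `Contribution` fields, the weights and the `fitness_price` function are hypothetical, not the API of any real marketplace.

```python
# Toy "what-you-know" valuation: price a piece of expertise by
# fitness-for-use signals rather than clicks or popularity.
# All field names and weights are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Contribution:
    reviewer_agreement: float   # 0..1: fraction of peer reviewers who concurred
    author_track_record: float  # 0..1: historical accuracy of the contributor
    demand: float               # 0..1: how urgently the information is needed

def fitness_price(c: Contribution, base_rate: float = 100.0) -> float:
    """Price grows with verified quality and urgency, not with eyeballs."""
    quality = 0.6 * c.reviewer_agreement + 0.4 * c.author_track_record
    return round(base_rate * quality * (1.0 + c.demand), 2)

# A well-reviewed, urgently needed diagnosis commands a premium.
diagnosis = Contribution(reviewer_agreement=0.9, author_track_record=0.8, demand=1.0)
print(fitness_price(diagnosis))  # 172.0
```

The point is not the particular weights but the inversion: in a cost-per-click market the demand term is everything, while here it only scales a quality score that popularity cannot buy.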
So outlandish does this concept sound to generations that have grown up with the who-you-know model (generations of people who happily trade personal information for the right to communicate) that when I discuss a different model in classes or workshops, I might as well be flipping channels on the interdimensional cable TV of Rick and Morty.
Flip. Here’s the Internet obsessed with Kim Kardashian. Flip. Here’s the Internet obsessed with the world’s most brilliant 17-year-old botanist from Nepal.
Yet in fact, info-trading markets exist today; and one community that’s begun embracing them is the citizen science community.
Or more precisely, the citizen science technology community.
Let me be clear: I’m not talking about citizen science, the social movement. I’m not talking about researchers who use citizen science volunteers to help collect and curate their data for them. I’m not talking about yet another app for yet another data collection program (“Oh, but our app is special because our community is recording orchids, not butterflies.”).
I’m talking instead about online, algorithmic systems that shrink the distances over which large-scale scientific collaboration can occur — and increase its speed and reliability — across different cultures and levels of training.
It’s the way that anyone with an internet device, anywhere on the planet, can engage in scientific discovery, not as a “citizen scientist” but as a new kind of participant — a “scitizen,” if you will — because the technology has the ability to erode the boundary between science and citizen. Such technology can bring human minds together, on a massive scale, without sacrificing traditional, long-proven, cross-disciplinary scientific methods such as supervised training, certification, double-blind peer review and so forth.
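One of those long-proven methods, anonymous peer review, lends itself to a machine-readable form. A minimal sketch, assuming a hypothetical `accept` rule, reviewer reputations on a 0-to-1 scale, and an illustrative acceptance threshold:

```python
# Reputation-weighted anonymous peer review: a scitizen's finding is
# accepted once reputation-weighted agreement among independent reviewers
# crosses a threshold. The 0.7 threshold is an illustrative assumption.

def accept(votes, threshold=0.7):
    """votes: list of (reviewer_reputation, agrees) pairs; reputation in 0..1."""
    total = sum(rep for rep, _ in votes)
    if total == 0:
        return False  # no reviews yet: nothing is accepted by default
    support = sum(rep for rep, agrees in votes if agrees)
    return support / total >= threshold

# Two high-reputation reviewers concur; one low-reputation reviewer dissents.
print(accept([(0.9, True), (0.8, True), (0.3, False)]))  # True
```

Because reviewers are weighted by track record rather than identity, the same rule serves an untrained newcomer and a credentialed researcher alike, which is exactly the boundary-eroding property described above.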
This technological progress, meanwhile, this method of engaging humans in the empirical act of scientific discovery, is itself a major field of research — a field that demands greater investment. Citizen science, the social movement, has been around forever. When the Hohokam star-watchers of ancient Arizona spotted a supernova in the year 1006, it’s easy to imagine some young citizen scientists helping chisel the records onto the petroglyphs we find today.
But there certainly wasn’t, in the year 1006, the technology to allow the sort of global collaboration of astronomers — from Senegal to Colombia to New Zealand — that helped NASA plan its recent flyby of Ultima Thule, the farthest object ever visited by a human-made device. Such technology, in fact, wasn’t invented until the 1660s, with ships, maps and new standards of navigation.
“With the transit of Venus,” explains Dr. Caren Cooper of the project SciStarter, a web-based repository of citizen science projects, “collaborations of people across the globe were taking measurements to compute the distance between the Earth and the Sun.”
Indeed, perhaps the most significant upgrade to info-trading technology — the tweaking of heuristic values such as trust, reputation, influence, factual record, limitations of powers, anonymous peer review and so on — occurred in 1776, with the Declaration of Independence. Science, of course, sits at the heart of Thomas Jefferson’s thinking, with its notion of self-evident truths — “we hold these truths to be self-evident” — inspired by Enlightenment philosophers such as Voltaire, Rousseau and Montesquieu.
The algorithms of the Declaration of Independence — and the resulting US Constitution — are powerful, even today. The system transformed humanity for hundreds of years and propelled it toward new frontiers of scientific discovery. But it, too, was flawed, unscientific. Not all people were included in what constituted “all men,” and as I write these words, the Constitution’s very algorithms are threatening to malfunction. America is caught in what the pundits call a “constitutional crisis.” Or what social network developers might call “a gaming of the system.”
If there’s a remedy, it’s not about such things as Twitter verifying accounts, getting rid of anonymous trolls (anonymity can sometimes benefit science). It’s not about Facebook cracking down on fake news, apologising to Congress and promising to abide by its laws (knowing full well its AI division is completely unregulated and more powerful than Congress itself). It’s not about computer scientists building AI to save humanity from itself, or even about more socially responsible media networks.
It’s about decentralised, self-governing technology that can connect any person, anywhere in the world, and engage them in a collaborative endeavour while rewarding them for their contribution. It’s a technology that motivates people not according to who they know, what they look like or how popular they may be, but according to the content of their thinking, the degree of their curiosity and the strength of their commitment to truth.
There’s a hunger for such technologies — especially among remote, vulnerable minds that would otherwise be exploited by the big-tech Internet companies so busy surveilling them. I’ve seen this myself. I’ve seen how a well-designed citizen science technology can spark young, neglected minds, from poor communities, into actions that benefit both themselves and the societies they live in.
So where does that leave us? Program or be programmed? If “program” means helping develop a technology that promotes the language of science, an appreciation of beauty, curiosity, self-reflection, independent thinking, imagination, discovery, a well-rounded education, then citizen science technology (not the social movement, not yet another citizen science app) may indeed help us overthrow our Big Data overlords.
And with it, a new frontier of human achievement may start to emerge through the dark.