The Ghost in the Algorithm

The necessary struggle to reject “technology first” and develop an ethical framework for the automated era

Fabio Chiusi
Startup Grind

--

In our era of "post-capitalism", "post-democracy", "post-truth", "post-ideology", "alternative facts", "fake news" and "dishonest media", we're certain of almost nothing. But one thing almost all of us agree on: if there were ever a contest between technology and humankind, we would bet on the former.

It is technology that dominates our age. Data is the new oil; Big Data the new everything: healthcare, cities, self-driving cars, delivery drones, even warfare and security are all going to be "disrupted" by the "smart" uses enabled by the "Internet of Things".

Look at market capitalization. The top three companies are Apple, Alphabet and Microsoft, the Holy Trinity of our connected lives.

Give a man a smartphone, an operating system and a search engine, and he'll become a superman, an übermensch augmented by technology.

Give him even more connected applications — starting with a marketplace like Amazon and a social network like Facebook, which sit comfortably in fourth and fifth position respectively — and technology will become part of him.

We are “inforgs”, argues Oxford Professor Luciano Floridi: we are informational entities, bits as much as flesh.

This comes with an inevitable sense of paranoia, the last remaining "spectre haunting Europe", and the world. What if technology is the true master? What is it like to be slaves bowing to glittering devices instead of to great kings and conquerors? Judging from tons of poor, Internet-phobic mainstream journalism: not great at all.

Careers end over an automated bias, replicated by an indifferent personal rating algorithm stored in some unknown but absolutely crucial database held by Lord knows who, Lord knows where. Social media profiles, email accounts and cloud services are hacked by the thousands, if not millions, leading to spectacular leaks of personal, financial, medical and otherwise sensitive data — including the classified and the intimate — all because of the same decision not to adopt two-factor authentication or end-to-end encryption by default.

And robots are not only going to steal your jobs: they are going to make more and more decisions in your place; and sure, they are going to be hackable too. They already are.

"You are not a gadget", wrote Internet pioneer Jaron Lanier in 2010, and it still perfectly sums up our basic fear about the "digital revolution": that it has gone too far, too fast; that it will continue to do so (there's no stopping technology!); and that we have no clue how to turn the tide in man's favor.

Forget Trump’s “America First”, or Zuckerberg’s “People First”: the true ideological motto of our era is “Technology first”.

Every time a new problem comes to the fore in the social or political arena, it is first and foremost — if not only — a technological problem, to be framed in a technological scenario and fixed with a technological solution. Writer and researcher Evgeny Morozov even has a name for it: "solutionism".

Every time politics or the media want to be "new", there must be something technological involved: participatory platforms, online referenda, presidential tweets, Facebook Live streams, 360° videos, immersive VR reportage, and everything viral.

It is as if politics could only give second-best answers. As if humans could only be less efficient, less "smart" than machines, and therefore inferior. Some tried to compete by becoming quasi-machines themselves: their movement is called "Quantified Self", and it basically aims at turning every human volition into a measurement, until numbers and thoughts tend toward the same utilitarian optimum — calorie intake, the amount of water drunk, your heartbeat, all constantly monitored by devices, all digitally perfected.

But that’s nothing less than our own, contemporary “opium for the masses”. Technological determinism may be comforting for those who radically distrust human beings, but it hides a crucial fact: that each technology has a human creator. And that therefore it is those human creators who constitute the true masters of our era.

The digital may have revolutionized everything, but it did not put a computer or a robot among Forbes's richest. Bill Gates, not one of the AIs that terrify the Microsoft guru, is the wealthiest man in the world; Mark Zuckerberg, not Facebook's News Feed algorithm, climbed to fifth position, translating the end of our privacy into a 56-billion-dollar net worth.

It is man who dominates technology — it's simple, and it's true. It is human programmers who decide what fundamental ethics a self-driving AI should possess, and apply when it must prevent an accident automatically. Even when there is no human intervention, there is human intervention. There always is.

Algorithms decide whom we most easily interact with, you might reply. Whom we actually even see in News Feeds or search results. They manipulate our emotions, as in the ominous Facebook experiment that induced bad feelings in unwitting users just to check whether, on average, they would engage more out of depression and frustration than out of happiness and serenity.

News outlets increasingly seek just about the same thing: more clicks, right now. It is automated bots that serve us political propaganda, networked activism or sadistic beheadings in the same unwavering manner. Online rating systems decide who we are, what we deserve, what benefits we should be allowed, even what dreams we should be dreaming. "Class" no longer names a political struggle: it strictly belongs to the jargon of information theory.

After all, AIs beat poker pros now, because they are even starting to learn how to cheat — we taught them, and they are fast, creative learners.

But again, there is a human behind all of this. Call him a programmer, a data scientist, a digital or social media marketing strategist, a storyteller, an alternative facts-checker, an SEO specialist, a deep neural networks researcher — it's still a man. And a man, every man, has ethics. Each and every one of us is a political actor, with beliefs, prejudices, a peculiar view of the social order and some kind of exposure to ideology.

But if every machine has a human creator, if every automatic decision is actually the result of a set of conditionals written by a human being with a precise ethical and political stance, then machines have ethics too. They may not know it, and the stance may not have been intentionally passed on to the digital replica, but they have it nonetheless.
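
To make this concrete, here is a minimal sketch of what such a set of conditionals might look like (the scenario, names and thresholds below are invented for illustration, not drawn from any real system):

```python
# A hypothetical loan-screening rule. Every constant below is an
# ethical choice made by a human, not a fact discovered by a machine.

def approve_loan(credit_score: int, zip_code: str) -> bool:
    """Decide automatically whether an applicant gets a loan."""
    MIN_SCORE = 650            # why 650 and not 600? A human drew this line.
    RISKY_AREAS = {"10451"}    # penalizing a neighborhood encodes a politics.

    if zip_code in RISKY_AREAS:
        # A stricter bar for some applicants: a value judgment, now automated.
        return credit_score >= MIN_SCORE + 50
    return credit_score >= MIN_SCORE

# The conditional runs "with no human intervention", yet every branch
# is a stance its programmer took on fairness and risk.
print(approve_loan(680, "10451"))  # False: the encoded stance at work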

Here's something the "digital revolution" is really about: discussing how to better incorporate ethics into intelligent machines, or into any machine entrusted with significant choices in our lives. It is a revolution in ethics and politics and sociology as much as in technology. The important finding from our reasoning, though, is that those non-technological aspects of technological revolutions are, in fact, their true constituents. Human thought may not have changed with every new device and processor, but each revolution in devices and processors brought with it important ethical decisions about the role of technology in society.

It is these decisions that we have to investigate better. What we should be asking — not of the machines, but of the human ghost in the machines — is not just, for example, how to secure IoT devices, but why we should want smart cities, smart homes, smart refrigerators, smart roads and the like in the first place, and why connecting everything would be a good thing for society as a whole even if the smart objects could pass the cybersecurity test (spoiler: they can't).

We should not be asking what it is safe to share with "friends" on social media, but why the act of "sharing" has become so dominant and pervasive in every human experience. Not how to embed more serendipity into our filter-bubbled News Feed, but how exactly we are indoctrinated with our own social media propaganda (as Eli Pariser warned), why we are kept ignorant of the criteria of content selection, and why we should accept such an unregulated activity by a private, for-profit entity at all. Not whether we stand with taxis or with Uber, but what it means to live in a driverless society.

This would entail unpacking what Frank Pasquale calls the "Black Box Society".

Open the algorithms, make them transparent, and be clear about it: these imperatives, were they truly enacted, would reveal how deep in ethics our technological discourse actually sits. Discussing these kinds of issues would hardly require a PhD in informatics or hacking skills. It would definitely require, though, expertise in philosophy, sociology, psychology, political science, economics and the law. These are the masters of technology. These disciplines explain — or try to explain — what goes on in the minds of those who coded, designed, marketed and narrated the details and uses of those technologies. When they don't, as with most proprietary algorithms of the GAFA-led economic era, it is again for a very human reason.

Profit is a very human reason. And profits grow when technology is king — for technology companies, at least. If technology is an indifferent force of history, going about its way irrespective of how humans try to plan and modify it, then every technological advance becomes a fact of nature, and it is we who have to adapt to it.

Ethics is sacrificed from the start: thou shalt not question that this particular instance of technological progress is actually progress for us humans, reads the First Commandment of Silicon Valley, because it is progress by definition — and if you fail to see it, you are the problem. Ethics is thus reduced to a sort of Darwinian guide for human slavery, in which all you can decide is how best to adapt to a new gadget, social network or online service — but never ask whether those innovations are actually worth the human effort, or whether, with just some slight modifications, the social benefit would have been enormously greater.

Think of Facebook: it would not be difficult to make it much better for democracy. Posts in the News Feed should include news articles that embody the exact opposite of the political values the Facebook algorithm has deduced for you. The criteria for the "trending" section should be public, and intelligible to every new subscriber, whether 13 years old or over 70. Experiments with the News Feed should not be possible without explicit informed consent — and no, mere acceptance of the ToS should not count as "informed consent" from an experimental subject.
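
As a thought experiment, that first fix could even be sketched in a few lines of code (the data model, the leaning scores and the injection rule below are hypothetical, not Facebook's actual systems):

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    leaning: float  # -1.0 (left) to +1.0 (right), as a platform might infer it

def diversify(ranked_feed, candidates, user_leaning, every_n=3):
    """After every n ranked posts, inject the candidate article whose
    inferred political leaning most strongly opposes the user's."""
    # Most opposed first: the product is most negative when the article's
    # leaning has the opposite sign and a large magnitude.
    opposed = sorted(candidates, key=lambda a: a.leaning * user_leaning)
    out = []
    for i, post in enumerate(ranked_feed, start=1):
        out.append(post)
        if i % every_n == 0 and opposed:
            out.append(opposed.pop(0))  # the counter-viewpoint injection
    return out

feed = [Article("A", 0.8), Article("B", 0.6), Article("C", 0.7)]
pool = [Article("Opposing view", -0.9), Article("Neutral", 0.0)]
print([a.title for a in diversify(feed, pool, user_leaning=0.8)])
# ['A', 'B', 'C', 'Opposing view']
```

The point is not these few lines; it is that choosing not to ship something like them is itself an ethical decision, made by humans.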

Facebook connections should be secure, yet HTTPS was rolled out only starting in 2011, whereas Facebook was born in 2004, when the technology had already been around for a decade. Our data on the platform should obviously not be used for the surveillance and tracking of protesters, and yet — even after the alleged "Facebook revolution" in Arab countries in 2011 — this was made explicit policy only in March 2017.

This does not mean Facebook is unresponsive to user requests; it is responsive. Zuckerberg is actually known for his adaptation skills — think of how nuanced his view of fake news on the platform has become, and how un-nuanced it was in the beginning — and he's quite good at politics, so good in fact that many speculate he will run for President sooner rather than later. The problem is that without conflict there is no change. And users have come to accept too much, and question too little, about Facebook and how it developed while it was conquering the world.

Is it because many of us are imbued with an ideology that assumes technological change should not, and cannot, be hindered by politics? Probably. Is it because being on Facebook is much easier and more seductive than reflecting on what it means to live a Facebooked life? Absolutely. My point here, however, is that what we are collectively failing to see is the gigantic transfer of power into the hands of human actors who are not traditionally concerned with deep ethical questions — programmers, engineers, designers, data scientists and so on — and yet are now at the very heart of the most fascinating and pressing ethical dilemmas we socially and individually have to face.

Technically skilled people with no background whatsoever in ethics are nonetheless quickly becoming the masters of ethical decisions. Decisions that are actually being made, on a daily basis, without any of us knowing what motivated them or how they are implemented.

Something is happening, however: a shift like this could not have gone completely unremarked. Research centers, institutions, experts, even press coverage are more and more dedicated to the thorny issue of ethics in AI, in fields as diverse as the workplace, the attention and sharing economy, copyright, surveillance, targeted profiling and propaganda. But we are barely scratching the surface. And we have not yet figured out how to use ethics as a tradition of thought to inform policies that better serve the public interest of the multitudes of users, rather than the private interest of the select few who happen to shape their experiences.

Do we want our government to be “data-based”? Would we want it even if it means less democracy in the name of more efficiency, as in Parag Khanna’s “direct technocracy”?

Do we want our cars to be self-driving, even if it means that someone, somewhere, will always know exactly where we are headed, or can hack into the engine remotely? How would the public react to a terrorist attack like the one claimed by ISIS in Nice, but in which the killer truck drives itself?

Voice computing is great — but what about having all of your conversations recorded and stored somewhere, for any jury to hear? It is already happening with Amazon Echo. Is the additional comfort worth the potential loss of freedom?

Algorithms should be fair, not objective — like us. But is their unfairness threatening the very notions of democracy and individual rights? Since we still lack a comprehensive picture of their impact on our lives, it is hard to tell whether we need mandatory legislation to moderate their power over us, or whether a milder approach based on self-regulation is enough.

Even with a clearer view of the negative effects of AI domination, it won't be easy to understand in which precise cases we should resort to the former and in which to the latter — especially since one-size-fits-all solutions are not in sight, and it is more than plausible that they can't exist at all. As EFF's Kurt Opsahl recently argued at RightsCon, it takes just a data ethics code of conduct to make us legitimately wonder whether its tenets would place excessive limits on programmers' freedom of expression.

So how can abusive algorithms be made human rights-compliant? What kinds of bodies should monitor their compliance and enforce those requirements? Do we need corporate "data ethics" officers, institutional boards of experts, independent auditing authorities, transparency-enabled crowdsourced watchdogs — or a mixture of all of these?
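
To give the auditing option some substance, here is a minimal sketch of one check an independent auditor might run: the "four-fifths" disparate-impact rule of thumb drawn from US anti-discrimination practice (the audit figures below are invented):

```python
def disparate_impact(outcomes):
    """Ratio of the lowest group's positive-outcome rate to the highest's.
    Under the 'four-fifths' rule of thumb, a ratio below 0.8 is a red flag
    for adverse impact (not proof of it)."""
    rates = [approved / total for approved, total in outcomes.values()]
    return min(rates) / max(rates)

# Hypothetical audit: (approved, applied) per demographic group.
audit = {"group_a": (80, 100), "group_b": (50, 100)}
ratio = disparate_impact(audit)
print(f"ratio = {ratio:.2f}")  # 0.62: below 0.8, so worth investigating
```

A check like this requires no access to the algorithm's source code, only to its outcomes, which is precisely why transparency about outcomes matters as much as transparency about code.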

Finally, what happens if and when AIs actually gain autonomy to a degree that makes their decisional intelligence effectively comparable to that of humans? Should we then be talking of "robot rights"? Should autonomous algorithms have an ethics of their own? Would we be able to recognize it if they developed a moral system by themselves? And how do we make sure their moral principles are compatible with human judgments about fairness, justice, prejudice, impartiality — even truth?

We have no answers to these questions and, what's worse, we have no ideology backing a critique of the fact that they are not even treated as questions at all. All we are left with is the Silicon Valley ideology, in which the only admissible ethics is the utilitarian principle: if it's faster, cheaper, easier, smarter, adopt it. In which it is markets that regulate technology, not bureaucracies. And in which, most of the time, the profits end up in the same hands: those of the technology's creators.

We should not accept this anymore. We should be trying to articulate an ethics for the automated era. Not just philosophers: programmers, designers and data scientists should cooperate in the effort. There's a long way to go. But we have a starting point: stop saying "Technology first". The rest will follow.
