Coding and Ethics — Why The Tech Market Needs Philosophers

Thiago S. B.
Published in Digital Diplomacy · 5 min read · Sep 5, 2020

“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.” (Isaac Asimov)

Nowadays, in a sudden emergency on the road while you’re driving, you have to trust your instincts when choosing, in a split second, whether to steer to the left and hit a cat or to steer right and hit a wandering child. With the introduction of self-driving cars on our streets, the programmers who write the code for these machines will have to tell the computer driving the vehicle which direction the car should turn and, consequently, which target it ought to hit.

Some clear moral and ethical implications already come to mind when thinking about this, because, when our instincts decide for us, it’s natural not to blame someone for the consequences of a split-second, mid-jump-scare act. But, for example, how do you define which poor pedestrian, animal or object on the sidewalk is more or less “hittable” for the Tesla that’s being programmed? Do you tell it to hit the victim that’s least likely to be killed by the accident, like a motorcyclist wearing a helmet?

Well, we still don’t have the answer to this question. And before you start trying to come up with an easy solution in your mind, say, steering toward the target that’s most likely to survive (e.g., a well-protected bus driver), try to really grasp the consequences of what’s being defined in the program: you’re basically making safety-conscious people less safe, and inviting a social pushback against personal protection, especially once self-driving cars become omnipresent on the roads.
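To make that trade-off concrete, here is a minimal, purely hypothetical sketch of what a “steer toward the most survivable target” rule might look like if someone actually wrote it down. The Target class, the choose_target function and the survival numbers are illustrative assumptions, not anything from a real autonomous-driving stack; the point is only that, once the rule is code, a helmet quietly turns its wearer into the preferred target.

```python
# Purely hypothetical sketch, not taken from any real self-driving system.
# It encodes the naive rule discussed above ("steer toward whoever is most
# likely to survive") and shows how that rule turns protective gear into a
# liability. All names and probabilities are illustrative assumptions.

from dataclasses import dataclass
from typing import List


@dataclass
class Target:
    description: str
    survival_probability: float  # assumed value between 0.0 and 1.0


def choose_target(targets: List[Target]) -> Target:
    """Naive 'minimize expected deaths' rule: steer toward the target
    with the highest chance of surviving the impact."""
    return max(targets, key=lambda t: t.survival_probability)


if __name__ == "__main__":
    unavoidable_collision = [
        Target("pedestrian on the sidewalk", 0.20),
        Target("motorcyclist without a helmet", 0.35),
        Target("motorcyclist wearing a helmet", 0.60),
    ]
    chosen = choose_target(unavoidable_collision)
    # Prints the helmeted motorcyclist: wearing protection made them the target.
    print(f"The naive rule steers toward: {chosen.description}")
```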

In this article, we are not going to talk about biased and otherwise harmful algorithms, because discriminatory outcomes in those cases are usually not moral dilemmas, but involuntary materializations of prejudices that already exist in our society.

Needless to say, like most lawyers, doctors and CEOs, programmers will need to have an ethical background for their work — or, worse, nobody has the background for those answers yet. The more disruptive technologies end up in the hands of our tech overlords and presidents, the more clearly we’ll see the lack of established moral principles in humanity’s playbook.

The infamous trolley problem, which has become a meme (and even a Twitter page), plays with the moral dilemma of who should be killed or saved by the person controlling the lever.

These discussions become critical when we start talking about workforce automation and A.G.I. (artificial general intelligence), which, in the future, may control everything from your pacemaker to autonomous killer drones. That’s why people like Max Tegmark created the Future of Life Institute (FLI), with the mission of “catalyzing and supporting research and initiatives for safeguarding life […]”.

The institute’s core team and volunteers are composed not only of natural science researchers and engineers, but also of philosophy and psychology brainiacs such as Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies. Max and his partners are actively trying to assemble an organization that will establish guidelines for future companies and solo adventurers, not just in the winding realm of A.I., but also in the climate, biotech and nuclear fields.

Of course, those kinds of technological moral guidelines don’t exist only in the programming world, and they are far from a well-established global consensus. When Paul Berg organized the Asilomar Conference on Recombinant DNA (or just the Asilomar Conference) in 1975, he went down in history for the ground-breaking consensus he and his colleagues were able to reach on such delicate, powerful, “god-like” methods humanity had gotten its hands on. But, at the time, of the 60 foreign scientists invited, none were Korean, Indian or even Chinese. To this day, the ethical discrepancy between East and West remains one of the great challenges the scientific and technological world faces. Currently, this difference can be seen in the gene-editing debates between Chinese and Western scientists.

(CBSN video) Chinese gene-editing researcher Lai Liangxue laughs when asked about American hesitance toward CRISPR technologies, namely the view that “that’s not for us [humans] to do, that’s for a higher power, God”.

Jumping to the corporate world, “practical philosophy” has been coming in handy for some big companies trying to make up for their mistakes by hiring ethics professionals to rewrite their codes of conduct. In 2010, the oil company BP hired Roger Steare, a self-described corporate philosopher who works by giving ethics seminars and asking pertinent questions of a company’s executive board. In this case, following BP’s huge oil-spill scandal, Roger incorporated an ethical decision-making framework into the corporation’s code.

But if you’re like most people, you’d normally trust government regulation, especially on topics that the majority of the population just isn’t qualified to weigh in on. Well, it turns out you can’t rely on that either (at least not on the current American Congress).

In recent years, we’ve seen names like Facebook’s Mark Zuckerberg, Amazon’s Jeff Bezos, Apple’s Tim Cook and other big tech heads summoned to important congressional antitrust hearings to testify about the suspicions people had about their companies — including one session that had all of them simultaneously. And the only thing those meetings really clarified was the complete lack of technological familiarity of most members of Congress: a legit boomer show. Well, you couldn’t expect more from ’60s political gargoyles, but in the context of the problems we face today, episodes like this show how tragic the situation can become, largely because these are the people running a country whose decisions set much of the world’s code of conduct on innovation.

As of today, we are in the midst of a technological race: private companies and governments are reaching the pinnacle of hyper-innovative and disruptive technologies, just as we witnessed with atomic bombs and rocket science in the past. Then and now, most humans can’t comprehend the inner gears of these machines, so how is it reasonable to expect us to understand their moral consequences? Well, just as we hire brewing consultants when our industrial beer-making machinery isn’t working properly, we should hire philosophers for our eventual professional trolley problems, or face the consequences.
