Ethical Technology: How machines are reminding humans that they should care

Today’s algorithms are empowering computers to collect, learn from and analyze vast quantities of data. This will change society and the way humans behave.

Mercedes Thomas
HackaMENA
18 min read · Dec 26, 2018


Universities are now trying to catch up with the rapid development of machine learning. Future professionals in the field need to understand the potential consequences of the changes upon us.

“From racist ads to Cambridge Analytica, technology’s ethical deficit needs disrupting fast — and we have the power to do it,” Lizzie O’Shea, The Guardian.

Ethical technology is the meeting point of computing and moral ethics: it asks how computers and technology are handled, and whether the result and purpose of a machine make a positive impact that helps the world move forward, or whether it is just a tool for the benefit of a few.

Examples of unethical tech and its consequences

Uber’s massive growth in the months after its launch made the company the epicenter of many legal battles and scandals. In 2014, its valuation went from under one billion dollars to about $40 billion.

A congressional inquiry was then sent to Uber regarding its privacy practices, and the senator behind it declared that Uber’s answer to the letter was disappointing and lacked direct answers.

The consequences of Uber’s privacy missteps ranged from Nevada becoming the first US state to shut Uber down, to hundreds of taxi drivers across Europe going on strike and blocking main streets in protest.

Last year, RadioLab explored a new type of technology that allows video to be edited: “a wonderland full of computer scientists, journalists, and digital detectives forces us to rethink even the things we see with our very own eyes.”

One segment was dedicated to a neural network that spent hours studying footage of President Obama and was ultimately able to create ‘fake’ video material out of thin air.

What’s more, when one of the researchers who took part in this experiment was asked about the project, she answered that her job is only to create, not to consider the consequences of whatever project she is assigned.

This type of thinking is not the engineers’ fault; it stems from the lack of education and conversation in the public discourse regarding these matters. It is everyone’s responsibility to make the best use of the devices we use or have the power to create.

The core issue lies in how computing power is used and whether it breaches the moral values, privacy or beliefs of any individual, group or entity.

One of the current struggles is separating right from wrong: someone’s good might be someone else’s bad.

There are hundreds of questions arising from this single topic, as many as there are tech fields: Artificial Intelligence, healthcare, science, physics, VR, AR, IoT, and the list goes on. So let’s drill down into the five big questions: who, what, why, when and how.

WHO

Who gets to define these ethics? Or, who are the right people?

We have plenty of options, experts and organizations that could feel like the right choice: scientists; investors and entrepreneurs; lawmakers and governments; the public; and major companies.

1. Scientists: they spend years, even decades, searching for solutions with a practical objective: to upgrade life as we currently know it. They continuously seek to gain knowledge and experience.

Many inventions across history strongly prove how in-depth research brings amazing creations that improve life. Today we can name the basics, from everyday things like smartphones, computers, microwaves and vacuum cleaners, to airplanes and WiFi.

Scientists have also made great contributions: take, for instance, the polio vaccine by Jonas Salk, which helped more than 28 million cases around the world drop to just 22 today. Today we also have the birth control pill, the pacemaker (a device that regulates heartbeats), bulletproof vests, coronary bypass surgery, MRI, DNA fingerprinting and HIV protease inhibitors, just to name a few.

According to Forbes’ list of 50 groundbreaking scientists, there are several who are changing the world as we see it with the power of technology and engineering.

Just to name a few: Abe Davis worked on an algorithm that can extract audio from a silent video. Bertolt Meyer used technology to dismantle the stereotypes surrounding people with disabilities. Hugh Herr created bionic limbs for amputees, including himself. Nat Turner and Zack Weinberg developed software that connects cancer patients to treatments.

But what if now, with all the advanced resources, science has gone too far? Take, for instance, the recent story of ‘designer babies’.

This latest movement in the biotech field has raised many eyebrows. If we are already playing with the natural human life cycle, what will come next? According to this new scientific movement, we can now design our own baby, the same way you would design the look of your living room or the outfit to wear for a special occasion.

How will relationships and hierarchies across different cultures be affected by introducing ‘artificially altered’ babies? If it is already possible to choose the looks of a human being, the next step could be to alter the brain as well.

The drive to create the perfect human being could easily become a top priority in science and, eventually, for the public. Would older generations then be marginalized? The same way older generations of machines become irrelevant as new, upgraded devices come to market, would older, ‘outdated’ versions of human beings become irrelevant as well?

Most likely, whenever this human editing reaches the market, it will initially be financially accessible only to the rich, tending to create an imbalance in society and in human social classes as we know them.

Currently, CRISPR editing of this kind is banned in Europe and only legal in China. Will this new type of human being be compared to the Marvel superheroes of our childhood? Superhumans versus ‘normal, organic’ humans. How would we teach ethical and moral guidelines to these new humans when their very origin has already been edited beyond what we know so far?

These are the kinds of ethical questions that experts working on human editing and human health should keep at the top of their minds in order to avoid losing control over the situation. Today’s advances in the field could become either tomorrow’s blessings or tomorrow’s nightmares.

Another question is: who is funding science today? In one way or another, everyone. The majority of scientists’ funding comes from government grants, big research organizations (e.g., national health institutes) and non-profit foundations (e.g., the Breast Cancer Foundation).

In consequence, the public indirectly supports these funds by purchasing products, using services, paying taxes and donating to charities. This funding, regardless of its source, creates biases.

Scientists are clearly dedicating a major part of their time to human editing and biology. If they are going to be the first to create new rules that affect the human body, shouldn’t we all have a say? It is great to have reached such an advanced stage in these research fields.

On the other hand, what is the point of having all this advanced technology at hand to discover and access new aspects of life, if the discoveries of a few will most likely affect many, and not in everyone’s best interest?

Scientists, as discussed later in this article, need to take ethics courses, especially when sponsored research is aimed not at improving human life as a whole but at the advantage of a closed group. Human morals are crucial in the field of science and must be taken into consideration more seriously before drastic measures have to be enforced or undesired results appear.

2. Investors and entrepreneurs: they either create and bring to market devices that aim to make life easier and fairer for everyone, or simply seek to become the next big giant with their own agenda.

Entrepreneurs are sometimes limited by the investors willing to sponsor them, which makes it harder for these independent minds to get out there and compete with the bigger companies.

However, as more tech creates new ways to see many aspects of life, it also allows people to widen their expectations and possibilities. In 2018 alone, hundreds of inventions from entrepreneurs shed light on how one can set a positive example in the industry: a floating AI assistant for astronauts in space, solar-powered planes, gravity jet suits, AR and VR glasses, exoskeletons and rescue drones.

Startups are also working on ethical approaches. Factmata hopes to build a ‘better media ecosystem’ by pairing AI with human judgment to help media companies analyze, detect and tackle fake information and hate speech online, using algorithms that hand control back to the public and invite users to correct mistakes.
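
To make that human-in-the-loop pattern concrete, here is a minimal Python sketch: the model labels only the cases it is very confident about and routes uncertain ones to human reviewers. The stand-in classifier, thresholds and labels are hypothetical illustrations, not Factmata’s actual system.

```python
# A rough sketch of human-in-the-loop moderation (hypothetical, not Factmata's system).
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    score: float = 0.0          # model's estimated probability of harmful content
    label: str = "unreviewed"   # final label once the pipeline has run

def classify(post: Post) -> float:
    """Stand-in for a trained text classifier."""
    suspicious = ("miracle cure", "they don't want you to know")
    return 0.9 if any(phrase in post.text.lower() for phrase in suspicious) else 0.1

def moderate(posts, auto_threshold=0.95, review_threshold=0.5):
    """Auto-label confident cases; route uncertain ones to human reviewers."""
    review_queue = []
    for post in posts:
        post.score = classify(post)
        if post.score >= auto_threshold:
            post.label = "flagged"        # confident enough to act automatically
        elif post.score >= review_threshold:
            review_queue.append(post)     # uncertain: a person makes the call
        else:
            post.label = "ok"
    return review_queue

posts = [Post("A miracle cure THEY don't want you to know about!"),
         Post("The weather is lovely today.")]
queue = moderate(posts)
print(len(queue), "post(s) sent to human reviewers")  # 1
```

The reviewers’ corrections can then flow back as fresh training data, which is the ‘inviting users to correct mistakes’ part of the idea.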

3. Lawmakers and governments, whose aim is to protect the public, and who should hence become familiar with how tech is continuously evolving, how it influences society and life, and how to plan for the future in order to make the best of it.

This topic has been gaining more visibility over the last year; see, for example, the newly established European GDPR regulations, or the American ProPublica (a newsroom aimed at exposing abuses of power and betrayals of the public trust by government, business and other institutions).

The US Department of Defense, for instance, is investing in explainable AI (XAI), with the purpose of producing “glass box” AI models that are decipherable in real time and can point out whenever an AI’s performance might be untrustworthy.

The concept itself sounds great, but because it is being developed by such an official organization, the products may be limited once released, and the investment could be steered in other directions. So we have yet to see how this will be implemented and who will have access to XAI.
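
To picture what a “glass box” could look like, here is a minimal sketch of a model whose prediction decomposes into per-feature contributions and which flags its own output when a noise feature dominates. The features, weights and flagging rule are assumptions made for illustration; the real XAI program targets far richer model families.

```python
# A minimal "glass box" sketch: every prediction is explained feature by feature.
feature_names = ["speed", "distance_to_obstacle", "sensor_noise"]
weights = [0.8, -1.2, 2.0]   # assumed learned coefficients
bias = -0.5

def predict_with_explanation(x):
    contributions = [w * v for w, v in zip(weights, x)]  # each feature's share
    score = sum(contributions) + bias
    explanation = sorted(zip(feature_names, contributions),
                         key=lambda kv: abs(kv[1]), reverse=True)
    # Flag the decision when the noise feature outweighs the overall score.
    untrustworthy = abs(dict(explanation)["sensor_noise"]) > abs(score)
    return score, explanation, untrustworthy

score, explanation, warn = predict_with_explanation([1.0, 0.5, 0.9])
print(f"score={score:.2f}  untrustworthy={warn}")
for name, contribution in explanation:
    print(f"  {name:>20}: {contribution:+.2f}")
```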

However, government entities are often accused of being outdated when it comes to new technological developments, leaving them unable to grasp the ethical implications and the actions needed to provide value for society. Take, for example, the hearing where members of Congress asked Google’s CEO why searching the word ‘idiot’ returned ‘Trump’ as a result.

Every time you visit a feed on social media, the content you see is not there by chance. Even though each platform prioritizes content differently, they all have a common goal: to get you to spend as much time as possible on them.
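
As a toy illustration of that goal, consider a feed that ranks items purely by the time they are expected to keep you on the platform. The fields and numbers below are hypothetical; no platform publishes its real formula.

```python
# A toy sketch of engagement-driven feed ranking (hypothetical scoring).
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    p_click: float            # predicted probability that you engage
    expected_seconds: float   # predicted time spent if you do

def rank_feed(items):
    # Sort by expected time-on-platform, not by usefulness to the viewer.
    return sorted(items, key=lambda i: i.p_click * i.expected_seconds, reverse=True)

feed = [
    Item("Long outrage thread", 0.6, 300),
    Item("Friend's graduation photo", 0.9, 15),
    Item("Useful 30-second tutorial", 0.5, 30),
]
for item in rank_feed(feed):
    print(item.title)   # the outrage thread wins: it maximizes expected watch time
```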

But is that what you want? If the answer is no, then you first need to get to know these platforms, so that you can control the type of content you see and, finally, the amount of time you are willing to spend on each of them and, overall, in front of a screen.

This takes us to the next social group, which is also the majority: 4. The public, or in other words the consumers, who actually use the technology itself. Lately, this group has been taking more control over how its data is stored and used.

In order to plan for the future, the first natural instinct is to learn from the past. When people think of the term ‘machines’, they do not associate it with feelings; the reaction is neutral. You can’t blame machines for undermining someone’s integrity or privacy; it is how a user makes use of a machine that affects other people.

“We become what we behold. We shape our tools and thereafter our tools shape us.” McLuhan, ‘Understanding Media’.

However, machines are not neutral. We define and carve out our lives according to how we choose to use them. Today’s gadgets will affect everyone’s privacy, safety and well-being in one way or another. The public must decide whether to take the lead, by learning and becoming aware of what tech can actually do and where it is heading, or to let others choose for them.

There are many resources to learn from, from books like ‘Future Ethics’ by Cennydd Bowles to organizations that directly address the importance of awareness of ethical tech, like the Online Ethics Center.

Of course, every single person should have their own say and vote on how they want technology to affect them, their families and their future, but it is also very important to be informed and educated before taking action.

5. Major companies and businesses. Major companies inspire people’s confidence, as they represent successful groups that serve the public and have the power to raise a voice and take the lead in innovation.

For example, take Google’s DeepMind, which set up its own motto as “helping society understand the impact of AI technology”. This is an example of taking the lead in introducing advanced, futuristic technology that will be part of all our lives, and of making such a topic easy for everyone to understand.

On the other hand, Google’s 2018 diversity report (US-based) shows that its diversity improvements over the last year haven’t made any big, impactful changes: women make up 30.9% of its workforce and remain underrepresented in leadership and technical roles, while the share of Black and Native American employees has grown by less than 1% over the past four years.

Human bias will not always be accurate or fair. In the same way, an algorithm built with artificial intelligence can also err.

There is also LinkedIn’s new feature that uses AI to help companies find more diverse pools of candidates and tackle the gender gap. The tool was announced shortly after Amazon had to shut down its own AI recruiting tool because it showed bias against women.

“Say I search for an accountant, and there are 100,000 accountants in the city I’m looking at. If the gender breakdown is 40–60, then what Representative Results will do is that no matter what happens in our AI, the top few pages will have that same 40–60 gender breakdown,” said John Jersin, VP of Product Management for LinkedIn Talent Solutions.
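
Here is a minimal Python sketch of that proportional re-ranking idea: whatever order the relevance model produces, each page of results is rebuilt to mirror the group breakdown of the full candidate pool. The quota logic and the sample data are illustrative assumptions, not LinkedIn’s actual implementation.

```python
# Proportional re-ranking sketch (illustrative; not LinkedIn's implementation).
from collections import Counter, deque

def rerank_representative(candidates, page_size=10):
    """candidates: (name, group) pairs, already sorted by relevance."""
    pool = Counter(group for _, group in candidates)
    total = len(candidates)
    # Per-page quota for each group, proportional to the whole pool.
    quota = {g: round(page_size * n / total) for g, n in pool.items()}
    queues = {g: deque(c for c in candidates if c[1] == g) for g in pool}
    pages = []
    while any(queues.values()):
        page = []
        for group, q in quota.items():
            for _ in range(q):
                if queues[group]:
                    page.append(queues[group].popleft())
        if not page:
            break
        pages.append(page)
    return pages

# A 100-candidate pool with a 40-60 gender breakdown, as in the quote.
pool = [(f"cand{i}", "women" if i % 5 < 2 else "men") for i in range(100)]
first_page = rerank_representative(pool)[0]
print(Counter(group for _, group in first_page))  # Counter({'men': 6, 'women': 4})
```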

Progress on diversity in hiring across companies, especially in the tech industry, is slow. So, can AI make the gender gap at work disappear?

These projects initiated by big companies, in which machines are directly involved and allowed to make decisions, may start with a clever, honorable goal: to advance humanity and amplify possibilities.

Nevertheless, machines don’t feel. They learn from the data that we feed them. To tackle issues like diversity, the people feeding data to the AI should themselves be diverse and come from different backgrounds.
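
A toy example of what ‘learning from the data we feed them’ can mean in practice, using entirely made-up and deliberately skewed hiring records: a naive model ends up scoring identical candidates differently depending only on their group.

```python
# How a model inherits bias from its training data (hypothetical records).
from collections import defaultdict

history = [("A", 1), ("A", 1), ("A", 1), ("A", 0),   # group A: mostly hired
           ("B", 0), ("B", 0), ("B", 1), ("B", 0)]   # group B: mostly rejected

outcomes = defaultdict(list)
for group, hired in history:
    outcomes[group].append(hired)

# "Model": score a candidate by their group's historical hire rate.
score = {group: sum(h) / len(h) for group, h in outcomes.items()}
print(score)  # {'A': 0.75, 'B': 0.25} despite identical qualifications
```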

Who should implement ethics and teach machines how to ‘feel’?

It is important to keep researching and to stay up to date in order to properly analyze and understand the repercussions new or upgraded tech can have on our lives. Initially, it might seem to make sense to let a leading tech company, such as Google, IBM or Microsoft, take over and define what ethical tech is. However, it would always remain somewhat unclear who exactly is taking the lead within these organizations.

Ultimately, the answer lies with everyone: tech affects us all equally, so the wise choice is not about which particular group of people is right for the job. Knowledge is power, and in order to take the right steps, everyone needs to be aware of how tech affects people, especially when it comes to personal data.

Today, a like on Facebook can reflect who you are: whether you get that job interview, whether people around you will like you. The people you follow online also create an image of your personality, whether it is true or not.

People with malicious intentions can find you, visit your friends' profiles and impersonate you. And the personal data you enter when creating a profile online can, and will, be accessed by others.

Even more important is making good use of the new information discovered in studies, so that everyone can make their own decisions: understanding the ups and downs of introducing new technology into their lives, and the good security practices to follow.

How can we ensure computers are doing the best for humanity?

ARTIFICIAL INTELLIGENCE

AI will not only impact the global economy by trillions of dollars; it will also lead the way in creating big leaps forward for the world as we know it. What’s more, most of us can likely agree that such a powerful tool deserves to reach the heights it should.

But how can we ensure that it actually becomes a source of positive change that will solve global problems?

Machine learning is allowing computers and everyday devices to complete daily tasks for us, from self-driving transportation to scanning CVs and recruiting on our behalf. As more jobs become obsolete and technology more predominant, it is more crucial than ever to restructure the education system and the recruiting and hiring process.

Even though AI can perform exceptionally well compared to humans at some tasks, it still lacks common sense and intuition. AI can collect, analyze, provide input and recall information learned over time, but humans can relate and sympathize with other individuals, situations or groups.

Many companies will still rely on artificial intelligence to make decisions for them, affecting the job market scenario of the upcoming decades.

The biggest opportunity in artificial intelligence is “…not machines that think like us or do what we do, but machines that think in ways we cannot conceive and do what we cannot,” says Dr. Radhika Dirks, CEO of XLabs.

AI is built upon the information and data fed to it by humans, which limits its behavior to that data. Unfortunately, society is biased, and people will ultimately act on whichever outcome benefits them most in each situation.

This leads to algorithms providing search results and information based on prejudices and cultural preferences.

Timnit Gebru, founder of Black in AI, a group that aims to brainstorm, collaborate and discuss initiatives to increase the presence of Black people in the field of Artificial Intelligence, attended her first ever AI conference, where only five out of a thousand attendees were Black.

She made a strong statement pointing out that white male engineers from high socioeconomic backgrounds won’t, on their own, be able to solve human issues they have never experienced.

Another example comes from Dr. Fei-Fei Li, the well-known scientist running the Stanford AI Lab, who stated that research has shown time after time that diverse teams create the most creative solutions to problems.

“AI will change the world. But who will change AI?” reads the motto of ‘AI4All’, a nonprofit founded by Fei-Fei Li that aims to bring women and people of color into the field.

Even though these two examples are very inspirational, there is still another obstacle for AI today: big companies are buying up the smaller ones.

Encouragingly, big names like MIT and IBM are partnering on a 10-year, $240 million joint research investment to advance AI hardware, software and algorithms. Nonprofits like OpenAI are also taking a stand, charting and defending the path to safe general AI.

All in all, it can be said that where there is data, there is hope. Using the power of AI, companies and startups are already attempting to solve humanity’s crucial problems.

WHAT

What’s the importance of ethical technology?

“Silicon Valley has an ethos: build it first and ask for forgiveness later,” says Natasha Singer of The New York Times.

In this era where it feels like everyone needs to know how to code, the universities teaching and shaping the technologists and scientists of tomorrow seem to be realizing how little is taught about morals and about emphasizing the potential harm technology can bring to humans.

“We need to at least teach people that there’s a dark side to the idea that you should move fast and break things,” said Laura Norén, a postdoctoral fellow at the Center for Data Science at New York University. “You can patch the software, but you can’t patch a person if you, you know, damage someone’s reputation.”

Nowadays, educational institutions should focus on building an understanding of the ethical issues related to technology in order to be recognized and accredited by institutions like ABET, a global accreditation group for university science and engineering programs.

Compared to doctors or other professionals responsible for people’s well-being, software engineering students’ “daily interaction with physical harm or death or pain is a lot less if you are writing software for apps,” said MIT Media Lab’s director, Joi Ito.

WHY

Why is it everyone’s duty to care about ethics being implemented in the tech industry?

Traditionally, engineers never had to reflect on how their work might negatively affect others. Ethics was someone else’s concern, while their job remained building things and fixing bugs. Now, coders need to be able to see the unseen, plan ahead and understand how their code will affect others.

Again, Google and Facebook have been under the spotlight in cases where algorithms, in one way or another, target or encourage racist behavior: Google letting advertisers target people searching racist phrases, or Facebook enabling advertisers to reach ‘Jew haters’.

WHEN

When is the right time to start taking action?

The future of tech is certainly far from fixed. Merely naming the problems will not be enough; people with the power, knowledge and/or influence to make a change should bring these problems, along with their solutions, to the fore.

We can all point at each other when it comes to assigning responsibility, but in the end there is a very simple fact: whether you are a scientist, an entrepreneur, the CEO of a tech or non-tech company, an engineer, or simply someone who enjoys technology in daily life, we are all responsible in one way or another.

HOW

How do we implement it in real, daily life?

It is about whether you want to make a positive impact, but also about taking into account that, now more than ever, we are all connected and form one big community across the world. Information travels at the speed of light and can reach almost every corner of the planet.

One of your tweets can make someone else’s day on the other side of the globe, or a like on Facebook can reflect poorly on your professional career.

Self-driving cars, food and shopping delivery apps, and biotech research projects are all great and innovative ideas, but they might not work out for the best for everyone.

Big bodies and entities with wide networks and an international voice, such as governments or the United Nations, should carry the message.

Engineers are at the core of machines; they make them ‘think’ and ‘do’. But we now live in a different era, where it is also important for developers and coders to make them ‘feel’.

And every single person who has access to at least one machine should reflect on how far the ‘butterfly effect’ goes when we choose to use technology.

We are all now under a common umbrella: to fight trolls and bugs. We don’t need better engineers, better leaders or better products. We need more education.

It is ironic that nowadays machines are the ones reminding us that we are moving forward without reflecting on the changes happening around us. What’s the point of rushing ahead when the repercussions of our digital lives are already making things messier?

Perhaps if we took a couple of steps back to look at the whole picture, we would realize that the lack of ethics in technology stems from a lack of care and compassion among humans, not machines. It is only through the results of machines’ actions that we are starting to grasp that we might be going too fast.

Dieting is not for everyone, but online dieting, or ‘digital nutrition’, is a different matter.

Today we humans are feeding ourselves data: music, TV, games, movies, news. So, the same way we care about our health (the hours we sleep, the food we eat, the clothes we wear, the people we hang out with), it is also time to care about the technology we feed ourselves.


The good news is that individual consumers and the public as a whole can take control over how technology affects them and become ethical tech consumers.

The first step people are taking, and that any individual can opt to take as well, is to find a new service provider: if an app, product or company is not respecting their privacy or standards, they move to another one. They also search forums for people who have gone through a similar situation to see how they moved on.

The next step is to slow down tech consumption. The same way people cut sugar out of a diet, reflecting on your personal tech consumption and on what you are willing to give up or change will make the difference.

What’s more, researching the background story helps you find out a product’s business model, who makes it and how they are selling it.

All in all, think about your friends and family. If you have ever used Facebook, you know that when you like a page, it shows on your profile, and the people within your network will eventually bump into the things you have “liked” and might follow them or judge you by them. Some of these pages and sites you like also get access to your network and may manage to find and contact your connections.

At the end of the day, remember that it is always your choice to leave. If you don’t feel confident or safe using a certain app, product or service, leave it and pass the message on to your circle.
