Ethics in Technological Innovation

The age of hyper-innovation

Why should innovation concern itself with ethics? Ethics is about everything that can promote or hinder human wellbeing, and technological innovation in particular has the potential to change human welfare fundamentally, by either perpetuating existing inequality or creating a better future for all.

How much is too much?

As the world moves towards a more liberal mind-set, with a focus on individual rights and liberties, free choice, and free enterprise, the question arises: should societal values supersede those of individual choice? In other words, should the options offered by technological advances be open to individual choice, as long as nobody other than that particular individual is affected?

All together now

The ethical issue we face concerns not only individual technological innovations, but also linked systems, which create a more complicated mixture of risks and benefits to be assessed. As creators, we should approach the management of innovation with the utmost care and respect for future generations, and indeed for the future of humanity.

I think therefore I am

Neuralink Corporation, founded by Elon Musk and others, is a neurotechnology company developing implantable brain–machine interfaces (BMIs). Musk has explained that the long-term goal is to achieve “symbiosis with artificial intelligence”, while the short-term goal is for you to use your Neuralink to operate your iPhone. Undoubtedly, such technology will have enormous benefits for people who are physically impaired, whether through accident, birth or disease. As a science, the brain–computer interface has existed for several decades already.

By itself, a brain–machine interface whose purpose is to restore sensory and motor function and to assist with the treatment of neurological disorders is supportable. The important ethical question that arises is whether it is morally acceptable to drill into the brains of humans and animals in an attempt to merge them with artificial intelligence. In the future, the words “I’ve been hacked” could also take on a very different, very personal meaning to what they have today. Other issues include animal/human neural interfaces, subliminal messaging, or even something as tawdry as creating “advertising space” in chips for a hostage consumer.

The consequences of a neural chip being implanted into a human brain require reflection not only on the unintended consequences of such implants, but also on the future of what it means to be human. At least we can look forward to no longer asking “now where did I leave my phone…?”


CRISPR and similar gene-engineering and gene-editing tools open up a Pandora’s box of ethical and individual-choice issues. Of course, we understand the technology and the reasoning that allow gene editing, in utero, of babies who have genetic diseases or conditions, which is already being done.

With gene-editing innovations, why should it not be the choice of parents to do everything they can to ensure that their child succeeds in life, including elevated intelligence, stunning good looks, perfectly symmetrical features, and above-average height? After all, we can already choose the sex of our children before they are born, if we want. Is that choice really any different from the one just described?

A further possibility of gene-editing technology is that of human/animal hybrids. The idea of being able to breathe underwater like a fish, without any cumbersome SCUBA gear or having to snorkel, could appeal to some people who love the ocean. They are not hurting anyone, will have a pair of hidden gill flaps behind their ears, and can indulge their passion for viewing sea life.

It is, however, very easy to slide down the slippery slope of deciding what is acceptable and what is unacceptable based on personal moral values, or even current societal values.

A similar case can be made by someone who enjoys cage fighting and decides on a gorilla/leopard modification, either genetically or through physical augmentation: is our revulsion at this idea based on our personal moral values, on societal values, or on a refusal to support the individual’s right to choose? And are we equally repulsed, or perhaps less so than 50 years ago, by the idea of injecting litres of silicone into our bodies, or of using performance-enhancing drugs or steroids to bulk up performance and bodies?

Did we not, as recently as 54 years ago, believe that a person is alive because their heart is beating? Once Dr Christiaan Barnard performed the world’s first heart transplant, we could safely say that a person is alive even though another person’s heart is beating in their chest. 3D-printed hearts are already in development and should be available within the next 10 years. Does having a 3D-printed heart in your chest mean that you are no longer human? How about a 3D-printed brain with a neural interface to store the “you” software after you have died?

What about societies divided into those who are linked to a neural network, and those who are not?

The question we should be asking is whether we are allowing genetic manipulation merely for vanity, or to entrench superiority, or only where it adds positively to the wellbeing of humanity.

Body modification has been around for thousands of years. Modification itself is perhaps not the issue; whether to allow human beings to be modified to the extent that they are no longer sufficiently human is what requires debate. That debate could in turn raise other issues, such as whether it is ethical to discriminate against someone (something? some “it”) whom we deem to be insufficiently human.

In addition, we need to consider whether the use of a technology will perpetuate inequality or promote human welfare.

Creating ethical tech

To create ethical technology, we need to appreciate that human bias exists. Human prejudices and biases embedded in AI systems are often referred to as algorithmic bias. This bias can take the form of, inter alia, gender, race, and age discrimination. For example, Amazon stopped using a hiring algorithm after finding that it favored applicants on the basis of words like “executed” or “captured”, which were more commonly found on men’s resumes.

On June 30, 2020, the Association for Computing Machinery (ACM) in New York City called for the cessation of private and government use of facial recognition technologies due to “clear bias based on ethnic, racial, gender and other human characteristics.” The ACM said that the bias caused “profound injury, particularly to the lives, livelihoods and fundamental rights of individuals in specific demographic groups.”

This human bias extends not only to discrimination against human beings, but also to issues like sampling bias, temporal bias, and bias against outliers.

Strategies for creators to deal with human bias include an inclusive design process that considers groups diverse in race, class and culture. Predicting the impact of an AI system, now and in the future, should include foreseeability testing. User testing of the system should, again, involve groups as diverse and representative as possible.

Researchers have developed technical definitions of fairness, such as requiring that models have equal predictive value across groups, or that they have equal false positive and false negative rates across groups. This approach, however, leads to a significant challenge: different fairness definitions usually cannot be satisfied at the same time.
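To make these definitions concrete, here is a minimal sketch, using toy data and two hypothetical demographic groups (the groups, labels, and predictions are invented for illustration), of how one might measure two of the fairness criteria mentioned above: predictive value (PPV) and false positive rate (FPR) per group. When the groups have different base rates of the positive outcome, the measured values typically diverge on one criterion or the other, which is the tension described in the paragraph above.

```python
def confusion_rates(y_true, y_pred):
    """Compute positive predictive value (PPV) and false positive
    rate (FPR) from parallel lists of true labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    ppv = tp / (tp + fp) if (tp + fp) else 0.0  # "equal predictive value"
    fpr = fp / (fp + tn) if (fp + tn) else 0.0  # "equal false positive rates"
    return ppv, fpr

# Toy labels and model predictions for two hypothetical groups with
# different base rates of the positive outcome (3/6 vs 1/6).
group_a = ([1, 1, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0])
group_b = ([1, 0, 0, 0, 0, 0], [1, 1, 1, 0, 0, 0])

ppv_a, fpr_a = confusion_rates(*group_a)
ppv_b, fpr_b = confusion_rates(*group_b)

# Neither criterion is satisfied here: the groups differ on both
# PPV (0.67 vs 0.33) and FPR (0.33 vs 0.40).
print(f"group A: PPV={ppv_a:.2f}, FPR={fpr_a:.2f}")
print(f"group B: PPV={ppv_b:.2f}, FPR={fpr_b:.2f}")
```

An auditing process would compute such rates for every demographic group in the test data and then decide, as a policy question rather than a technical one, which disparity matters most for the application at hand.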

Many creators find the STEEPV (Social, Technological, Economic, Ecological, Political, Values) framework useful for detecting fairness and non-discrimination risks in practice.


At the rate at which technology is advancing, and given the choices it brings, there is no doubt that human beings will have to reassess their personal value systems, how we want our societies to function, and what we value in our ethical frameworks as societies.

Technology and innovation have no moral or ethical qualities of their own. As creators and users, we decide the ethicality of what we create and of how we use technology and innovation. We have the power to decide whether a particular innovation is contextually right or wrong, and to ensure that our creations are as free from bias as possible. It is a weighty responsibility, one that decides issues of the future including equality, choice, fairness, privacy, security, accountability, and the ethical foundations of our human society amid its rapidly evolving technological advancement.
