AI Ethics: What You Need To Know

Constellation AI
Nov 6, 2018 · 6 min read

Future-proofing innovation, with Joanna Bryson

School of AI Director Siraj Raval recently tweeted: “I’m not afraid of super-intelligent AI, I’m afraid of humans using AI to exploit other humans. Technology is like fire, it can be used to either burn us or give us warmth”. We are in agreement: this is a pivotal moment for AI ethics. As technology evolves at exponential speed, who exactly is monitoring the frameworks that protect us from harmful misuse of AI? The UK Government “considers the economic, ethical and social implications of advances in artificial intelligence”, but is their strategy comprehensive enough?

Although basic liability has been established, legal precedent and regulatory competence are still being defined. The responsibility therefore falls on AI and tech companies to safeguard against products that might compromise human rights, safety or privacy. Artificial intelligence creators need to keep the ethical implications of their products on society at the forefront of their process.

We met with Professor Joanna Bryson — AI ethics authority, robot builder and general artificial intelligence mastermind — to highlight key areas of focus around the ramifications of emerging innovation. Here are our takeaways.

CYBER-SECURITY is paramount.

Data is deeply personal. We would not want others to access the digital ‘model’ that defines us, such as our Facebook or Apple ID account; it must therefore be fortified. There are obvious concerns in collecting huge sets of information. As Joanna commented: “Data needs to stay on its owner’s machines”. What does this mean, practically? Legal frameworks with a heavy focus on AI ethics must protect this data from being traded as an asset. Companies need to carry out risk assessments, reinforce their servers and take every precaution to ensure their cyber-security holds.
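To make the on-device principle concrete, here is a minimal sketch of federated averaging: each device trains on its own data and shares only model updates with an aggregator, so raw data never leaves the owner’s machine. The function names, learning rate and toy data below are our illustrative assumptions, not a description of any particular company’s system.

```python
import numpy as np

# Minimal federated-averaging sketch: each "device" trains locally on its
# own data and shares only updated weights; raw data never leaves the device.

def local_update(weights, X, y, lr=0.1):
    """One gradient-descent step of linear regression on one device's data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(weights, devices):
    """Average the locally updated weights; the server never sees X or y."""
    updates = [local_update(weights, X, y) for X, y in devices]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three devices, each holding private data that stays local.
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

weights = np.zeros(2)
for _ in range(100):
    weights = federated_round(weights, devices)

print(weights)  # approaches [2.0, -1.0] without ever pooling the raw data
```

The design choice worth noting is what crosses the network: only the averaged weights, never the underlying records, which is one practical reading of “data stays on its owner’s machines”.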

Understanding PSYCHO-SECURITY

“As we’ve seen, a small number of influential people can alter an election”. Joanna used the Cambridge Analytica scandal as an example of breached psycho-security: “Predicting people with AI means there is a danger of them being manipulated. Dissuading people from voting sways the vote through disenfranchisement, and the dangers of this are huge”. Questions surrounding who owns the user’s data are necessary groundwork for protecting the user, and so is the psychology behind how that data is used. So, the question must be: is there an ethical filter? As people build their own psyches into general models, Constellation AI recommends that a responsible principle would be for companies to work with psychologists and other professionals on how their AI responds to those models.

Joanna Bryson visiting our London HQ

Communicating TRUTH

In June, Google released a list of ethical principles for its future use of artificial intelligence, following its decision not to renew a Pentagon contract that used its technology for military drones. This ethical framework, however, raised questions about how Google was already safeguarding its technology across all sectors. Google Duplex recently showcased an uncannily life-like AI assistant that could make reservations on your behalf. The agent wasn’t upfront that it was, in fact, AI, and it left many feeling uneasy. Was this a breach of trust? The internet search behemoth is widely regarded as having the world’s most advanced AI, and the move to selectively highlight only some of its ethical complexities might just be too little, too late to maintain trust. What can we learn from this? The ethical structure needs to be in place from the beginning. It’s an important conversation, and Constellation AI is taking steps to build that foundation.

Actively removing BIAS

The recent news story of Amazon’s abandonment of its hiring algorithm shows us the potential impact of machine learning when societal implications aren’t fully considered. When Amazon trained the algorithm on a decade of its own hiring data, the model became biased and ranked female applicants lower than their male counterparts. “What people call ‘bias’ can simply be mimicking culture”, Joanna told us. “We need to understand the processes of equality. If you are making a digital reflection, you will also be making a digital reflection of your bias”. Equality is improving, whether on a small or a great scale, but by training algorithms on outdated sources, you are training in the mistakes of the past. There is historic bias in the way data has been built and the way systems have been modelled. Even if subconscious or unintentional, it’s in the data you choose and in the way you create a model. If it’s just white men, or any homogeneous team, creating AI, the results are likely to reflect that.

What does this mean for the future of equality in machine learning products? It may not be possible to remove bias altogether, but tests for implicit association can filter and catch it. By drawing on data from less biased sources, with different models or representations, we may achieve better results. As Joanna told us: “With our explicit minds, we’re negotiating contracts with each other for what we want from the future. This is why AI should not only reflect our past.”
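To show what such a filter might look like in practice, here is a minimal sketch of a disparate-impact audit: it compares a model’s positive-outcome rates across groups and flags any ratio below the “four-fifths” threshold used in US hiring guidance. The toy predictions, group labels and function names are illustrative assumptions, not Amazon’s or anyone’s actual pipeline.

```python
# Minimal disparate-impact audit: compare a model's positive-outcome rates
# across groups. The "four-fifths rule" flags ratios below 0.8.

def selection_rates(predictions, groups):
    """Positive-prediction rate for each group label."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return rates

def disparate_impact(predictions, groups, reference):
    """Ratio of each group's selection rate to the reference group's rate."""
    rates = selection_rates(predictions, groups)
    return {g: r / rates[reference] for g, r in rates.items()}

# Illustrative audit: 1 = model recommends hiring, 0 = reject.
predictions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
groups =      ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]

for group, ratio in disparate_impact(predictions, groups, "m").items():
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: ratio={ratio:.2f} ({flag})")
# Here the "f" group's ratio is 0.33, well under 0.8, so it is flagged.
```

An automated check like this cannot prove a model fair, but run routinely over a model’s outputs it catches exactly the pattern the Amazon story describes before the system reaches production.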

Our founder, Tom Strange, believes that when it comes to technology, humans must remain accountable: “The effectiveness of all principles depends on the people implementing them: whether they are being proactive or reactive and whether they are putting people or profit first. Technology simply amplifies existing conditions, so we must look to the humans behind the technology rather than concentrate on depicting the potential negative consequences of artificial intelligence”.

“It’s very easy to make an inappropriate data selection. A more diversified team working to solve a problem is likely to be more rigorous in removing bias; however, sometimes the nuances are hidden. Identifying challenges in data, or even asking oneself the right questions when selecting a dataset, is not always clear or easy. There is no question that as we continue as a global society to apply intelligent systems to the challenges we face, there will be unintended consequences. What we can do is respond to those consequences with transparency, accountability and proactivity: to be trusted to do the right thing, to challenge industry standards and to consistently work on improving them. That’s our aim”.

City AI Global Ethics Lead, Catalina Butnaru

We’ve taken further steps to ensure we’re on track by harnessing the expertise of City AI Global Ethics Lead, Catalina Butnaru, to help us compose our framework. If you want to take a closer look at applying ethical principles to the design and development of artificial intelligence, HAI provides a great foundation.

Technology must be human-centred and held accountable by both laws and societal values. Ethics must be built into the DNA of a business. We don’t have all the answers (and there is a lot to learn), but we’re making sure we ask the right questions.

Joanna J. Bryson is a trans-disciplinary researcher on the structure and dynamics of human and animal-like intelligence. Her research, covering topics from artificial intelligence, through autonomy and robot ethics, to human cooperation, has appeared in venues ranging from Reddit to Science. She holds degrees in Psychology from Chicago and Edinburgh, and in Artificial Intelligence from Edinburgh and MIT. She has additional professional research experience from Princeton, Oxford, Harvard, and LEGO, plus technical experience in Chicago’s financial industry and in international management consultancy. Bryson is presently a Reader (associate professor) at the University of Bath.

Constellation AI’s app, imi, is currently in beta testing. Sign up here for the wait list.
