Surveying the AI Ethical Landscape

Published in AI LA Community (The AI Collective) · Jun 13, 2019
Design + Photography by Jorge Raphael

If there is one good thing to come out of Facebook’s data privacy scandal with Cambridge Analytica, it is that it seems to have been the last straw that pushed the technology world to take ethics seriously. Today, it is clear that there have been far too many unintended consequences, and ethics must be an explicit part of our process and approach moving forward.

This year, Gartner, the market research firm, named digital ethics one of the top trends to watch. When it comes to Artificial Intelligence (AI), ethics is a hot topic. This is a good thing, but it doesn’t make the conversation any clearer.

That is because there is a host of ethical dilemmas to tackle, now and in the future, challenging us both technologically and personally. In this article, we’ll survey the AI ethical landscape in an effort to understand our own individual roles in the discussion and how we, as Angelenos, might help bring an ethical perspective to Artificial Intelligence.

AI Problems Today

Today’s ethical dilemmas in AI primarily involve machine learning algorithms that are trained, often on unintentionally biased datasets, to reach narrowly defined objectives. This is the AI that can recognize faces, recommend a product, learn your preferences from previous behavior, or flag a candidate for a loan through automated risk assessment.
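
To make the bias point concrete, here is a minimal, hypothetical sketch of how it can happen. None of it comes from the article: the synthetic data, the zip-code proxy feature, and the approval rates are all invented for illustration. In this toy setup, a loan-approval model is trained on historical decisions that favored one group, and it reproduces that gap even though the "legitimate" feature (income) is drawn from the same distribution for both groups.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, zip_code, approval_rate):
    # Income looks the same for both groups; zip_code quietly acts as a proxy for group.
    income = rng.normal(50, 10, n)
    zips = np.full(n, zip_code)
    # Labels encode past human decisions, not true creditworthiness.
    approved = (rng.random(n) < approval_rate).astype(int)
    return np.column_stack([income, zips]), approved

# Group A: over-represented and historically approved 70% of the time.
# Group B: under-represented and historically approved 30% of the time.
X_a, y_a = make_group(900, zip_code=0, approval_rate=0.7)
X_b, y_b = make_group(100, zip_code=1, approval_rate=0.3)

model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Despite identical income distributions, the model mirrors the historical gap.
print("Group A predicted approval rate:", model.predict(X_a).mean())
print("Group B predicted approval rate:", model.predict(X_b).mean())
```

Nothing in this sketch is malicious: the bias enters through the labels and the proxy feature, which is exactly why "unintentionally biased datasets" are the heart of today's problem.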

This type of AI is in itself bringing a host of issues. AI Now, a research institute examining the social implications of artificial intelligence, pares them down to four:

  1. Civic rights and liberties
  2. Bias and inclusion
  3. Safety and critical infrastructure
  4. Labor and automation

As Reid Blackman put it, “AI products are being built and deployed today and it’s moving faster and faster. It’s not waiting around for culture change.” These issues are here today and they need to be dealt with.

To that end, there is a lot of talk about a “code of ethics” for AI, and many organizations are working to develop standards and best practices. Within product teams, designers are also crafting human-centered practices that help AI teams build more responsible AI.

So there is a lot of positive talk and development here. But the biggest problem is not just in identifying the problem or in developing standards — it is in bridging the gap between the theory and the practice of making AI products.

Kate Crawford, from AI Now, says it best: “What we’re seeing now is a real air gap between high-level principles, that are clearly very important, and what is happening on the ground in the day-to-day development of large-scale machine learning systems.”

The realities on the ground are different. Small AI startups don’t always have the resources to hire designers, let alone to invest in ethical processes that build human-centered practices around their technology. And tech giants don’t always have the right motives. We might just have to resign ourselves to the fact that, in a free market economy, measurable business objectives often trump ethical reflection inside the world’s largest product teams.

This is why the AI ethics discussion benefits from pressure from the public sphere, and why it also involves policy-makers, researchers, start-ups, and third-party organizations. Together they can hopefully exert pressure and impose standards and regulations on the tech giants that hold the bulk of the datasets from which many machine learning algorithms learn and make ever more informed decisions.

AI Problems Tomorrow

As if this were not consequential enough, none of today’s machine learning problems is as existentially alarming as a probable future in which AI becomes more cognitively capable than human beings. This future is generally linked with “Artificial General Intelligence” (AGI) and the “superintelligence” that could follow it.

The problem with AGI is palpable enough that in 2016, Nick Bostrom, the Oxford philosopher behind Superintelligence, said, “There is, perhaps, a 50% chance that humankind will be annihilated this century.” Echoing this sentiment, philosopher Sam Harris gave a TED talk on AI in which he said, “The gains we make in artificial intelligence could ultimately destroy us. And, in fact, I think it’s very difficult to see how they won’t destroy us.”

If this is at all true, it makes today’s problems with data bias insignificant in comparison. But that is not to say that they are not linked.

AGI is nowhere in sight today, yet very smart people are hard at work exploring the potential consequences and ensuring that we have the right principles in mind. In thinking through an AI agent more “intelligent” than us, everything from the agent’s moral status to a better understanding of consciousness to back-to-basics philosophical and ethical theory is being brought to bear. One positive side effect is that the liberal arts, social sciences, and philosophy are once again proving useful to humanity’s present troubles.

On the downside, as of 2017, it was estimated that fewer than 100 people in the world were working on how to make AI safe. So, while this issue is getting attention, tangible output and human effort remain thin at best.

Our Own Responsibility

Behind all of the technological issues, the unintended consequences, and the new relationship between human beings and an increasingly smart and automated technology, the question of AI ethics is a very human one.

In one possible future, rather than replacing the human, automated technology will enable human augmentation: a process in which human beings become more and more empowered, beyond what we have already gained from social media and a connected world. And, as the old adage goes, “With great power comes great responsibility.”

After all, AI is being programmed with our own understanding of reason, and it is up to us, as a society, to decide whether the future we are building will make us better and more excellent human beings or realize the worst of our dystopian movies. I hope that we choose the former and, as Brian Green puts it, “otherwise make gainful employment out of what humans do better than AI: loving one another and creating beauty.”

The Role of AI LA in this Discussion

I am excited to be talking about this issue on June 20 at Phase Two in Los Angeles with a fantastic panel of human beings tackling this issue from multiple perspectives.

On the panel, we’ll have Ammon Haggerty, a product design practitioner and co-founder of Formation, an AI-powered marketing platform. We’ll also have Aaina Agarwal, a lawyer by training who works with the World Economic Forum on initiatives connecting AI policy-making with best practices that corporations can adopt. Peter Eckersley, Director of Research at the Partnership on AI, will join us as well; throughout his career he has researched how to translate ethical concerns into mathematical constraints. And last but not least, Brian Green, a professor of ethics at Santa Clara University, who is increasingly consulted by the likes of Google and the UN on ethical issues in AI.

We are excited about this topic and about what can come from bringing together four different perspectives to cover the massive landscape behind it.

We hope this discussion will give everyone a clearer picture of what we are dealing with when we say “AI ethics,” and we are excited to establish a foundation we can build on over time.

I joined AI LA not simply to learn about the technologies being created, but because of the impact that automated technology is already having on the world and will continue to have in the future. I became even more excited as we established a shared vision to ground AI LA in human-centered practices that acknowledge the human spirit and the best expression of our values. We can’t wait to continue this journey together and become more ethically aware as a group and a community, so that we may build better AI for today and for the future of humanity.

Buy Tickets to our June 20th Symposium before they sell out!

---

Arturo Perez is the founder and CEO of Kluge Interactive, a product design studio based in Los Angeles. Arturo is interested in how to create a human-centered approach to AI. A long-time philosophy enthusiast, he has been giving talks on the relationship between Philosophy and Design for years. As Director of Creative Strategy at AI LA, he is looking to understand the relationship between practice and theory when it comes to AI ethics, and where design can play a role.

