4 Key Insights from a panel on artificial intelligence and ethics

Judy Cubiss
SAP Innovation Spotlight
4 min read · Feb 7, 2020

It is becoming evident that the potential use cases and benefits of artificial intelligence are incredible, but there is also a growing awareness of the huge risks that come with it. Recent news offers examples of both the upside and the downside.

Artificial intelligence is playing a role in fighting the coronavirus: technology company Alibaba is partnering with Beijing's Global Health Drug Discovery Institute to use AI to build an open-source platform that tracks the virus as it spreads. AI could shorten the time needed to model how the virus might mutate and help researchers understand its genetic makeup to design vaccines. Companies like Facebook and Google are using algorithms to remove inaccurate information and ensure that the most reliable information is prominently visible in searches.

Yet at Davos 2020, historian Yuval Harari gave a stark warning that the AI revolution could create huge inequality, not just between classes but also between countries, producing pockets of extreme wealth and a distorted balance of power. History shows that corruption and exploitation are more likely when one nation has far more insight and information about another.

It is against this backdrop that SAP's Business Women's Network hosted a panel at SAP's Bay Area Developer Kick-off Meeting (DKOM) to discuss artificial intelligence and ethics. Subtitled "pushing the edge while keeping within boundaries," it was no surprise that the session was standing room only as attendees listened to three different perspectives: Rogerio Rizzi, Senior Vice President, SAP AI Ethics Steering Committee; Elvira Wallis, Senior Vice President and Global Head of IoT at SAP; and Cynthia Wood, Senior Data Scientist. The panel was moderated by Jenny Lundberg, Senior Director Developer Relations and BWN Bay Area Chapter Lead. You can watch the entire panel discussion here.

Jenny Lundberg, Bay Area BWN Chapter Lead; Elvira Wallis, Global Head of IoT; Cynthia Wood, Senior Data Scientist; Rogerio Rizzi, Senior Vice President, SAP AI Ethics Steering Committee

There were four key areas that stood out to me in their discussion.

Values: The values of a company and of each individual will be a critical component in determining the impact of AI. Development teams will need to assess every use case of AI, what the intended or unintended uses could be, and which solutions touch sensitive data. This is another situation where technology is ahead of regulation, so leading-edge companies like SAP cannot afford to wait for regulations and will need to articulate a position based on their values. This matters because companies are already being entrusted with sensitive data and must ensure that nothing unethical happens with it. In addition, not all customers have a good understanding of the flow of data and its possible implications, so as technology providers, companies need to take the lead.

Regulations: The panel agreed that regulation is coming and that companies need to be proactive so that innovation is not hampered. But there are many different perspectives and norms in the world; one size does not fit all. For example, facial recognition technology is widely used in China in scenarios that would not be acceptable in Europe. Europe already has privacy regulations on the books, the US is taking a lighter-touch approach but has issued guidelines, and China has very little in process. So as regulations develop, it will be important that software is flexible enough to be constrained or loosened as thinking and regulations evolve. However, any regulation needs a mechanism that allows people to innovate responsibly.

Transparency: The panel acknowledged that transparency is not always easy with AI. They agreed it is important that developers work to ensure that their models can be explained, and especially that there is no bias in the data, including its source and collection method. This is not an easy problem because each situation is different, Rizzi explained; it will have to be a collaboration, an ongoing discussion. We should treat the process like a garden that needs ongoing pruning and maintenance.

Individual Responsibility: Wood emphasized that developers will need to question everything when dealing with AI: what is the application, what can go wrong, and what are the consequences of the AI models. This includes not only privacy and discrimination concerns but also the impact on the environment and carbon footprint. Wallis added that companies will need to hire people in this area with critical thinking skills, people who will tell the stories about challenging situations and continue to ask questions. This will require different skill sets and training, and it will create new opportunities for collaboration, for example between engineering teams and social scientists.

Wood summed it up: in this new age, developers have superpowers. They get to decide whether their software is going to make everyone's life better. No wonder it was standing room only.

Standing room only at the panel
