Ethical Dilemmas. Digital identity in modern society

edutech2035
3 min read · Oct 22, 2019

The more tasks we delegate to machines and artificial intelligence, the more we have to deal with “soulless” algorithms. Artificial intelligence adds a new dimension to these issues: systems using AI technology are becoming more autonomous, and the interactions between the digital and real worlds are growing more complex. Is there a global threat here? By analogy with the morality of an individual, can we assert that AI should have moral reasoning? Is it even possible to formalize ethics and build an absolute code? These questions were discussed by experts at the 2019 Open Innovations forum.


Steven Crown, Microsoft Vice President:

My biggest concern with AI today is making sure that we continue to have human beings who actually do deep ethical and moral thinking engaged in the outputs of AI whenever it’s used to affect real human beings’ experience. One thing I like to point out: the very notion of artificial intelligence is a bit misleading. AI machines do not decide what they do. It is actually calculating intelligence: finding patterns, determining probabilities, and generating a response. Ultimately, human beings have to decide whether or not to implement it. And there is a real risk of people forgetting the human role in this process; we know from studies in psychology that forgetting it is really easy for human beings.

Kay Firth-Butterfield, World Economic Forum, Head of Artificial Intelligence:

What keeps me up at night is the need for companies, countries, civil society, and academia to work together as a multicultural community to deal with issues concerning artificial intelligence.

I believe that AI is likely to help us move forward as human beings. But there are downsides to artificial intelligence, and we need to think about governance (with a small g) of AI. We need to do something about the regulation of facial recognition technologies. Let’s build a firm foundation on which to grow ethical AI.

Andreas Steininger, Ostinstitut/Wismar-Beiten, Professor:

I don’t have any fear concerning AI. I need to point out that we have to distinguish between strong AI and weak AI, and what we have right now is weak AI: AI that does not make decisions the way human beings do. The kind of AI that we know now is big data, a huge accumulation of data. It might also be very dangerous for a single company to own big data. But at this stage we don’t have any artificial intelligence that is able to make decisions. I’m doing research in Germany with some colleagues, and we are trying to develop a program that is able to make decisions. The most important problem is that the computer and the program are not able to understand the circumstances of a case.
