Thoughts on AI and ethics — on the way to singularity

Kamil
Dec 2, 2017



Movies and novels tend to present AI as an evil system or robot that wants to take over the world and become superior to humans. However, so far there is no army of cyborgs on the horizon. AI was theorized and adopted as a scientific discipline in the 1950s, and there has been huge progress in its development since. Companies like Google have driven their employees to develop AI systems that help them analyse large volumes of information for purposeful consumption, but what does the future hold?

As we face challenges such as food security, scarce natural resources, water shortages, and the automation of labour, we start to see the utility of such systems and look for ways they can improve our lives even further. Experts in machine learning and AI agree that there are specific milestones to be reached, but they are not sure when this will happen. In one survey, 352 researchers responded (21% of the 1,634 authors who were contacted, all of whom had published at NIPS or ICML in 2015), so we can safely assume the answers come from people who know what they are talking about. The questions concerned the timing of specific AI capabilities (such as folding laundry or language translation), superiority at specific professions (such as truck driver or surgeon), superiority over humans in all tasks, and the social impact of AI.

The key findings are quite interesting. Respondents believe there is just a 10% chance (median probability) of AI performing better than humans in all tasks within 2 years after human-level machine intelligence (HLMI) is first seen. Asked about the long-term impact of HLMI on humanity, they gave a median probability of 25% for “good” consequences and 20% for an “extremely good” aftereffect; the median probabilities of a bad outcome and a catastrophic one (i.e. human extinction) were 10% and 5% respectively. Almost 50% of respondents believe societies should prioritize research aimed at minimizing the risks of AI, with just 12% wishing for less research in this field. Also interesting are the median estimates for AI reaching human-level performance in particular tasks and professions (in years from 2016):

- surgeon — 37
- write a New York Times bestseller — 32
- truck driver — 11
- all Atari games — 7
- assemble any LEGO set — 6
- fold laundry — 5
- Angry Birds — 3
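
The “median probability” figures above come from aggregating many individual forecasts. A tiny sketch shows why surveys like this report the median rather than the mean — the numbers below are made up for illustration (the real survey had 352 respondents), but the aggregation step is the same:

```python
import statistics

# Hypothetical probability estimates (0.0-1.0) from five surveyed experts
# answering one question, e.g. "chance of an extremely good long-term outcome".
estimates = [0.05, 0.10, 0.20, 0.30, 0.60]

# The median is robust: one expert forecasting 0.60 barely moves it,
# whereas the mean gets dragged upward by the same outlier.
median_estimate = statistics.median(estimates)
mean_estimate = statistics.mean(estimates)

print(f"median: {median_estimate:.2f}")  # 0.20
print(f"mean:   {mean_estimate:.2f}")    # 0.25
```
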

It is striking how quickly AI development is progressing and how present it is becoming in our daily lives, from automated cash registers and accounting systems to customer service call preparation. A lot can happen over the next years and decades, and it is particularly important to agree on the set of ethical rules that machines will follow. The Ethics and Governance of Artificial Intelligence Fund runs projects to advance ethical AI, and there are many other initiatives like it, such as the MIT Media Lab’s Moral Machine platform, OpenEth, and the Council for Big Data, Ethics, and Society. Another interesting ethical subject is the modern trolley problem: should an autonomously driven car protect the pedestrians or the passengers? Should it be utilitarian in its ‘actions’ and minimize the expected death toll (Mercedes, by contrast, took the approach of protecting the passengers at all costs)? The problem is that ethics dates back to the ancient world (if not earlier), and people still create new rules and paradigms for ethical behaviour, constantly coming up with better solutions. Does that mean we will ever be able to take full advantage of HLMI, or even more advanced AI? Perhaps we should just somehow allow machines to decide on ethics for us.
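
To make the utilitarian position concrete, here is a toy sketch of a purely utilitarian collision-choice rule. Everything in it — the `Maneuver` type, the `choose_maneuver` function, the scenario — is invented for illustration; real autonomous-vehicle planners do not work like this, and a Mercedes-style passenger-first policy would rank the options differently:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_deaths: float  # estimated fatalities if this maneuver is chosen

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # A strict utilitarian simply minimizes the expected death toll,
    # making no distinction between passengers and pedestrians.
    return min(options, key=lambda m: m.expected_deaths)

options = [
    Maneuver("stay in lane (hit pedestrians)", 2.0),
    Maneuver("swerve into barrier (risk passengers)", 0.8),
]
print(choose_maneuver(options).name)  # swerve into barrier (risk passengers)
```

The single-number objective is exactly what makes this rule controversial: it encodes one ethical paradigm among many, which is the point of the paragraph above.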

References:

https://arxiv.org/pdf/1705.08807.pdf

https://ai100.stanford.edu/sites/default/files/ai100report10032016fnl_singles.pdf

http://people.ischool.berkeley.edu/~hal/Papers/2010/cmt.pdf

https://medium.com/artificial-intelligence-policy-laws-and-ethics/the-ai-landscape-ea8a8b3c3d5d
