Legislation and Ethical Guidelines for Intelligence Technologies

Principles for a more enlightened and civilized society

Grant Munro
Feb 16, 2018


Recent technological advances to augment human intelligence (aka Intelligence Amplification, or IA) can potentially allow us to make our cities and citizenry smarter than ever. However, their corruptive and disruptive impact on health suggests the information technology (IT) industry must establish an ethical framework to ensure future generations get the most from life. To mitigate risks, a number of organizations have introduced various codes of ethics. Despite this positive move, most codes focus on enabling public access to data and professional integrity to the exclusion of all else. While both domains are important, we argue that they do not nurture the kinds of intelligence humanity needs to thrive and prosper. To address these blind spots, this paper draws on recent evidence that three human factors (chronobiology, collaboration, creativity) are vital to humanity’s future, and that harnessing them will ensure our IT professionals design more life-supporting systems. The three “Laws” presented as Legislation and Ethical Guidelines for Intelligence Technologies (LEGIT) aim to stimulate critical debate on the subject and nudge the sector toward practical and meaningful action.

The future of AI

The idea of artificial intelligence or AI has been around since the 1956 summer workshop at Dartmouth College. The workshop was convened by John McCarthy, who coined the term “artificial intelligence,” and attended by a raft of AI pioneers including Claude Shannon, Herbert Simon, and Marvin Minsky. This seminal event defined AI as a set of methods that could provide machines with the ability to achieve goals in the world. Attendees of the workshop believed that, by 2001, computers would implement an artificial form of human intelligence (Solomonoff, 1985). Recent advances in neural networks modelled on the human brain have produced striking breakthroughs, most of which involve a machine learning technique known as deep learning. Deep learning uses a photograph’s pixels as input variables to predict output variables without needing to understand the underlying concepts, just as a standard regression model predicts a person’s income from educational, employment, and psychological statistics. Such algorithms now beat humans at games of skill, master video games with no prior instruction, 3D-print original paintings in the style of Rembrandt, grade student papers, cook meals, vacuum floors, and drive cars (Guszcza et al., 2017).
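To make that analogy concrete, here is a minimal sketch (synthetic data and hypothetical features throughout): both models simply map raw input variables to a predicted output, with no notion of the concepts behind them.

```python
# A minimal sketch of the regression/deep-learning analogy above.
# All data and features here are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Regression: predict income from tabular stats (say, education years,
# job tenure, and a psychological test score).
X_tabular = rng.normal(size=(100, 3))
income = X_tabular @ np.array([2.0, 1.5, 0.5]) + rng.normal(scale=0.1, size=100)
income_model = LinearRegression().fit(X_tabular, income)

# Deep learning in the same spirit: each pixel is just another input
# variable, so a 28x28 photo becomes a 784-dimensional feature vector.
X_pixels = rng.uniform(size=(100, 28 * 28))
labels = rng.integers(0, 2, size=100)  # e.g., "contains a face" vs "does not"
photo_model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_pixels, labels)

# Neither model "understands" income or faces; both just fit inputs to outputs.
```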

Due to more effective algorithms, computing power, and data capture and storage, real-world AI applications have exploded in the last decade. AI systems are already built into everyday technologies like our mobile devices and voice-activated personal assistants to help us manage various aspects of our lives. AI is also being used within the legal, financial, and workplace sectors to predict behaviours and map leisure preferences (Campolo et al., 2017). In addition, thousands of digital health apps are being developed to help track our daily activities and prompt us to make healthier lifestyle choices (Topol, 2015). The problem is that AI algorithms only work when the data used to train them sufficiently reflect the environment in which they are deployed. In other words, when routine tasks can be encoded in big data sets, algorithms become brilliantly adept at outperforming humans. Yet when given a more novel task that requires conceptual reasoning, even the most powerful AI still cannot learn as well as a five-year-old does (Gopnik, 2017). This is because AI is founded on computer-age statistical inference, not on an approximation or simulation of what we believe human intelligence to be (Efron and Hastie, 2016). This narrow type of machine learning is far from the vision outlined at the Dartmouth workshop in 1956, or indeed expressed in fictional AI characters such as HAL 9000 in Kubrick’s 2001: A Space Odyssey.
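The point about training data is easy to demonstrate. Below is a minimal sketch (entirely synthetic data; the quadratic “world” is an illustrative assumption): a model fit on a narrow slice of its environment scores well there, then fails badly on novel inputs, because it has encoded the data it saw rather than the underlying concept.

```python
# A minimal sketch of the training/deployment mismatch described above.
# Synthetic data; the quadratic relationship is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Train on a narrow slice of the environment: inputs in [0, 1].
x_train = rng.uniform(0, 1, size=(200, 1))
y_train = (x_train ** 2).ravel()      # the true relationship is quadratic
model = LinearRegression().fit(x_train, y_train)
print(model.score(x_train, y_train))  # R^2 near 1: looks brilliant here

# Deploy on novel inputs in [5, 6]: the linear fit extrapolates terribly,
# because it captured the training data, not the underlying concept.
x_novel = rng.uniform(5, 6, size=(200, 1))
y_novel = (x_novel ** 2).ravel()
print(model.score(x_novel, y_novel))  # R^2 collapses far below zero
```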

AI’s narrow machine learning is also doing very little to augment our own cognitive capacities. Recent government drives towards automation have meant people increasingly work and live in a 24-hour society. These 24-hour lifestyle changes have placed huge demands for flexibility on the human body (Kreitzman and Foster, 2011). Instead of living diurnally (active in the day, resting at night), people are living in an always-on “now,” where the priorities of the present dominate. Living in this state of what Douglas Rushkoff calls “present shock” means people have developed a distorted relationship to time. Financial traders no longer invest in futures but instead expect profits from computer algorithms. Citizens have no historical sense of how their governments function and demand immediate results from representatives. Children text during an event to find out if there’s something better somewhere else (Rushkoff, 2013).

This is not to say that the idea of AI does not have great potential. Automating mechanical tasks has transformed society for millennia and is likely to continue to do so into the future (Innis, 2004). What needs to be carefully considered are the practical ramifications of 24/7 AI systems on people and society. Mobile phones are already harming the mental health and wellbeing of children (Carr, 2011). Meals consumed at night increase our risk of heart disease. Long-term shift work is sparking a raft of reproductive problems, including increased risk of miscarriage and retarded foetal development. Sleep loss is also triggering an epidemic of obesity, gut disorders, and drug addiction cycles as people try to maintain regular function (Kreitzman and Foster, 2011). More work-related accidents, more sick days taken, and greater family and marital stress are just some of the factors that will undermine our ability to succeed in the coming decades.

The reason 24/7 AI systems are so damaging is simple. Unlike the algorithmic systems we create to optimise work functions, humans are not computers that run software programs 24/7. We need vital environmental cues to synchronize our body’s biological rhythms to the Earth’s daily and annual cycles. When those cues are disrupted by erratic behaviour (irregular eating and sleeping), we get ill. As neuroscientist Russell Foster explains:

“All of us in the developed world now live in a ‘24/7’ society. This imposed structure is in conflict with our basic biology. The impact can be seen in our struggle to balance our daily lives with the stresses this places on our physical health and mental well-being. We are now aware of this fundamental tension between the way we want to live and the way we are built to live”.

Figure 1: Intelligence Amplification (IA)

It is becoming increasingly clear that the most promising AI applications lie not in algorithmic machines that authentically think like humans, but in harnessing technologies that enable humans and computers to think better together, a field called Intelligence Amplification (IA) (Figure 1). IA has huge potential to allow us to make our cities and citizenry smarter than ever. However, recent developments are sophisticated enough to pose great risks if placed in the wrong hands, whether those of corrupt governments, corporations, or both, as is the case in 21st-century politics (Müller and Bostrom, 2016). To mitigate these risks, we must establish ethical guidelines for how the use and deployment of technology can create a more enlightened and civilized society (Berman and Cerf, 2017).

Indeed, ethical guidelines for the IT professions have already been established in some, but not all, countries. Dr. Eike-Henner Kluge authored 11 principles for the American Health Information Management Association (AHIMA), which have been adapted by the British Computer Society (BCS) and the UK Council for Health Informatics Professions (UKCHIP). The European Federation for Medical Informatics (EFMI) does not explicitly state any code, but is a member of the International Medical Informatics Association (IMIA) (Samuel and Zaiane, 2014).

Despite various adaptations, all codes converge on four key principles:

  • Public Interest (i.e., the need to maintain regard for public health, privacy, security and wellbeing of others and the environment; and to promote inclusion and equal access to IT)
  • Professional Integrity (i.e., the need to undertake work that reflects professional competence; continue to respect, develop, and share knowledge; and to comply with legislation)
  • Duty to Relevant Authority (i.e., the need to carry out professional responsibilities with care and diligence, in accordance with the Relevant Authority’s requirements)
  • Duty to the Profession (i.e., the need to accept personal duty to uphold the reputation of the profession and not take any action which could bring the profession into disrepute)

Wearable computing pioneer Steve Mann has also spent many years developing a code of ethics on human augmentation, which has resulted in three fundamental “Laws”: (i) the right to know when and how you are being monitored in the real and virtual world; (ii) the right to monitor the systems or people monitoring you, and to use that information in crafting your own digital identity; and (iii) the right to immediately make sense of the world you are in (Mann et al., 2016).

While the above codes are an important first step toward mitigating the risks of human enhancement and AI, the challenge is that they focus on enabling public access to data and professional integrity to the exclusion of all else. While both factors are necessary, they do not nurture the kinds of intelligence humanity needs to thrive and prosper. To address these blind spots, this paper draws on recent evidence that three human factors (chronobiology, collaboration, creativity) are vital to humanity’s future, and that harnessing them will ensure our IT professionals design more life-supporting systems. The three “Laws” presented as Legislation and Ethical Guidelines for Intelligence Technologies (LEGIT) aim to stimulate critical debate on the subject and nudge the sector toward practical and meaningful action.

Law I: Protect chronobiology

All technologies must provide humans with 24-hour temporal reference points to help them measure their progress, ambitions, and actions (Figure 2). Integration of temporal factors in technologies will remind humans they exist in a physical body, and that circadian clocks, which display 24-hour periodicity, control nearly all biological patterns, including brain-wave activity, sleep-wake cycles, body temperature, hormone secretion, blood pressure, cell regeneration, metabolism and behaviour (Kreitzman and Foster, 2011).

During working hours, humans have a basic right to know when and how organizations are tracking their chronobiology, and reciprocally monitor the chronobiology of organizations. During evenings, weekends, and holidays, humans have the right to disconnect from being monitored, and reconnect with people and groups that matter to them, such as family and friends. All human monitoring and communication must be limited to working hours to support optimal sleep/wake cycles and longevity (Kreitzman and Foster, 2011).
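As a design sketch only, here is one way a system might honour this law in code. The 09:00 to 17:00 window, the weekday set, and all names below are hypothetical illustrations, not part of LEGIT: monitoring and notifications are simply refused outside the user's declared working hours.

```python
# A minimal sketch of Law I as a design constraint (all names and the
# working-hours window are hypothetical illustrations, not part of LEGIT).
from datetime import datetime, time

WORK_START = time(9, 0)      # assumed working hours; would be user-configurable
WORK_END = time(17, 0)
WORK_DAYS = {0, 1, 2, 3, 4}  # Monday to Friday

def monitoring_allowed(now: datetime) -> bool:
    """Return True only during the user's declared working hours."""
    return now.weekday() in WORK_DAYS and WORK_START <= now.time() < WORK_END

def send_notification(message: str, now: datetime) -> None:
    if not monitoring_allowed(now):
        # Right to disconnect: defer silently to the next working window.
        print(f"deferred: {message!r}")
        return
    print(f"delivered: {message!r}")

send_notification("Weekly metrics ready", datetime(2018, 2, 16, 21, 30))  # deferred
send_notification("Weekly metrics ready", datetime(2018, 2, 16, 10, 0))   # delivered
```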

Figure 2: Protect 24-hour human chronobiology

Law II: Integrate collaboration

Smart cyber-physical systems offer humans the ability to create and share goods at near-zero marginal cost (Rifkin, 2014). This post-capitalist shift to what some call the “sharing economy” or “zero marginal cost society” is estimated to be worth $4.5 trillion by 2030 (Lacy and Rutqvist, 2016). To maximise the potential of this shift and overcome current challenges, organizations will need to reward creative collaboration between citizens and incentivize sustainability (Rifkin, 2014; Lacy and Rutqvist, 2016).

To achieve this, future technologies must integrate radical human collaboration into every stage of the development cycle (Figure 3). Prioritizing creative diversity will make technologies less contaminated by cognitive bias and allow diverse human skills and knowledge to combine into breakthrough innovations (Page, 2008). Diverse collaboration will also ensure systems are systemic in nature, addressing the root causes of problems rather than merely changing parts of the whole (Snowden and Kurtz, 2003).

Figure 3: Integrate human collaboration at every development stage

Law III: Nurture creativity

The highly desirable metatrait of creativity (aka social effectiveness) is central to determining human physiological, reproductive, and socioeconomic success (Rushton and Irwing, 2011; Cloninger, 2013; Musek, 2007). The three underlying traits that give rise to creativity go by various labels; however, they tend to reflect common characteristics related to Dynamism (self-expression, openness), Emotionality (self-awareness, self-transcendence), and Stability (self-efficacy, self-regulation).

For humans to thrive and prosper, technologies must nurture creative adaptiveness (Figure 4) to ensure everyone can reap its physiological, reproductive, and socioeconomic benefits (Rushton and Irwing, 2011; Cloninger, 2013; Musek, 2007). Nurturing creative adaptiveness across all levels of society also has the potential to solve many of the 21st century’s most complex problems (De Beule and Nauwelaerts, 2013), and thus mitigate some of the challenges posed by AI (Brundage, 2015).

Figure 4: Nurture human creative adaptiveness traits

Technologist Pledge

As a technologist and a member of the technology profession:

  • I WILL RESPECT & MAINTAIN the health, autonomy, and dignity of people and communities;
  • I WILL PRACTICE in accordance with the 3 Laws outlined in LEGIT to maximise outcomes in human chronobiology, human collaboration, and human creativity;
  • I WILL NOT PERMIT considerations of age, ethnicity, gender, nationality, sexual orientation, or any other factor to interfere with my collaborative work with people;
  • I WILL ATTEND TO my own health and abilities to ensure my work is of the highest standard;
  • I WILL NOT USE my technological knowledge to violate human rights, even under threat; and
  • I WILL RESPECT & SHARE knowledge for the betterment of people and technology.

References

BERMAN, F. & CERF, V. G. 2017. Social and ethical behavior in the internet of things. Communications of the ACM, 60, 6–7.

BRUNDAGE, M. 2015. Taking superintelligence seriously: Superintelligence: Paths, dangers, strategies by Nick Bostrom (Oxford University Press, 2014). Futures, 72, 32–35.

CAMPOLO, A., SANFILIPPO, M., WHITTAKER, M. & CRAWFORD, K. 2017. AI Now 2017 Report. AI Now Institute at New York University.

CARR, N. 2011. The shallows: what the Internet is doing to our brains, WW Norton.

CLONINGER, C. R. 2013. What makes people healthy, happy, and fulfilled in the face of current world challenges? Mens Sana Monographs, 11, 16.

DE BEULE, F. & NAUWELAERTS, Y. 2013. Innovation and creativity: pillars of the future global economy, Edward Elgar Publishing.

EFRON, B. & HASTIE, T. 2016. Computer age statistical inference, Cambridge University Press.

GOPNIK, A. 2017. Making AI more human. Scientific American, 316, 60–65.

GUSZCZA, J., LEWIS, H. & EVANS-GREENWOOD, P. 2017. Cognitive collaboration: why humans and computers think better together. Deloitte Review.

INNIS, H. A. 2004. Changing concepts of time, Rowman & Littlefield.

KREITZMAN, L. & FOSTER, R. 2011. The rhythms of life: the biological clocks that control the daily lives of every living thing, Profile Books.

LACY, P. & RUTQVIST, J. 2016. Waste to wealth: the circular economy advantage, Springer.

MANN, S., LEONARD, B., BRIN, D., SERRANO, A., INGLE, R., NICKERSON, K., FISHER, C., MATHEWS, S. & JANZEN, R. 2016. Code of Ethics on Human Augmentation. VRTO Virtual & Augmented Reality World Conference + Expo.

MÜLLER, V. C. & BOSTROM, N. 2016. Future progress in artificial intelligence: a survey of expert opinion. In: MÜLLER, V. C. (ed.) Fundamental Issues of Artificial Intelligence. Cham: Springer International Publishing.

MUSEK, J. 2007. A general factor of personality: evidence for the Big One in the five-factor model. Journal of Research in Personality, 41, 1213–1233.

PAGE, S. E. 2008. The Difference: how the power of diversity creates better groups, firms, schools, and societies, Princeton University Press.

RIFKIN, J. 2014. The zero marginal cost society: the internet of things, the collaborative commons, and the eclipse of capitalism, St. Martin’s Press.

RUSHKOFF, D. 2013. Present shock: when everything happens now, Penguin.

RUSHTON, P. & IRWING, P. 2011. The general factor of personality: normal and abnormal. In: CHAMORRO-PREMUZIC, T., VON STUMM, S. & FURNHAM, A. (eds.) Wiley-Blackwell handbook of individual differences. Wiley-Blackwell.

SAMUEL, H. W. & ZAIANE, O. R. 2014. A repository of codes of ethics and technical standards in health informatics. Online Journal of Public Health Informatics, 6, e189.

SNOWDEN, D. & KURTZ, C. F. 2003. The new dynamics of strategy: sense-making in a complex and complicated world. IBM Systems Journal, 42, 35–45.

SOLOMONOFF, R. J. 1985. The time scale of artificial intelligence: reflections on social effects. Human Systems Management, 5, 149–153.

TOPOL, E. J. 2015. The patient will see you now: the future of medicine is in your hands, Tantor Media.
