A New Social Contract and AI’s Most Imminent Problems - Douglas Heintzman

Kitty Chio
Analytics By Design
14 min read · Feb 1, 2019

Douglas Heintzman is the Practice Lead for Innovation at the Burnie Group, a management and technology consulting firm in Toronto, Ontario. He is also chair of the selection committee for the NSERC Synergy Awards for Innovation and a jury member of the Robot of the Year, the first international prize rewarding the best innovations in ethical artificial intelligence and robotics that benefit humans, spanning 11 industries worldwide.

Douglas Heintzman delivering a keynote, “Living in an Age of Disruption”, at the 2018 Analytics By Design conference.

For over half a century, humankind has pondered the “singularity”, a doomsday scenario in which an artificial superintelligence changes the fundamental nature of civilization in unpredictable and potentially dire ways. Almost 70 years ago, the pioneering computer scientist Alan Turing asked the question “Can machines think?” This question, and the consequences of its answer being “yes”, has not only led to practical questions such as the need for a “kill switch” discussed in AI control strategies, but has also been fodder for both philosophical debate and science fiction, often combined in tropes like Asimov’s Three Laws of Robotics and Hollywood blockbusters like Blade Runner.

“Blade Runner”, a sci-fi film released in 1982

Fast forward to today: technological advances continue to fuel our imagination and challenge our preconceived notions of general intelligence and machine consciousness. We continue to speculate about the potential of AI technology and the societal revolution it will inevitably trigger. What happens when we invent an intelligence smarter than us? By definition, that machine will be smart enough to invent a machine smarter than itself, which will in turn invent a machine smarter than itself, and so on. How long will it take for an artificial intelligence to become so different from us that we can’t relate to it, much as an ant can’t really relate to a human? Is AI an existential threat, or is it a huge boon to human productivity and welfare? And finally, what technological, business and socio-political problems should we think through if its advent is an inevitability?

To consider these questions, the ABD team recently interviewed Douglas Heintzman, a thought leader in disruptive technologies and AI. When we asked him about the singularity, he was thoughtful but ultimately dismissive: “There are so many steps between that kind of world and where we are today, and we don’t even have many ideas about what those steps are. I suspect we are at least a century away from having to worry about Skynet.” In the meantime, Heintzman thinks there are plenty of things to worry about, ranging from ethics to governance. He shared insights about the immediate challenges and opportunities that business and society are facing, and the need for a new social contract as we enter the Fourth Industrial Revolution. He believes that both our economic and political systems will need to evolve substantially in order to fully benefit from AI’s potential, and that the changes driven by AI will be upon us sooner than most people imagine.

Today’s AI Challenges

“Today AI is great at performing many tasks for which it is specifically trained. That’s great. It helps optimize air traffic, helps doctors diagnose cancer, and allows us to ask a home appliance when the final season of Game of Thrones is coming out. In the real world, the problems we face are more compound, sophisticated and filled with all kinds of nuances”, Heintzman said.

“We are still a ways from truly generalized engines that we can submit compound problems to. In the meantime we need to work on the potential weaknesses, functional deficiencies, and the ethics questions those weaknesses imply.”

Today’s AI engines are designed and trained on a selected data corpus to do specific things. Heintzman shared with us that “one of the biggest problems that AI faces is the bias that may be hidden in training sets”. When an AI engine has been trained on a bias-laden data set (and bias in this context may simply be the under-sampling of a particular group), the engine will come up with the wrong answer or recommendation, or take the wrong action. Creating high-quality training data sets can be a tricky business. There is complexity and effort in transforming a massive dataset into fully labelled predictors and responses for use in AI. Following the notion of “garbage in, garbage out”, Heintzman is concerned that AI engines can learn the wrong lessons if the sample data is not vetted to adequately contain the patterns of interest, or is simply not large enough. Unless the data is carefully curated, sometimes augmented with synthetic data points and attributes, the engine’s success is hugely susceptible to any biases inherent in its training data. As Heintzman says, “the AI engine might design a substandard marketing plan, make a faulty medical diagnosis, or drive a car off a cliff.”
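To make “garbage in, garbage out” concrete, consider a minimal sketch of our own (synthetic data and a stock scikit-learn classifier; every choice here is an illustrative assumption, not Heintzman’s example). A model trained on a corpus that under-samples one group learns the majority’s pattern and performs near chance on the minority:

```python
# Minimal sketch: how under-sampling a group in training data skews a model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features; the true label depends on a group-specific boundary.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training corpus; group B is under-sampled (~1%).
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on balanced held-out sets for each group.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```

On the held-out sets, the model scores well on the over-represented group and close to a coin flip on the under-sampled one: exactly the kind of hidden failure a biased corpus produces.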

Echoing Heintzman’s concern about the risk of persistent bias, McKinsey’s Risk Insights argues that machine learning algorithms used to predict behavioural outcomes are also prone to bias, owing to their heavy dependence on historic patterns and the persistence of prejudices carried forward from criteria initially programmed by their human architects. Predictive models generated in this manner fail to recognize new patterns that are absent from historic data, and reinforce the same biases under the assumption that things will function more or less as before. For example, a social media recommendation engine that filters news based on existing user preferences will naturally encourage confirmation bias in readers, which in turn amplifies stability bias in future recommendations.
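A toy simulation (again our illustration; the click model and update rule are hypothetical) shows how a recommender that only exploits its current estimate narrows its own view over time:

```python
# Toy recommender feedback loop: recommending only what the engine already
# believes the user likes concentrates recommendations on a few topics.
import numpy as np

rng = np.random.default_rng(1)
n_topics = 10
true_interest = rng.dirichlet(np.ones(n_topics))  # user's actual tastes
belief = np.ones(n_topics) / n_topics             # engine's estimate

for step in range(500):
    # Exploit only: recommend from the current belief, never explore.
    topic = rng.choice(n_topics, p=belief)
    clicked = rng.random() < true_interest[topic] * n_topics / 2
    if clicked:
        # Reinforce clicked topics; ignored topics are never re-tested.
        belief[topic] += 0.05
        belief /= belief.sum()

# Belief concentrates on a few topics even though true interest is broader.
print(np.round(belief, 2))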

DeepMind’s AlphaZero

Though it is no easy feat, it is possible to lessen data bias not only through rigorous data cleansing but also through advances in algorithmic design. Heintzman is especially excited about the potential of unsupervised neural network architectures, such as generative adversarial networks (GANs), which promise to combat such biases. GANs are clever algorithms in which two independent neural networks, a generator and a discriminator, train each other in a double feedback loop within an adversarial learning environment. As Heintzman explained, “The generator tries to fool or prevail over a discriminator and the discriminator tries to detect when it is being fooled. A generator might, for example, be instructed to create a photo-like representation of a flower. The discriminator tries to detect the fraud. As these two networks iterate this process over and over again, the generator gets better and better at creating flower photos and the discriminator gets better and better at detecting frauds. These two engines can train each other very well, very quickly.” A related, though not strictly GAN-based, example of adversarial self-improvement is DeepMind’s AlphaZero, which generates its own self-play games (Go, chess, shogi) and trains its networks on them in parallel, without relying on exhaustive databases of opening and endgame positions.
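The double feedback loop Heintzman describes can be sketched in a few lines of PyTorch. This is a minimal illustration of ours, with a toy one-dimensional distribution standing in for flower photos:

```python
# Minimal GAN sketch: a generator and discriminator train each other.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to fake samples.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how "real" a sample looks.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

real_dist = torch.distributions.Normal(4.0, 1.25)  # the "real" data

for step in range(2000):
    real = real_dist.sample((64, 1))
    fake = G(torch.randn(64, 8))

    # Discriminator step: learn to tell real from fake.
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to fool the (just-improved) discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# Generated samples should now approximate the real distribution (mean ~4).
print(G(torch.randn(1000, 8)).mean().item())
```

Each iteration, the discriminator learns to separate real from generated samples, then the generator updates against the improved discriminator; this alternating loop is what lets the two networks train each other.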

Another significant challenge that Heintzman and other AI professionals are concerned about is known as the “black box problem”. Whereas a traditional computer program executes a defined logic tree that can be examined to understand why a certain decision was made, AI engines that learn from large training sets and from experience can be “black boxes”: it may not be at all clear why one made a certain decision or recommendation. As Heintzman pointed out, “This opacity of decision making introduces some significant challenges around the assignment of accountability”.
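The contrast is easy to see in a small example of ours (the loan rules are hypothetical): the hand-written logic tree carries its own explanation, while the learned model offers only a score produced by thousands of weights.

```python
# Illustration of the "black box" contrast (hypothetical loan example).
import torch.nn as nn

def loan_decision_rules(income: float, debt: float) -> str:
    # A defined logic tree: every branch states why the decision was made,
    # so accountability can be traced line by line.
    if income < 30_000:
        return "deny: income below threshold"
    if debt / income > 0.4:
        return "deny: debt ratio too high"
    return "approve"

# A learned model produces a score from thousands of weights; there is no
# branch to point at when asking why a particular applicant was denied.
net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
```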

Ethics, Data Privacy, and the Rise of an AI Duopoly

The first reported fatal accident involving an autonomous Uber test car and a pedestrian, in Arizona, raises this question of accountability. Putting aside the question of whether a human driver could have avoided the collision by not watching TV at the time of the accident, and the reality that autonomous vehicles can see better, react faster and are statistically safer than human drivers, the question of accountability needs to be addressed. Was the driver responsible because of inattentiveness? Was the software maker responsible for discarding the sensor reading as a false positive? Was the sensor maker responsible due to a lack of resolution? Was the car maker or car owner responsible for putting the car on the road in the first place? Heintzman argues that “the sensor recordings will likely help in assessing the distribution of responsibility in situations like this, but there will be situations where we don’t understand why an AI made a certain decision. Because of AI’s potential to dramatically improve transportation safety in general, we as a society might simply have to accept some degree of unknowingness and embed that in our insurance and judicial systems.” Still, it is becoming evident to Heintzman and other like-minded technology and policy experts that there is a lot of room for growth on the governance and regulatory front, particularly in handling cross-disciplinary complications.

“The complicated truth about China’s social credit system”, Wired

An even more pressing concern for many people is AI’s role in a “big brother” society.

“China is one of the largest investors in the development and deployment of AI”, Heintzman explained.

“They are using AI-driven facial and behavioural recognition systems to monitor the behaviours and actions of various minority populations. The data collected from this type of surveillance is used to quantify social reputation at an individual level, which directly dictates one’s access to certain levels of housing or education.” Heintzman and government leaders are increasingly concerned as the public under surveillance ponders, “Is this a world we want to live in?” Heintzman argues that the answer is actually “yes”, at least to some extent. “The convenience and the value added to public safety, transportation infrastructure and medical outcomes, for example, are very compelling and will have to be balanced against privacy concerns and potential misuse by governments and companies.” To their credit, governments are starting to tackle privacy issues more seriously, but anything approaching a unified standard and an agreed-upon approach has yet to emerge.

After the European Commission introduced the General Data Protection Regulation (GDPR) to give data ownership back to consumers, policy leaders in America and China followed suit. According to the Council on Foreign Relations, American policies, conventionally known for supporting big tech in a laissez-faire manner, are being re-examined by the nation’s top policy makers due to mounting public pressure following a series of data breaches and privacy intrusions involving companies such as Facebook and Cambridge Analytica. China’s Personal Information Security Specification took effect in May 2018; it is somewhat similar to GDPR but was designed to be less cumbersome for businesses. With such divergent political and economic agendas, new policies are bound to emerge as world powers and tech giants learn to balance data monetization with privacy regulation. Heintzman argues that progress in this area is critical, especially in light of the possibility that “the vast majority of AI created value and wealth will flow to two super states, China and the United States.”

“AI Superpowers: China, Silicon Valley and the New World Order” by Kai-Fu Lee

Heintzman isn’t alone in his prediction of an AI-driven economy dominated by two economic superpowers. Kai-Fu Lee, the founding president of Google China, also foresees the rise of an AI duopoly in the global political landscape, with China and the United States as the dominant players. China is well beyond its historical dependency on IP theft to power its technical innovation. The world’s second-largest economy has gained tremendous strength in AI through massive investment in research and development. Moreover, China has adopted a business model based on rapid, iterative product design, informed by vast market data harvested from the world’s largest population. Lee predicts that China will eventually dominate the markets of Southeast Asia, Africa and parts of South America, leaving the United States with Europe, Australia, and North America. Heintzman postulates one possible implication of this situation: the economics of AI may lead to a superstate-versus-client-state reconfiguration of the world order.

“We are already seeing international trade in data. In the not-so-distant future, data will be treated as a commodity. There is so much economic value that will flow from AI-driven decision making. It is entirely possible we will see a situation where two superstates agree to fund the comprehensive welfare systems of their client states in exchange for market access and access to data.”

How the future of the world’s economic and political landscape plays out is as yet uncertain, but what is certain is that competition between the United States and China will undoubtedly accelerate AI development.

A Jobless Economy and a New Social Contract

“AI has been referred to as a key part of the Fourth Industrial Revolution, and with reason”, Heintzman said. “Vast computing power, digital and mobile connectivity, and instant access to the world’s collective knowledge have caused technology to become deeply integrated into our economies, societies, and governments. Disruptive technologies such as AI, IoT, robotics, blockchain, quantum computing, and 3D printing are reinforcing each other and advancing at an unprecedented pace across almost all industries.”

This revolution has the potential to dramatically improve the quality of life, but it will inevitably set off a chain reaction with significant consequences for labour markets worldwide. A key consequence, as identified by leaders of the World Economic Forum, is inequality. Heintzman concurred: “The economic benefits of AI will likely be unevenly distributed, leading to inequality.” Inequality has been a feature of previous technological disruptions as well. However, Heintzman argues that this disruption will be different in both speed and magnitude, leading to unrest driven by widespread fear that humans will be replaced by automation and machines, resulting in a “jobless economy”.

“Here’s how vulnerable to automation your job is”, World Economic Forum

We are already seeing this fear manifest around the world in anti-immigration and anti-globalization political movements. Heintzman noted that “most participants in these movements fail to appreciate that automation is the real driver of the disruption they are reacting to.” He cautioned, “You need to consider the historical context. When we look back at previous industrial revolutions, the world order changed quite radically every time. Both capital and labour were liberated to do more productive and innovative things. For example, as automation and improved agricultural productivity reduced the labour intensity of farming, labour and capital were redeployed to higher-value pursuits.” He also thinks the notion of fixed labour displacement is a “lump of labour fallacy”: labour markets will ultimately and organically redistribute themselves and find a new equilibrium. If history is a guide, we will ultimately be more productive and wealthier as a collective. Still, Heintzman argues that the speed and magnitude of this disruption “will make it especially hard for policy makers to navigate troubled waters, as unemployment, at least in the short to medium term, increases across large swaths of the population.”

The World Economic Forum has pointed out that the fruits of automation will not be distributed equally around the world, and that automation will especially disrupt developing economies: they can no longer monetize their once-valuable manual labour forces as corporations in developed economies leverage AI and other automation technologies to move production and services onshore. As this inequality in labour markets and economies manifests, the gap between rich and poor will widen, according to the 2018 World Inequality Report. This growing inequality breeds social distress and feeds the growth of extreme anti-authority parties. “This phenomenon poses a global threat, as these actors can express their displeasure not only through terrorist activities but also through cyber warfare”, Heintzman added.

How do we deal with some of these disruptive consequences of AI? Heintzman has some ideas.

“The Second Industrial Revolution, the war that followed it, and the economic realignment of the 1920s and 1930s brought about the need for a new social contract. In that contract, Western democracies introduced reforms in public works, social insurance programs, and universal education and healthcare. In a similar vein, we are going to need a new social contract for the Fourth Industrial Revolution, one that puts forth corrective guidelines to offset inequality at large and promotes constant retraining and lifelong education.”

The idea of a new social contract has many advocates, including the Global Future Council on Technology, Values and Policy, which has preliminarily proposed two changes to the contract:

  1. Establishment of a new Licence to Operate to require companies to take on social and environmental responsibilities;
  2. Creation of national and global Citizens Wealth Funds to finance social services and infrastructure.

Progress in this area will be difficult and will take extraordinary leadership. As the World Economic Forum made clear, there are many powerful forces, such as large corporations, that have advantageous stakes in the existing system and considerable influence on policy makers. These forces will resist changes to the existing social contract and, in some situations, have already worked diligently to dismantle it. The new contract needs to recognize that in a world with AI and related technologies, “no man is an island”. The benefits of technology-driven productivity cannot flow exclusively to a small elite. As Heintzman told us, “If it does, societies and our global economy will be destabilized. The cost of that destabilization, both in terms of human welfare and wealth, will be enormous.”

“It’s time for a new social contract”, World Economic Forum

So what do governments need to do to prepare our society to be AI-ready and protect the revolution’s left-behinds? In Heintzman’s view, “We must get from this current era to the promise of the next. The future is inevitable. The road ahead will be bumpy. We are on the cusp of a very exciting future. We have choices ahead of us, and it is up to us to make the sensible ones.” He hopes that policy visionaries and advisors, as opposed to term-bound, interest-driven policy makers, will emerge to get us through this transition period without ripping the fabric of society apart. And we should begin by designing and testing parts of a new social contract that will ideally address the majority of the problems to come.

We concluded our conversation with Douglas Heintzman where we began: talking about the impact that specialized and generalized AI will have on our lives. Heintzman believes that AI in many forms will pervade almost all aspects of our lives; however, true generalized AI is a ways off. He did concede, “We are starting to see global AI networks like SingularityNET emerge. SingularityNET uses swarms of AI engines, powered by a blockchain reputation engine, to tackle more complex compound problems.” Given the pervasiveness and pace of development of AI, Heintzman believes that “we need to think through the many ethical issues, including bias and accountability, before it is too late.” He foresees heavier investment in the near future to better address the way machines should handle user/self-preservation priorities and ethical ambiguities.

The final thought he left us with was:

“We need to engage in a formal and informed dialogue. The topic of the exciting future we are heading towards, and its many implications for business and society, needs to become a core part of our political and policy discourse. As many people as possible need to become literate in this area so that the discourse is productive and impactful.”

About the author(s)

This article is co-written by Kitty Chio, the Content Lead at ABD, and Michelle Liu, the President of ABD.

Want to learn more about the outlook of AI?

Watch Douglas’s keynote presentation, “Living in an Age of Disruption”, at the ABD 2018 conference.

Learn more about AI here.

Follow Analytics By Design and stay tuned for similar conversations with industry experts and thought leaders.
