How Regulation Might Evolve To Meet The Rise Of Artificial Intelligence

Padraig Walsh
7 min read · Apr 4, 2018


Photo by Alan Kelly on Unsplash

AI is an existential threat to mankind. True or false? AI will empower mankind to transform the world we live in for the better. True or false? Or could both these statements be true? If technology is neutral, then AI could be a blessing or a curse. The future course of AI will be determined by us non-bots and how we interact with AI. That interaction will be heavily influenced by governance, ethics, regulation, and law.

Let’s take a high-level look at some of the ways regulation of AI might evolve.

The Supra-National Level

The debate over whether AI poses an existential threat fixates on whether a robot apocalypse might occur — a pessimistic consequence of a singularity event. For present purposes, let’s accept that there is an existential threat. So, robotic sentience might occur; this might result in machines taking control of critical infrastructure and military systems; and this might spell the end of mankind.

How are we presently dealing with existential threats? Two examples come to mind:

Nuclear proliferation: There is broad consensus that nuclear weapons are the most dangerous weapons known to man, and that nuclear warfare poses an existential threat to mankind. There are 93 signatory states, and 191 states parties have acceded to the Treaty on the Non-Proliferation of Nuclear Weapons. The enforcement mechanism is occasionally clunky, but it works.

Climate change: Unchecked polluting emissions and other poor environmental actions of man will lead to an increase in catastrophic weather events, loss of ecosystems, and a series of events that could change the world we live in irreparably. This is the basic climate change thesis. The majority of world actors accept the basic thesis. A significant minority do not. Even among those that accept climate change, there is no single view about the extent, implications or timing of climate change. Not surprising. It is a complex issue. The cost of countering climate change falls differently on the rich and poor states of the world. There is no “one size fits all” remedy to stop climate change. Consequently, we get the Paris Agreement under the United Nations Framework Convention on Climate Change. It could best be summarized as better than nothing, but only just.

Take these as two opposites in how to deal with an existential threat. Which extreme is more likely to apply to AI?

Here are some factors:

  • There is no common, unified consensus that sentient AI is possible, or, even if it is, that it would pose an existential threat. Experts within the science and technology sector — never mind the public at large — sit on both sides of the divide, or on the fence in between.
  • Even among those who believe sentient AI is possible, there is no consensus about what that means or what must be protected against.
  • Enforcement would be difficult, almost impossible. The development of a nuclear weapon is virtually impossible to do in secret. Think of the materials, know-how, and infrastructure needed. For AI, all you might need are techies and power.

Harmful AI will be even more difficult to regulate at a supra-national level than climate change. Regulation means restriction or prohibition. Countries will agree to restriction and prohibition if they can see why they should — either the benefit achieved or the risk avoided. This doesn’t exist in respect of harmful AI. There is no consensus yet that it poses a potential existential threat.

No action on the supra-national level means delayed action at the national level. When would a country bring in laws to stifle the potential development of harmful AI? Only if it became a matter of national security and if national security outweighed national strategic benefits (including military benefits). No one will lead the charge to be the first.

The Society Level

AI directly impacts two significant issues of our time — the nature of employment and our right of privacy. Let’s look at how a new conceptual framework is needed for law and regulation in these areas.

Universal income: Many industries are experiencing the rapid transfer of human output to machine output. Anything that can be automated, will be automated, and humans will be less involved or not involved at all. New employment opportunities will arise, but they will be limited to an elite with the skill set needed. That elite will be supremely well-rewarded. Most people will find themselves with less work of less quality for which they are paid less. The battle between capital and labour waged for the last 150 years is coming to an end; capital has won.

We must redefine what is meant by ‘employment’. Presently, we have a set of rules that protects the employed under labour legislation, and those who are unemployed under welfare legislation. In Hong Kong, for instance, employment protection is almost[1] binary between those who are continuously employed, and everyone else.

Work will have many more phases and stages in the future. The gig economy is the start of a trend, not the end point. Work will not be exclusive to one employer. Most work will be contracted. Work will be mobile. Work will be occasional. People will not be employed. They will have work to do. Or not. On a day-to-day basis.

Universal income acknowledges that ‘employment’ is the wrong construct to define whether a person should have a minimum income. The status of being unemployed assumes there is a reasonable chance to find employment. In future, that possibility will diminish for all and vanish for many. Universal income recognizes that in a world where machines have replaced labour, each human deserves to be paid a minimum amount. Not as welfare. Simply for the sake of being human.

This will be combined with a re-imagining of tax. Taxing workers will make less sense. Instead, we must tax the owners and users of machines and robots that have displaced workers.

Privacy: Personal data is information that relates to a living individual and is capable of identifying that individual. The basic premise of privacy law is that personal data is owned by the individual, whose permission is needed for that personal data to be used. Data is the raw material of AI. Personal data is the heart of the matter.

The concept of identity is changing. Blockchain technology allows you to reclaim ownership of your identity, and then to reshape how it is configured and used. So, personal data disclosed in one sphere of your life — say, your interactions with banks — can be specifically limited to only what is needed for that relationship. The personal data you may use in another sphere of your life — say, an interest in jazz music — is separate. You have total control over how that is disclosed and used. You can make sure there is no overlap between these Venn diagrams. Also, you can see who is using your personal data. You can verify that all use is within your specific permission. You can direct and verify the destruction of your personal data by those who breach your permission. This will all be authenticated on the blockchain[2].
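The control model described above can be sketched in code. This is a minimal, hypothetical illustration (the class and method names are invented for this example, not drawn from any real blockchain library): data is stored per sphere of life, every access is checked against an explicit grant, and every attempt — allowed or not — is written to an audit trail the individual can inspect.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalDataVault:
    """Hypothetical sketch: sphere-scoped personal data with auditable access."""
    spheres: dict = field(default_factory=dict)      # sphere -> {attribute: value}
    permissions: dict = field(default_factory=dict)  # (party, sphere) -> set of attributes
    audit_log: list = field(default_factory=list)    # (party, sphere, attribute, allowed)

    def store(self, sphere, attribute, value):
        self.spheres.setdefault(sphere, {})[attribute] = value

    def grant(self, party, sphere, attributes):
        self.permissions.setdefault((party, sphere), set()).update(attributes)

    def revoke(self, party, sphere):
        self.permissions.pop((party, sphere), None)

    def access(self, party, sphere, attribute):
        allowed = attribute in self.permissions.get((party, sphere), set())
        self.audit_log.append((party, sphere, attribute, allowed))
        if not allowed:
            raise PermissionError(f"{party} has no permission for {sphere}/{attribute}")
        return self.spheres[sphere][attribute]

# Data disclosed in the banking sphere never leaks into the jazz sphere, and vice versa.
vault = PersonalDataVault()
vault.store("banking", "salary", 50_000)
vault.store("jazz", "favourite_artist", "Miles Davis")
vault.grant("MyBank", "banking", {"salary"})

vault.access("MyBank", "banking", "salary")  # permitted, and recorded in the audit log
```

In a real system the audit log, rather than a Python list, would be the blockchain ledger: an append-only record that the individual (and no one else) can reconcile against the permissions actually granted.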

So, one technology is giving teeth to the promise of personal data ownership. And then another technology — AI — thrives on permissionless data access and processing. This is the battle being waged right now.

Law and regulation will evolve to meet societal norms as they relate to privacy and personal data ownership. Here are some trends we can expect:

  • The economic and reputational damage of privacy breaches will make privacy policy a priority from day one. No longer will start-up businesses either not have a privacy policy, or simply copy someone else’s.
  • The simple, all-or-nothing approach to personal data permissions will go. The scope of permission will be multi-layered.
  • Individuals will directly monetize their personal information. Personal data will be a property right that can be licensed in the same way as intellectual property.
  • People will have a right to be forgotten, and will have the means to ensure that this is enforced.
  • The economic model of free usage of an app, in return for rights to process, use and sell personal data to advertisers, will die out.

As people come to appreciate the real value of their personal data, and are able to monetize it directly, the services of an app will be priced according to their real value. Project Xanadu could become a reality.

Protection of privacy will be one of the markers of whether AI will prosper. The tech world will become divided according to the level of privacy protection given to people. The EU has set out its stall with the GDPR, but then privacy is entrenched as a right in western Europe. The global centre of excellence for AI right now is mainland China. There, privacy carries less cultural weight as a human right.

AI is having a global impact. The world is not able to provide a single global response. A supra-national response to harmful AI will be heavily compromised and ineffective. Each country will take its own stance on issues such as the changing nature of work, universal income, taxation, and privacy. Will AI be a blessing or a curse? It will be both, depending on where you are, who you are, the laws you live under, and the specific context you are considering. AI is neutral. We will dictate its utility and contribution to mankind.

This is a summary of the core content of a presentation I delivered to the third-year students of the BSc (Hons) in Financial Technology at the Department of Computing at the Hong Kong Polytechnic University. It was part of their Fintech Seminar Series, and you can see my entire presentation here. Please let me know your feedback — good, bad or indifferent!

[1] Hong Kong law also provides protection to employees who are not continuously employed. The protection is significantly less.

[2] Most of these rights already exist. The key change brought by blockchain is to bring the ability to control directly back to the individual.


Padraig Walsh

I write on Hong Kong’s tech and startup scene from a legal viewpoint.