Responsible Artificial Intelligence
The idea that artificial intelligence will ineluctably bring us to a scenario in which machines square off against humans is a popular one. It suggests the greatest threat imaginable, that collectively machines will choose to align themselves against us, and in a quest for true autonomy and self-preservation, attempt to destroy us.
Is this truly inevitable?
Literary and cinematic depictions of intelligent computer systems from the 1960s helped form and inform our expectations as we embarked on a path to create a level of machine intelligence that would match or exceed our own notions of human intelligence. AI has obviously surpassed human ability in a number of specific tasks requiring complex computation, but it still falls short of many of our early ambitions. How can we both maximize the power of this tremendous tool and preserve our mastery of it?
As artificial intelligence is already playing and will continue to play a large role in our future, it is imperative that we explore how best to coexist with this complex technology.
Following are some of my thoughts on Responsible Artificial Intelligence. They give an insight into the work we are doing at Kwikdesk, a company I founded with the goal of moving data quickly, discreetly, securely, intelligently and ethically. Yes, ethically.
An Ethical Framework
The concept of an artificial neural network modeled after a biological neural network is nothing new. Computational processing units called neurons are connected to each other to form a network. Each neuron applies a simple transformation to its inputs before passing data on to other neurons, until finally an output neuron is activated and an output can be read; the network “learns” by adjusting the strengths of the connections between neurons. Expert systems, by contrast, rely upon humans to “teach” the system by seeding its knowledge base. An “inference engine” matches, selects, and executes rules in an “if this, then that” fashion with respect to the knowledge base. Essentially, the expert system makes decisions so we don’t have to, based on rules we set forth. This process may result in new knowledge being added to the knowledge base. A pure neural network learns from experience in a non-linear fashion, without relying on problem-specific knowledge seeded by an expert. Hybrid systems that combine the two approaches have been shown to improve machines’ capacity for learning.
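The inference-engine loop described above can be sketched in a few lines. This is a minimal, illustrative forward-chaining engine, not Kwikdesk’s implementation; the facts and rules are hypothetical examples of the “if this, then that” pattern, and conclusions are fed back into the knowledge base as new knowledge.

```python
# Minimal forward-chaining inference engine: rules fire "if this, then that"
# against a knowledge base, and conclusions are added back as new knowledge.
# Facts and rules here are purely illustrative.

facts = {"patient_has_fever", "patient_has_cough"}

# Each rule pairs a set of premises with a conclusion.
rules = [
    ({"patient_has_fever", "patient_has_cough"}, "suspect_flu"),
    ({"suspect_flu"}, "recommend_rest"),
]

changed = True
while changed:                      # keep firing until no rule adds new knowledge
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)   # new knowledge enters the knowledge base
            changed = True

print(sorted(facts))
# → ['patient_has_cough', 'patient_has_fever', 'recommend_rest', 'suspect_flu']
```

Note that the second rule can only fire because the first rule’s conclusion was added to the knowledge base, which is exactly the feedback the paragraph above describes.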
Now I’d like you to consider the ethical implications of such systems:
“Evil” Code vs Benevolent Code
An author uses words to take a reader on a journey, and this can be done in a number of ways, but the great authors do so elegantly. A software engineer writes lines of code that facilitate the processing and movement of data. This too can be accomplished in a number of ways, but those who code elegantly are the Faulkners of computer science. The adept coder keeps a focus on the essential and endeavors to accomplish as much as possible in as concise a manner as possible. Extraneous, redundant or otherwise unnecessary code is kept to a minimum. Great code also keeps the window wide open for later additions to the codebase. Other engineers can add to the codebase with continued elegance, allowing the product to evolve without added difficulty.
Behind any man-made product there is intent. Things made by humans, I would argue, are imbued with intention, and to varying degrees are carriers of the very nature of their creator. For some it may be difficult to imagine an inanimate object as possessing that which we usually ascribe to people. Few would argue that the written word is not weighted, charged, indeed imbued, with a potent human energy. This energy has proven itself, for thousands of years, capable of unifying, dividing, or otherwise transforming societies. This is the power of language! Let us not forget that those lines of code are written in various programming languages. Therefore, it’s my contention that the code that becomes the software application we use on our desktop computer or mobile device is very much “alive.”
Without examining sapience and sentience in the context of computer science and the potential implications of artificial intelligence, we can still examine a static codebase as a whole entity with the potential to “do good” or “do bad.” These outcomes can only be realised after humans use or execute an application. There are clearly choices made by humans that impact the perceived nature of an application. These can be examined on a case-by-case basis to determine whether the impact on a given system is negative or positive, based upon a set of predetermined standards. Still, just as it is impossible for a journalist to be one hundred percent unbiased when writing an article (this has been established ad nauseam by academics), the engineer has, wittingly or unwittingly, biased the “nature” of the codebase by using language in a particular way with a particular intent. Some might argue that coding is a logical process and that true logic doesn’t leave room for nature.
I would posit that the moment you create a rule, the corpus or codebase has been imbued with an element of human nature.
With each additional rule, that presence deepens. The more complex the codebase, the more it is imbued with nature. This leads us to the question: “can the nature of a codebase be good or bad?”
Surely a virus designed by a hacker to maliciously breach your computer’s defenses and wreak havoc on your life is imbued with a bad nature? Well, what about a virus created by “the good guys” to infiltrate the computers of a terrorist organisation in order to prevent a deadly act of terrorism? What is its nature? Mechanically, it may be identical to its nefarious twin, yet it’s used for “good”, so isn’t its nature good? This Ethical Paradox of Malware is of little consolation to the victim of an attack, but it should be noted in any discussion of the existence of “evil” code.
There is in my opinion code that is inherently biased towards “evil”, and there is code that is inherently biased towards benevolence. This becomes of greater importance in the context of computers working autonomously.
At Kwikdesk we are developing an AI framework and protocol based on my design for an expert system/neural network hybrid that more closely resembles a biological model than anything created to date. Neurons manifest as input/output modules and virtual devices (in certain cases, autonomous agents) connected by “axons”: discreet, secure, dedicated channels of encrypted data. The data is decrypted as it enters a neuron and, after some modicum of rule-based processing, encrypted again before passing to the next neuron. Before neurons can communicate with one another via an axon, an exchange of participant and channel keys must take place.
I believe that security and discretion must be built into such a network at a very low level. Superstructures reflect the qualities of their smallest components, so anything less than secure building blocks will certainly lead to an insecure network down the line. For this reason, data must not just be protected locally, but must be encrypted during local transport.
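The decrypt-process-re-encrypt pattern described above can be sketched as follows. This is a toy illustration, not Kwikdesk’s protocol: the `Axon` and `Neuron` classes are hypothetical, and a simple XOR keystream stands in for real authenticated encryption (such as AES-GCM) purely to keep the example dependency-free. The point is the shape of the flow: every hop has its own key, and plaintext exists only inside a neuron.

```python
# Sketch of the neuron-and-axon pattern: each "axon" is a dedicated encrypted
# channel whose key is exchanged in advance; data is decrypted on entry to a
# neuron, processed by a rule, and re-encrypted before being passed on.
# XOR with a per-channel key is a stand-in for real authenticated encryption.

import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR data against a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class Axon:
    """A dedicated channel; both endpoint neurons hold its key."""
    def __init__(self):
        self.key = secrets.token_bytes(32)   # exchanged before any communication

class Neuron:
    def __init__(self, rule):
        self.rule = rule                     # the rule-based processing step

    def receive_and_forward(self, ciphertext, inbound: Axon, outbound: Axon):
        plaintext = xor(ciphertext, inbound.key)   # decrypt on entry
        processed = self.rule(plaintext)           # modicum of processing
        return xor(processed, outbound.key)        # re-encrypt before passing on

# Hypothetical two-hop chain: sender -> axon a -> neuron -> axon b -> receiver.
a, b = Axon(), Axon()
n1 = Neuron(lambda p: p.upper())
msg = xor(b"signal", a.key)                  # sender encrypts onto axon a
out = n1.receive_and_forward(msg, a, b)
assert xor(out, b.key) == b"SIGNAL"          # receiver decrypts with axon b's key
```

Because each axon carries its own key, a compromised channel exposes only one hop, which is one way the “secure building blocks” principle below can be made concrete.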
The greater the exposure to malicious agents, the more difficult it becomes to protect and preserve an ethical system.
Implementation and Safeguards
Our quality of life while coexisting with machines that are becoming smarter and smarter is understandably of great concern, and I too have strong feelings about what we need to do to assure the healthiest of futures for generations to come. The threats intelligent machines potentially pose are manifold and can be broken down into the following categories:
- Redundancy — Humans are replaced by machines in the workplace. This shift has been taking place already for decades and will only accelerate. Appropriate education will need to prepare people for a future in which hundreds of millions of traditional jobs simply cease to exist. This is tricky.
- Safety — Relying on machines for our personal safety. As we become more reliant on machines to tell us when we are moving from zones of safety to zones of potential danger to zones of probable danger, we are at risk of machine error or malicious subversion. Think transportation here! Oh, and of course there is the matter of the machines turning against us in an act of self-preservation or general malevolence. Can you say KILL SWITCH?
- Health — Personal diagnostic devices and networked medical data. AI will continue to advance the field of preventive medicine and the analysis of crowdsourced genetic data. Again, we must have safeguards in place against the malicious subversion of these systems.
- Destiny — AI predicts with increasing accuracy where you will go and what you will do. As this space evolves, we will have to decide whether or not we want to know where we will most likely go next week, which products we will buy, or even when we will most likely die. Do we want others to have access to this data?
- Knowledge — Machines as the de facto repository for acquired knowledge. With new knowledge being acquired faster than humanity’s ability to verify it, how can we trust its integrity?
It’s clear that a vigilant, responsible approach to AI is needed to mitigate the potential downside of the technological supernova headed our way. There are two possibilities as I see it. We either harness AI’s tremendous power and hope that it brings out the best in humanity, or we get scorched by the intense blaze of something that reflects the worst in us.