Sparking Machine Consciousness

Dietmar Millinger
3 min read · Sep 2, 2015

Some years ago I stumbled over a science fiction book called “The Stories of Ibis” by Hiroshi Yamamoto. The book works through a possible plot in which, in a distant future, humanity is outsmarted by machines and sent into the woods for good. Since reading it I have not been able to get this topic out of my mind. And it does not help that Hollywood is now rolling out “machines taking over” flicks at a quarterly pace.

While digging into this topic it is hard not to run into David Chalmers. In an interesting YouTube talk he presents ideas on how to build AIs safely, without being outsmarted by the result immediately. He proposes something like a leakproof container in which such a system can be constructed and studied without it being able to affect the world outside. This could be a solution when building an AI is the direct goal of a project.

However, from my past experience in the IT industry I have a scenario in mind that is completely different and that bothers me seriously. Not only because it can happen in this way, but because it can happen soon. This scenario starts with us humans building ever smarter machines and software for specific applications, without directly intending to build an AI or a conscious machine. Examples are Deep Blue for chess playing or stock trading software. A fascinating example of such smart software is a recently published deep learning system which enables us to find something in the haystack even when we don’t know what we are looking for. Researchers trained a deep learning network on a million images and afterwards presented a picture of a dog to the network. The software found three images of other dogs, without ever being trained to identify dogs. Such smart application-specific systems can be called “weak AIs”: they are not generally intelligent, but they do a specific job very well, much better than humans. These systems are being created today at an impressive pace for a multitude of applications.
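To make that retrieval example concrete: the article does not name the published system, but the usual mechanism is that the trained network maps images into a feature space where similar images land close together, so “find the other dogs” becomes a nearest-neighbour search with no dog label ever used. Here is a minimal, hypothetical Python sketch of that idea; the random vectors merely stand in for real learned embeddings.

```python
# Minimal sketch (hypothetical, not the published system): images are mapped
# into a learned feature space, and "find other dogs" becomes a
# nearest-neighbour search in that space, with no dog label ever used.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the feature vectors a trained network would produce;
# 100k random vectors here instead of the real million-image index.
index_embeddings = rng.normal(size=(100_000, 128))
query_embedding = rng.normal(size=128)  # embedding of the dog photo

def top_k_similar(query, index, k=3):
    """Indices of the k index images closest to the query by cosine similarity."""
    index_unit = index / np.linalg.norm(index, axis=1, keepdims=True)
    query_unit = query / np.linalg.norm(query)
    return np.argsort(index_unit @ query_unit)[-k:][::-1]

print(top_k_similar(query_embedding, index_embeddings))  # the "three other dogs"
```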

The scenario continues with us humans starting to connect more and more weak AI systems to form bigger systems for ever more complex tasks. A good example is iCEO, a system that automates complex management work in companies. The software is still a prototype, but the project is a blueprint for many other approaches to integrating multiple weak AIs into no-longer-weak AI systems. Take cars. While older cars with electronics had several independent functions such as an anti-lock braking system (ABS) and an electronic stability program (ESP), today’s cars usually have all those functions integrated into smart driver assistants which keep a 2.5-ton SUV on the road even if the driver is not skilled. And the self-driving car is just around the corner.
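As a toy illustration of that integration step (none of the names or control laws below come from real automotive software; they are invented for this sketch), here is a hypothetical Python example in which two formerly independent functions are placed behind one coordinating assistant:

```python
# Toy sketch of the integration pattern: two formerly independent
# functions now share one sensor view and one arbitration point.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    wheel_slip: float  # 0.0 (full grip) .. 1.0 (full slip)
    yaw_error: float   # deviation from intended heading, in degrees

def abs_brake_command(frame: SensorFrame) -> float:
    """Anti-lock braking: release brake pressure as wheel slip grows."""
    return max(0.0, 1.0 - frame.wheel_slip)

def esp_steering_correction(frame: SensorFrame) -> float:
    """Stability program: counter-steer proportionally to yaw error."""
    return -0.1 * frame.yaw_error

class DriverAssistant:
    """Integrated layer that coordinates the previously separate functions."""
    def decide(self, frame: SensorFrame) -> dict:
        return {
            "brake_pressure": abs_brake_command(frame),
            "steering_delta": esp_steering_correction(frame),
        }

print(DriverAssistant().decide(SensorFrame(wheel_slip=0.3, yaw_error=2.0)))
```

The point of the sketch is the structural change, not the control laws: once the functions sit behind one coordinator, their interactions are a property of the combined system rather than of any single subsystem.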

So we are already seeing such new systems being built with ever smarter and broader functionality. However, the engineers connecting the weak AI subsystems into bigger systems have no chance of always understanding all aspects of the integrated subsystems. They have to focus on the intended system function and cannot pay much attention to possibly unintended and unknown interference between the subsystems.

And here comes the point. As we integrate more and more complex subsystems, each of them a weak AI, there will be a point when the resulting conglomerate of interconnected subsystems sparks a form of “machine consciousness” out of such interference. Whatever this may be, we will not know it and we will not have seen it before. But it will be there, and it will create unpleasant effects, such as the resulting system developing and following its own private agenda. Without further protection, it is a matter of chance whether the resulting conscious machine will be fast enough to protect itself from our shutdown attempts.

Now is clearly a good time to dig deeper into the topic of possible forms of machine consciousness and what can constitute such a phenomenon. Only with a much better understanding will it be possible to establish safety mechanisms that effectively prevent the unintended interference that would otherwise send us into the woods. In my search for researchers and philosophers working on this topic, I found several well-known people who warn about these risks, but little substantive work on how to mitigate the risk without losing the benefits of machines with artificial intelligence.

Fractals created with sciencevsmagic.net. Many thanks to Nico Disseldorp for providing such a great tool.
