Can a machine built by humans ever be more intelligent than humans?

Joshua Vantard
Jul 27, 2017 · 5 min read

In short, it depends on how you define intelligence. Most people's answers to this question touch on different definitions of the word, so with such a diversity of definitions you cannot arrive at a meaningful answer. But for the sake of argument, let's pick one meaning.

I could delve into a consistent, pragmatic, or scientific definition of the word 'intelligence', but this answer will be based on a consensual definition. What do we all intuitively agree intelligence actually is?

What is the consensual definition of intelligence in the context of A.I?

Some mistake (in my view) what we consensually mean by intelligence for a replication of the mode of intelligence of human beings, with all its flaws and benefits, including a huge degree of sentience. This is one extreme.

Others, on the other hand, mistake (in my view) what we consensually mean by intelligence for mental processing power: our ability to memorise, or to complete a given operation. This is the other extreme.

In my view, intelligence, as society seems to mean it at present, is the ability to maximise optionality and adapt. It is an entity's ability to express agency, that is, an independence from exogenous environmental factors. In essence it is a good mix of what I describe as the two extremes of intelligence. By good mix, I do not mean a perfect balance; I mean a 'contextually good mix'.

How will we come to know if a machine is more intelligent than humans?

Can a Windows 95 PC have that mix? No; it has no sentience. Can something that has Strong A.I do that? Maybe, but only if it needs limited maintenance.

If we ever do build intelligent machines, we will think of these machines as our 'slaves' until realising it is in fact the other way around. You asked whether we can ever comprehend levels of intellect more powerful than our own. I do not think that, as a mass society, we actually can. We're already seeing this with the internet, and with various technological artefacts like Facebook or Google.

If the intellect of a machine were much greater than the intellect of the smartest human being alive (supposing we could agree on a definition of intellect), then that machine would either 1) not signal its intellect, and thus never be recognised as intelligent, yet be indispensable (the soft method), or 2) signal its intellect and subvert humanity into needing it or worshiping it (the hard method). Do remember that (1) and (2) are based on our consensual definition of intelligence, so this is just one pre-determined way to see the outcome, implied by our very definition.

So then, in this way, any machine that we all agree is intelligent will have to subvert human beings into second place in importance, or perhaps become a form of 'God', much as we now worship money. Before then, we will never agree that it is intelligent. And our agreement will not come as a sudden revelation. It will 'fade in' and catch up on us when we least expect it, just as in 2007, when CDOs collapsed.

So in that sense, intelligence is neither the altruism of the machine, nor its sentience, nor its processing power or connectedness. There will always be divergent views on whether the machine is in fact intelligent.

Will there ever be a machine more intelligent than humans?

So in my view, it is possible to build a machine more intelligent than humans. But by the time we all agree on what intelligence is, it will be too late. So will we ever build one? I'm not sure. Is it possible? Yes.

But the question you ask is somewhat fruitless. By trying to 'look for intelligence', we probably won't find it, due to (1) and (2) above. But by answering this question not as an actual 'truth' but rather as an 'estimate', I believe we can build in the conditions to potentially prevent (1) and (2). This will rely on a form of faith.

How can we prevent this?

Before some form of monopoly sneaks up on us, we need machines to compete, and we also need to democratise their use. If no one machine has a monopoly over A.I, no one machine can gain enough power to affect mankind, because ultimately we'll have another option. No single machine can become indispensable and therefore subvert humanity. But by 'machine' I do not mean some piece of metal or server underground somewhere. I mean any specific technological artefact: something like the HTTP protocol, or the internet itself. Remember, based upon our definition of intelligence above, the internet becoming this 'intelligent thing' is entirely possible.

Competitivity at all meta-levels

In my view it is a little paradoxical that some of the greatest religions that ever swept the planet posit that God or Gods created the world. Maybe that happened, maybe it didn’t. Who knows really. But we may very well create our own God. Our own self-unifying aim. Please see this as an entertaining analogy. But let’s entertain it further.

I just hope that the 'God' we create is something infinite (some form of human endeavour), not finite (some form of machine). Based on this principle, if we do end up with 'Gods' in the form of machines, I hope they compete with one another. We do not necessarily need to control these 'Gods', but there must always be alternatives. Our 'God' ultimately has to be something imaginary, something of endeavour and imagination, something unifying us as a species, rather than something tangible like a machine. It must be infinite, whether it's the idea of unity, happiness, scientific endeavour, or exploration of the universe.

There is a risk, though, that if we build competitivity into our technological eco-system, that competitivity or economy will have to base itself on one 'constant', which will merely embed itself at an even more meta level in our societal fabric. And that 'constant' will become the intelligent thing. It is important that this constant has no A.I, or that we bring about other constants to compete with it. An example is the competition we will eventually see between HTTP and IPFS.

So, in conclusion: Can we build machines more intelligent than us, as consensually defined? Yes. Would we want to? No. Can we prevent this? Maybe, but we may as well try, by building in competitivity and democratisation, and by making another form of ideal, mankind's self-unifying endeavour.

Note: I should highlight that this question has a highly conditional answer, and that, based on the conditions laid out, it is merely my point of view at present.
