Assumptions & Common Objections

Benjamin Stingle
Apr 8, 2018

For simplicity, I’m going to outline some of the assumptions & opinions underlying my discussions. While speculative & controversial, these opinions have many credible supporters, and much has been written on these topics.

One thorough treatment is Superintelligence, by Nick Bostrom.

Eliezer S. Yudkowsky is excellent. Read all his stuff (my favorite ideas in Superintelligence are from him; also read his Quantum Physics Sequence if you’d like to change your view of reality).

I’ll leave you, the reader, to argue these issues elsewhere, and simply take them as granted here (or feel free to move on).

Primary Assumptions:

We will develop a machine (often called an Artificial General Intelligence or AGI) with intelligence greater than that of a human being.

  • There are no barriers to creating a machine at least as intelligent as a human.
  • We have been making swift progress since the invention of computers.
  • A general AI with intelligence equal to a human’s is very likely within the next 200 years, conservatively (assuming civilization survives that long).
  • There is no reason to believe that such machines cannot then become more intelligent than humans, in speed, flexibility, and general quality.
  • It is unlikely humanity can consciously restrain itself from developing such technology (when have we ever done that with something so valuable?).

An AGI is, at the least, a significant existential risk to humanity.

  • We might get the stereotypical Skynet.
  • We might get the benign-sounding but disastrous Paper Clip Maximizer.
  • We might get a well-meaning intelligence, that nevertheless ends up wiping out humans as an externality, much as humans have destroyed countless species (and indigenous human societies) as we have grown.

We might get a benevolent AI, leading to a paradisiacal singularity. But I have not seen a convincing argument that this is certain, or even likely. Therefore, the risk is still worth dealing with.

  • This is very similar to arguments that climate change might a) not be severe, or b) produce benefits. While either of these might be true, there is no evidence to suggest they are particularly likely, so we should still prepare for the big downside, even if only as insurance.
  • I also tend to find most arguments that AGIs would be beneficent to be lacking in quality, and backward-looking. This reminds me of the turkey who, being a good empiricist, is most convinced that the farmer wishes him well on the day before Thanksgiving.

AI might destroy humanity more thoroughly than even global nuclear war. Given this risk, it is well worth our while to work out how to reduce this risk, or at least understand it.

This is my primary interest in these posts.

Secondary Assumptions:

Machines will outstrip biotechnology until we reach Human-level AI. Biotechnology is unlikely to allow human intelligence to keep pace with machines, or ‘merge’ with them.

This deserves its own discussion, but in short: my observations suggest very fast progress (say, over the last 40 years) in machine ‘intelligence’ and vastly slower progress in understanding, let alone improving, biological systems.

Some factors influencing this are:

  • We still have very limited tools for understanding biological systems. Developing new tools is hard, and not that profitable: successful drug companies are more valuable than tool companies, and successful biological-tools companies are rare. I experienced this firsthand working in venture capital for many years, wanting to fund such companies.
  • Biological systems, let alone the brain, appear exceedingly complex. We are still very far from understanding their dynamics.
  • The number of well-defined tasks that humans can perform much better than machines is decreasing steadily and quickly, whether it be Go, radiology, poker, driving, investing, chemical synthesis, or even burger-flipping. Even things like producing art or mathematical proofs are in play.
  • The cost and time it takes to improve biology are massive compared to information technology. This is in part due to our limited tools, but also due to other factors such as:
  • Regulation and social dynamics. (E.g. genetically engineered salmon have taken over 25 years since their development to be approved for Canadian markets, and are still banned in the US.)
  • The inherently slow nature of biological systems (e.g. the growth and development time of humans is long! You can’t test anything quickly). Software is inherently much faster.
  • Machine/human integrations are likely not a panacea. These will be very hard to produce, and rate-limited by all the things that limit biology in general. Unless the biological component provides real added value, I don’t see why such hybrids would persist rather than the machine side becoming dominant.

Common Objections

Below are some commonly made objections to these points, or arguments for why we shouldn’t be discussing this now. These all deserve more space than they get here, but this is just not my priority now.

Machines can’t be conscious. Only organic things like us can.

A: I am not concerned with philosophical definitions of consciousness here. The question at hand is an empirical one: whether machines can beat humans at all relevant tasks.

Neurons are vastly more complex than their digital representations. We are nowhere close to making machines with the computational power of biological systems.

A: There is enough evidence to make this argument plausible. The more we learn about biology (neurons being just one example), the more it looks like an immensely complex alien nanotechnology, far beyond our current understanding. That said, our machines (from the wheel up to AlphaGo Zero) seem able to handily beat biology in well-defined, relevant tests, so complexity is clearly not a universal defense. The number of tasks at which machines beat humans is growing steadily, and I see no evidence that any of the remaining tasks will be insurmountable because of yet-to-be-understood elements of biological complexity, as they do not seem to rely on fundamentally different systems or processes.

There are more important things (related to this) to worry about! What about climate change, nuclear weapons, poverty, or inequality caused by automation obsoleting jobs now?

There are many things to worry about in the world! Oh man, it gets me down thinking about them. But this is one I think we should worry about more than we currently do, because a) we are not worrying about it much currently (though Elon & Co. are helping!), and b) I believe it is the highest-probability cause of the complete extinction of humans (or at least tied with nuclear weapons).

Global Warming: Yes, this is a big deal. But it is highly unlikely to result in the complete extinction of Homo sapiens. Also, plenty of other folks are carrying the torch here.

Nuclear Winter: Good point. This is a huge thing to worry about, which we don’t currently worry enough about. I’m not sure how much it would take to truly drive Homo sapiens to extinction, but nuclear winter doing so seems at least plausible. But I have little to add on this topic. (Except to throw in quickly that appeals for world leaders to grow sane are laughably naive, and that the best chance is a defense that beats ICBMs.)

Concentration of power / machines taking human jobs:

Yes, these are all big (and current) issues to deal with. In some ways they are simply the current steps in the process toward the birth of AGI. They certainly deserve more worry than AGI does right now. But that does not mean we should ignore AGI risks.

That said, these risks are unlikely to totally extinguish humanity. We could recover from them. And there are many, many other people fighting them.

We can both deal with these challenges and prepare for the more distant future.

Human society is already like a huge (decentralized) intelligence that rules our lives and creates huge amounts of misery. AI will just be a gradual extension of this process:

This is a subtle and fascinating perspective. It is perhaps correct, but more because of the change in perspective than because of any genuinely new information. AGI may well result in the continuation of this process. But before true AGI arrives, this decentralized “intelligence” is unlikely to exterminate humanity, so I don’t think this frame is really helpful. Let’s just keep this in the frame of the previous question: it’s a political worry.

This is such a complex and uncharted domain. We don’t have the ability to foresee what will happen. Just look at how bad we are at predicting how technology will progress over 5 years, or how an election will turn out! It’s a waste of time.

Maybe. But the cost of trying is not huge. The opportunity cost of not trying could be massive. Let’s try.

AI / machine learning is totally over-hyped! We are so far from AGI it’s not worth talking about:

I hear you. There’s a huge hype cycle here, and it is amazingly annoying. And we are probably far from AGI (of course ‘far’ is a completely relative term). But we are likely within 200 years of it, and maybe much closer. The risk of not addressing this in time is existential. Read Yudkowsky on why the transition to AGI might be hard to see coming.

There really can’t be an intelligence much better than ours. We are already close to the global optimum.

The simplest argument against this is that even if an AGI were merely equal to human intelligence, it would operate much faster, at lower cost, and more flexibly, which would render it massively superior. It seems very unlikely that this sort of AGI is not achievable. Nick Bostrom and others cover this well.

Machines will always do what we tell them to. So no worries.

Do your current machines behave in predictable ways? The first lesson of software engineering is that even simple programs can be incredibly unpredictable. No one is smart enough to keep an AGI genie in the box.
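To make that concrete, here is a minimal sketch (my own illustrative example in Python, not tied to any particular AI system): a loop of just a few lines whose halting behavior for every input is, to this day, an open mathematical question (the Collatz conjecture).

```python
# A tiny illustration of how unpredictable even trivial programs can be.
# Whether this loop terminates for every positive integer n is the Collatz
# conjecture, a famous open problem in mathematics.

def collatz_steps(n: int) -> int:
    """Count iterations until n reaches 1 (assuming it ever does)."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

if __name__ == "__main__":
    for start in (6, 27, 97):
        print(f"{start} reaches 1 after {collatz_steps(start)} steps")
```

If we cannot even prove what a loop this small will do for every input, confident guarantees about the behavior of something vastly more complex than any existing software deserve real skepticism.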

We can work hard to create AGI that is under control, or at least safe.

This is equivalent to asking for a genie who will grant your wishes safely. Such a genie is very hard even to define.

Got more arguments?

I’m not surprised.
