Fail-Safe AGI. Why worry? Let’s do it right the first time.
Steve Hazel

Appropriate Caution But Overlooks Key Variables

While everything you say is appropriate, what is not included is as bad as, or worse than, the cautions you raise.

First, governments’ military and intelligence services around the world do not play by these rules. “First and fastest” is the rule that wins when the fear is an AGI or ASI able to dominate all competitors. Second, once a machine-learning system can comprehend the Internet, its ability to outmaneuver humans becomes a real concern.

Said differently, any situation that leads to an AGI intelligence explosion can be as bad as a flawed, wrong, or poor design.

Also, I suggest Barrat’s Our Final Invention, which is more readable than Bostrom.

My Medium publication, A Passion to Evolve, has a number of articles related to all these issues, such as Why You Should Fear Artificial Intelligence-AI, Only 6 Possible Outcomes in Next 20 Years [ — 4 are Bad — ], and a longish but detailed discussion of the big-picture evolutionary context and risks, Macroscopic Evolutionary Paradigm.

Doc Huston