Sep 6, 2018 · 1 min read
To anyone interested in the topic, I can only recommend the books Superintelligence (Nick Bostrom) and Our Final Invention (James Barrat). Both discuss the potential and the dangers of AI once it surpasses human intelligence. I am sure Elon is more afraid of these scenarios, and of the fact that not everyone will develop AI according to the moral standards you rightfully talk about. Of course, it is possible and necessary to play it safe during AI research, but this might not be clear to those making the decisions.
