But In 12–15 Years.

It’s pretty obvious that right now, we needn’t worry about self-driving cars, or about essentially harmless programs making copies of themselves for harmful purposes and setting off an endless chain that ends with humans eliminated. But in 12–15 years, we had better worry indeed: about AIs or AGIs falling into harmful hands, executing a tiny bit of harmful code (maybe even by accident), and literally eliminating humans forever. Military AIs in our enemies’ hands, doing incredible harm through self-replication? You had better believe it. As I’ve said many times before, it’s the size of our tiny little planet combined with the ungodly awful power of AI that will do us in, and 99.9% of you don’t believe it in the slightest. THAT’S what’s really funny: you AI professionals don’t believe it either, because you can only focus on the good things. I warned you.