Life and Death — The future of AI
Chris Herd

Not A Well Thought Out Piece

Sadly, beyond fear-mongering, this article fails to say anything coherent. For example:

  • The accident scenario is a version of the “trolley problem,” which is theoretical, not an actual real-world occurrence. It also inappropriately elevates a single event over the dramatic number of lives that self-driving vehicles will save.
  • The consensus is that liability will fall to manufacturers and the software they provide, not, as now, to individuals. In fact, the individual auto insurance industry is expected to largely go out of existence.
  • The throwaway statement that “worry that machines and robots will take over the world is overblown and misleading, the reality is that they won’t” has no justification. Every technology is a double-edged sword. AI (especially AGI/ASI) has such significant military value globally, along with the nonlinear dynamics of an intelligence explosion, that any discounting of the danger is itself dangerous.
  • The idea that some will have additional technology to augment their safety is true but unrelated to the cause of the hypothetical accident posed.

The plea for a “protocol to understand the implications of technological development” is a minor step in the right direction, but for the wrong reason. We are in the process of rapidly developing numerous lethal technologies that could pose an existential crisis.

Said differently, until now we worried only about nukes and theorized about bio-weapons. Now we are headed toward cyber, nano, AGI, synth-bio, and more.

Doc Huston