Aug 24, 2017
Great article. Hopefully Elon will read it.
You wrote: “They can’t protect us. You know why? Because that’s not how we program AI! We don’t give it a bunch of explicit rules. It figures out the rules for itself.”
Q: Is it possible to have a control interface that lets us override or modify the dynamic rules the AI is building, in case it runs amok in some way? That seems like an essential safeguard, beyond just shutting it off. A rough sketch of what I mean is below.
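Purely as an illustration of the idea (neither the article nor my question specifies an implementation), such a control interface might be a thin wrapper around the learned policy: the AI proposes actions, and a human-editable layer can veto or replace them before anything executes. All names here (OverridableAgent, forbid, force, and so on) are hypothetical.

```python
# Hypothetical sketch of a human-editable override layer around a learned policy.
# Names and structure are illustrative only.

from typing import Any, Callable


class OverridableAgent:
    """Wraps a learned policy so humans can inspect and veto its decisions."""

    def __init__(self, learned_policy: Callable[[Any], str]):
        self.learned_policy = learned_policy   # the black-box, self-taught part
        self.vetoed_actions: set = set()       # actions humans have forbidden
        self.overrides: dict = {}              # state -> action forced by humans

    def forbid(self, action: str) -> None:
        """Human operator removes an action from the agent's repertoire."""
        self.vetoed_actions.add(action)

    def force(self, state: Any, action: str) -> None:
        """Human operator pins a specific response to a specific situation."""
        self.overrides[state] = action

    def act(self, state: Any) -> str:
        if state in self.overrides:            # explicit human override wins
            return self.overrides[state]
        proposal = self.learned_policy(state)  # what the AI learned to do
        if proposal in self.vetoed_actions:    # vetoed: fall back to doing nothing
            return "no-op"
        return proposal


# Usage: wrap whatever the AI learned, then edit its behavior from outside.
agent = OverridableAgent(learned_policy=lambda state: "accelerate")
agent.forbid("accelerate")
print(agent.act("school_zone"))  # -> "no-op", the learned choice was vetoed
```

The catch, as the quoted passage suggests, is that the rules the AI figures out for itself are not stored as legible entries anyone can edit, so a layer like this can only constrain outward behavior, not rewrite the internal representation.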
