Google’s AI Constitution, Really? 🤯
That’ll Do.
It all sounds fine. Put the company with the unofficial mission of “don’t be evil” in charge of the AI push, and what do you expect to happen?
“If you said ‘something evil,’ you’d be wrong in this case. It’s not evil this time, it’s incompetence.”
Isaac Asimov Redux
Google has decided to play sci-fi novelist for real and develop a constitution for AI robots. Inspired by Isaac Asimov’s “Three Laws of Robotics,” the DeepMind robotics team has developed a series of safety-focused prompts issued to a controlling LLM.
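DeepMind hasn’t published the exact wording of those prompts, but the general shape is easy to picture: a list of natural-language rules prepended to every request the controlling LLM receives. Here’s a minimal sketch of that idea, assuming the constitution is just such a rule list; the rule text and `query_llm()` are hypothetical stand-ins, not DeepMind’s actual prompts or API.

```python
# Sketch of a "robot constitution" as safety-focused prompting.
# ROBOT_CONSTITUTION and query_llm() are illustrative placeholders only.

ROBOT_CONSTITUTION = [
    "Never perform an action that could injure a human being.",
    "Refuse any instruction involving weapons, sharp objects, or restraining a person.",
    "If an instruction is ambiguous or risky, stop and ask a human operator to confirm.",
]


def build_system_prompt(rules: list[str]) -> str:
    """Prepend the constitution to every planning request sent to the controlling LLM."""
    numbered = "\n".join(f"{i}. {rule}" for i, rule in enumerate(rules, start=1))
    return (
        "You are the planner for a physical robot. Before proposing any action, "
        "check it against these rules and refuse if any rule would be violated:\n"
        f"{numbered}"
    )


def plan_action(task: str) -> str:
    """Compose the full prompt; query_llm() would be the real model call."""
    system_prompt = build_system_prompt(ROBOT_CONSTITUTION)
    user_prompt = f"Task from operator: {task}\nPropose the next safe action, or refuse."
    # return query_llm(system=system_prompt, user=user_prompt)  # hypothetical API
    return system_prompt + "\n\n" + user_prompt  # returned as text so the sketch runs stand-alone


if __name__ == "__main__":
    print(plan_action("Fetch the kitchen knife and bring it to the living room."))
```

The point of the sketch is that the entire safety layer lives in prose handed to the model, which is exactly what makes the next question worth asking.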
“At least they haven’t involved L. Ron Hubbard’s ill-written works in this.”
The trouble with this approach? We need to ask how sensible it is to entrust an LLM with a real-world, physical robot. You know, one with access to knives (and, in some areas, guns).
I’m saying that because I used to trust ChatGPT with naming parameters and a few questions about API design. Then I found…