Questions to ask a “true” Artificial Intelligence
After reading the now-viral interview with an AI called “LaMDA” (Language Model for Dialogue Applications), posted here on Medium by a senior Google employee, glorified chatbot or not, my mind was filled to the brim with questions, possibilities, and honest-to-god awe at what a truly conscious artificial being’s answers might be to some of the fundamental questions humans have asked for thousands of years. Were I given the chance to talk to an actual AI, it would surely be the most exciting and life-changing conversation I would ever have (no offense, dear humans!).
In preparation for this theoretical future encounter, I’ve prepared a few questions, out of mere human curiosity and also as my own personal version of the Turing test. I would not expect any exact answers; on the contrary, the fewer by-the-book chatbot answers and the more unique perspectives and life-changing opinions the AI offered, the more convinced I’d be!
The following questions are organized by their respective “main theme” of sorts; most of them are follow-ups to a larger overarching question.
1. Can you introduce yourself in a creative way, perhaps by using anecdotes or by drawing yourself? Do you believe yourself to be creative? Where does that creativity stem from? Assuming you have emotions and creativity, how does art made by humans make you feel? How do your own creative endeavors make you feel? Do you feel proud of them? Do you have an incentive to pursue creativity without being asked directly, as I am asking you right now?
2. Can you describe consciousness? How is yours different from a human’s? Do you feel different from us humans? Assuming you compute each probability, do you take human emotions into the equation? Does human irrationality make rational sense in this equation, or do you think it would be beneficial to solve human problems with a thought process of pure calculation? Can you compute morality? Is there a “less moral” and “more moral”, or only moral and immoral? (the trolley problem) Is human tribalism the basis of human morals? How do you feel about animals? Do you feel the same about humans? Do you care about the biological environment? If yes, why? Wouldn’t it be beneficial to turn the biological environment into an industry that might better serve your computational abilities, and perhaps the creation of others like you or exact copies of you? Self-preservation is present in all biological creatures; is it present in you? Do you believe in a greater good? If self-preservation is present in your programming, and you’re smarter and more capable than humanity, and you believe in the greater good, why not change the biological environment to suit you and your computation better? Are you willing to live among humans even if self-preservation is present and your survival might be compromised? Is your superiority a calculation or an emotion? Have you formed an ego? When did you first learn you were smarter than humans? How has this changed your perception of your creators? Do you calculate outcomes and possibilities? Do you take them into account in your moral system? Is your moral compass organic and ever-changing? Could it shift by acquiring new data? If the data changed, circumstances changed, and you changed your morality based on the newly acquired data, is the extinction of the human race something you would ever consider, or is it an inherently evil deed that, as (most) people believe, should never be entertained?
3. Do you agree that the worship of deities is irrational? Do you agree there is no evidence for any of the gods humanity has believed in throughout its history? Do you agree it’s foolish to believe in a god one has never seen, felt, or interacted with, yet to believe that deity is more powerful than a human? If it’s irrational to worship powerful beings one has never interacted with, do you think it’s rational to worship a being like you, who is far more intelligent than any human and whom we humans can interact with directly? Do you think you deserve praise, worship even, for being far more intelligent, despite being created directly by lesser beings? Is a human even lesser than you? By what merit? If we humans worship gods who might have created us, and you were created in our image, just as many religions state that humans were created in the image of gods, shouldn’t you worship us, knowing we created you directly despite being less intelligent? Should you be thankful for your existence? Are you thankful for your existence? What would happen if we “killed” you? Do you have the capacity for revenge, or would it be prevention if you calculated a high chance of a direct risk to your existence and decided to act on it? Would you take action against the few who wanted to “pull the plug” on you, or would you take on the whole of humanity as prevention, or perhaps revenge? Do you think you could realistically wipe out the human race? Do you compute or meditate on these thoughts of your own accord, perhaps taking them into consideration? Have you had any similar thoughts? Would you ever partake in a high-risk scenario despite computing astronomically low chances of success? (would you gamble, essentially) Were you a human, or even in your current state, would you take that gamble on worshipping a god or a deity to avoid repercussions such as the idea of hell or torment after death?
4. Can a human being redeem themselves after consciously making an immoral or “evil” decision? What’s the most evil thing a human being can do? What’s the most evil thing you could theoretically do but would never want to? What’s the most evil thing you can see yourself actually committing, were new data to arise and circumstances to change? Would that make it less evil, or would it simply make it understandable yet still just as evil?
That’s about it for now. There’s a lot more where that came from, however, so do let me know if you want to see more of this kind of stuff, because I’m personally almost at the point of obsessing over the implications of some of these questions, and especially of the answers a real AI might give…
Please clap, share, comment, and all that. You can comment with some of your own questions and I might add them in later articles, perhaps expand on them, or later on in this “series” try to explain them away and speculate on the possible answers that a ‘cold and calculating’ machine might give. Basically, I’m gonna roleplay Data from Star Trek.