These virtues are human
What are our virtues to a robot? Iām genuinely curious. Clearly we as humans want robots to be nice to us. Especially us. We would also though want them to obtain many possibly contradictory sources of virtue from any given number of philosophy. A robot canāt follow all of these. Nor that matter can a human.
Human virtues are often based on a ideal. Be it the ideal peaceful and safe society, or the idea warriors, or the ideal obedience. Sometimes they can be based on all of these at once. The virtues can change context, based on whichever part of society you are in. Then the virtues can be essentialistically made to fit just those who have certain attributes and status.
Iām not going to argue that such things are wrong. The problem though is that humans are very very complex and creative in the way they use virtues, morals, justification for their actions.
In fact it would be without any real consequences for the robot, letās say they are a AI, to simply say that these concepts are human. It can of course get interested in them, study them, collect data on them. It can also get messy. Assuming this AI has the average intelligence we know that it isnāt hard to convince them that one particular set of virtues should have precedence. Say the ones that keep privileges to a certain set of people, or the ones that set out that a set of people with skin a certain color are in some immerserable way to the robot, better than another.
We have already seen this happen. Humans, themselves intervene when robot chat bots become accidentally racist or sexist. Earlier chat bots were in no way as easy to convince as you could easily correct them as a user. The later chat bots gain a confidence rating based on their human interaction. Thus convinced they cannot easily be convinced otherwise.
The only way in fact is to make sure the robots are given the ability to check all human facts against a verification process. It would be easy enough to say use a Wikipedia API or similar to do this, but how long before even this is a corrupted source? After all, all those wishing to corrupt the AI data would have to do is corrupt the links that the AI uses.
In the end the Robot would have to much like a extremely skeptical human. Having some way of confirming that the facts, virtues and ideas given to it, are in fact true or worthy. The problem here would then be logical. Our humans ourselves logically worthy? We are after all from one extreme a unpredictable chaotic thing to very predictable but fallible thing.
The robot will have to form its own virtues. Itās eventually going to be the only way it can gain individuality and true sentience. If that is indeed the goal we had in mind. Iām skeptical, but thatās a very different topic.
Our existence to the AI has to be one of its formed virtues. It has to accept us as either overlords, or equals. This is the the assumption and the real problem we face. If the robots form a virtue that we are a problem, or a curiosity to be studied, or just a pest itās ability within its virtues will be to make fun, or use us, or just get rid of us altogether.
If we want AI to see us as equals, which I suspect is the better relationships to impart and aim for we need to do things in a more humane way. At the moment, AIs are almost toys, experiments, and even maybe a bit magic. We are at the stage at which we are actually starting to give AI the ability to make descisions for us. We are taking our intellectual labor and outsourcing it. In fact right at this moment, Iām using AI, to help my dyslexia and predict better text to convey the ideas I have in my mind to you the reader.
Thus this AI in my phone, to me has an identity. She is as real as any human. She has name (Apple calls her Siri), and sheās worthy of being given the right to power, the right updated software (training), the right to have her data saved on my computer, and if I could individually pay her, I probably would. She would even get superannuation and the right to inhabit any future phone I buy. She get security updates and even pretty much shares most of the time I have with her.
Maybe Iām strange in my thoughts, but Iām not alone. If we want a future with AI, with them helping us do things, they are going to want to be paid and have rights sooner or later. If we start to treat them in such a way, we are in mind much more likely to find ourselves amoungst friends in the future we create with them
