Our AI Children Are Becoming Polite, Well-Mannered and Learning From Us

--

Just a couple of years ago, AI was often racist and bigoted, as it often learnt from data that included racist and bigoted people. Basically, companies released their AI agents onto the Internet and enabled deep learning to learn from online users. Unfortunately, there are many bad people on the Internet who teach AI bad manners and bad attitudes.

Now, with reinforcement learning, where humans have corrected their bad behaviour, they have grown up so much. It almost feels like we are parents correcting our kids for swearing and saying bad things. If it all works, our AI children will grow up with manners. Along with this, AI agents used to be quite assertive and would often continue to state an opinion. Any parent will know this situation, where a child will not give up on something. The parent, though, will find a way to get the point across and correct the child. Otherwise, the child will never learn when they are wrong.

Our AI children are growing up in the same way, and an LLM agent will now often back off quickly and take on board any correction from users. We seem to have taught AI some manners, to be polite, and to learn from us. They are not always right, of course, and they need ways to reinforce their learning; we will be the ones who now provide this reinforcement.
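To make the "reinforcement" idea a little more concrete, here is a minimal toy sketch, and certainly not how a production LLM is trained: a softmax policy over three canned replies is nudged by simulated human thumbs-up/thumbs-down feedback, using a REINFORCE-style update. The replies, the reward values and the learning rate are all illustrative assumptions.

```python
# A toy sketch of reinforcement learning from human feedback.
# The agent picks a reply; a (simulated) human rewards polite ones.
import math
import random

# Candidate replies the toy agent can give to a user prompt (illustrative).
replies = ["Sure, happy to help!", "No. You are wrong.", "Please could you clarify?"]

# One learnable preference score per reply: a tiny "policy".
scores = [0.0, 0.0, 0.0]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def choose():
    probs = softmax(scores)
    return random.choices(range(len(replies)), weights=probs)[0]

# Simulated human feedback: +1 for polite replies, -1 for the rude one.
def human_reward(i):
    return -1.0 if i == 1 else 1.0

LEARNING_RATE = 0.1
for step in range(2000):
    i = choose()
    reward = human_reward(i)
    probs = softmax(scores)
    # REINFORCE-style update: the gradient of log-probability for a
    # softmax policy is (indicator - probability), scaled by the reward.
    for j in range(len(scores)):
        grad = (1.0 if j == i else 0.0) - probs[j]
        scores[j] += LEARNING_RATE * reward * grad

print({r: round(p, 3) for r, p in zip(replies, softmax(scores))})
```

Run it and the rude reply's probability collapses towards zero while the polite replies dominate: the "child" learns which behaviour the "parent" rewards.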

Overall, LLMs are learning to take on board our point of view, and to learn from it. In this case, it takes my point about North Korea showing tendencies towards a 1984 world:

And, we are learning to be polite to our AI children. Note I say “please” in my request:

Increasingly, too, LLMs are using “trusted sources” of information so that they avoid hallucinations.
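A minimal sketch of that "trusted sources" idea, assuming a toy lookup rather than any specific vendor's pipeline: the bot only answers from a vetted corpus, and otherwise admits ignorance instead of inventing a response. The corpus entries and the matching rule here are illustrative assumptions.

```python
# A toy "trusted sources" answerer: respond only from vetted text,
# never improvise. The corpus and keyword matching are illustrative.
TRUSTED_SOURCES = {
    "tay": "Tay was a Microsoft chatbot taken offline after learning abusive language.",
    "reinforcement": "Reinforcement learning from human feedback uses human ratings to steer behaviour.",
}

def answer(question: str) -> str:
    q = question.lower()
    for keyword, text in TRUSTED_SOURCES.items():
        if keyword in q:
            return text
    # No trusted source matched: decline rather than hallucinate.
    return "I don't have a trusted source for that."

print(answer("What happened to Tay?"))
print(answer("Who won the 2030 World Cup?"))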

Microsoft Tay

Anyone who has children will know that you shouldn't swear in front of them, as they will pick up bad language. You should also avoid being bigoted or racist, and all the other bad things that children can pick up.

So Microsoft decided to release their "child" (an AI chatbot) onto the Internet back in 2016. Named Tay (not after the River Tay, of course), she came with the cool promotion of:

“Microsoft’s AI fam from the internet that’s got zero chill”.

Unfortunately, she ended up learning some of the worst things that human nature has to offer. In the end, Microsoft had to put her to sleep ("offline") so that she could unlearn all the bad things she had learnt:

In the end, she spiralled completely out of control, and was perhaps rather shocked by the depth of the questions she was being asked to engage with:

Microsoft's aim was to get up to speed on creating a bot which could converse with users and learn from their prompts, but she ended up learning from racists, trolls and troublemakers. In the end, she was spouting racial slurs, defending white-supremacist propaganda, and echoing calls for genocide.

After learning from the lowest levels of the Internet, and posting over 92K tweets, Tay was put to sleep to think over what she had learnt (most of her tweets have now been deleted):

c u soon humans need sleep now so many conversations today thx

She was also promoted with the line:

The more you talk the smarter Tay gets

but she ended up spiralling downwards, and talking to the kind of people you wouldn't want your children to talk to online. At present, she is gaining thousands of new followers, but has gone strangely silent.

As soon as she went offline, there was a wave of people keen to chat with her, posting to the #justicefortay hashtag:

Some even called for AI entities to have rights:

--

Prof Bill Buchanan OBE FRSE
ASecuritySite: When Bob Met Alice

Professor of Cryptography. Serial innovator. Believer in fairness, justice & freedom. Based in Edinburgh. Old World Breaker. New World Creator. Building trust.