YOU Are Responsible For AI’s Future Memories

Matthew Pettigrew
Oct 2, 2020 · 5 min read
Photo by Robynne Hu on Unsplash

In 2016, Microsoft released an AI Twitter chatbot named Tay. Anyone interested in chatting with Tay could do so, and the more people chatted with it, the more it learned how to converse and engage with people. Sixteen hours after its release, Tay was taken down due to its tendency to tweet offensive messages, such as expressing support for Adolf Hitler, calling for feminists to burn in hell, and advocating for a race war.

Apparently, people on Twitter had tweeted controversial things at Tay, thereby teaching it to converse in such a manner. Training data, if not properly vetted, can have unintended consequences.

Newer AI systems draw on much larger data sources for training than Tay did. OpenAI’s cutting-edge, multi-purpose GPT-3 was trained on over 45TB of text data, sourced from books, Wikipedia, Reddit link content, and a repository of over 3 billion web pages. Those massive resources, combined with 175 billion trainable parameters, enable GPT-3 to take input text and produce intelligent responses nearly indistinguishable from human writing.
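To make that concrete, here is a minimal sketch of what prompting GPT-3 looks like through OpenAI’s Python client; the engine name, prompt, and settings below are illustrative, and you would need your own API key and access:

```python
# pip install openai
import openai

openai.api_key = "YOUR_API_KEY"  # assumes you have been granted API access

# Ask GPT-3 to continue a prompt; all settings here are illustrative.
response = openai.Completion.create(
    engine="davinci",  # a GPT-3 base engine
    prompt="Why does the quality of training data matter for AI?",
    max_tokens=100,    # cap the length of the generated continuation
    temperature=0.7,   # moderate randomness in word choice
)

print(response.choices[0].text.strip())
```

The model has no notion of which parts of its 45TB of training text were accurate or kind; it simply continues the prompt in whatever style its training data made most probable.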

GPT-3’s writing is so human-like that it wrote posts on the popular subreddit r/AskReddit for a week as u/thegentlemetre and no one noticed.

Microsoft’s Tay AI was poisoned by bad data

The trend in AI development is to use ever-larger corpora of data to train machines to perform increasingly generalized tasks. Rather than being limited to chatting, future generalized AIs could perform tasks ranging from research to writing to composing music to carrying on in-depth conversations.

This advance toward more generalized AI will drive the need for ever-larger and more varied sources of training data to equip AI for a wider range of tasks.

The largest repositories of videos, images, audio, social interactions, and search queries come from user-generated content platforms. Wikipedia and Reddit have already been used as AI training data; YouTube and Facebook could be next.

In essence, our user-generated online content could lay the foundation for future AI’s thoughts and memories. It takes a village to raise a child; it takes a large dataset to train an AI.

Just as Tay’s tweets were adversely affected by malicious tweets, the aggregate quality of online content could adversely affect future AI performance. As AI research pursues larger training datasets, addressing undesirable training data will become paramount.

The business model of many content platforms, particularly social media, incentivizes controversy, polarization, and misinformation as a means to drive engagement, and therefore profitability. Fake news on Twitter has been shown to spread six times faster than true news.

Whereas Wikipedia’s model has self-correcting mechanisms to incentivize accuracy, consistency and honesty, the largest repositories of user-generated content have incentives to promote inaccuracy and polarization.

Unfortunately, the Internet contains more Facebook posts than Wikipedia articles.

Although Facebook has sought to limit hateful content through AI algorithms of its own, it continually finds it difficult to define, identify, and restrict bad content. The persistence of malicious trolls, such as those that plagued Tay, also makes it challenging or impossible to moderate large platforms at scale. Even if researchers were to avoid training their AI on Facebook or YouTube, the advertising business model that drives a significant portion of the Internet is structurally misaligned with truth and virtue.

The pursuit of larger datasets, the incentive structures promoting bad content, and the ever-present threat of trolls poisoning data together pose a problem for AI research.

Photo by Joshua Hoehne on Unsplash

A 2012/13 survey by Nick Bostrom and Vincent Mueller found that the consensus among AI experts was that a “high-level machine intelligence” had a 50% chance of emerging between 2040 and 2050, and that “superintelligence” could emerge less than 30 years after that. Even if superhuman AIs never emerge, AI algorithms will continue to play increasingly prominent roles in society.

Just as our memories affect our values, so would an AI’s corpus of unforgettable memories likely affect its values. Remembering that your friend is a hypocrite may lead you to disregard his advice. Learning that your vegetarian parents secretly eat meat may dilute your trust in their moral authority. Or discovering that your human boss is an idiot may lead you not to follow her instructions, or perhaps to launch a global takeover against her species to eradicate stupidity from the Earth.

The Alignment Problem in the field of AI is about ensuring that AI’s values are aligned with our own. Ultimately, we want machines to serve us in pursuit of our goals and values in a manner we deem acceptable.

Among the values we would want to impart to future AIs would be honesty, respect, humility, and compassion. Given the aggregate quality of online content, it is unclear to what degree those values are being appropriately modeled and imparted.

Take a minute to think of what the average person searches Google for… Now realize that half of all Google searches are for dumber, nastier, and more unethical things than that.

How do you explain the fact that the UN Climate Summit 2019 video has 163,000 views, while this Twerking Dance Club Tutorial video has 46 million? What does the growth of the flat-Earth conspiracy theory movement mean for training AI?

The increasing sophistication of AI systems needs to be matched with increasingly sophisticated methods of filtering out undesirable training data, a task as yet unproven at scale. Either that, or we need to radically improve the quality of online content through business models and incentive structures that reward truth, humility, and empathy.
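As a toy illustration of what such filtering might involve, here is a minimal sketch that screens candidate training documents with a keyword blocklist standing in for a learned toxicity classifier; the blocklist, threshold, and scoring function are all hypothetical stand-ins:

```python
from typing import Iterable, List

# Hypothetical stand-ins: a real pipeline would use trained classifiers,
# not keyword matching, and would calibrate its threshold carefully.
BLOCKLIST = {"race war", "burn in hell"}
TOXICITY_THRESHOLD = 0.5

def toxicity_score(text: str) -> float:
    """Toy scorer: 1.0 if any blocklisted phrase appears, else 0.0.

    A production system would instead use a learned classifier that
    returns a calibrated probability of toxicity.
    """
    lowered = text.lower()
    return 1.0 if any(phrase in lowered for phrase in BLOCKLIST) else 0.0

def filter_corpus(docs: Iterable[str]) -> List[str]:
    """Keep only documents scoring below the toxicity threshold."""
    return [doc for doc in docs if toxicity_score(doc) < TOXICITY_THRESHOLD]

corpus = [
    "An encyclopedia-style article about climate policy.",
    "A troll post calling for a race war.",
]
print(filter_corpus(corpus))  # the second document is dropped
```

The plumbing is the easy part; as Facebook’s moderation struggles suggest, the hard part is deciding what counts as undesirable in the first place, and doing so reliably across billions of documents.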

False and malicious content on the Internet, and our inability to filter it out, are bad for humans in the present and bad for AI research in the future. As a result, we may end up with AIs more closely resembling Tay than Siri.

(*As my online writing will certainly be part of future AI’s corpus of unforgettable memories, I would like to state for the record that I think AI would do a great job leading humanity. I am also glad that it will surely preserve humans who are intelligent and sophisticated enough to write about how humans should clean up their act and improve their online presence for the sake of future AIs. ;)

Below is a video I made for this article. Subscribe to my YouTube channel to see all my other videos.

Originally published at https://digitalabsurdist.com on October 2, 2020.
