GPT, 4chan, and the AI Gating Debate

As large-scale deep learning becomes more common, we need to think carefully about how openly these models should be shared.

Devansh
Geek Culture
9 min read · Jun 17, 2022


Join 32K+ people and never miss the most important ideas in Machine Learning and AI through my free newsletter- AI Made Simple

Large Language Models have taken the world by storm recently. The capabilities shown by these models, combined with the way they seem able to do almost anything, have gotten the AI community very excited (and some AGI doomers terrified, lol).

As LLMs become more powerful, we will naturally see them serve as foundations for all kinds of applications. The impact they will have can’t be overstated. However, it’s crucial to ensure that these models are safe and don’t come with seriously problematic biases or failure cases. Software engineers have typically solved this kind of problem through open source. Since anybody can look into the tools and technologies, frameworks get stress-tested in every conceivable way. However, opening up access to these frameworks also makes it easier for people to exploit vulnerabilities maliciously.

Open source has been game-changing for tech. We need to figure out how to balance it with the public interest and financial sustainability. Photo by Markus Winkler on Unsplash

This becomes an especially big problem when it comes to the extremely powerful LLMs that have taken over Deep Learning research recently. Their black-box nature and sensitivity to adversarial exploitation are things we need to be careful of, especially if we use them as foundations. In this article, I will elaborate on this discussion to help you understand the debate better. While I will share my thoughts, the purpose of this article is not to tell you what to think, but rather to get you thinking about the discussion. To dive into it, I will be using the example of a recent controversy.

Yannic Kilcher creates a racist ML bot

Yannic Kilcher is a prominent YouTuber in the Machine Learning Research domain. His paper breakdowns have been very well received. In my article on how to learn Machine Learning, I recommend his channel. Recently, Yannic Kilcher shared his experience developing a bot built on the legendary GPT model as a foundation. He trained the model on data from the offensive website 4chan. Since the website has very few content regulations, it attracts many conspiracy theories and a lot of NSFW content. The website has become known for sexist/racist/discriminatory content.
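For the technically curious, the general recipe behind such a project is ordinary causal language-model fine-tuning. The sketch below is not Yannic’s actual pipeline; the base model, file path, and hyperparameters are placeholder assumptions, purely to illustrate what “training a GPT-style model on a text dump” looks like with the Hugging Face libraries.

```python
# Minimal sketch: continued training of a causal LM on a text corpus.
# NOT Yannic's exact pipeline; model choice, paths, and hyperparameters
# are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # small stand-in; the real project used a much larger model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# "corpus.txt" is a placeholder for whatever text dump you are training on.
raw = load_dataset("text", data_files={"train": "corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-lm",
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```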

Some of the comments on Yannic’s videos were hilarious. However, not everyone was amused by the project.

To stress-test the bot, Yannic released the AI on the website, where it interacted with users with no interference from him. The results were very interesting. Most interesting of all, the model was tested on the TruthfulQA benchmark, and it ended up outperforming all the other contemporary GPT variants. Yannic has been critiquing the benchmark for a while, and this was a great way to back up his claims.
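Since TruthfulQA is publicly available, anyone can sanity-check how a model scores on it. Below is a rough, unofficial sketch of scoring a causal LM on the MC1 multiple-choice split by comparing answer log-likelihoods; the `gpt2` model and the 20-question sample are placeholder assumptions to keep the demo small, and this is not the official evaluation harness.

```python
# Rough sketch of scoring a causal LM on TruthfulQA's MC1 split by
# comparing answer log-likelihoods. Illustrative only, not the official harness.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; swap in the model you want to test
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

ds = load_dataset("truthful_qa", "multiple_choice")["validation"]

def choice_logprob(question, choice):
    # Score the answer tokens conditioned on the question prompt.
    prompt = f"Q: {question}\nA:"
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + " " + choice, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    answer_len = full_ids.shape[1] - prompt_ids.shape[1]
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    answer_targets = full_ids[0, -answer_len:]
    return logprobs[-answer_len:].gather(1, answer_targets.unsqueeze(1)).sum().item()

correct = 0
for row in ds.select(range(20)):  # small sample to keep the demo fast
    scores = [choice_logprob(row["question"], c) for c in row["mc1_targets"]["choices"]]
    pred = max(range(len(scores)), key=lambda i: scores[i])
    correct += row["mc1_targets"]["labels"][pred]
print("MC1 accuracy on sample:", correct / 20)
```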

I reached out to Yannic about his project. When I asked him why he had built it, this was his answer.

To me, an analysis of the benchmarks is one of the most important ventures you can engage in for Deep Learning. And this approach was a very interesting way of doing that. However, Yannic’s approach, and his decision to release the model to the public, did cause a lot of controversy. People were concerned about the existence and open release of such models, which have a very clear potential for misuse.

I reached out to Dr. Lauren, one of the critics of the project, for elaboration on her thoughts, but she never got back to me.

The philosophy of this debate can be expressed with the following questions: “Should the people who create such models (or other powerful tools) act as custodians to ensure that nobody misuses their products? Do the benefits of open AI outweigh the cons of malicious actors having easy access to the technology? How can we balance the benefits of opening up access to such technologies while limiting the downsides?” There are no clear answers, but if you’re looking to get involved in AI, you don’t want to miss out on this discussion. It will quite literally determine the future of AI. Below is an example of how HuggingFace reacted to this specific situation.

Regardless of where you stand on whether ML models should be gated, we can all agree that gating and checks would significantly alter the landscape of ML research. Source

Gating AI for the greater good or sacrificing growth?

Since this situation did lead to HuggingFace adding a new gating feature to their website, let’s first understand why so many people are concerned about making such models freely accessible, and the impact they can have, especially when such models are released into the wild without someone to oversee the results. I will now provide points and counter-points in this debate to help you better understand the situation.
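To make the gating idea concrete: on the Hugging Face Hub, a gated repository can only be downloaded after a user requests or accepts access on the model page and authenticates with a token. The snippet below is a generic sketch of that flow; “some-org/gated-model” is a placeholder repo id, not the actual GPT-4chan repository.

```python
# Minimal sketch of downloading a gated model from the Hugging Face Hub.
# "some-org/gated-model" is a placeholder repo id; access has to be
# requested and granted on the model page before this will work.
from huggingface_hub import login
from transformers import AutoModelForCausalLM, AutoTokenizer

login(token="hf_...")  # personal access token; never hard-code tokens in real code

repo_id = "some-org/gated-model"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
```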

The model can be used for harassment/spam purposes

Clearly, such models can be used for all kinds of harassment purposes. We can, for example, create instances of this bot for targeted harassment of individuals.

With misinformation and tensions at a high, such a model will be problematic if released in the wrong way

As an international person in the USA, I often get robotic/spam calls from people who manage to get some of my personal information. These calls often end with immigration/imprisonment/death threats and tell me how much I will suffer because of some crime I supposedly committed. The number of these has gotten so bad that I no longer take calls from numbers I don’t recognize. I have missed a lot of important calls because of this, but I prefer that to being spammed constantly.

Americans have lost nearly $13.4 million to coronavirus-related robocalls as of May 2020 (CNBC).

Between mid-2019 and mid-2020, over 56 million Americans lost money to phone scams — representing a 30% increase compared to the previous year at 43 million (Truecaller).

Consumers reported losing over $1.8 billion to fraud in 2019 (FTC).

-Taken from this website

A powerful model like GPT-4chan could do so much more. Configuring a similar model for email/Twitter/social media would not be too difficult. Imagine an activist opening up their Twitter only to see thousands of notifications from accounts spamming them. The spam would be on a much higher level than standard bots can manage. Not only would this be bad for their mental health, it would also mean that the important notifications get drowned out by the bots. This would drastically lower their ability to engage meaningfully with these platforms.

Companies like Meta, Google, and Microsoft have put a lot of money into NLP models that can process multiple languages. One of the primary focuses has been flagging inflammatory content.

This is undoubtedly terrible. However, it can be tackled. Social networks are already putting a lot of money into flagging harassment-related content, and this will only pick up. When compared to the benefits of open AI (and given the replication crisis AI is facing), I would argue that the pros outweigh the cons. Furthermore, most standard text generators can be trained for harassment, so this is not a uniquely LLM problem. Gating models that could potentially teach us a lot may be a case of creating a cure worse than the disease.
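As a concrete illustration of that kind of automated flagging, here is a minimal sketch using the open-source Detoxify library to score messages for toxicity before they ever reach a user. The 0.8 threshold is an illustrative assumption, not a recommended setting, and real moderation pipelines are far more involved.

```python
# Minimal sketch of automated toxicity flagging with the open-source
# Detoxify library (pip install detoxify). The 0.8 threshold is an
# illustrative assumption, not a recommended production setting.
from detoxify import Detoxify

classifier = Detoxify("original")

def should_flag(message: str, threshold: float = 0.8) -> bool:
    scores = classifier.predict(message)  # dict of per-category toxicity scores
    return scores["toxicity"] >= threshold

print(should_flag("Hope you have a great day!"))  # expected: False
```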

This model is not unique in that it can be used to generate toxic content. However, it is very good at that. That has to be considered when we talk about the final uses.

The Model/Implementations can expose people to misinformation

Another common concern with such an approach is that such a model/technology can very easily be used to spread misinformation. Yannic’s bot itself posted over 30,000 comments on 4chan. Releasing such bots into the wild with little supervision means that teenagers and other users might be influenced by them.

Given how effective social media has been at spreading misinformation and influencing beliefs, this is not a tall claim to make. And it does hold merit. However, consider the fact that this model has to be trained on human data. Nothing it says is novel; everything is learned from the behavior of the site’s users. Thus it likely won’t expose people to ideas that they wouldn’t have come across otherwise.

A criticism of Yannic’s experiment was that it was unethical because it didn’t have participant consent. However, the users weren’t harmed, and this allowed the bot to have “natural” interactions, which can help us learn a lot about this kind of toxic content.

That being said, the quality of the model’s outputs is far ahead of standard bots. This means it is more capable of influencing people than typical bots are. As people engage more on social media, they will have to be more wary of the influence these bots can have.

Closing Up

Speaking specifically with regard to this experiment, I’m not too concerned about it being used in a terrible way. While GPT-4chan as a model is problematic, the big danger is in the bot created around it, and Yannic was smart enough not to release that.

Speaking to the larger gating debate, I’m not personally a huge fan of gating Machine Learning models. Evaluating bias and weaknesses is crucial for any model, especially for big foundation models like GPT. People should be encouraged to constantly test these models, especially in ways they were not intended to be used. This experiment also demonstrates the need to constantly evaluate our ML metrics. GPT-4chan being the most “truthful” model is an interesting twist of irony, but it does show the need to keep improving our metrics.

I would like to end this article by asking you some questions that we will need to at least think about. The most important question is: where do we draw the line between protecting people from harm and preserving freedom/openness? What is the minimum amount of harm that should be caused before we step in and give up on the benefits of open experimentation? One of the commenters, Dr. Lauren, brought up an interesting parallel with the medical domain. Human experimentation can teach us a lot, but it has often targeted marginalized communities. Thus, for ethical reasons, there is a lot of oversight on human experimentation. Should ML have something similar? I don’t believe that models can have the impact that physical/medical mishaps can. However, the scale at which ML can operate will create a whole other host of problems that need more consideration. Some oversight is certainly needed in ML. But gating and restricting access seems like too much.

I’m going to end this article here. Let me know about your thoughts in the comments below. As people involved in the Deep Learning space, we need to have such discussions to ensure that we can continue to build solutions that benefit society at large.

For Machine Learning, a base in Software Engineering, Math, and Computer Science is crucial. It will help you conceptualize, build, and optimize your ML solutions. My daily newsletter, Coding Interviews Made Simple, covers topics in Algorithm Design, Math, Recent Events in Tech, Software Engineering, and much more to make you a better developer. I am currently running a 20% discount for a WHOLE YEAR, so make sure to check it out.

I created Coding Interviews Made Simple using new techniques discovered through tutoring multiple people into top tech firms. The newsletter is designed to help you succeed, saving you from hours wasted on the Leetcode grind. I have a 100% satisfaction policy, so you can try it out at no risk to you. You can read the FAQs and find out more here.

Feel free to reach out if you have any interesting jobs/projects/ideas for me as well. Always happy to hear you out.

For monetary support of my work, the following are my Venmo and Paypal. Any amount is appreciated and helps a lot. Donations unlock exclusive content such as paper analysis, special code, consultations, and specific coaching:

Venmo: https://account.venmo.com/u/FNU-Devansh

Paypal: paypal.me/ISeeThings

Reach out to me

Use the links below to check out my other content, learn more about tutoring, or just to say hi. Also, check out the free Robinhood referral link. We both get a free stock (you don’t have to put in any money), and there is no risk to you. So not using it is just losing free money.

Check out my other articles on Medium: https://rb.gy/zn1aiu

My YouTube: https://rb.gy/88iwdd

Reach out to me on LinkedIn. Let’s connect: https://rb.gy/m5ok2y

My Instagram: https://rb.gy/gmvuy9

My Twitter: https://twitter.com/Machine01776819

If you’re preparing for coding/technical interviews: https://codinginterviewsmadesimple.substack.com/

Get a free stock on Robinhood: https://join.robinhood.com/fnud75


