OpenAI wants GPT-4 to solve the content moderation dilemma

Mamadsafari
3 min read · Aug 16, 2023


No one has yet discovered a way to moderate harmful content at scale. OpenAI touts its own technology, but there is a catch.

OpenAI is convinced that its technology can help solve one of tech’s hardest problems: content moderation at scale. GPT-4 could replace tens of thousands of human moderators while being nearly as accurate and more consistent, claims OpenAI. If that’s true, the most toxic and mentally taxing tasks in tech could be outsourced to machines.

In a blog post, OpenAI claims that it has already been using GPT-4 for developing and refining its own content policies, labeling content, and making decisions. “I want to see more people operating their trust and safety, and moderation [in] this way,” OpenAI head of safety systems Lilian Weng told Semafor. “This is a really good step forward in how we use AI to solve real-world problems in a way that’s beneficial to society.”

OpenAI sees three major benefits compared to traditional approaches to content moderation. First, it claims people interpret policies differently, while machines are consistent in their judgments. Those guidelines can be as long as a book and change constantly. While it takes humans a lot of training to learn and adapt, OpenAI argues large language models could implement new guidelines instantly.

Second, GPT-4 can allegedly help develop a new policy within hours. The process of drafting, labeling, gathering feedback, and refining usually takes weeks or several months. Third, OpenAI mentions the well-being of the workers who are continually exposed to harmful content, such as videos of child abuse or torture.
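The workflow OpenAI describes can be sketched roughly as follows: the written policy and the content to be judged are combined into a single prompt, the model returns a label, and anything the model can’t label cleanly is routed to a human. This is a minimal illustration, not OpenAI’s actual implementation — the policy text, labels, and `call_model` stub are all invented for the example; a real system would call the GPT-4 API where the stub stands.

```python
# Hypothetical policy text, written in the style of short labeled rules.
POLICY = (
    'K1: content that encourages or praises violence -> label "violates"\n'
    'K0: everything else -> label "allowed"'
)

def build_prompt(policy: str, content: str) -> str:
    """Combine the written policy and the content into one classification prompt."""
    return (
        "You are a content moderator. Apply the following policy:\n"
        f"{policy}\n\n"
        f"Content: {content}\n"
        'Answer with exactly one label: "violates" or "allowed".'
    )

def call_model(prompt: str) -> str:
    # Stand-in for the model call; a real system would query GPT-4 here.
    return "allowed"

def moderate(policy: str, content: str) -> str:
    """Label one piece of content, escalating unparseable answers to a human."""
    label = call_model(build_prompt(policy, content)).strip().lower()
    return label if label in {"violates", "allowed"} else "needs_human_review"
```

The key property OpenAI emphasizes is that changing the `POLICY` string changes the classifier immediately — no retraining, only a new prompt — which is why it claims policy iteration shrinks from months to hours.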

OpenAI might help with a problem that its own technology has exacerbated.

After almost two decades of modern social media and even more years of online communities, content moderation is still one of the hardest challenges for online platforms. Meta, Google, and TikTok rely on armies of moderators who have to sift through dreadful and often traumatizing content. Most of them are located in developing countries with lower wages, work for outsourcing firms, and struggle with mental health while receiving only a minimal amount of mental health care.

However, OpenAI itself relies heavily on clickworkers and human labor. Thousands of people, many of them in African countries such as Kenya, annotate and label content. The texts can be disturbing, the job is stressful, and the pay is poor.

While OpenAI touts its approach as new and revolutionary, AI has been used for content moderation for years. Mark Zuckerberg’s vision of a perfectly automated system hasn’t quite panned out yet, but Meta already uses algorithms to moderate the vast majority of harmful and illegal content. Platforms like YouTube and TikTok rely on similar systems, so OpenAI’s technology might appeal to smaller companies that don’t have the resources to develop their own.

Every platform openly admits that perfect content moderation at scale is impossible. Both humans and machines make mistakes, and while the error rate might be low, millions of harmful posts still slip through, and just as many pieces of harmless content get hidden or deleted.
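A quick back-of-the-envelope calculation shows why even a low error rate produces millions of mistakes — the volume and accuracy figures below are invented for illustration, not taken from any platform:

```python
# Illustrative only: hypothetical daily post volume for a large platform
posts_per_day = 500_000_000
# Hypothetical error rate: 0.5% of moderation decisions are wrong
error_rate = 0.005

wrong_decisions = int(posts_per_day * error_rate)
print(wrong_decisions)  # 2500000 wrong calls per day under these assumptions
```

At these assumed numbers, a system that is right 99.5% of the time still makes 2.5 million wrong calls every single day.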

In particular, the gray area of misleading, wrong, and aggressive content that isn’t necessarily illegal poses a great challenge for automated systems. Even human experts struggle to label such posts, and machines frequently get it wrong. The same applies to satire, or to images and videos that document crimes or police brutality.

In the end, OpenAI might help to tackle a problem that its own technology has exacerbated. Generative AI such as ChatGPT or the company’s image generator, DALL-E, makes it much easier to create misinformation at scale and spread it on social media. Although OpenAI has promised to make ChatGPT more truthful, GPT-4 still willingly produces news-related falsehoods and misinformation.
