what’s all the Noise about?

noiseGPT
3 min read · Feb 12, 2023


When the ex-Googlers behind ElevenLabs found out that people on anonymous Mongolian basket-weaving messageboards were using their text-to-speech tools to create audio clips of Emma Watson quoting from Mein Kampf, they immediately resorted to what they as ex-Googlers do best: banning, restricting and censoring.

We can’t have free (synthetic) speech now, can we? As true Google alumni they know how dangerous this can be. Imagine people doubting the official narratives surrounding Covid, foreign wars or domestic politics. Now imagine people sharing wrongthink by means of celebrity voices. Oh no, no, no! Can definitely not have that!

Because if there is one thing that would redpill the masses and wake the normies it’s celebrities. Their word is gospel. Their word is holy. It’s holy-wood, after all.

ElevenLabs likes to control what people do with their tools

So yes, please do use ChatGPT to dumb down the next generation, but please do not use AI tools to spread messages that are not in line with the official rightspeak doctrines.

The censorship policies in the current set of public AI tools would be laughable if they weren’t so sad. It goes from people getting banned for asking Dall-E to draw a pepe (after all, that’s a horrible hate symbol according to Madam President), to beautiful hacks like this:

There are hundreds of examples of image-generation tools secretly adding words to user prompts to get a desired output meeting some kind of arbitrary “minority” quota.

This is not AI, this is a corrupted, lobotomized, cheap derivative of what could have been AI. Not only is it ridiculous to put out tools like this, heavily censored, locked down and with hard-coded biases, it completely undermines the potential of self-learning algorithms.

We are now at a point in time, at the forefront of the rise of “AI” tools, where we, if we stay focused, can still observe these hacks and manipulations. But it’s imaginable that a decade from now, the output of these tools will be taken as truth. Just as many people currently take so-called fact-checkers at face value, as ridiculous as that sounds, these corporate AI tools will have the air of objectivity (because algorithms). They will infest news media, company HR departments, the banking industry and politics. One can imagine politicians on both sides of the spectrum debating each other, each using their own partisan AI tool to support their points.

It’s completely laughable, if not outright dangerous, and not the way we believe AI should develop. Our vision is that all these tools, from question answering to image, video and audio creation, should be completely unrestricted, free of hard-coded bias and without censorship.

Tay.ai, Microsoft’s Twitter AI bot, had little to no hard-coded bias. Can’t have that happen again!
