On AI, Ethics and the End of the Useful Internet

Julián Fernández
Published in BeeReal
Dec 18, 2023

The future is here…

We are already living in the future. The pace of technological advancement is exponential, and it washes over us relentlessly. From the everyday tech we keep in our pockets to humanity-saving endeavours like agriculture and even space exploration, no aspect of modern life remains untouched by AI. New, exciting developments happen daily. The rate at which we, as humans, are networked and plugged in is astounding, and would have been unimaginable as little as ten years ago. We are already living in the future, but the future has been unequally distributed.

In tech, the talk of the town for the past year has undoubtedly been AI. The term gets applied to actual artificial intelligence models, Large Language Models, Machine Learning algorithms, and more, but we’ll use “AI” as a convenient umbrella term. Proponents claim AI “democratizes” access to, well, everything: art, writing, programming, and knowledge in general. A modern-day Library of Alexandria, at least as the Library exists in the collective unconscious: an endless, public, democratic, and fair source of reliable knowledge. Reality is hardly this rosy.

“AI” is, above all, a tool made by humans. Humans are fallible, biased, and beautifully unique, bringing their own experiences, sensibilities, and outlooks into everything they touch. AI models are trained by humans on human-produced data, and they mirror their creators’ biases, which in turn reflect larger societal trends, not all of them positive. In 2018, Amazon was forced to scrap its AI resume-screening tool after it was revealed to be unfairly discriminating against women, because the resumes it had been trained on came mostly from men. “Predictive policing” tools are trained on arrest records, creating feedback loops of bias in which such tools overwhelmingly single out poor neighbourhoods as high-risk areas and flag Black people as more likely to re-offend, because the records already carry the biases of non-AI-assisted policing.

One of the promises of AI is to reduce busywork and allow people, whether workers, artists, or entrepreneurs, to pursue more fulfilling, creative, and ultimately useful tasks. Certainly, a lot of us in the IT sector have found this to be the case, with AI pair-programming tools like GitHub Copilot taking care of the tedium that inevitably accompanies some of our work. While we obviously don’t want to fall into a sort of Fourth Industrial Revolution Luddism, the fact is that unscrupulous media outlets, article factories, and even software companies can be misled into treating AI as a replacement for the boundless creativity of a human being. Such a mindset not only leads to lost jobs (with all the negative macroeconomic and social consequences that carries) but also makes things worse for everyone else. The clarion call of AI heralds the end of the useful Internet unless we, as programmers, artists, entrepreneurs, and perhaps even consumers, correctly identify the problem’s causes and work to make AI the useful tool it needs to be. We can’t, and shouldn’t, opt out of the AI-revolutionized world, but we can build a better tool.

The Internet is currently on its way to becoming unusable thanks to the unregulated, unscrupulous, and thoughtless use of AI. The previously reputable site Gizmodo recently published an AI-written article about Star Wars riddled with factual errors and glaring omissions. Gizmodo’s Spanish-language site, which previously employed Spanish translators, has fired its in-house staff and replaced them with machine-translation software, leaving the articles barely readable at best and outright inaccurate at worst. (Sidebar: as a late millennial, I grew up hearing my elders warn me “not to believe everything I read on the Internet”, so there’s a sad irony in now having to constantly debunk the AI-written fake news and AI-generated fake images my parents find online.)

Pinterest has fallen victim to this too. Its relationship with AI is two-pronged. On the one hand, it uses an AI model to fine-tune its recommendation algorithm, letting you easily add things you like to your mood boards. Fantastic. On the other hand, while Pinterest officially forbids AI-generated images from being uploaded, it’s not very good at actually enforcing the ban. A great recommendation algorithm cannot meaningfully counteract the deluge of, let’s say it, garbage that gets uploaded en masse, because someone can prompt, generate, and upload thousands of pictures in the time it takes an actual artist to produce a single piece of work. Finding art by actual human artists on Pinterest, Flickr, or the like becomes an exercise in pattern recognition. Like the characters in fairy tales of old when faced with the fae, whose visages were humanlike but not quite right, we are forced to play the game of spotting human features: do they have the right number of fingers and teeth? Do the joints bend in natural ways? We shouldn’t have to do this.

The word “Google” is so ubiquitous that it has become a verb. Google is a titanic ecosystem of apps, too large to even comprehend, helmed by the venerable Google search engine. For those too young to remember, Google was a website where you could type in search terms and arrive at reliable, accurate information in two clicks. Google was so confident in its search algorithm that it included a button labelled “I’m Feeling Lucky”, which skipped the results page entirely and took you straight to the first result for your search. So what happened? AI happened. Go ahead and search for, say, your favourite TV show’s upcoming season and see for yourself: you will be greeted (after scrolling past several ads) with at least five essentially identical articles, all written by ChatGPT or similar. Click on one of them if you dare, and count how many scrolls of your mouse wheel it takes to get past the useless SEO-optimized opening paragraphs and reach the actual information you’re looking for. Sigh as you realise it’ll likely be incorrect.

This SEO-farming race to the bottom is not simply a natural consequence of the evolution of the Internet and the way people interact with it. It’s not a force of nature we can only opt to weather and endure. Take a look at this Twitter thread by Jake Ward, a person who is, as best I can determine, a “Content Growth specialist”. He used AI in a process he dubs an “SEO heist” that “steals traffic” from competitors: by exporting a competitor’s sitemap and using AI to generate 1,800 articles’ worth of SEO-optimized slop in a matter of hours, he single-handedly helped clog up Google results for everyone. One person benefits, and everyone else is harmed as access to information becomes more difficult and less reliable. AI is so powerful that a single person can poison the water we all drink.

The consequence of the world’s premier search engine becoming a dispenser of machine-generated detritus is that users go looking for an alternative. Savvy searchers already append site:reddit.com or site:stackoverflow.com to their queries so they can arrive at peer-reviewed information and read an actual human being’s words on the subject at hand. The future is not the public agora of free discussion and information sharing, but the walled gardens of insular communities, beyond the know-how and willingness of most users to access. Informed discussion retreats into sub-optimal private spaces such as Discord servers, private forums, and the like. Tech support, art inspiration, product reviews, and thoughtful discussion (or, more concerningly, accurate journalism) can now only be found by deliberately seeking them out, an effort beyond what most users are willing to make.

As previously stated, AI is a tool. Like a hammer, a spinning jenny, or an automobile. It’s not inherently bad or useless. At its best, it frees human workers from drudgery and busywork; it takes our unique skills and creativity and helps them grow into the best they can be. AI can make difficult tasks easy (or even impossible tasks possible). At its worst, it displaces workers without providing an alternative or even a better product. As a programmer, I feel my role in this is both to educate and to get my hands dirty building tools that leverage AI for the benefit of all. The future is here, but it only exists because of, and for, people.
