The AI Witch Hunt: Are Writers Being Unfairly Accused?

Tohbie Adelaja
Published in ILLUMINATION

15 min read · Jul 8, 2024

How AI Detection Tools Are Misfiring and Mislabeling Genuine Talent!

Photo by Neeqolah Creative Works on Unsplash

It all started with a false positive! A friend of mine was wrongly accused of using AI to write his essay when, in fact, he wrote the whole thing beside me. He didn’t even use Grammarly; he simply wrote it up stylishly, putting it in words he thought were cool.

But those adjectives he used, which he thought were awesome, were actually words and phrases he had learned from reading tons of AI-generated content.

I’m glad he was given a chance to express his views and defend his work, and I’m even more pleased the editors rethought their stance. They saw through the low score given by the AI detector, which had flagged his work incorrectly, and decided to award him the recognition he deserved.

This incident got me thinking. If it happened to him, it can happen to me, and it can happen to anyone.

As much as these AI detection tools are doing a great job, they are not infallible. They provide a percentage that suggests the likelihood of AI involvement. However, these percentages are not always accurate, and false positives can occur.

According to recent studies, tools like Copyleaks, Turnitin, OpenAI’s AI Text Classifier, and GPTZero have varying accuracy rates, with false positive rates ranging from 5% to 15%. It all depends on the tool and the dataset used for testing. These inaccuracies highlight the need for a more nuanced approach to using AI detection tools in academic and professional settings.
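To make those percentages concrete, here is a minimal sketch (purely illustrative, using hypothetical essay counts) of how a 5%–15% false positive rate translates into wrongful accusations when every submission is genuinely human-written:

```python
# Illustrative only: hypothetical numbers showing why even a "good"
# detector produces many false accusations when most writers are human.
def expected_false_flags(num_human_essays: int, false_positive_rate: float) -> int:
    """Expected number of genuinely human essays wrongly flagged as AI."""
    return round(num_human_essays * false_positive_rate)

# With the 5%-15% false positive range cited above, a cohort of 200
# human-written essays could see anywhere from 10 to 30 wrongful flags.
low = expected_false_flags(200, 0.05)
high = expected_false_flags(200, 0.15)
print(low, high)  # 10 30
```

In other words, even at the optimistic end of the cited range, a detector applied at scale is guaranteed to accuse some honest writers, which is why a flag alone should never be treated as proof.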

Human Creativity and AI: A Delicate Balance

AI, I believe, is meant to take human creativity to a level of sophistication never achieved before. It is meant to create new areas of specialties and domain expertise.

The difference should be obvious between a generic writer and someone with domain expertise, but as the technology evolves, it’s becoming quite hard to tell the difference.

The conflict between AI-generated content and human creations

This battle between AI and humans is already underway and it will only get more complicated as we proceed. Right now, various engines have been developed to detect not just AI-generated content, but also AI-generated HTML, code, pictures, and more.

Copyleaks is perhaps one of the best tools developed for this job, and judging by the reviews, it does well at distinguishing AI-generated text from human-written text. But in the next 5 to 10 years, when false positives reach a crescendo, will the algorithms have learned enough to improve their detection?

The problem with reading tons of AI-generated Content

Well, the AI detection tools for pictures and code may be an exception. But the problem is this: as writers, we are continually influenced by the words we read. If we read lots of AI-generated content, we may assimilate some of its preferred adjectives, distinctive word styles, and vocabulary, and there is a tendency for it to come out in our writing as well.

Yes, it won’t be intentional; it will be subtle and subconscious. Some scholars will contend that this is not how a human writes, but it’s wise never to underestimate the capacity of the human mind to learn from what it is constantly exposed to.

Judging by the volume of AI-generated content now in circulation on social media, news blogs, and popular websites, a new generation of children may well grow up reading tons of AI-generated content.

With constant exposure, they will assimilate the styles and peculiar word choices of AI chat engines and come to think that is simply how written words are formed.

Unlike the present generation, they will be born into an age of AI-saturated content and will fail to spot the subtle differences. This way, they will pen their humanly composed words like an AI, or borrow key phrases just as my friend did, thinking it’s cool, only to get flagged.

Why AI Detection Tools Are Failing and What It Means for Writers Everywhere

Photo by Steve Johnson on Unsplash

The problem of written works being mistakenly flagged as AI-generated is particularly troubling for young writers just starting out. Scientists describe the human mind of an infant as a tabula rasa. Hence, a young, avid reader with a blank slate may absorb and internalize the styles, key phrases, and sentence constructions prevalent in AI-generated content, blurring the lines between the two ever further.

This phenomenon can be likened to training: just as being taught by a chatbot might lead one to write like a robot, being taught by a human shapes one to speak and write like a human.

Let’s consider the case of Marina Chapman, who was kidnapped as a young child and left in the Colombian jungle. She survived by imitating the behaviors of capuchin monkeys, learning to forage, climb, and communicate in ways similar to her primate companions. Marina’s story, later detailed in her autobiography, recounts how she had to relearn how to speak and reintegrate into human society. It illustrates the profound impact of environmental learning, revealing that our surroundings and experiences significantly shape our abilities and modes of expression.

So how do we regulate this effect and get back on track?

The challenge for educators and industry professionals will be to discern and value genuine human creativity amidst the growing influence of AI. In fact, the future of the writing industry may require a balance between embracing technological advancements and preserving the distinctiveness of human expression. Encouraging young writers to explore diverse literary styles and engage with a broad range of human-authored works will be crucial in this endeavor.

As we navigate the integration of AI into creative fields, it is equally essential to address ethical considerations and develop effective regulatory frameworks. Beyond flagging content, AI-generated content raises questions about intellectual property, the potential for reinforcing biases, and the ethical use of AI tools.

Regulatory strategies should include transparent disclosure of AI involvement in content creation, guidelines for ethical AI use, and mechanisms for addressing false positives in AI detection.

The positive contributions

Despite the problems associated with AI, it is useful in many ways. AI is being adopted across multiple fields because of its potential to enhance human creativity. For instance, AI can assist writers by helping them brainstorm ideas, generate outlines, and even suggest stylistic structures.

In the realm of music, AI tools can assist in composition, providing musicians with innovative ways to experiment with sound. Similarly, in visual arts, AI-generated art is gaining recognition, with artists using AI to explore new forms of expression. This collaboration between AI and humans can lead to more sophisticated and creative outcomes.

Historical Resistance to Technological Change

But much is left to be done. The question we should ask is: why are we (humans) often resistant to technology?

Throughout history, technological innovations have often provoked strong reactions from those whose livelihoods were threatened. Let’s go back in time to draw some parallels; maybe we can unlock fresh insights.

The calculator

Photo by StellrWeb on Unsplash

In the mid-20th century, when electronic calculators became widely used by mathematicians, a similar problem arose. Prominent statisticians, the likes of Professor John Tukey, expressed skepticism. His view? Tukey argued that calculators would lead to a decline in mathematical insight and creativity. And he wasn’t alone; many of his peers shared the same sentiments.

They concluded that calculators would degrade their mental arithmetic skills and, worse, undermine the purity of mathematical practice. However, as calculators became more prevalent and their capabilities improved, attitudes began to shift.

This shift took place not out of theoretical consideration but due to practical necessity in fields like engineering, physics, and economics. These were fields where precise calculation was crucial. One concrete example was the adoption of calculators in the aerospace industry.

During the 1960s and 1970s, engineers at NASA and other space agencies had to rely heavily on electronic calculators to perform intricate calculations for space missions, which included trajectory calculations, orbital mechanics, and propulsion system designs.

Some of these calculations were far too complex and time-consuming to be done manually, and the accuracy of the calculations was essential for the success of space missions.

Hence, these calculators, initially viewed with skepticism, eventually found their place in classrooms and research labs as tools that enhanced rather than replaced mathematical skills.

Today, calculators and computer algebra systems are integral to mathematical education at all levels, from elementary schools to advanced research institutions. But were calculators the only time technology faced resistance? Let’s look at another seemingly offensive technology that got shut down repeatedly.

The sewing machine

Photo by Omar Alrawi on Unsplash

Way back in the 19th century, when the sewing machine was invented, skilled seamstresses and tailors in Europe and North America viewed the sewing machine as a threat to their expertise. They believed that machine-made garments would lack the quality and attention to detail that handmade clothing offered. This sentiment was particularly strong among guilds and trade organizations that sought to protect their traditional methods and craftsmanship.

Resistance from Unions and Guilds

Organizations such as the Tailors’ Union in London basically campaigned against the adoption of sewing machines in garment production. Their argument was that the sewing machines would lead to unemployment among skilled workers and lower wages due to increased competition from mass-produced clothing.

Are you drawing the parallels? These workers feared severe economic disruption (just as we do with emerging trends in AI and automation). Reports suggest that disgruntled workers, who feared their jobs would be gone, vandalized equipment and damaged factories that produced the machines.

But despite this initial resistance, the sewing machine proved immensely useful. Its ability to mass-produce clothing at a fraction of the time and cost of hand-sewing helped it gain traction.

Worst fear materializing

Singer persevered and expanded his operations. He formed the Singer Manufacturing Company, which became one of the largest producers of sewing machines globally. Over time, the efficiency gains brought by sewing machines led to:

  1. Lower production costs (exactly what the skeptics were afraid of),
  2. Increased output in the textile industry, and
  3. The transformation of the garment industry.

Now fast forward to our world: who the hell sews clothes professionally without a sewing machine? You can also ask, who the hell operates a dry cleaning business without a washing machine? The list goes on.

You see, this story of Isaac Singer and his sewing machine illustrates how technological innovations can provoke strong reactions from those whose livelihoods are threatened. It basically underscores the disruptive impact of new technologies in reshaping industries and economies, despite initial resistance from established interests.

Now let’s move on to a similar resistance to technological change, coming from a different time, context, and geographical place.

The Luddites

Photo by Crystal Kwok on Unsplash

In the early 19th century, England was in the throes of the Industrial Revolution, a period marked by the rapid mechanization of manufacturing processes. Workers, known as Luddites, became emblematic of resistance. Between 1811 and 1816, Luddites expressed their discontent through a series of protests and acts of sabotage. Why? They feared that the introduction of power looms and machinery would undermine their skilled craftsmanship.

The raids and destruction of machinery went on for several years. These Luddites believed the innovations would lead to mass unemployment and lower wages, jeopardizing their livelihoods and traditional way of life. However, the British government responded with harsh measures: it deployed military forces and implemented legislation that criminalized their actions, most famously the Frame-Breaking Act of 1812. The result? Technology persisted and prevailed.

Oh, so I hear someone shouting, “Why are you bringing up the past, these cases are different! AI adoption is not going to take place under my watch! I’m the publisher of this journal! Never in my time!” So let’s look at a fairly recent event.

The Feud between the traditional Taxi Drivers and Uber

Photo by Viktor Avdeev on Unsplash

Around fifteen years ago, the transportation industry witnessed a seismic shift with the emergence of ride-sharing apps. Traditional taxi drivers and local companies vehemently opposed these services, seeing them as a threat to their livelihoods.

This was due to the competitive pricing and flexible work arrangements offered by the new car companies, Uber and Lyft.

Uber, in fact, had a unique business idea; they weren’t trying to claim ownership of the cars. Instead, they were willing to leverage existing private vehicles and their owners as independent contractors to provide on-demand transportation services.

This model was going to allow Uber to rapidly scale its operations without the burden of owning a fleet, while also providing flexibility and income opportunities to individual drivers. But it wasn’t going to be business as usual.

Widespread Protests and Legal Battles

The launch of Uber in 2009 and Lyft in 2012 led to legal battles between traditional taxi operators and the disruptive new entrants. Widespread protests by traditional taxi operators broke out in various cities around the world.

How did it happen? Taxi drivers in London basically went haywire, protesting against Uber’s operations. This led to some fierce legal battles over regulatory issues and fair competition.

In Paris, taxi drivers staged protests against Uber, citing unfair competition and regulatory concerns, and in New York City, similar protests occurred. Are you drawing the parallels?

Automation

Photo by Possessed Photography on Unsplash

The automation of manufacturing processes, driven by advancements in robotics and computer-controlled machinery, has always been an issue of contention as far back as I can recall. The introduction of robots and automated assembly lines in the 1960s and 1970s led to fierce debates over the future of employment and income inequality. Many labor unions advocated for protections and retraining programs for affected workers.

However, this shift also raised fears of widespread job displacement and the erosion of skilled labor. And in some sectors, those fears were indeed confirmed.

As you can see, resistance is a recurring theme throughout history. In the book “Suppressed Inventions,” the author highlights several scientific advances that were shut down because they were too bold and bright, threatening to topple whole sectors and industries.

As contemporary writers express concerns about AI-generated content, they hold valid opinions about how these technologies could potentially devalue their work, leading to changes in employment patterns, and the perceived quality of written work. There is, however, a lesson to learn from history.

Would AI tools really replace human creativity and expertise in writing, editing, and content creation? Or would it advance it to a new level of sophistication?

Personal Experiences

A few years ago, when AI was introduced for web design, I tested a tool that created a website based on my business idea, logo description, and vision statement. In just five minutes, my website was up, but when I looked at the web pages, it was all too generic and all too common, so I handed it to a low-code expert to customize.

Unbelievably, it took her two weeks, working around her 9-to-5, to customize it to fit what we had in mind. Now, does the fact that the website was initially AI-generated erase her hard work?

The same goes for an editor who generates a movie script with AI and then edits it over the course of several weeks.

But our greatest fear is probably not for those who rework the script and rework their outlines.

Our greatest fear is that a few people will capitalize on the AI edge to game the system

Our greatest fear is that the more deserving writers, who learned the trade over the course of decades, will be ignored and sidelined in favor of those with little expertise or knowledge.

Most seasoned writers have valid concerns that AI tools will be exploited by a select few to manipulate rankings, publication opportunities, and literary awards.

Some writers worry that AI-generated content will flood the market, saturating readers with quantity rather than quality and thereby overshadowing the craftsmanship of skilled writers.

We all have anxiety that AI’s ability to mimic human creativity will lead to a decline in recognition and appreciation for the depth and originality of human-authored works.

And you know the developers? They’ve got these hidden fears that AI would perpetuate biases present in training data, which would potentially amplify disparities in representation, perspective, and narrative diversity.

But while the use of AI often favors formulaic, predictable narratives at the expense of daring experimentation and unconventional storytelling, it’s important to point out a subtle distinction. The uninitiated often think that working with AI gives instant results. Yes, instant results if you want to be generic and sound like everyone else.

But it takes a level of specialty to work with AI to bring out a peculiar masterpiece. It takes a level of domain expertise to craft a unique prompt and wield an input such that an engine produces an output that is unique, distinct, and authentic.

The wild claims

There’s a claim that the most sophisticated writers on the planet now use AI to brainstorm and craft their outlines. In fact, some writers swear they have been able to channel the spirit of Shakespeare and Hemingway through AI. According to Writaholics, some writers are allegedly using AI to co-author books, resurrecting historical authors to complete unfinished manuscripts and settle literary debates from beyond the grave.

Does that scare you?

Well, it scares me as well. With AI, some writers are creating plot twists so unpredictable that even Agatha Christie would have been surprised.

Giving you jitters? We live in a world where AI now sings and talks, and as time goes on, AI will materialize in our theaters, spaces we’ve reserved solely for humans.

But despite our reservations, history has demonstrated that new technologies often lead to new opportunities, increased efficiency, and economic growth. Hence understanding and managing the transition to AI involves balancing technological progress with considerations for human welfare.

This dialogue is crucial so we can navigate the impacts of AI on employment and ensure that these advancements benefit both the enterprise and society as a whole.

While the reactions we see today might not be as violent as those against the sewing machines, writers who toil at their craft, editing all night through several months and working with AI-generated outlines and drafts to fine-tune their work, often face harsh judgment from those in the publishing industry.

The Future of AI and Human Creativity

But the future lies in synergy, not separation of tools. Just as mathematicians learned to embrace calculators and Excel functions to make their lives easier, we must begin to embrace these AI tools and stop viewing them as the death of our trade.

AIs are not meant to steal your jobs; they are meant to create new specialties and take your creativity to an advanced level. They are designed to bring new levels of sophistication and expertise. Yes, the threat of uncurbed AI is real, with concerns about privacy, security, and job displacement. However, the possibilities are endless.

I’ve read several brilliant articles about humanity preparing for a future without jobs, a future where AI does most of the work, but I doubt if that future is coming. When we automated alarms in telecommunications, some said we were stealing jobs from NOC officers who manually escalated issues. The reality was a shift in work. NOC officers now had more to do, more reporting and paperwork, and had to investigate the root causes.

So as we draw the curtains, it is important to see the many sides of this battle: the feud between the no-code revolution and the code experts. Across other industries, it shows up in healthcare professionals skeptical about AI diagnostics versus the AI developers promising faster, more precise diagnoses. It is expressed in legal experts debating AI’s role in legal research versus proponents of AI championing streamlined legal processes and increased access to justice.

These perspectives reflect the ongoing debate and contrasting views on the integration of AI across different sectors, which highlights concerns about disruption, ethics, job displacement, and the potential benefits of this technological advancement. These battles reflect a common theme: the evolution of work and the need for humans to adapt and embrace technological advancements.

Key Takeaway

1. Embrace the inevitable and prepare for a work shift; what is coming may turn out more powerful than expected.

2. Rather than think of a future without jobs, prepare yourself to function in new frontiers.

3. Recognize that AI is fast becoming an integral part of many industries. Embrace this shift to expand your horizon.

As we look ahead, it’s important that AI’s evolution is guided by principles of transparency, fairness, and accountability. It’s becoming increasingly necessary to future-proof our careers by focusing on skills less likely to be replaced or automated.

Cheers to all those who believe in a future where ethical AI use amplifies human potential!
