Fake or Fine Work? To AI or Not to AI
If you’ve been on social media recently — in my case, Threads — you’ve probably noticed this type of post spreading like swarms of bees over a field of flowers.
YES, that is supposedly me. It’s the result of jumping on the “Make your own stickers” trend. With a few extra steps, anyone can excitedly embed these cute visual attachments in their communication channels and parade them around in public.
Wait, you’re telling me you’ve never heard of it? And now you’re curious to try it!
Feel free to copy and paste this super simple Prompt (just as I merely copy-pasted it from the first post I saw) into OpenAI’s ChatGPT.
“Create a 3D kawaii a 10:16 canvas featuring nine chibi-style stickers in various outfits, poses, and expressions. Use the attached image for reference. Each sticker has a white border and includes a speech bubble with regular use phrases. Set on a soft white-to-pastel blue gradient background for a fun, positive vibe, perfect for WhatsApp use.”
So far, I’ve tried this on ChatGPT, Meta AI, and Google Gemini. ChatGPT, with the most up-to-date engine, seems to deliver better, if not outright accurate, results. Meta AI randomly made up something I didn’t even want. Google Gemini? It simply balked at me, as usual, saying, “I am sorry, but I am unable to create images. I cannot fulfill your request to create a 3D kawaii image.” Hahaha.
What’s that, you say? Microsoft Copilot?
Yeah, I tried that, too. But Microsoft Copilot instantly blurred out my face as if it were something NSFW. And then it went on to CORRECT THE GRAMMAR of the Prompt instead. No. I’m NOT even kidding.
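(A side note for the more technically inclined: if you’d rather skip the chat window entirely, something like the sketch below should do roughly the same job through OpenAI’s Python SDK and its image-editing endpoint. The model name, file names, and output size here are my own assumptions, so treat it as a rough starting point rather than gospel, and check the current documentation before running it.)

```python
# Rough sketch: feed a reference selfie plus the sticker Prompt to OpenAI's
# image-editing endpoint. Assumes the official `openai` Python package and an
# OPENAI_API_KEY in the environment; model name, file names, and size are assumptions.
import base64
from openai import OpenAI

client = OpenAI()

prompt = (
    "Create a 3D kawaii a 10:16 canvas featuring nine chibi-style stickers "
    "in various outfits, poses, and expressions. Use the attached image for "
    "reference. Each sticker has a white border and includes a speech bubble "
    "with regular use phrases. Set on a soft white-to-pastel blue gradient "
    "background for a fun, positive vibe, perfect for WhatsApp use."
)

with open("my_selfie.png", "rb") as selfie:   # hypothetical reference photo
    result = client.images.edit(
        model="gpt-image-1",                  # assumed image model name
        image=selfie,
        prompt=prompt,
        size="1024x1536",                     # closest portrait size to a 10:16 canvas
    )

# The endpoint returns base64-encoded image data for this model.
with open("stickers.png", "wb") as out:
    out.write(base64.b64decode(result.data[0].b64_json))
```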
Anyway, the point is, it was a fun stunt. Useful, even! It adds a bit of lighthearted fun to communication with family or friends. But the question it raises is right there in this writing’s title: can this be considered FINE work or, instead, FAKE work?
…
This is just one simple sample of how Artificial Intelligence has altered the world in recent years.
Admittedly, I always hesitated to use AI before. I think it was in 2023 that a mutual acquaintance on social media expressed concern over AI taking over people’s work. I assured my acquaintance that no matter how advanced technology grows, it neither would nor could replicate humanity, nor the heart and soul one can infuse into a work. The human touch!
I carried on my days, holding on to that belief. But then last year, in February, one of my pen pals persuaded me to try the service. He and his partner used ChatGPT to assist with their gaming and work. My pen pal said something along the lines of, “Hey, why not turn ChatGPT into your invisible assistant?”
Driven by sheer curiosity, I jumped headfirst into the service. Was it helpful with my work, as my pen pal proposed? Well, not quite. It did help me produce the content I wanted much faster. Even so, the content was messy and needed to be verified for consistency with older, tangible data, which required manual checks: actual examination and literally asking around. That’s a DUD.
I would be lying if I said I stopped using AI after that disappointing twist. Quite the contrary, I found an effective use for it. What’s that? Helping me “subtitle” my letters in another language!
I’m referring to my pen pal correspondence. Before, my pen pals and I exchanged letters in English, regardless of our native languages. Now, my letters are delivered in a bilingual format! The main reply remains written in English. But I also include an AI-assisted translation in my pen pal’s native language and script. This way, my pen pal does not need to translate my letter upon receiving it, as I’ve already done the task for them. Not all pen pals approve of this method (they share the same hesitance I used to have). But for the rest, our correspondence has become more convenient.
…
Hold your horses!
Are you suggesting or assuming that means I’m asking AI to write THIS essay that you’re reading, too?
Well, what do you think? Hahaha.
The answer is NO. A definite 100% NO. Even though I use AI, I will not accept such a cheat, as the result would take away the individual character and style of my writing. Besides, as many (here on Medium, too) have pointed out, AI services have specific patterns and vocabularies they keep reusing. Trust me, the more time you spend using ChatGPT or Google Gemini, the easier it becomes to spot work created with one. “Ethereal” or “Tapestry,” anyone? Hahaha.
But I’ll be frank. I do ask AI to review my writing professionally. I prompt it to offer suggestions for improvements I may consider and implement in my next piece. AI can be instructed to respond neutrally when we ask it to examine our text-based work. More importantly, it can analyze our creative results, compare them to other online resources, and pinpoint our strengths and weaknesses.
Yes, I can, and I do, ask for other people’s opinions, too. Then again, whoever I consult, the response will always come with a biased or subjective preference. Every human being carries a set of subjectivities embedded internally, for better or worse. Friends or family who already know me? They are used to my writing style and may hold back blunt criticism out of compassion. A random stranger may be more objective. But will they provide constructive criticism if they don’t know me and, as such, couldn’t care less about my growth?
Despite having its own biased tendencies, AI has no will or conscience. Hence, it can bypass that subjectivity angle. Best of all, the response is produced in minutes.
Okay, you’re still NOT convinced that this blog post isn’t AI-generated?
In that case, I told Google Gemini to read my previous posts and write a paragraph about “apples.” Here’s the generated result:
“Apples. They’re just there. In bowls, on counters, in those plastic bags that always seem too big. You buy them. Thinking, ‘healthy.’ You bite. It’s fine. Sometimes, it’s a little mealy. Sometimes, it’s surprisingly crisp. The skin sticks in your teeth. You tell yourself it’s fiber. You consider making a pie. Then, you remember you don’t actually like making pies. More dishes. More time. Time you don’t have. You eat another apple. Because it’s there. Because it’s something. Because, well, fruit. You throw the core away. It hits the bottom of the trash with a dull thud. Another thing done. Or, not done. Just…processed.”
So, what do you think? Did Google Gemini’s generated essay resemble this writing?
The vibe feels different, somehow. Would you agree? And, seriously, a word like “crisp” or a phrase like “dull thud” may not even pop up naturally in my head. That part alone is enough to tell me, “NOPE, that’s NOT me.”
Besides acting as a reviewer, AI has lately become a regular part of the videos posted on my YouTube channel. As you may have noticed, I write/type A LOT. ChatGPT helps shorten the video descriptions I’ve written to fit the length limits. Meanwhile, Google Gemini helps produce visuals (like the one I included above). My YouTube page is a “Music Diary,” a song cover channel. That means the AI images I generate merely depict a visual I feel can represent the song choice.
My reason for doing so is that AI services usually do not (maybe they can, but I don’t know how) generate a carbon-copy result from the same Prompt. That makes each visual “unique” in a way. To prevent the AI from stealing the likeness of a random person, I generally instruct it to hide the person’s face. By adding that touch, hopefully the visual does not violate anyone’s privacy or copyright, since the content and elements inside those images are not and never were real.
…
Copyright, huh? That’s where things get a little murky and complicated, though. Because when it comes to copyright, the fact of the matter is that AI has become, and still IS, a genuine concern.
There’s a valid reason why many people have openly sounded the alarm about not letting AI “steal” their jobs. That well-known 148-day WGA strike, followed by the 118-day SAG-AFTRA strike in 2023? Both included AI on their lists of demands, citing how it could one day replace the creative presence of writers and actors. Research and analysis on the copyright infringement risks AI unleashes are easy to find. For example, you can watch this insightfully alarming video posted by CNA Insider or the thorough reporting by 60 Minutes.
Think of it this way. Generative AI is called that for a reason. It doesn’t generate things out of thin air. Instead, generative AI uses the vast collection of information circulating in cyberspace. It studies and learns from the blog posts, journals, photography snaps, artwork, and designs that people HAVE uploaded, ARE uploading, and WILL upload on social media. We are TOO accustomed to sharing things so casually, giving away the privacy of those materials in mere minutes. Therefore, ANY AI service will have access to them for its learning process.
Hold on. Does that mean I’ve been feeding AI data surrounding my likeness by uploading my photos to create those Stickers? Am I teaching them to write like me, simply by posting on my blog?
The answer to that rhetorical question is, unfortunately, YES. Well, to be fair, I know FULL WELL that putting my blog out there means exposing myself to the public. In a way, it IS a desire of mine to share my opinions or rants with the world.
But I’m not gonna lie. The thought that someone, somewhere out there, has just generated an essay based on my writing style for their school homework, or created an imaginary image of someone who looks a lot like me in real life? It does feel creepy.
Furthermore, it would take them only a few minutes to pull off, using several jumbled sentences that don’t even need to be grammatically accurate (remember the Prompt that Microsoft Copilot fixed above?). AI services CAN do that. The result may look super FINE. But no matter how we see it, it’s FAKE and not real.
Just take a look at this example below.
I played around on Google Gemini to create that image, describing the details of the scene and the person, including the precise clothing the person wore. After repeated attempts, a better result eventually came out. If I showed the photo above to my family members? They would likely wonder, “When did you travel to the beach? Where was it?” It’s almost a spitting image of how I look daily!
Where did I make that image? With my phone, lazing on my bed, wearing none of those clothes.
…
One day, when AI has reached its higher form, it may even replace the need for certain people altogether. Mythic Quest poked fun at this in the second episode of its fourth and final season. The show’s two leads created AI bots of themselves to work with one another while the real people focused on their private time elsewhere.
Meanwhile, a year ago, Abbott Elementary’s third season included a similar AI twist in its ninth episode. But while it was an enjoyable episode, I agree with the sentiment that it failed to touch on, or at least reflect, the bigger situation. AI usage has been an ongoing, or should I add, dilemmatic, topic of discussion in the academic field.
As a former educator, I knew firsthand the creative lengths students would go to in order to make their learning “easier.” According to friends who are still actively teaching, those methods have persisted, if not evolved, in modern times. Thus, I can relate to the concerns discussed in the article. Having a handy AI that’s mere button clicks and Prompts away will make any school assignment easier. Will that reduce the students’ capacity for creative thinking?
Some schools have taken serious action on this. Heck, countries like Singapore and Australia have even gone deeper, to the roots. While AI might not be their primary target, the ease of online access would inevitably lead to AI usage. I agree with their approach, allowing modern kids to spend their childhood beyond the shackles of virtual confinement.
Yet, on the other side of the equation, you have probably sensed a growing market for AI courses. This CNET article, teaching us to be better and more efficient at using AI services, is a good example. As I was writing the original form of this post, YouTube shoved ads for several AI learning centers in my face. I wasn’t even looking for them.
AI is everywhere. To state otherwise would be massive misinformation or dumb denial. We’ve heard how Pokémon GO has been accused of making players train Niantic’s Geospatial AI. As seen above, Duolingo now employs AI to enhance practice sessions through its paid subscription tier, Duolingo Max. Do you know the free, online Digital Audio Workstation BandLab? It has a handy AI feature called Voice Cleaner that lets you record vocals even in the noisiest environment.
My point is, AI can help people do A LOT of things that would otherwise take many long years to complete manually, such as upscaling a movie to 16K resolution, even if the result may end up as an uncanny-valley experience.
Hmmm…
So, how should we approach this whole AI situation? Is it really beneficial for us, or should its dark, terrifying side scare us instead? That’s a question that will be answered differently, depending on our subjective stance on the matter.
In reality, ANYTHING in this world, even an object as simple as a pebble on the road, can be used for good or bad. A pebble can be stacked to build a sturdy wall or used as a weapon to attack smaller living beings. Don’t forget the Biblical story of David using pebbles to defeat Goliath and bring victory to his people. Meanwhile, stepping on a random rock on the street could make us lose our balance, fall, and even fracture a femur. It truly depends on HOW we utilize them: for benevolent business or malevolent means.
The same applies to AI. No matter the ginormous level of processing, it is still very much a human invention. What if we employ AI to assist or streamline our work WITHOUT ever losing our distinct signature footprint? Then I would argue that the AI-assisted result CAN be considered FINE work. Still, human beings being human, such a mindset isn’t always the case, huh? People end up exploiting AI, like students skipping the work and letting it complete their homework, without maintaining any sense of their personality in the results. That’s the aspect I’m not too fond of, honestly. With proper management, even an AI-assisted result CAN still retain our unique human touches.
A brush in the hands of a regular laborer is used to paint walls. But in the hands of an artist, it can create a masterpiece that may sell for millions. Maybe we should consider AI the same way. Instead of treating it as a tool to do our dirty work, it should be optimized to enhance or enrich our work. As a well-known tech inventor once said, “It’s not the tools that you have faith in — tools are just tools. They work, or they don’t work. It’s people you have faith in or not.”
Lastly, I agree with Professor Graham Morehead’s wise words of advice in his WIRED “AI Support” video.
“You gotta think about AI as your expert friend who knows a lot, but has some bad misconceptions. So you don’t trust everything. If I were you, I would use it a lot and trust it very little.”
I encourage you to watch that video, as Prof. Morehead talks casually and in simple terms about everything AI. If you’re like me and occasionally think that AI is “dumb” (the lack of consistency I noted above) or oddly biased? There IS an answer to that! Prof. Morehead also covers the origins of AI, the ways its thinking processes are FINE-tuned, the difference from Machine Learning, how to spot its FAKE-ness, what it logically lacks, and much more, including the potential, if not the possibility, for even more advanced future applications.
With that, it feels apt to end this post with another well-crafted statement from Prof. Morehead.
“It’s gonna be harder and harder to avoid AI. AI is everywhere…
You don’t have to use AI, you can live your life and write your own essays, do your own homework. And I encourage you to do that.”
Yes, I WILL, Professor! Most definitely.
And, would you look at that! I didn’t even need AI’s help to respond…
Originally published at https://f-cons.blogspot.com.