Artificial Color. Artificial Flavor. Artificial Intelligence.

LisaBeth Weber
Feb 9, 2023

Let’s Chat About ChatGPT

(For the record, my writing, including this article, is unofficially certified AI-free.)

As if there’s not enough to worry about these days, the world has pretty much done a 180 in the first month of 2023. Yes, I’m talking about ChatGPT. If you’re aware, you’re aware. If not, just google ChatGPT news. I’ll wait. Or just scroll down to the various links below. AI has been around for some time, but ChatGPT was only released in late 2022, and things may never be the same. Real-time update: Google has now rolled out its competitor to ChatGPT, called “Bard”. The first day didn’t go so well, to the tune of $100 billion (with a b) in market value lost when the AI gave a glaringly incorrect answer about the James Webb Space Telescope.

Yes, there will certainly be good uses for ChatGPT and other AI, and it’s already in play in ways we can’t even imagine (some are comparing it to the Industrial Revolution), but it’s also a VERY slippery slope. Sure, it will inevitably lead to careers that don’t even exist yet, but it will also eliminate jobs and careers, cause huge kerfuffles (my favorite word) in education, and fuel mis- and disinformation, both intentional and unintentional.

If you haven’t heard of it by now, you might be living under a rock. The launch of ChatGPT went bonkers in the media in January 2023 and has turned the world on its head in a split second. It’s writing rocket science, it’s passing medical board exams (some of the time), it’s composing songs, it’s writing college papers for students, it’s spreading dis/misinformation at times, and it’s an innovation like we’ve never seen before, and it’s also NOT OK. Yeah yeah, it’s amazing, blah blah. But be careful what you wish for. Basically, you’ve got a robot auto-writing from a gazillion pieces of content in the cloud. Yeah, cuz that’s a good idea. NOT. Hello, is this thing on? Does anyone see a problem here? Let’s not even go into the scenario that it’s robo-writing for a world in which people may still think a human has written it, or at the very least fact-checked it. How is that all supposed to be interpreted? Oh yeah, and how about the disclaimers on the OpenAI website:

· “May occasionally generate incorrect information.”

· “May occasionally produce harmful instructions or biased content.”

· “Limited knowledge of world and events after 2021.”

Though the disclaimers are right there in plain English, the tendency seems to be that many people will just gloss over that little “nitpicky” part.

ROOM FOR IMPROVEMENT? LET’S HOPE SO.

Speaking of disclaimers, the common-denominator retort to the fact that AI is often incorrect is the claim that it’s improving every day.

So is ChatGPT perfect?

No.

Does it claim to be?

No.

Do people pay attention to that?

Probably not enough.

Read that again.

Probably not enough.

SO IS IT PLAGIARISM?

Let’s review.

The Oxford dictionary defines plagiarism this way: “Plagiarism is presenting someone else’s work or ideas as your own, with or without their consent, by incorporating it into your work without full acknowledgement.”

Let’s discuss.

If a person/student/employee creates a document using AI, doesn’t disclose that fact, and gets discovered, is it plagiarism? Does this open a whole new can of worms? Are we going to be having debates over what constitutes a “someone,” and whether AI will be considered a someone? When is it OK to use AI and when is it not? For instance, if an employee is tasked with writing blogs for a website, is it ethical if they use AI? Do they even need to disclose whether they did the original writing or not? Maybe not. The jury is out. But if a student submits a paper that was written by AI, that’s a whole different story. What if a scientist uses AI to write a white paper related to a new medication? And what if the information generated was wrong to the point of causing harm? Needless to say, pass the Tylenol, I’m getting a headache even thinking about it. The solution, or at least a partial solution, is checks and balances: editors, reviewers, and fact-checkers. These jobs will be crucial, perhaps more than ever, and I just hope that is realized sooner rather than later.

GIVE ME A PENCIL AND AN ABACUS

It’s times like this that make me think of a pendulum swing, and that the momentum will eventually swing back in the opposite direction; translation: people will want real humans at the helm at some point, to some extent. Though there’s no putting the genie back in the bottle, and AI is here to stay and then some, I believe there is a real risk in having computers generate knowledge to the point that it may become nearly impossible to distinguish fact from fiction, correct from incorrect, and all that goes along with truth-telling as far as WHO or WHAT did the work in the first place, what happens to that work next, and next after that, and so on.

Years ago, in the earlier days of computers, I would occasionally say, “Can we just go back to using an abacus?” And, true story: when calculators were first used in schools and I was in 6th grade, I asked my teacher, “Aren’t we going to forget how to do multiplication if we use calculators?” OK, I admit that in this day and age I’m pretty much glued to my computer and my phone, and I confess that I can’t imagine living without them, BUT the difference is, I still want to do the creating, albeit with tools at the ready to help, with my brain doing the actual work.

I think the tidal wave of AI will bring back more proctors in the classroom and more cameras in the testing centers than there already are, and more of a need to see the proof in the pudding, as in: did the person submitting the writing do the actual writing? Will the honor system be enough for the educational system when no one is watching? I think not. Disclaimer: there are plenty of honest people out there. But will (some) humans push the ethical line and claim they wrote something when it was really written by AI? Well, one career path that may grow as a result of all this is that of the ethicist.

Since it’s definitely here to stay, and supposedly improving by the day (or by the sound byte), I hope the innovations of AI will do good things, honest things, cutting-edge things. But at the same time, I hope it doesn’t turn a corner toward a disastrous outcome that we can’t reverse.

NEED A HUMAN? WE’RE STILL HERE.

About a month prior to ever hearing the word “ChatGPT” (is it actually a word?), I found myself saying to some colleagues, “I think the computers are starting to rebel again.” I meant it a bit tongue-in-cheek, as a not-so-subtle reference to HAL from the classic film 2001: A Space Odyssey. But I was referencing things like having so much to manage online and how some of the tech seemed to be micromanaging itself (case in point: when Google prompts you for permission to access a doc that you already know you’re shared on; see THIS hilarious video if you haven’t already. Oh, and by the way, if you haven’t heard of Kalen Allen in the video, check out THIS video of OPRAH gushing over him!) OK, back to our show. I didn’t think in a million years that it would end up being more about the computers taking over. Like REALLY taking over. Or are they? So here’s the thing. HUMANS will still be needed. For reviewing, editing, researching, fact-checking, and more. So I’m going to make a bold prediction here: humans will be needed even more once ChatGPT and all its soon-to-be AI siblings are born. After all, someone’s got to monitor the machines and hold them accountable to fact over fiction.

SMOKE AND MIRRORS

The best computer may ultimately still be the human brain. At least it has the sense to know when to hold and when to fold. This new phase of AI is here to stay, with attempts to improve it by the day. But remember what mama always said: “If it looks too good to be true, it probably is.” Think about it. It’s the shiny toy right now. But down the road, when you’re reading, say, a magazine, a blog, an article, even a book, do you really REALLY want to be investing your time in reading content written by a non-human? They don’t call it artificial intelligence for nothing. Remember artificial colors and artificial flavors? I’ll just leave it right there.

Scene with HAL in 2001: A Space Odyssey: https://youtu.be/ARJ8cAGm6JE

The CEO of the Company Behind AI Chatbot ChatGPT Says the Worst-Case Scenario for Artificial Intelligence is ‘Lights Out For All Of Us’
https://www.businessinsider.com/chatgpt-openai-ceo-worst-case-ai-lights-out-for-all-2023-1

GPTZero: An AI Detector Developed by a 22-Year-Old Who Is Trying to Save Us From ChatGPT Before It Changes Writing Forever. Meet Princeton Student Edward Tian.
https://www.npr.org/sections/money/2023/01/17/1149206188/this-22-year-old-is-trying-to-save-us-from-chatgpt-before-it-changes-writing-for

ChatGPT Could Make These Jobs Obsolete: ‘The Wolf is at the Door’
https://nypost.com/2023/01/25/chat-gpt-could-make-these-jobs-obsolete

What Could the Rise of Artificial Intelligence Mean to Education, Life in General?
https://www.nbcphiladelphia.com/news/tech/what-could-the-rise-of-artificial-intelligence-mean-to-education-life-in-general/3482639


LisaBeth Weber

Copywriter/creative consultant to businesses & entrepreneurs. Musician/Artist. #TEDx speaker. Previous board member @PlayingForChangeFoundation. lisabethweber.com