Why AI is NOT going to take your research job

Luciana Zamprogne
9 min read · Jan 26, 2023


The illustrations in this article were generated by DALL·E (an AI).

At least not yet.

I've been using ChatGPT since I saw a post on LinkedIn last year saying AI would replace my job as a UX researcher. I didn't have anything work-related to ask, so I started asking questions about research, UX research and so on.

I was curious about a piece of software that would take my job in the next few months (as predicted by the knights of the LinkedIn Apocalypse). The posts I saw talked a lot about product discovery, research and planning, so I started from there.

The results were… intriguing.

I asked the AI to write some research plans for me, and the answers were repetitive.

The combo: desk research (literature review), interviews, surveys, usability tests.

But then I thought I was not giving it enough of a briefing about a project, so I looked up a research challenge I had done and pasted its briefing into ChatGPT.

And again, with only slight differences, I got the same steps.

Let's be honest. The answers are not wrong. You can use the combo of "desk research, qualitative interviews, surveys and usability tests" to do many things, and most people will repeat these steps for years and years. Every bootcamp you see in the market follows this cake recipe for research, and junior researchers will follow it religiously. Why? Because it works. Interviews and surveys are the most flexible methodologies we have, and together they can uncover a big chunk of information.

But not all problems can be solved with this combo, and things get way more complicated, especially in discovery research, when you start… well… to discover something.

I tried the same for discussion guides and questionnaires, and the answers came out like a pirate's parrot adopted by a UX researcher, squawking the same phrases back at me.

Using ChatGPT for actual work: literature review

Since ChatGPT kept repeating that I had to do a literature review, the first time I actually had to do one, I jumped onto the AI's website for a little chat.

Even though my first interaction was a bit disappointing, I was excited. I am a curious person who loves innovation, new tools and methods, and I spend a lot of time thinking of ways to improve my processes, especially the boring ones, like a literature review.

So I asked straight away for an answer.

Wow. It even comes summarized in bullet points.

If you are a researcher and have yet to find the problem with this answer, look again.

Yes, there are no sources.

Where is this information coming from?

As a researcher who came from academia, I learnt early in my career not to trust anything that doesn't reference reliable sources. Emphasis on reliability, because that survey you did with ten people (I am sorry to deliver the news) cannot be considered a reliable source. WhatsApp is not a reliable source. The voice in your head is also not a reliable source.

Always check the sources and where the information is coming from. Don't trust anyone who uses the Institute of the Voices in My Head to prove a point.

So I asked for the sources, and I got this quick answer.

Where is this coming from?

I must be honest. The first thing that came to my mind when I saw this was: "Wow, maybe I am going to lose my job soon."

But of course I went to check it, because the second commandment of reliable sources is to check everything. And I started to realize something was wrong with ChatGPT.

First, they were all websites: no PDFs, nothing academic, not even a case study from a renowned company.

Second, all the sources ended up like this:

The sites don’t exist.

Wait a minute, how is this possible?

I asked again, and again, and again. I spent at least 40 minutes clicking on the sources ChatGPT provided me, and none of them existed.

This was so intriguing. I had so many questions.

Were those old broken links?

Is this AI's database outdated?

Where is this information coming from?

Hmm…

How did the bot find information from those sources if all of them returned 404 errors (this page doesn't exist)?

ChatGPT not only lied to me but also invented all the sources.

At this point I was thinking to myself:

How can I prompt ChatGPT to show me where it is taking its information from?

How dare you lie to me, ChatGPT!

I was flabbergasted. This thing was inventing all the sources and even got a little defensive when I kept asking questions.

This has so many ethical implications. My head was exploding.

What is ChatGPT anyway?

After this, I read a bunch of articles, including the openai.com blog, to try to understand what I was dealing with. Summarizing in layman's terms, ChatGPT is an NLP (natural language processing) application: language software that tries to simulate a more "human" type of answer.

Maybe a little too much!

It does this by combining algorithms, machine learning and deep-learning models. The software extracts, classifies and labels text data, and uses the patterns it has learned to generate a plausible-sounding reply to whatever you type.
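For the curious, here is roughly what that looks like from a programmer's point of view. This is only a minimal sketch in Python, not how I used ChatGPT (I used the chat website), and the endpoint, model name and environment variable are assumptions for illustration: a prompt goes in, plausible-sounding text comes out, and no sources come attached.

# Minimal sketch (illustrative only): sending one prompt to a hosted
# language model over HTTP, in the style of OpenAI's chat API.
# Assumptions: the OPENAI_API_KEY environment variable is set and the
# model name is just a placeholder.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"

def ask(prompt: str) -> str:
    """Send a single user prompt and return the model's text reply."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    response.raise_for_status()
    # The reply is generated text, not a retrieved document: there is no
    # built-in list of sources to check.
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Write a short research plan for a food delivery app."))

The shape of the exchange is the whole point: free text in, generated text out, with nothing in between that looks like a bibliography.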

One thing I found very interesting on their blog is the list of the AI's limitations:

Limitations

  • ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there's currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.
  • The model is often excessively verbose and overuses certain phrases, such as restating that it's a language model trained by OpenAI. These issues arise from biases in the training data (trainers prefer longer answers that look more comprehensive) and well-known over-optimization issues.
  • While we've made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behaviour. We're using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We're eager to collect user feedback to aid our ongoing work to improve this system.

What worked very well with ChatGPT, and how I found it useful for my work as a researcher

I decided to ask the AI to do the opposite: I would give it the sources and ask it to summarize them for me.

Any website link I gave it spat out some kind of summary. That means the AI must have some access to the internet (or some internet content in its database), but I am not sure how this works; as far as I researched, I could not find the answer. The downside was that it could not summarize scientific articles unless I copied the whole article into the chat (a solution suggested by the AI itself). I didn't do that, because at this point I was questioning the ethics of the whole situation.

You can also ask it to write some paragraphs about specific topics and use them as a base. It works well for standard jobs that don't need too much brainpower, and it can save time. You know when you read an article but feel an incredible laziness about summarizing it to add to your library of references? For that, ChatGPT is very helpful.

It is also helpful for getting you started on a text when you feel stuck or have blank-page phobia.

Some helpful prompts I used were (there is a small scripted sketch of them right after this list):

  • Summarize this <link>
  • Can you write a paragraph about <topic>
  • Can you offer me a structure of PowerPoint slides for this <topic>
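If you want to reuse these prompts without retyping them, they are really just string templates. Here is a tiny sketch that wraps them around the hypothetical ask() helper from the earlier example; the link and topic values are placeholders.

# Sketch only: the three prompts above as reusable templates.
# Assumes the hypothetical ask() helper from the earlier sketch is defined.

def summarize(link: str) -> str:
    return ask(f"Summarize this {link}")

def paragraph_about(topic: str) -> str:
    return ask(f"Can you write a paragraph about {topic}")

def slide_structure(topic: str) -> str:
    return ask(f"Can you offer me a structure of PowerPoint slides for this {topic}")

# Example usage (placeholder values):
# print(summarize("https://example.com/some-article"))
# print(paragraph_about("remote usability testing"))
# print(slide_structure("a research findings readout"))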

And the best thing I found in ChatGPT: cooking recipes with what you have in your fridge!

So what now?

Well, I have been doing a lot of reflecting on this whole AI situation.

The first thing is to keep a few things separate.

Although I find it exciting to see technology with Star Trek vibes popping up on our screens, I advise using it with extreme caution, especially if you are a researcher or anyone who deals with other people's private, personal information. OpenAI states clearly on its site that it is collecting and analyzing all the inputs. Do not share interviews or work-related data with it. You might not only be breaking your confidentiality agreement with your respondents but also doing something illegal.

These tabloid-style articles saying that "AI will take your job" need to be analyzed very carefully before we all start jumping to conclusions and apocalyptic predictions.

Some professions will die and others will appear, whether you like it or not. But it's not something that will happen overnight, where you wake up one day and your job doesn't exist anymore (unless you work for Elon Musk). These things take time. And you will probably realize it and shift your career before it happens (I hope).

Predictions are just a nicer name for guessing. No one knows what is going to happen in 5 or 10 years. No one. You don't even know what is going to happen to you tomorrow. We guess; everybody is guessing. So it's essential to take a breath and think about how we will fit into new models instead of being scared of losing our jobs. Changes are happening all the time, and we need to learn to go with the flow; otherwise, we get left behind.

There is, though, something that worries me.

The lack of understanding of what research is scares me more than anything else.

I have seen a lot of people writing about ChatGPT and using the tool the same way I did, but believing that everything it wrote was the absolute truth. They didn't check any of the sources the AI provided. For them, the repetitive research plans and discussion guides seemed great.

This is scary.

It's scary because if people believe anything without checking whether it's true, a big door opens for unethical practices and for anything to be called research.

It is also scary because I hear, every single day, researchers struggling to be heard in their own companies, myself included. We have our competence and our data contested, dismissed and discarded in the blink of an eye. And then you see a chatbot with shallow, prompted answers get more credit than you will ever have.

It is painful.

ChatGPT made me question how we will move forward. I foresee AI helping researchers with the parts of the job that are repetitive and boring, inspiring us when we write our projects, and helping us find patterns in data faster than we do now. But I also hope it does not dilute our profession even further by encouraging people to think they can do our jobs with yet more half-baked solutions.

And that, my fellow researchers, is my best guess for today.

Sources:

https://www.ibm.com/topics/natural-language-processing#:~:text=Natural%20language%20processing%20(NLP)%20refers,same%20way%20human%20beings%20can.

https://www.atriainnovation.com/en/how-does-chat-gpt-work/#:~:text=In%20conclusion%2C%20to%20use%20GPT,an%20appropriate%20and%20coherent%20response.
