A real account of writing with AI. Sincerely, a PhD student

Liyana Azman
6 min read · Mar 4, 2024

--

I have this bad habit of checking LinkedIn in the morning for the latest publications, reading that publication, then getting upset over its findings for two hours. Derailed from last night’s plan to work on my thesis, here I am slamming keys on my keyboard to vent my feelings about a paper written by someone I do not even know, will probably never meet, and who does not care about my feelings.

Today’s source of distress: a study reporting how “doctoral students engaging in iterative, highly interactive processes with GAI-powered assisting tools generally achieve better performance in writing tasks”. (1)

Hmmmmmm.

The tool used in the study was ChatGPT. It did not use the more “extreme” writing tools which can churn out complete essays with minimal prompting, at just a click of a button. For the sake of higher education, I will not name those tools explicitly here (though I am pretty sure students already know what I am talking about). My main beef with the findings is the generalisation that AI means “better performance in writing”. What does “better” mean? I am not here to discredit the authors. The methods are interesting; I liked the tracking processes and the mapping flows. There is a statistical analysis and it is an impeccably executed study. That is why they are published in a Q1 journal and I am just disagreeing here on a personal blog. I mainly want to challenge certain premises below (1).

“AI reduces cognitive demands … serves as a strategic ally in optimising the utilisation of working memory during academic writing. By efficiently offloading certain cognitive tasks to AI, writers can free up cognitive resources, allowing for a more focused and meaningful engagement in creative aspects of the writing process”

“it serves to support and streamline the initial stages of academic writing, such as literature review and ideation, rather than replace the nuanced, critical thinking and analytical skills required for comprehensive synthesis and critical writing.”

“Through the automation and streamlining of lower-level processes in academic writing, AI can effectively release cognitive resources, allowing writers to dedicate more attention to high-order aspects of composition, such as content organization, argument development, and creative expression”

The paper does not really explain what “lower-level processes” are. But mash these sentences together and it sounds like literature review and ideation are offloaded so you can focus on the more “important things”. My mind is screaming, and I hope this thought jumps off the page: When did literature review and ideation become less important?

I am not an anti-AI student. I spent the first four months of research without using AI literature tools. This means my literature sources were retrieved through search engines (i.e., Google Scholar) and library searches. This was before search engines started incorporating ChatGPT into their search buttons. I started experimenting with AI literature tools in my fifth month. An honest review: I do enjoy using them. It is nice to have an alternative to search engines. These tools can visualize literature impressively in connective webs (e.g., Litmaps, Research Rabbit). Some can generate tables to help you scope through rows and rows of literature (e.g., SciSpace). Some even write paragraphs summarising the literature in the field (e.g., Scite.ai). Searching through search engines can yield millions of results. After clicking NEXT three times, you wish there was a tool that could scan through those millions of results and pick out the most relevant ones. AI tools do bring forth literature quicker, presented to you in a less overwhelming format. Most tools do not bother to disclose how many search results there are. Free versions limit literature search results to around 30 articles. If you are the kind of person who has 100 tabs open at once, 30 is a small number. So in some ways, yes, cognitive burden is slightly reduced.

But…there’s always a but. Here is my personal account of using AI tools (not just ChatGPT; others included). And since this is MY blog and not a Q1 journal demanding citations, I guess you are just going to have to take my word for it.

1) You can still suffer information-overload syndrome with AI literature review tools. Instead of results presented to you in separate pages from 1 to a million, you now have them snugly fit on one screen with endless clickable links. Be prepared to go down the rabbit hole with Research Rabbit. This problem can also occur the old-school way: you see a reference and jump to it before finishing the article. But when you are doing research the “old way”, you are bound to develop habits where you are more strategic and calculating with your time. Fast skimming does not equate to faster written results. You may be able to read things “faster” through shortcut prompts and AI-generated summaries, but in totality you could be reading more than you would without AI. In this scenario, the “cognitive burden” AI sought to reduce is not reduced at all.

2) Current AI tools do not tell you who the top researchers in the field are. AI is not an expert researcher.

I repeat: AI IS NOT AN EXPERT RESEARCHER.

It is a runner retrieving information for you based on your prompt. Unless someone on the web has published a written piece saying you must read “so and so and so” for this particular research question, chances are that if you prompt ChatGPT for a list of names, it is hit or miss. Believe me, I have tried. When you do not engage with the work published by the ones on top, the editor or journal or reviewer will reject your paper. I learnt this piece of advice from the Chicago school of writing (2), and experienced rejection for this very reason: you must know your audience, and you must know who is on top. You can identify those on top by reading papers yourself. Yes, YOURSELF. That one or two papers cited in every single paper - those are the ones you have to read and cite.

3) You do not know the algorithms these AI tools base their searches on. Some tools count citations (which can be misleading). Others prioritize the latest works only (so you end up missing out on essential literature from the 80s, 90s, or earlier). Sure, Google’s search engine has been criticized for making money by boosting results. But a result you see there has also been read by others. Someone is bound to cite it.

4) And finally, the most glaring account: if AI truly improved written results, my thesis would be done by now. Of course, there are plenty of people on YouTube who would blame my own deficiencies for “not using the tool right”. The internet is filled with videos on how to write a paper in xx hours, or how to write using ChatGPT without being accused of plagiarism *insert hush emoji*. My experience with AI (coupled with my stubbornness and self-imposed high expectations) begs to differ.

So after a 1,239-word rant, what message am I trying to get across?

AI brings benefits, but don’t you dare say it is “BETTER”.

I am Liyana Azman, a PhD candidate in AI law. I asked AI: “Is the phrase ‘A real account on writing’ grammatically correct?” AI says it is **not grammatically correct**. Grammarly says that to improve it, you could rephrase it as “A genuine account of writing” or “An authentic perspective on writing”.

No thanks, I’ll keep my title.

References:

(1) Andy Nguyen, Yvonne Hong, Belle Dang & Xiaoshan Huang (2024) Human-AI collaboration patterns in AI-assisted academic writing, Studies in Higher Education, DOI: 10.1080/03075079.2024.2323593

(2) See “LEADERSHIP LAB: The Craft of Writing Effectively” and related lectures by Larry McEnerney (University of Chicago Writing Program): https://www.youtube.com/watch?v=aFwVf5a3pZM&t=10s
