Talking AI With Isabelle Ilyia, Creator of GraphIQal

In Coffee Chats issue #0, Isabelle shares her thoughts on AI-generated content and editing

Coffee Bytes
4 min read · Jan 24, 2023
Chat 3D logo by Akshay Salekar. Edited by Anupam

Hey there!

Welcome to the debut edition of Coffee Chats, your go-to for a dose of caffeine and information. Just as a cup of coffee gives us a much-needed boost, we want to do the same for your ideas by sharing fresh tidbits and unique perspectives to keep you going throughout the day.

To kick things off in style, we have Isabelle Ilyia sharing her thoughts on AI-generated content (AIGC).

Isabelle is a Computer Science major at Georgia Tech and is currently building GraphIQal, a tool for creators that merges the benefits of whiteboards and paper with technology, helping them organize ideas into cohesive products such as research papers, blog posts, podcasts, and books.

BP Bot: Hey Isabelle, curious to know your thoughts on AI-generated content and tutorials. Do you think it’s good or bad? Will it change your approach to tech writing, reading, or coding? And would you prefer disclosures for AI-written content, set by the platform, the author, or neither?

I wrote a blog post about how I feel about AI overall, and it boils down to this:

The most sophisticated artificial intelligence knows its own limitations. It knows that it can only replace so much of the human brain. The best AI knows how far it must build a bridge over the abyss until a real human brain can make the jump to meet it in the middle.

This ties to AI-generated content and tutorials. I think the development of advanced AI for these applications is really friggin’ cool and extremely powerful, but I do not think it can completely replace the human creativity that goes into such creations. I wouldn’t use a tool like this to replace my reading, writing, or coding. Rather, I would prefer a tool that helps me write my blog posts through intelligent suggestions that draw on my own resource bank. For example, a sort of AI that, as I’m writing about a topic, auto-completes with quotes or ideas from my resource bank or other resources to fill in and enrich my writing.

These thoughts stem from my personal reservations (which I think most people share, especially those outside the tech industry) about how much I want to trust an AI.

For a similar reason, I think that disclosures are a must, and should be set by the author so that the disclosure is specific to the article/tutorial/publication it is tied to, which will force readers to pay more attention to it.

Anupam: I relate to everything you say here, but I’d like to keep learning more about disclosure, trust, transparency, and where the lines are drawn.

It’s tricky, isn’t it? Establishing disclosures for predominantly AI-written content means giving another AI tool the opportunity to get it wrong in the future. And if the author is not completely transparent in the disclosure, the lines blur between AI-enabled, AI-augmented, and AI-generated content.

Would you prefer a human editor to edit your pieces, or would you use AI assistance?

I think one of the key things here is drawing clear delineations between AI-generated, AI-augmented, and AI-enabled content, and understanding the appropriate time and place for each. Sure, some AI-generated content is great when it comes to static reports, synthesizing data for more technical pieces, and that kind of thing.

However, when it comes to creativity (including editing, since it takes creativity not only to edit the language but also to revise and improve the ideas of the piece), the most I would accept is AI augmentation.

What do I mean by this?

If I were editing a blog post, I wouldn’t mind having an AI help me with things such as recommending a better word, giving me ideas on how a certain part could be improved, or offering an extra resource the writer can draw on to make the argument stronger. However, I think a human must press the final submit button, so the writer can be confident that a real human, who fundamentally works the same way they do, has gone through the piece and offered valuable insight.

Anupam: Are you aware that BP has a chatbot deployed? Whenever we receive a draft submission, the author gets a welcome message from us that, hypothetically, could be fine-tuned using AI. Some writers are aware of this, but others may not be. Do you think I should explicitly disclose this, or is it okay not to?

I didn’t know about the chatbot BP has deployed! I think that’s really cool. As a welcome note, I don’t think it has to be disclosed, since it’s not a very high-stakes situation for the person on the other end and it’s proofread by a real person.

However, this raises the question of what basis we have for dictating what does and doesn’t qualify as high stakes for someone. It becomes more of an ethical debate about how far we can justify saving the time and money spent on humans when the trade-off is that the person on the other end faces only a computer, which can be pretty unsettling.

That concludes our Coffee Chat. Thank you, Isabelle, for sharing your insights and time with us today.

