AI as a Data-Cruncher (and a Boss)
A Look Back at Three Generative AI Projects — Part 2 of 3
This article is the second of a series looking back at three participatory projects using generative AI. 👉 Part 1 can be found here.
By May 2023, the proverbial cat (ChatGPT) had been out of the bag for about six months. Its user base was growing at a rate unprecedented in the history of tech adoption. The amazement was universal, but people were still grappling with the implications.
Once again, I was invited by C2 Montreal to come up with a new experience using the same 40-minute format for a group of about 15 people. For this project, I was interested in the idea of using AI to crunch data and make it accessible in a way that would foster collective intelligence. A conference seemed like the perfect setting for this: much is said on stage, and opinions, facts, and numbers abound.
As a participant, it can be quite overwhelming. How do you identify what is worth retaining when bombarded with information? How do you process it and engage with the content? How do you extract meaning from it? Can AI help do that? This is where the idea for Storyhub came from.
Storyhub
Storyhub is a newsroom for events. It is both a space and an experience that invites participants to engage with what has been brought up on stage by speakers.
Participants act as journalists for a fictitious magazine and aim to write a pitch for an article addressing one of the topics discussed on stage. The project complements the expert wisdom shared on stage with the collective intelligence of amateurs, aided by data and AI.
At the core of the project is a sort of conga line of AI models, augmented by human reporters (sketched in code after this list):
- Audio input from three stages is transcribed into text.
- Human reporters monitor the incoming text to extract key quotes and insights on the fly.
- The text feeds entity and keyword extraction models (powered by Google Cloud's Natural Language API) and GPT-3.5 for summarization.
- The resulting data is visualized on multiple displays and serves as the basis for a group exercise in journalism.
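For the technically curious, here is a minimal sketch of the extraction-and-summarization leg of that conga line. It assumes the google-cloud-language and 2023-era openai (pre-1.0) Python clients; the salience threshold and prompt wording are illustrative guesses, not our production code.

```python
# Minimal sketch of the transcript-processing leg of the pipeline.
# Assumes the google-cloud-language and 2023-era openai (<1.0) Python clients;
# the salience threshold and prompt wording are illustrative only.
import openai
from google.cloud import language_v1

nl_client = language_v1.LanguageServiceClient()

def extract_entities(chunk: str) -> list[str]:
    """Pull the most salient entities out of a transcript chunk."""
    document = language_v1.Document(
        content=chunk, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = nl_client.analyze_entities(document=document)
    # Filter on salience to keep the displays from drowning in noise.
    return [e.name for e in response.entities if e.salience > 0.02]

def summarize(chunk: str) -> str:
    """Compress a transcript chunk into a short summary with GPT-3.5."""
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Summarize this conference-talk excerpt in two "
                        "sentences, preserving the speakers' positions."},
            {"role": "user", "content": chunk},
        ],
    )
    return completion.choices[0].message.content

# Each fresh speech-to-text chunk flows through both models; the results feed
# the visualizations and the human reporters' working material.
```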
TL;DW — Too Long; Didn’t Watch
The first role AI played in this project was to make information more readily accessible to people who did not attend the conference. Speech-to-text, automatic categorization, and curated quotes helped people get up to speed on the topic at hand and gave them a sense of the positions held by the speaker(s).
As our average attention span seems to keep shrinking and the rate at which we produce information keeps climbing, can AI help us filter, prioritize, and tap into the information most relevant to us?
(If you made it this far into my post, kudos to you! ;)
Of course, some subtleties got lost along the way, which is why we also had human reporters listening to the talks to extract verbatim quotes they deemed important (and attention-grabbing).
Despite the imperfect speech-to-text accuracy, our reporters still found the live transcript helpful: when capturing a quote, they could go back to it and recover the bits they couldn't remember.
Bypassing the Blank Page Syndrome?
After getting familiar with the specific topic of the conference and discussing the various quotes and data points as a group, participants were asked to draft a pitch for an article.
They could select the section of the magazine they wanted to publish their article under (International News, Sports, Business, etc.) and input the topics that emerged from their discussion. The system (using GPT-3.5) would then output a pitch for their article. The objective was to experience what it felt like to have an AI help you get around the blank page syndrome.
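Wired up, the generator amounts to little more than a templated prompt. The sketch below assumes the same 2023-era openai client; the section names, word limit, and prompt wording are hypothetical reconstructions rather than the exact strings we used.

```python
# Hypothetical sketch of the pitch generator (2023-era openai client).
# The prompt wording and parameters are illustrative reconstructions.
import openai

def draft_pitch(section: str, topics: list[str]) -> str:
    """Turn a magazine section plus discussion topics into an article pitch."""
    prompt = (
        f"You are a journalist pitching an article for the {section} section "
        f"of a magazine. Write a short, punchy pitch (under 120 words) that "
        f"weaves together these topics: {', '.join(topics)}."
    )
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,  # some creative latitude, since this is only a first draft
    )
    return completion.choices[0].message.content

# The model will happily stitch together even unrelated topics:
print(draft_pitch("Business", ["urban beekeeping", "quarterly earnings", "Mars"]))
```

Feeding it deliberately mismatched topics, as in the last line, is exactly what surfaced the behavior described next.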
While this led to the satisfying feeling of getting a decent outcome from minimal input (the resulting texts were grammatically correct), it made one thing clear: just because it sounds good doesn't mean it makes sense. The model always found a way to tie whatever keywords it was given into a compelling text, regardless of how disconnected the input data was.
This acted as a good reminder that LLMs are inherently crowd-pleasers. It has been said time and again, but it's worth repeating: what these models do is autocomplete, not think.
It sometimes felt like grilling a student on a topic they clearly did not grasp: the answers sounded confident but lacked depth and accuracy. This underscored the need for critical evaluation of AI-generated content and a clear understanding of the distinction between genuine knowledge and predictive text generation.
The AI boss will see you now 👍 👎
After generating their article pitch, participants had to run it through their AI editor-in-chief, who would decide whether to green-light it or not, and explain why. In all transparency, I have to admit that the yes/no part was random…but hey, what would art be without a sprinkle of self-indulgence?
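In code, that editor-in-chief could be approximated in a dozen lines: flip a coin for the verdict, then ask GPT-3.5 to rationalize it after the fact. A hypothetical reconstruction, using the same 2023-era client:

```python
# Hypothetical reconstruction of the AI editor-in-chief: the verdict is a coin
# flip, and GPT-3.5 is only asked to justify the decision after the fact.
import random
import openai

def editor_in_chief(pitch: str) -> tuple[bool, str]:
    approved = random.random() < 0.5  # the yes/no really was random
    verdict = "approve" if approved else "reject"
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                f"You are a magazine editor-in-chief who has decided to "
                f"{verdict} the following article pitch. Explain your "
                f"decision in two sentences:\n\n{pitch}"
            ),
        }],
    )
    return approved, completion.choices[0].message.content
```

The design choice is the joke: whichever way the coin lands, the model can be asked to justify the decision convincingly.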
That being said, the point of this last step was to hint at a possible future: one where AI systems are fed the output of other AI systems (a loop whose degrading effect, when models are trained on such synthetic output, is known in the AI literature as model collapse).
As AI systems permeate most aspects of our lives and are granted more permissions, our behaviors and outputs are increasingly judged by AI systems (think credit scoring, or any risk scoring, really). This AI evaluation gave a glimpse of what may be to come.
I closed this project with the feeling that no, AI isn't a silver bullet (yet), but it can help automate thankless tasks (like minute-taking), support sense-making (by aggregating and summarizing data), and, as already hinted at by the project described in Part 1 of this series, lubricate the creative process, if only by providing a first draft.
Little did I know…
📡 Stay tuned for Part 3!