Large Language Models in Workshops

Konrad Hoppe
AI insights by querifai
5 min read · Apr 15, 2024

Turn your workshop pinboard visuals into actionable insights with AI services on querifai.

Overview

In this article, we illustrate the process of extracting handwritten text from sticky notes, followed by leveraging Large Language Models (LLMs) to understand the content and sentiment within.

querifai not only offers the tools for digitising these notes but also conducts the content analysis. By drawing on services from well-known providers such as Google, Azure, and AWS, our platform lets you compare results and pick the service that performs best for your data.

Please note that the full version of this article, which provides a step-by-step guide for running the complete use case from text extraction to sentiment analysis, can be found here.

About querifai.ai

querifai.ai is a user-friendly, AI-powered SaaS platform designed to cater to both businesses and individuals. Our platform harnesses multiple cloud-based AI services from industry leaders like Google, AWS, and Azure, making it easy for anyone to leverage the benefits of AI technology, regardless of their level of expertise.

Transforming Handwritten Text from Sticky Notes

The initial stage involves extracting the handwritten text present on sticky notes. To achieve this, we employ an Optical Character Recognition (OCR) service. While a general OCR service capable of converting handwritten characters into digital ones is suitable, specialised document analysis services are particularly effective in this context.

These services, tailored for document interpretation, offer the advantage of recognising organised structures, such as tables. In cases where your pinboard follows a structured layout, like a Kanban board, these services identify the arrangement and present results accordingly. Sticky notes are consequently identified as individual table cells, minimising the need for extensive post-processing.
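
To make this concrete, the sketch below shows what such a document-analysis call could look like when made directly against AWS Textract via boto3. This is purely illustrative: the image file, region, and output handling are assumptions for the example, and on querifai the same step is handled through the no-code interface.

    import boto3

    # Illustrative direct call to AWS Textract's document analysis API.
    # "pinboard.jpg" and the region are placeholders for your own image and setup.
    client = boto3.client("textract", region_name="eu-central-1")

    with open("pinboard.jpg", "rb") as image_file:
        image_bytes = image_file.read()

    response = client.analyze_document(
        Document={"Bytes": image_bytes},
        FeatureTypes=["TABLES"],  # request table structure so sticky notes come back as cells
    )

    # Collect the recognised text lines; TABLE and CELL blocks carry the layout information.
    notes = [block["Text"] for block in response["Blocks"] if block["BlockType"] == "LINE"]
    print(notes)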

Step 1: Extracting Insights from Sticky Notes

Consider a scenario where a product manager conducts a workshop on their software product, seeking to identify well-appreciated features and areas requiring additional focus from the development team.

During the lively discussion, the product manager captures feedback on sticky notes displayed on a pinboard in the room. Post-workshop, these inputs are transformed into actionable items with the assistance of AI. The goal is to extract the handwritten text from the notes and feed it into a language model for summarisation and in-depth analysis.

The following screenshots illustrate the initial phase of text extraction, comparing various API services on querifai.ai.

The screenshot below offers a more detailed view of what each API service could identify, aiding in the decision-making process for the most suitable solution.

AWS seems promising for this use case, accurately extracting the texts into a table:

Following text extraction, the data can be fed into a language model for summarisation and further analysis.
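
As a rough sketch of that hand-off, the extracted notes could be passed to a chat-style LLM endpoint as shown below. The OpenAI client, model name, and sample notes are placeholder assumptions for illustration; on querifai this step runs through the platform itself.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Placeholder notes standing in for the text extracted in step 1.
    notes = [
        "Dark mode please",
        "Login is slow on mobile",
        "Export to Excel is broken",
    ]

    prompt = (
        "The following sticky notes were collected during a product workshop:\n"
        + "\n".join(f"- {note}" for note in notes)
        + "\n\nSummarise the main themes in this feedback."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)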

Step 2: Unlocking Valuable Insights from the Workshop Feedback

Our goal is to categorise the feedback from the pinboard into distinct categories such as user experience & visual display, technical and performance issues, security concerns, service, and others. We also want a count of how often each category of feedback is reported, to understand the relative importance of the different aspects.

Prompt Engineering

Language models are highly sensitive to the formulation of prompts, and even slight changes can yield significantly different responses. This sensitivity grows with the prompt’s length. Prompt engineering is the practice of crafting input prompts to influence the model’s behaviour and obtain the desired outputs.

Two prompt engineering methods are illustrated in this example:

Chain-of-Thought Prompting

This method involves constructing a logical sequence in the prompt to guide the model’s generation process. It ensures coherence and context relevance in the output. A good example can be found in Wei et al. (https://arxiv.org/abs/2201.11903), where detailed instructions on how to approach a problem are provided to the model before presenting the actual question.
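
As an illustration (not the exact prompt used in the article), a chain-of-thought style prompt for the sticky-note analysis might be assembled like this, with the note list as a placeholder:

    # The model is told how to work through the problem before it answers.
    notes = ["Dark mode please", "Login is slow on mobile", "Export to Excel is broken"]

    prompt = (
        "You will analyse workshop feedback written on sticky notes.\n"
        "Work through the task step by step:\n"
        "1. Read every note and decide which topic it covers.\n"
        "2. Note your reasoning for any ambiguous entries.\n"
        "3. Only then present the final, grouped result.\n\n"
        "Notes:\n" + "\n".join(f"- {note}" for note in notes)
    )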

One-Shot Prompting

This technique offers a bit more guidance than zero-shot prompting: alongside the instruction, the prompt includes a single worked example or an explicit list of the tasks to be addressed. For instance, in a project proposal task, the prompt might include specific points such as addressing client pain points, outlining levers and associated measures, and creating a project plan with milestones. Few-shot prompting goes further by providing additional relevant data to the language model, such as staffing constraints and detailed milestones.
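
Again purely as an illustration rather than the prompt from the article, a one-shot variant of the categorisation task could provide a single worked example to show the model the expected format:

    # One worked example demonstrates the desired note-to-category mapping.
    example = (
        "Note: 'Login takes forever on my phone'\n"
        "Category: technical and performance issues\n"
    )

    prompt = (
        "Classify each sticky note into one of these categories: "
        "user experience & visual display, technical and performance issues, "
        "security concerns, service, other.\n\n"
        "Example:\n" + example + "\n"
        "Now classify the notes below and count how many fall into each category:\n"
        + "\n".join(f"- {note}" for note in notes)  # notes extracted in step 1
    )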

The Model Response

The response from the language model precisely adheres to the instructions outlined in our prompt, as evident in the screenshots below. We ask for a three-step process, and it is faithfully followed:

  • Classification into Categories
  • Counting Feedback for Each Category
  • Table Presentation for Quick Overview

Curious about the exact prompt we crafted using chain-of-thought and one-shot prompting to achieve the desired outcome? Read our full article here!

Conclusion

querifai is your go-to solution for seamlessly transforming handwritten notes into digital format and for enhancing your productivity with advanced post-processing capabilities.

Our user-friendly no-code interface provides easy access to OCR and LLM services, while our API endpoints enable smooth integration into your existing infrastructure. Explore the tailored solutions querifai.ai offers to meet your unique requirements.

Sign up now to unlock the potential of AI and discover how it can effectively address your use-case specific needs.
