AI Augmented Knowledge Work — new tooling delivers astounding workflow improvements

Jesse Stevens · Published in Creative Continuum · 5 min read · Nov 29, 2023

When large language models (LLMs) like GPT-4 came on the scene, knowledge workers like me were gifted an incredibly capable new assistant.

At its core, knowledge work is about uncovering, collecting and making sense of information: cognitive tasks that require mental effort, creativity, planning, domain expertise, complex problem solving and sharp decision-making. Luckily, these are exactly the types of activities that the new breed of LLMs is best at. Properly applied, these new tools can supercharge the productivity of a knowledge worker and, by extension, lead to radical transformations across many companies and industries.

Lately, I’ve been working on several apps that leverage LLM capabilities for knowledge work. I’ve included two purpose-built GPTs in this article for you to try. My ultimate dream is to have one agent that I collaborate with, which in turn calls upon more focused individual agents, becoming an ultra-capable and all-knowing assistant. From there, I’ll only need to concentrate on the highest-value insights and can take action fully informed. From what I’ve seen and worked with, this dream is not far off.

I’m going to walk through some high-value LLM applications I use in my everyday work, and try to quantify the productivity gains with some rough estimates.

But first, I want to address one thing really quick:

Hallucinations: while these are something to be wary of, there are workarounds, often called “verification layers.” You can verify the information manually. You can use the newer internet-search plugins and features to ground answers in sources. And you can call an LLM through its API and turn the “creativity” setting (the temperature) down to zero, which reduces, though does not eliminate, made-up output. These methods let the user feel far more confident in what an LLM produces. (I also hear that the hallucination problem will improve quickly with newer models.) So for this article, I’ll set this issue aside.
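
To make that last point concrete, here is a minimal sketch of dialing the temperature to zero through an API. It assumes the OpenAI Python client (v1+) and an API key in the environment; the model name and prompts are placeholders, so adapt them to whichever provider you use.

```python
# A minimal sketch: reduce randomness by setting temperature to 0.
# Assumes the openai Python package (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",      # placeholder model name; use whatever your provider offers
    temperature=0,      # 0 = least "creative", most deterministic output
    messages=[
        {"role": "system", "content": "Answer only from the provided material. Say 'I don't know' if you are not sure."},
        {"role": "user", "content": "Summarize the attached notes in five bullet points."},
    ],
)

print(response.choices[0].message.content)
```

The low temperature makes the output more repeatable; the “answer only from the provided material” instruction is what does most of the actual verification work.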

Meeting Transcriptions and Summaries

If you aren’t using a tool to transcribe your meetings, you are really missing out. Tools like Otter.ai and Tactiq integrate meeting transcription with the platforms you already use, and offer AI features like meeting summaries, action items, and custom behaviors. While transcription itself is not an LLM tool, having a text record of every meeting that I can revisit and analyze with LLM tools has been a massive efficiency improvement in my day-to-day work.
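
Once the transcript exists as text, even a very simple LLM pass adds value. Here is an illustrative sketch, again assuming the OpenAI Python client; the file name and prompt wording are mine, not any particular product’s API.

```python
# Sketch: turn a raw meeting transcript into a summary plus action items.
# The file name and prompt wording are illustrative; assumes the openai Python package.
from openai import OpenAI

client = OpenAI()

with open("meeting_transcript.txt") as f:
    transcript = f.read()

prompt = (
    "Summarize this meeting transcript in one short paragraph, then list the "
    "action items as bullets, each with an owner:\n\n" + transcript
)

response = client.chat.completions.create(
    model="gpt-4",
    temperature=0,
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```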

Coming up to speed

One key advantage of using LLMs in knowledge work is the increase in the speed of learning. In an agency setting, I’m always working in new sectors and industries. One of my main joys with client work is getting to collect subject-matter expert (SME) knowledge like baseball cards. So far I’ve got: commercial real estate, energy, healthcare, mergers and acquisitions, credit markets, manufacturing test equipment and fast-casual chicken. Just think of the startup I could create that combines them all! In each case, becoming a mini-expert in the problem space has been the secret weapon in finding our way to what I internally refer to as a “good solve”. That is, a solution that meets the client’s needs and is grounded in context and data.

LLMs are a godsend here. If I need to learn about, say, credit markets, there is a lot to know! This is a complex subject. Sometimes, I find the best place to start is by asking the LLM itself: how should I go about learning this subject? Creating a game plan of all the areas I need to investigate is a great first step on the road to a cohesive knowledge set. What do I need to know?

There’s general knowledge: what are the main steps in the process, who are the players and how do they interconnect, and what are the trends? There’s industry jargon and acronyms: what is a CDO or a REIT? What is a credit opinion and why does it matter? Then there are the more detailed aspects that may be important to the problem you are looking to solve.

As you learn, capture the information in a document and keep it neatly pruned and organized. That document becomes a reference, supports knowledge sharing, and provides valuable context for further LLM use. Building a corpus of information specific to the client and the problem space can now be done far more efficiently than ever before.
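
That last point is worth spelling out: the curated notes can be fed straight back to the model as grounding context. A rough sketch, assuming the OpenAI Python client, with the notes file, model, and question all standing in as placeholders:

```python
# Sketch: reuse a curated domain-knowledge document as grounding context.
# "credit_markets_notes.md" and the question are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

with open("credit_markets_notes.md") as f:
    corpus = f.read()

question = "Which data sources matter most when assessing counterparty risk?"

response = client.chat.completions.create(
    model="gpt-4",
    temperature=0,
    messages=[
        {
            "role": "system",
            "content": "Use only the reference notes below. Flag anything you are unsure about.\n\n" + corpus,
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```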

Market Research

With clients, it’s important for everyone involved to understand the landscape they are operating in. Conducting competitive and market analysis lets clients study their competition, understand opportunities in the market, and position their product offerings accordingly. I’ve made two GPTs that assist with this workflow; give them a shot!

Competitive Analysis Agent

Market Research Report

Enter the subject company and the agents will search the internet to help you understand the client and its competitive space.

User Interviews and Analysis

If there was ever a task that a) takes a long time done the traditional way, and b) is perfectly suited to LLMs, user interviews and analysis is it.

Let’s do a quick task analysis of user interviews. I’ll list out the basic tasks, and rate them low/medium/high depending on how well an LLM process can augment them:

  • develop user interview questions — high
  • conduct interviews — low
  • capture key moments from the interviews — high
  • tag and categorize insights — medium
  • analyze the results — high

Lots of high-value LLM use there. A common rule of thumb at our agency is that it takes about 1.5 times as long to take notes on and analyze an interview as it does to conduct it. For example, a one-hour interview needs at least 90 minutes of note-taking and analysis afterwards. In my experience, augmenting this work with even the most basic LLM tools cuts that time down to about 15 minutes. From a time standpoint alone, that’s a 6x saving.

Quality improvement is harder to measure here, but its effects are immediately felt by the augmented worker. Using an LLM for cognitively heavy tasks like generating user interview questions, capturing themes and insights from a session, or analyzing a tagged set of observations frees the worker’s brain to make bigger-picture, higher-value connections. Let the machine do the cognitive grunt work, and the human can steer the system toward the important context and solutions.
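
As a concrete sketch of the “tag and categorize” step, here is roughly what that can look like with the OpenAI Python client. The tag set, file name, and model (one that supports JSON-mode output) are my own illustrative choices, not a prescribed setup.

```python
# Sketch: tag and categorize observations from a user-interview transcript.
# The tag set, file name, and model name are illustrative assumptions.
import json

from openai import OpenAI

client = OpenAI()

TAGS = ["pain point", "feature request", "workaround", "positive signal"]

with open("interview_01.txt") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",               # a model that supports JSON-mode output
    temperature=0,
    response_format={"type": "json_object"},  # ask for well-formed JSON back
    messages=[
        {
            "role": "system",
            "content": (
                "You analyze user-interview transcripts. Return JSON with an "
                "'observations' array; each item has 'quote', 'tag' (one of: "
                + ", ".join(TAGS)
                + ") and 'insight'."
            ),
        },
        {"role": "user", "content": transcript},
    ],
)

data = json.loads(response.choices[0].message.content)
for obs in data.get("observations", []):
    print(f"[{obs['tag']}] {obs['insight']}")
```

From there, the tagged output across all the interviews can be aggregated before the human analysis pass.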

Using AI to Build AI

It’s a bit meta, but these days I’m using AI for the purpose of advising clients on how they can integrate AI into their businesses. Using AI to create more AI — am I working for the AI now? Who’s building who here?

If we were to do a task analysis of a different set of knowledge-worker activities (I’ve done a few), we would find many tasks we could classify as high or medium in terms of AI integration possibilities. I’m already seeing some radical transformations to workflows happening out there, and more are coming. To put the timeline in perspective: ChatGPT was released one year ago, and GPT-4 only eight months ago. The next couple of years will make these advancements look, in the words of Sam Altman, “quaint”.

Faster analysis. Better decision-making. Higher-quality work. AI is the next level-up in knowledge work, and as the tools and capabilities of LLMs grow, we get closer to my dream of the fully informed, ultra-intelligent digital collaborator.
