Preparing for AI in 2024

William LeFew
𝐀𝐈 𝐦𝐨𝐧𝐤𝐬.𝐢𝐨
4 min read · May 1, 2023
“Using an AI chatbot in the year 2024 surrounding by all the data” as envisioned by Midjourney

It is difficult to overstate the impact of large language models, most notably ChatGPT, on well…everything. The tools emerging to leverage their contextual brain-in-a-box abilities are legion and expanding in both scope and creativity. On the ground, almost every business, research, academic, government, and personal enterprise is grappling with the best ways to leverage and adopt this technology (while simultaneously fighting the losing battle of keeping up with whatever tools and alternatives were released last week). As active as such grappling may feel, it is a reactive response. Below, I suggest a path that will prepare enterprises (and individuals) for the tools of tomorrow, with actions to begin today.

TL;DR: We need to start saving all the things to best utilize tomorrow’s tools.

In 2024, ChatGPT-like (or beyond) tools will enable native consumption of and interaction with enterprise (or personal) data (incidentally, this effort is already well underway; consider this or this). While the utility of such tools will be felt when they are applied to today’s processes and data, with planning we can do much better. Consider the following five examples:

1. Meetings as a knowledge base

This is the lowest-hanging fruit. Given the continuing ubiquity of meetings held over remote-meeting software, every meeting should generate complete(-ish) transcripts. Save, annotate (for permission-based access), and archive them all. A non-exhaustive list of potential use cases (with a rough code sketch after the list) might be:

  • Contextually searchable record of events for both the working group and present/future interested parties (think onboarding)
  • Briefing generation (for leadership, the “I”s in a RACI matrix, etc…)
  • Automated action plans, minutes, preliminary future agendas, etc…
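
To make the idea concrete, here is a minimal sketch of the “save, annotate, archive” step. The `Transcript` record, the role tags, and the naive keyword search are illustrative assumptions, not a reference to any particular product; a 2024-grade tool would swap the search for embedding retrieval plus an LLM that drafts minutes or onboarding briefings from the hits.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Transcript:
    """One archived meeting transcript with permission annotations."""
    meeting: str
    held_on: date
    text: str
    allowed_roles: set = field(default_factory=set)  # who may retrieve it

class MeetingArchive:
    def __init__(self):
        self._items: list[Transcript] = []

    def add(self, transcript: Transcript) -> None:
        self._items.append(transcript)

    def search(self, query: str, role: str) -> list[Transcript]:
        """Naive keyword match, filtered by the caller's role."""
        q = query.lower()
        return [t for t in self._items
                if role in t.allowed_roles and q in t.text.lower()]

# Example: archive a kickoff transcript, then retrieve it later for onboarding.
archive = MeetingArchive()
archive.add(Transcript("Program X kickoff", date(2023, 4, 12),
                       "We agreed to pilot the new intake workflow in Q3...",
                       allowed_roles={"engineering", "leadership"}))
print(archive.search("intake workflow", role="engineering"))
```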

2. Return of the open text field

The open text field has been the bane of many an application and the data quarry which mining forgot. The ability to generate contextually relevant sentiment from mounds of open text field data enables us to get at what we really want. Instead of punting to 5 point scales, we will be able to ask employees to record how they are doing in their own words. Instead of sales notes being a black hole for future analysis, they can be a bright star illuminating specific insights. Net net, the ability to take the output of open text fields and aggregate responses, construct meaningful pictures, and generate appropriate interventions both transforms the utility of historically underused data and enables the design of future targeted fields.
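
As one possible shape for this, here is a hedged sketch that aggregates free-text answers into themes and sentiment using the pre-1.0 `openai` Python client. The model choice, prompt wording, and `summarize_open_text` helper are assumptions for illustration, not a prescribed pipeline.

```python
import openai  # pip install "openai<1.0"; assumes OPENAI_API_KEY is set

def summarize_open_text(responses: list[str]) -> str:
    """Aggregate free-text survey answers into themes and overall sentiment."""
    prompt = (
        "Here are employee responses to 'How are you doing this week?'.\n"
        "Summarize the main themes, the overall sentiment, and any responses "
        "that suggest a follow-up conversation is needed.\n\n"
        + "\n".join(f"- {r}" for r in responses)
    )
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply["choices"][0]["message"]["content"]

print(summarize_open_text([
    "Honestly stretched thin, the release slipped again.",
    "Great week, the new onboarding doc saved me hours.",
]))
```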

3. “Shelfware” finds new life

In many cases, the long-term value of technical whitepapers is limited by access (i.e., “Where is that, again? Who did that?”), situational awareness (i.e., the author has departed and the wheel will be reinvented), and the labor involved in matching modern context to prior work. Letting this “shelfware” be consumed by the tools of tomorrow enables contextually relevant access and saves the cost of reinvention. As such, the rate of generation of whitepapers, memos, presentations, and other internal documents should be (even artificially) increased to supply tomorrow’s tools with valuable data.
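
A rough sketch of what “contextually relevant access” to shelfware could look like, using scikit-learn’s TF-IDF as a cheap stand-in for the embedding-based retrieval tomorrow’s tools would provide. The document snippets and the `find_prior_work` helper are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical archived documents; in practice these would be extracted texts.
shelfware = {
    "2019_routing_memo.pdf": "Evaluation of routing heuristics for the delivery fleet...",
    "2021_forecast_whitepaper.docx": "Demand forecasting with gradient boosted trees...",
    "2022_onboarding_deck.pptx": "Lessons learned rolling out the intake workflow...",
}

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(list(shelfware.values()))

def find_prior_work(question: str, top_k: int = 2):
    """Return the archived documents most similar to a new question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    ranked = sorted(zip(shelfware.keys(), scores), key=lambda p: p[1], reverse=True)
    return ranked[:top_k]

# "Who did that?" becomes a query instead of a hallway hunt.
print(find_prior_work("has anyone studied delivery routing heuristics before?"))
```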

4. A more robust brain dump

The utility of role-based access to enterprise data should be self-evident given the wide adoption of data lakes, warehouses, and the like. The advent of “dataAIhouses” will enable an end run around much of today’s data cleaning, warehousing, and structuring. Imagine a personal “dataAIhouse” that consumes verbal notes, memos, tagged e-mails, captain’s logs (supplemental, of course), and other artifacts you designate throughout the day, giving you easy context-based reference to an AI-driven memory archive (example prompt: “What did I think about Program X after the initial kickoff meeting?”). Now imagine being able to pass on such a “dataAIhouse” rather than spending your last two weeks (before moving on to a new opportunity) frantically brain dumping to already stressed colleagues.
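
Here is a toy sketch of such a personal archive. The `MemoryItem` and `DataAIHouse` names are invented for illustration; a real tool would add transcription, embeddings, and an LLM that answers questions over whatever `recall` returns.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MemoryItem:
    """One designated artifact: a verbal note, memo, tagged e-mail, etc."""
    recorded_at: datetime
    tags: set
    text: str

class DataAIHouse:
    """Toy personal archive of designated artifacts."""
    def __init__(self):
        self._items: list[MemoryItem] = []

    def ingest(self, item: MemoryItem) -> None:
        self._items.append(item)

    def recall(self, tag: str, since: datetime) -> list[str]:
        """Everything I recorded about `tag` after `since`, oldest first."""
        hits = [i for i in self._items if tag in i.tags and i.recorded_at >= since]
        return [i.text for i in sorted(hits, key=lambda i: i.recorded_at)]

# "What did I think about Program X after the initial kickoff meeting?"
house = DataAIHouse()
house.ingest(MemoryItem(datetime(2023, 4, 12, 16, 0), {"Program X"},
                        "Kickoff went long; scope feels ambitious but the team is strong."))
house.ingest(MemoryItem(datetime(2023, 4, 19, 9, 30), {"Program X"},
                        "Second look: timeline risk on the data pipeline."))
print(house.recall("Program X", since=datetime(2023, 4, 12)))
```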

5. Document permissions become sharp-edged

The vast array of data artifacts that can be produced in support of this generation of AI will require new orchestration to properly maintain access-driven credentials, resulting in custom “chatbots” for every role in a company. Concretely, the answers the CEO gets from the prompt, “How is my team doing this week?” should be quite different from the results an individual contributor gets from asking, “How is the leadership team doing this week?”. For businesses of reasonable size (and above), the gain from answers that are contextually relevant to your role will more than compensate for the overhead and orchestration required to realize such a system (consider the benefit of being able to ask such a system direct questions about *your* HR benefits).
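
One way to picture this orchestration: filter the retrievable corpus by role before anything reaches the model, so the same question yields different context, and thus different answers, for different askers. The roles and documents below are invented for illustration.

```python
# Hypothetical corpus with per-document role entitlements.
DOCUMENTS = [
    {"title": "Weekly team status rollup", "roles": {"ceo", "manager"}},
    {"title": "Leadership offsite notes",  "roles": {"ceo"}},
    {"title": "My HR benefits summary",    "roles": {"ceo", "manager", "ic"}},
]

def retrievable_context(role: str) -> list[str]:
    """Filter the corpus by role before it is handed to the model as context."""
    return [d["title"] for d in DOCUMENTS if role in d["roles"]]

# The same question routed through different roles yields different context.
print(retrievable_context("ceo"))  # sees all three documents
print(retrievable_context("ic"))   # sees only the benefits summary
```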

Closing Thoughts

Exactly how good the tools of 2024 will be is difficult to predict, but the data that will let us leverage them maximally is not. Designing organizational structures and practices to prepare for that eventuality will enable companies to modernize rapidly and enjoy the advantages of that modernization.

Comments expanding the premise and calling out other relevant use cases are most welcome (as are references to tools that make this projection relevant sooner).
