Gen AI and Automation: How It Impacts the Way Businesses Operate Across Multiple Industries

Balakrishnan Thiagarajan
3 min read · Oct 20, 2023

--

In this series I will be writing about how real-world business use cases across multiple industries can take advantage of Gen AI and LLMs, augmenting their existing Automation investments.

Part 2 of N: ECM and LLM — The Dream Power Combo

Enterprise Content Management (ECM) is not a new trending topic in 2023. However, many organizations don’t realize the head start their ECM investments give them when it comes to AI.

ECM and Content Services platforms serve as the mothership of an organization, storing vast amounts of customer, business, and compliance documents that can provide key insights into R&D, the state of the business, and how the business can scale.

However, organizations these days use ECM platforms like IBM FileNet merely as a repository of documents, much like a network drive. Robust, modern ECM platforms like FileNet have a clear document classification taxonomy design that allows every LOB to categorize document types according to its business functions. On top of this, they provide fine-grained security and expose content services that let documents and metadata be consumed by other applications.
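To make that concrete, here is a minimal sketch in plain Python of the idea of a classification taxonomy combined with a fine-grained access check. The document classes, roles, and helper function are all invented for illustration; they are not FileNet's actual API or object model:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """A stand-in for an ECM document with its taxonomy class and ACL."""
    doc_id: str
    doc_class: str                                   # taxonomy node, e.g. "Claims/AutoClaim"
    metadata: dict = field(default_factory=dict)
    allowed_roles: set = field(default_factory=set)  # fine-grained security

def fetch_for_role(repo, role, class_prefix):
    """Return only documents in a taxonomy branch that the role may read."""
    return [
        d for d in repo
        if d.doc_class.startswith(class_prefix) and role in d.allowed_roles
    ]

repo = [
    Document("d1", "Claims/AutoClaim", {"state": "TX"}, {"claims_analyst"}),
    Document("d2", "Claims/HomeClaim", {"state": "CA"}, {"claims_analyst"}),
    Document("d3", "HR/Payroll", {}, {"hr_admin"}),
]

# A claims analyst sees only the Claims branch; HR documents stay hidden.
visible = fetch_for_role(repo, "claims_analyst", "Claims/")
print([d.doc_id for d in visible])  # → ['d1', 'd2']
```

The point is that the taxonomy plus the ACL does the filtering before any other application, including an LLM pipeline, ever sees a document.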

Now let’s talk a little bit about LLMs and the big boost ECM can offer them.

As covered in my earlier post, LLM foundation models are trained on massive global data sets that may not be directly usable by every organization. To narrow down to a particular use case, an LLM has to be adapted to that organization’s data set.

Creating an LLM from scratch is extremely expensive given the infrastructure required in terms of cores, GPUs, memory, and so on. The cost-effective way is to adapt one or more of the existing foundation models (BERT, BART, BLOOM, GPT, Flan-T5, etc.) to your organization’s use case.

Easier said than done.

There are two techniques for making a foundation model work for your use case:

1. Prompt Tuning — Adapting a model by supplying example prompts and responses so it learns the desired behavior for the use case.

For example: supplying different sets of user queries, paired with the responses a model should give, drawn from a field worker’s experience handling those queries.

2. Fine Tuning — Supplying the most accurate set of data and documents related to the LOB’s business to the model, recalibrating its weights and embeddings so that it assigns the highest probability to the best next action or phrase.

For example: fine-tuning an LLM that powers a customer self-service chatbot in telecom, so that it gives users information about their current plan along with recommendations to save cost.
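The prompt-tuning idea above can be illustrated, in its simplest form, by assembling curated query/response pairs into a few-shot prompt. This is a plain-Python sketch with invented telecom examples; true prompt tuning learns soft prompt vectors rather than literal text, but the training data takes the same shape:

```python
# Curated (query, response) pairs based on a field worker's experience.
EXAMPLES = [
    ("My router light is blinking red.",
     "A red light usually means loss of sync; power-cycle the router first."),
    ("Internet is slow every evening.",
     "Evening slowdowns are often congestion; check whether a plan upgrade helps."),
]

def build_few_shot_prompt(examples, user_query):
    """Prefix the new query with worked examples so the model imitates them."""
    parts = [f"Q: {q}\nA: {a}" for q, a in examples]
    parts.append(f"Q: {user_query}\nA:")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(EXAMPLES, "My TV box keeps rebooting.")
print(prompt)
```

The resulting string ends with the new query and an open `A:`, leaving the model to complete the answer in the style of the examples.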

Fine tuning is extremely important in determining the accuracy and cost of the results an LLM produces. Giving a model access to an ECM system significantly reduces the time it takes to train the model for accuracy. In addition, ECM systems like FileNet protect data and documents with fine-grained, authorization-based security that controls what data goes in and out.
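As a sketch of how an ECM system might feed fine tuning, the snippet below converts documents into instruction/response training records. The documents, field names, and helper are hypothetical; it assumes the documents were already retrieved through the ECM's secured content services, so the authorization check happened before this point:

```python
import json

# Hypothetical documents already fetched through the ECM's secured
# content services (the ACL check has already been applied).
ecm_docs = [
    {"doc_class": "Telecom/PlanFAQ", "title": "Unlimited 5G plan",
     "body": "The Unlimited 5G plan includes 50 GB of hotspot data."},
    {"doc_class": "Telecom/PlanFAQ", "title": "Family Share plan",
     "body": "Family Share covers up to 5 lines with shared data."},
]

def to_training_records(docs):
    """Turn ECM documents into instruction/response pairs for fine tuning."""
    return [
        {
            "instruction": f"Describe the {d['title']}.",
            "response": d["body"],
            "source_class": d["doc_class"],  # keep provenance for audits
        }
        for d in docs
    ]

# JSONL is a common input format for fine-tuning pipelines.
jsonl = "\n".join(json.dumps(r) for r in to_training_records(ecm_docs))
print(jsonl)
```

Keeping the taxonomy class on each record preserves provenance, so you can later audit exactly which LOB documents shaped the model.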

This is one simple implementation pattern by which organizations that have invested in a high-end ECM system like IBM FileNet can take immediate advantage of LLMs to create AI models that address specific problems.
