Rise of Generative AI in Business

Arthur C. Clarke famously said that any sufficiently advanced technology is indistinguishable from magic, and perhaps the first time we played with Generative AI it did evoke a sense of magic. Suddenly, for the first time in our history, we have a technology that can speak our languages, understand our requests, and produce entirely novel output. AI can write poetry and draw otherworldly images. It can write code. It can surprise and delight us with an original joke or musical composition. It can create, and an act of creation often inspires wonder. But AI is not magic. It’s maths and science. And it wasn’t sudden. These experiences have been decades in the making. AI is going to touch every aspect of our lives. It will change the world. But how it will change the world is up to us. To all of us.

History of AI

People have been speculating about the possibility that machines would someday think since the late 1800s, but the idea really took root with Alan Turing’s seminal paper in 1950. Historians have called Turing the father of AI. He theorized that we could create computers that could play chess, that they would surpass human players, and that we could make them proficient in natural language. He theorized that machines would eventually think. But Turing was just the beginning. If Turing’s 1950 paper was the spark, then just six years later we had the Big Bang: the Dartmouth Workshop. A couple of young academics got together with a couple of senior scientists from Bell Labs and IBM and proposed an extended summer workshop with just a small handful of top people in adjacent fields to intensively consider artificial intelligence. That is how the phrase ‘Artificial Intelligence’ was coined, and it marks the point at which AI was established as a field of research. They laid out, in extensive detail, many of the challenges that we’ve been working to solve all these years in developing machines that could potentially think: neural networks, self-directed learning, creativity, and more. All still relevant today. For perspective, this was 1956, the same year the invention of the transistor won the Nobel Prize. Today a single GPU can hold over 100 billion transistors, and banks and banks of interconnected GPUs provide the compute power to create and execute generative AI functions. All these years, AI theories, techniques, and ideas have been developed in parallel with progress in hardware that resulted in dramatic reductions in computation and storage costs, all converging now to make generative AI real and practical. But it’s not just about powerful hardware and clever algorithms. The third, and maybe the most important ingredient, particularly when it comes to any business, is data. We can’t talk about generative AI without talking about data. It’s the third leg of the AI stool.
Model architecture, plus computation, plus data.

Foundation Models

We hear about large language models, or LLMs, that are powering generative AI. So, what are they? At a basic level, they are a new way of representing language in a high-dimensional space with a large number of parameters, a representation that we create by training on massive quantities of text. From that perspective, much of the history of computing has been about coming up with new ways to represent data and extract value from it. We put data in tables, rows of employees or customers, and columns of attributes in a database. This is great for things like transaction processing or writing cheques for payments to individuals. Then we started representing data with graphs, and we began to see relationships between data points. A person or a business or a place is connected to some other people or businesses and places. Data, represented this way, starts to reveal patterns, and we can map a social network or spot anomalous purchases for credit card fraud detection. Now, with large language models, we take vast amounts of data and represent it in neural networks that simulate an abstract version of brain cells. Layers and layers of connections with billions, even trillions of parameters. And suddenly we can start to do some fascinating things. We can discover patterns that are so detailed that we can predict relationships with high confidence. We can predict that a specific word is most likely connected to some other word, that these two words are most likely followed by a specific third word, building up, reassessing, and predicting again and again until something new is written, created, or generated. That’s what generative AI is: the ability to look at data, discover relationships, and predict the likelihood of sequences with enough confidence to create or generate something that didn’t exist before. Text, images, sounds, whatever data can be represented in the model.
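To make the predict-append-predict loop concrete, here is a toy sketch in plain Python. It uses simple word-pair counts over a tiny corpus rather than a neural network, so it is nowhere near how an LLM with billions of parameters actually works, but the generation loop, predict the most likely next word, append it, and predict again, is the same idea:

```python
from collections import Counter, defaultdict

# Tiny corpus; a real model trains on massive quantities of text.
corpus = "we predict the next word then we predict the next word then stop".split()

# Count which word follows which (a "bigram" table).
follow_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[current][nxt] += 1

def generate(start, length=5):
    """Repeatedly append the most likely next word (greedy decoding)."""
    words = [start]
    for _ in range(length):
        candidates = follow_counts.get(words[-1])
        if not candidates:
            break  # no known continuation
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("we"))  # → "we predict the next word then"
```

An LLM replaces the count table with a learned, high-dimensional representation, and usually samples from the predicted probabilities instead of always taking the top word, but the sequence is still built one prediction at a time.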
We could do a limited version of this earlier with deep learning, which was an AI milestone in its own right. With deep learning, we started representing massive amounts of data using very large neural networks with many layers. But until recently, a lot of the training happened using annotated data, data that humans would label manually. We call this supervised learning, and it’s expensive and time consuming. So only large institutions were doing that work, and it was done for specific tasks. But around 2017, we saw a new approach, powered by an architecture called the transformer, that enabled a form of learning called self-supervised learning. In this approach, a model is trained on a large amount of unlabeled data by masking certain sections of the text (words, sentences, and so on) and asking the model to fill in those masked sections. This amazing process, when done at scale, results in a powerful representation that we call a large language model. Instead of narrow use cases and areas of expertise, we started to have something broader. These LLMs could be trained on huge volumes of internet data and acquire a human-like set of natural language capabilities. Self-supervision at scale, combined with massive data and computation, gives us representations that are generalizable and adaptable. These are called foundation models: large-scale neural networks that are trained using self-supervision and then adapted to a wide range of downstream tasks. This means that we can take a large pre-trained model, ideally trained with trustworthy industry-specific data, and add our institutional knowledge to tune the model to excel at our specific use cases. We end up with something that is tailored for us, but also quite efficient and much faster to deploy. The current thinking is usually that we can apply this to language, but that sparks a question: what is a language? Signals in a piece of industrial equipment are talking to us.
So are the clicks of a user navigating a website, software code, chemistry, and the diagrammatic representations of chemicals. If we squint, everything starts looking like a language that can be deciphered and understood. AI can be specialized to do all kinds of things that boost productivity in any of those languages. This means that AI can stretch horizontally across our businesses to HR processes, customer service and self-service, cybersecurity, code writing, application modernization, and so many other things.
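The self-supervised masking described above can be sketched in a few lines. This is only an illustration of how training pairs are made from unlabeled text (the hidden words themselves are the targets, so no human annotation is needed); `make_masked_example` is a hypothetical helper, and real systems mask learned subword tokens, not whitespace-split words:

```python
import random

def make_masked_example(sentence, mask_rate=0.15, seed=0):
    """Hide a fraction of words; the model's job is to recover them."""
    rng = random.Random(seed)
    tokens = sentence.split()
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked.append("[MASK]")
            targets[i] = tok  # training target: the original word
        else:
            masked.append(tok)
    return " ".join(masked), targets

inp, targets = make_masked_example(
    "foundation models are trained on large amounts of unlabeled text")
print(inp)      # the sentence with some words replaced by [MASK]
print(targets)  # which words were hidden, keyed by position
```

Because the labels come for free from the text itself, this process can run over huge volumes of data with no manual annotation, which is exactly what makes training at foundation-model scale feasible.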

Gen AI in CMI

The data-rich Communications, Media & Information (CMI) industry faces a range of opportunities for digitization, as well as a challenge in managing and analyzing vast amounts of information. CMI businesses have seen some success in leveraging AI to reduce manual effort and improve efficiency, and while some enterprises are well on their way to AI maturity, others are just getting started. Generative AI can be the enabling technology that allows CMI businesses at all levels of AI maturity to accelerate digital transformation and unleash entirely new capabilities and business outcomes.

With Generative AI, some of the greatest potential value is found in accelerating efficiencies through digitization. New opportunities often come with new challenges, and the risks and complexity of Generative AI can be significant. Here are a few opportunities that will be of use to organizations in the coming months and years:

· Conversational chat for customer service (Virtual Voice Customer Assistants): With a Generative AI-enabled voice assistant, customer concerns can be remedied faster and in line with company policies and standards while maintaining or even enhancing customer satisfaction.

· Generative AI for gamers (Game Content Development): Developers can leverage Generative AI to maintain and update their game with new assets and content in line with user community requests and interests.

· Annotation with automation (Code Summarization and Documentation): Automating code summarization and documentation frees up developers to focus on higher-value tasks, while also enabling code explainability for technical and nontechnical stakeholders.

· Content creation with AI (Generative AI-Enabled Creative Tools): Content creation can be facilitated and enhanced with Generative AI tools that minimize the need for manual editing and time-consuming content management.

· Translate specs for sales (Technical Sales Knowledge Management): Generative AI can help sales staff quickly find and translate technical specifications to customers, as well as document and summarize insights from customer interactions.

· Marketing content multiplier (On-Brand Publishing): Using Generative AI, marketing content generation can be cheaper, quicker, and more effective, while still preserving the company’s brand identity.

· Language translation at scale (Content Localization): Generative AI can be used to scale content quickly and easily across regions by translating and converting text and audio into regional languages.

· Technician support on the go (Telco Network Maintenance): Generative AI-enabled simulations can drive network maintenance speed and effectiveness to help field technicians quickly identify and resolve root causes of network issues.

· Enhancing chip innovation (Semiconductor Chip Design & Manufacturing): Generative AI can be used to iterate chip designs by having designs “compete” across a set of performance dimensions.

· Tech specs on demand (Field Sales Assistant): Generative AI can help operations and frontline staff quickly find and translate technical specifications to enable faster knowledge retrieval.

Future of AI

With all the advances achieved in the last few years, the ambition of the 1950s has come full circle. Today’s models don’t constitute true general intelligence, but some of them can pass the Turing test. So, what does it mean for all of us? Some people encounter generative AI and think we’re at the dawn of a bright utopian age, while others think this is the prelude to dystopian misery. As an avid follower of this technology, I take a moderate view. Both the optimism and the anxiety are valid, and we’ve asked the same questions at every major innovation milestone from the Industrial Revolution onward. AI isn’t just about the digital world. It’s also about the physical world. Applied properly, imagine what AI can do for the pace of discovery and innovation, what it can do for discovering new materials, for medicine, energy, climate, and so many of the pressing challenges that we face as a species. Ultimately, our success depends on how we approach AI. ‘Generative AI’ is a phrase that really became part of the public conversation only around the second half of 2022. Since then, we have seen new models, evolved models, and an explosion of open models. Generative AI has gone from being a fascinating novelty to a new business imperative in less than a year, and every day there is news of a new use case or application. There’s such rapid growth that I can’t predict exactly where we’ll be ten years from now, or even ten months from now. But I do know that we’re going to want to be actively engaged in shaping that journey. The future of AI is not one or two amazing models that do everything for everyone. It’s multimodal. It needs to be democratized, leveraging the energy and the transparency of open science and open-source AI, so that we all have a voice in what AI is, what it does, how it’s used, and how it impacts society, and so that we get to decide what AI can do and how it integrates with our business. It’s time to start making plans for how we can effectively, safely, and responsibly put AI to work.
Here are four main pieces of advice from my side.

· Protect our data. Our data, and the representations of that data, which, as I just explained, are what AI models are, will be our competitive advantage. Don’t outsource that. Protect it.

· Embrace principles of transparency and trust, so that we can understand and explain as much as possible of the decisions or recommendations made by AI.

· Implement AI ethically. Our models should be trained on legally accessed, quality data. That data should be accurate and relevant, and also controlled for bias, hate speech, and other toxic elements.

· Don’t be a passenger. We need to empower ourselves with platforms and processes to control our AI destiny. We don’t all need to become AI experts, but every business leader, every politician, every regulator, everyone should have a foundation from which to make informed decisions about where, when, and how we apply this new technology.

Conclusion

These are the early days of Generative AI, but the technology is rapidly maturing. As it does, organizations in every industry will probe how this type of AI can contribute to their business and open doors to transformative opportunities. As such, an important part of understanding and working with Generative AI is shaping the vision for the future, acknowledging both the potential benefits and the risks. In this Generative AI-enabled era, governance and risk mitigation are business imperatives. The challenges organizations face with traditional AI are amplified in this new arena. A commitment to the trustworthy development and use of Generative AI will only become more important as the capabilities grow and governing bodies shape rules for their application. Still, there is also a risk in waiting to embrace Generative AI. The use cases described in this article are a starting point for exploring how this powerful technology can be used to improve the enterprise today and prepare it to lead in the future.

Mayank Saroha is a Business Consultant for Tata Consultancy Services in the India, Middle East & Africa region and a part of TCS’ Strategic Leadership Program. He holds an MBA degree from the prestigious IIM Bangalore, Cohort of ‘22.
