Written by Kamesh Raghavendra, Chief Product Officer, The Hive
As an early-stage venture fund and studio focused on data and AI for the past decade, The Hive has had a front-row seat in closely observing the evolution of AI within our ecosystem of entrepreneurs, enterprises, and large cloud providers. Over the past two years, we have watched the pieces come together for the dawn of a new generative AI epoch in the enterprise, one larger than the previous two epochs of cloud computing and big data. With the invention of language models, AI has become a new, radically more powerful, readily consumable utility that is bringing epochal shifts in enterprises’ architecture, operations, and interfaces. This epoch will set in, with unprecedented acceleration, over the next 3–5 years, disrupting incumbent cloud and SaaS providers and ushering in a next generation of AI-native enterprise systems powered by AI as a utility.
Owing to the utility-style, readily consumable nature of generative AI, we see the shift to generative AI play out in three concurrent horizons:
Horizon-1.0: Pre-trained LLMs as a pass-through AI utility.
Enterprises can today choose from a rapidly growing set of pre-trained LLMs offered by both SaaS/software and generative AI vendors to significantly improve their workforce productivity. These LLMs are integrated with existing applications as a new interface (such as natural language prompts or text as query) or a new feature (like text summarization or code generation). Hosted LLM services protect enterprises’ data and IP with traceable audit trails on every search or query. Here are the characteristics of this horizon that we have observed so far:
- Given the power and proliferation of pre-trained LLMs in the market (from providers like OpenAI, Google, Meta and NVIDIA), enterprises are adopting them as new hyper-productive interfaces to existing systems. Use-cases like marketing content creation, developer co-pilot, and semantic search & query drive significantly higher employee productivity and upskill the workforce with prompt engineering capabilities.
- Hosted pre-trained LLMs like Azure OpenAI and NVIDIA Megatron provide both protection of data, PII & IP, as well as usage-based pricing. Leading SaaS providers like Salesforce, ServiceNow and Microsoft offer built-in generative AI integrations to power out-of-box semantic search prompts, text completion and content, and query & code generation co-pilots through pre-trained LLMs.
- We expect LLM adoption over the next 6–18 months to show a marked improvement in the bottom-line of early-adopter enterprises, through gains from workforce productivity and automation.
- We are seeing CIOs play an active role in driving governance and regulatory compliance, thus bringing LLMs into the mainstream of enterprise IT.
- Furthermore, we expect pricing to drop by 30X–100X over the next year (with Azure GPT-4 pricing as a reference for current price levels), driven by language model reuse, infrastructure optimizations, and lower-adaptation (or small) models.
Horizon-1.5: Enterprise-trained LLMs as value-added, differentiated utility.
While Horizon-1.0 creates a new baseline for enterprise workforce productivity and application interfaces, generative AI also enables enterprises to drive market differentiation and revenue growth by leveraging their data assets. In this horizon, enterprises train proprietary language models on internal data assets, which offer specialized outcomes not available from pre-trained LLMs. These specialized outcomes could be related to a business function (like sales & marketing, operations & supply-chain, or HR) or to the enterprise’s vertical. SaaS/software vendors are playing a key role in this horizon by offering powerful model training frameworks that enable enterprises to train proprietary models without needing to acquire generative AI talent.
- Traditional discriminative AI (like neural networks) could power only single use-case applications (like advertising, fraud detection, or forecasting), which set a very high economies-of-scale bar that was difficult to clear outside of the consumer Internet. Generative AI, however, unlocks a utility model of consumption that can concurrently power multiple applications, making AI viable for many more use cases. This opens up new opportunities for enterprises to monetize data collected from their products, customers, vendors & employees.
- Generative AI has a vibrant ecosystem of open-source models (like those from MosaicML, recently acquired by Databricks, or GPT-J) that lets enterprises drive market differentiation with their data assets. We expect proprietary LLM providers to follow suit, offering plug-ins or add-ons that let enterprises securely train on their internal data.
- Early use-cases include operational co-pilots (like support, incident resolution, etc.), security (SOC), IT & system admin, and business intelligence. These operational co-pilot models are trained on the enterprise’s unique customer, systems, and operational data, unlike superficial pre-trained code-generation co-pilots.
- The time to adoption will be driven by the costs of GPU time (currently $4+/hr, which can add up to $1MM+ even for models with hundreds of millions of parameters) and of development (hiring or contracting talent). SaaS and software vendors are offering pre-built frameworks for enterprises to securely train models on their internal data without needing to hire new talent. As we have seen with cloud computing, we expect GPU compute prices to drop over the medium term, especially given all the effort underway in the semiconductor industry.
- Adoption of enterprise-trained LLMs will be a business-unit/function-driven initiative and will play out over a longer product lifecycle as the technology matures further and AI regulations gain clarity. However, given the accelerating factors mentioned above, we characterize this as a Horizon-1.5 cycle in the 1.5–3 year horizon.
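The GPU cost figures above can be sanity-checked with simple back-of-envelope arithmetic. In the sketch below, the cluster size and run duration are hypothetical assumptions chosen purely to illustrate how a $4/GPU-hour rate compounds into a seven-figure training bill; actual hours depend on model size, data volume, and hardware.

```python
# Back-of-envelope LLM training cost, using the $4/GPU-hour rate cited above.
# The GPU count and run duration are hypothetical, for illustration only.
GPU_PRICE_PER_HOUR = 4.00       # $/GPU-hour (rate cited in the text)
num_gpus = 500                  # hypothetical cluster size
hours_per_gpu = 500             # hypothetical: roughly three weeks wall-clock

total_gpu_hours = num_gpus * hours_per_gpu
total_cost = total_gpu_hours * GPU_PRICE_PER_HOUR
print(f"{total_gpu_hours:,} GPU-hours -> ${total_cost:,.0f}")
```

Even modest-sounding clusters reach the $1MM+ range quickly at current rates, which is why the expected drop in GPU compute prices matters so much for this horizon’s adoption curve.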
Horizon 2.0: Generative AI native applications
Much like big data and cloud, generative AI is a paradigm shift in the design of enterprise systems, as it is available to be consumed at every layer and every interface. These applications will radically change the role of human operators from engaging in active operations to providing governance & oversight. They will displace the traditional interfaces of workflows, data/query & administration with action, explanation & governance. The costs of model training, model inference, and vector embeddings will drive unprecedented application architectures optimized for an AI-native infrastructure. These applications will evolve language model frameworks (the equivalent of Kubernetes in the cloud-native world) that allow seamless use of both pre-trained LLMs and a larger ecosystem of specialized lower-adaptation (or small) models.
This is a great opportunity for entrepreneurs to rethink every business function (like CRM, service operations, security, and HR) and vertical (fintech, retail & commerce, gaming and robotics) through an AI-native lens. This is a market opportunity to create trillions of dollars of equity value through the next generation of AI-native applications over the rest of this decade. This is an important thematic focus area at The Hive, and we will be partnering closely with our ecosystem in co-creating and investing in the next generation of AI-native applications.
The Hive Think Tank Generative AI event series
The Hive Think Tank is hosting a series of events about generative AI. Our first event in the series was on Databricks Dolly, an open-source Large Language Model (LLM). The second event explored the future of generative AI beyond natural language featuring Dr. Karthik Narasimhan of the Computer Science department at Princeton and co-author of the seminal OpenAI Generative Pre-Training (GPT) paper, along with his Princeton colleague Dr. Ellen Zhong.
The next three events in this series go deep into the adoption of generative AI in the enterprise through three unique lenses of CIOs, enterprise business units & operations, and SaaS & software vendors:
- Wed July 26, 2023, 11am — 12pm PT: CIO Viewpoint: Enterprise Adoption of Generative AI
- Thu Aug 3, 2023, 1pm — 2pm PT: Enterprise Service Management & Generative AI
- Tue Aug 29, 2023, 11am — 12pm PT: Generative AI & The Future of Enterprise Software
Please join us at The Hive Think Tank as we embark on this exciting journey of the AI decade.