Race Capital’s Request for AI Startups

Bernard Chan
Race Capital
Aug 21, 2023

We believe that the best ideas are the ones that make us think differently.

At Race Capital, we have been investing in the data, artificial intelligence (AI), and machine learning (ML) space since our inception, with investments including Databricks, Opaque, Vectara, Sematic, Zeet, and many others. It has been an exciting journey so far, especially with the recent surge in large language models (LLMs). Since the launch of ChatGPT, we have observed a Cambrian explosion in LLM applications, leading to the emergence of a brand-new technology stack.

In this post, we highlight some problem areas that excite us and where we are keen to partner with founders on innovative solutions.

Disruption of Horizontal SaaS with Agents and LLMs

LLMs are in a prime position to disrupt the horizontal SaaS industry. Instead of employees spending countless hours on tedious manual work, agents can drastically reduce time spent on information retrieval, organizing, web scraping, data entry, writing boilerplate code, and many other tasks.

Legal

Imagine an LLM trained extensively on legal documents, case law, statutes, regulations, contracts, and other legal texts that can help law firms tackle complex legal challenges across different practice areas, legal systems, and jurisdictions. It would save legal professionals hours of research by providing accurate and contextually relevant information, analyses, and insights, allowing them to focus on strategic work and client relationships instead.

HR management

AutonomousHR Chatbot is a great example of how HR management software can be improved with the assistance of agents. The autonomous agent can instantly retrieve information from a company’s timekeeping policy as well as a specific employee’s data. Whenever an employee wants to know how many vacation days they have left, they can find out instantly.

We would love to see this idea expanded further, with agents swiftly retrieving information from all existing systems of record, internal compliance rules, and policies. Employees could submit paid time off and sick leave requests without any human intervention, with automatic triaging to the right personnel for situations that require further assistance.
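As a rough illustration of this pattern, here is a minimal Python sketch of an agent answering a vacation-balance question by combining a policy lookup with an employee record. The tool functions and the hard-coded routing are hypothetical stand-ins for LLM-driven tool selection and real systems of record.

```python
# Minimal sketch of the tool-using pattern described above. The tool
# functions and the routing step are hypothetical; a production agent
# would let an LLM choose which tools to call (e.g., via function
# calling) and pull data from real systems of record.

def get_timekeeping_policy(topic: str) -> str:
    """Stand-in for a retrieval call against the company policy store."""
    policies = {"vacation": "Employees accrue 1.25 vacation days per month."}
    return policies.get(topic, "No policy found.")

def get_employee_record(employee_id: str) -> dict:
    """Stand-in for a lookup in the HR system of record."""
    return {"id": employee_id, "vacation_accrued": 10, "vacation_used": 5}

def answer_pto_question(employee_id: str) -> str:
    # In a real agent, an LLM would decide which tools to call and
    # compose the final answer; here the routing is hard-coded.
    policy = get_timekeeping_policy("vacation")
    record = get_employee_record(employee_id)
    remaining = record["vacation_accrued"] - record["vacation_used"]
    return f"{policy} You have {remaining} vacation days left."

print(answer_pto_question("emp-42"))
```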

Software Development

Smol-developer is one of the first open-source agents that gives any company or individual a junior developer on demand. One simply grants the agent access to a specific repository, provides instructions on what to build, and waits for the agent to open a PR.

Beyond boilerplate code and low-level tasks, we are excited about a future where an agent writes relevant unit tests for any code it generates, automatically files tickets for bugs, writes and merges PRs, and updates documentation, all whilst ensuring that the core business logic is not broken.
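As one hedged sketch of the final step in such a workflow, the snippet below opens a pull request through the GitHub REST API after an agent has pushed generated code and tests to a branch. The repository, branch names, and token are placeholders.

```python
# Sketch: open a PR via the GitHub REST API once an agent has pushed
# its generated code (and unit tests) to a branch. Owner, repo, and
# branch names below are placeholders.
import os
import requests

def open_pull_request(owner: str, repo: str, head: str, base: str,
                      title: str, body: str) -> str:
    resp = requests.post(
        f"https://api.github.com/repos/{owner}/{repo}/pulls",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={"title": title, "head": head, "base": base, "body": body},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["html_url"]  # link to the newly opened PR

# url = open_pull_request("acme", "backend", "agent/fix-login-bug", "main",
#                         "Fix login bug", "Generated by the coding agent.")
```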

Sales Enablement

Instead of spending hours qualifying leads and crafting personalized outbound messages, sales teams could use AI sales enablement tools to focus on what truly matters: closing deals.

We see agents as the perfect sidekick for SDRs, helping identify qualified buyers on teams (e.g., for database tuning software, reaching out to the DBAs maintaining PostgreSQL databases on Amazon RDS instead of just an engineer on the infrastructure team) without spending hours of research on LinkedIn, Twitter, and blog posts.

Personal Assistant

We can foresee a future where everyone has a personal assistant agent that has memory and information from a knowledge base (calendar, email), understands intent, and executes long-lived transactions on behalf of the user. The “Westworld”-style generative agents research carried out by Stanford and Google is a first glimpse into the exciting future of personal assistants that may soon become a reality: agents can help rearrange and rebook events on one’s calendar (e.g., if Human 1 is running 30 minutes late to dinner, an agent will automatically notify Human 2 and call the restaurant to push back the reservation).

Bringing LLMs to the enterprise

Our team at Race has witnessed how major application revolutions have transpired over the years, such as the introduction of the Web in the late 90s. It generally started as a consumer sensation, which for the first time enabled consumers to browse and search graphical content. However, the real business impact of the Web ended up mostly in enterprise computing: businesses began to use the Web to transact in banking (and other financial services), logistics, eCommerce, entertainment, and ultimately everything. What we are witnessing with the ChatGPT phenomenon is not too dissimilar. We believe the ultimate business benefits of GenAI will all be around making enterprise computing far more efficient and automated than it has ever been.

Larger traditional enterprises have spent the last year strategizing on how to incorporate generative AI/LLM features into their core product offerings. Full end-to-end solutions such as Vectara and Cohere offer different hosting options (private cloud, managed cloud, secure cloud partners) to give enterprises full control over the security and privacy of their data whilst leveraging the proprietary models available.

We believe that in addition to fully managed solutions, savvier companies with more custom proprietary data may dedicate engineering resources to training or fine-tuning open-source models like Llama 2. We can think of a few potential problem areas for new companies to solve:

Data processing with LLMs

Robust data pipelines are necessary to include both public data (e.g., The Stack dataset by BigCode on Hugging Face) and proprietary data when training a model. Solutions like Databricks are perfect for more advanced data processing, as each individual data source can be managed within a larger data lake. Furthermore, scalable and tractable analytics can be run on the underlying datasets.

However, deriving insights from unstructured data stored in a data lake requires significant time and human effort in gathering annotations and performing quality control.

We are excited by the possibility of using LLMs to generate structured views over data lakes, where an LLM synthesizes code to perform data extraction or directly extracts values from documents.
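As a simple sketch of the direct-extraction approach, the snippet below asks a model to emit JSON matching a small schema for each raw document. Here llm_complete is a placeholder for any completion API, and the schema is invented for illustration; real systems would add validation, retries, and batching.

```python
# Illustrative sketch of LLM-based structured extraction: prompt the
# model to return JSON for a small schema, one row per raw document.
import json

# Hypothetical schema for this example only.
SCHEMA = {"company": "string", "contract_value_usd": "number", "renewal_date": "YYYY-MM-DD"}

def extract_record(document: str, llm_complete) -> dict:
    prompt = (
        "Extract the following fields from the document as JSON matching "
        f"this schema: {json.dumps(SCHEMA)}\n\nDocument:\n{document}\n\nJSON:"
    )
    raw = llm_complete(prompt)  # placeholder for any LLM completion call
    return json.loads(raw)      # one structured row for the materialized view

# rows = [extract_record(doc, llm_complete) for doc in data_lake_documents]
```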

Fine-tuning services / custom LLM development platforms

Data privacy is one of the main concerns of enterprises and startups looking to fine-tune their own models. They need to ensure their data stays within a secure environment, whilst still maintaining the flexibility to swap out the underlying open-source model and keeping granular control over cost, throughput, and latency.

The emergence of new startups like Lamini offers an easier and faster way for companies to build customized, private open-source models with the help of additional fine-tuning/optimization techniques (LoRA, PEFT). We also suspect that companies’ business functions will become more involved in the fine-tuning process going forward, and a low-code platform may be needed.
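To make the technique concrete, here is a minimal sketch of LoRA fine-tuning with Hugging Face’s peft library. The model ID and hyperparameters are illustrative, and it assumes access to the Llama 2 weights plus the transformers and peft packages.

```python
# Minimal LoRA setup with Hugging Face peft: wrap a base model so that
# only small low-rank adapter matrices are trained.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor for the updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only a small fraction of weights train
# ...then fine-tune `model` on proprietary data with a standard Trainer loop.
```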

Vertical-Specific Fine-Tuned Models

We are excited about industry-focused LLMs, and some, such as BloombergGPT for financial data, have been gaining a lot of interest and excitement lately. The following industries also deserve their own vertically focused LLMs:

  • Healthcare: A healthcare-focused LLM (similar to Google’s Med-PaLM 2) developed in compliance with extensive regulatory standards to ensure patient safety, data privacy, quality of care, and ethics.
  • Pharmaceuticals: A pharmaceutical-focused LLM with a priority on ensuring drug safety, efficacy, proper labeling, and enforcement of quality standards.
  • Energy and Utilities: An energy-focused LLM can be developed to ensure reliable supply, environmental protection, fair pricing, safety protocols, and market competition.
  • Telecommunications: A telecommunications-focused LLM can be developed for fair competition, consumer privacy, reliable communication service, spectrum allocation, data protection, and net neutrality.
  • Aviation: An aviation-focused LLM can be developed to ensure safety standards for air travel, air traffic control, and airport operations.
  • Food and Agriculture: A food-industry-focused LLM can be developed to ensure food safety, labeling accuracy, and animal welfare.
  • Real Estate: A real-estate-focused LLM can be developed to protect consumers and ensure fair practices in property transactions, property disclosures, rental agreements, and zoning laws.

Zapier for LLM Development

Fine-tuned models for specific use cases, like developer-focused models (e.g., Gorilla, an LLM fine-tuned for writing API calls), will make developers’ lives much easier. We believe the world needs even more developer- and operations-focused models, and we hope to see more teams fine-tune models on log data that can be applied to IT observability and/or security use cases.

Connecting Legacy Data with LLM

We believe LLMs are only as good as the data we feed them. The world needs ELT/data-integration-to-embeddings tools that can seamlessly pull in any private data a company has in existing tools like HubSpot, Salesforce, Zendesk, Notion, etc., and create embeddings or train a custom model with a single click.
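As a hedged sketch of what such a pipeline reduces to under the hood, the snippet below chunks documents, embeds them with sentence-transformers, and writes them to a vector store. The fetch_zendesk_tickets connector and the vector_store interface are hypothetical; real tools would handle auth, incremental sync, and schemas.

```python
# Simplified "source -> chunks -> embeddings -> store" pipeline.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def chunk(text: str, size: int = 500) -> list[str]:
    # Naive fixed-size chunking; production pipelines split on
    # semantic boundaries instead.
    return [text[i:i + size] for i in range(0, len(text), size)]

def ingest(documents: list[str], vector_store) -> None:
    for doc_id, doc in enumerate(documents):
        pieces = chunk(doc)
        vectors = model.encode(pieces)               # one embedding per chunk
        for piece, vec in zip(pieces, vectors):
            vector_store.upsert(doc_id, piece, vec)  # hypothetical store API

# ingest(fetch_zendesk_tickets(), vector_store)      # hypothetical connector
```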

The emergence of a new LLMOps stack

To address the need for efficient deployment, monitoring, maintenance, and evaluation of LLMs, a new LLMOps stack is emerging to reduce computational costs and other potential negative externalities.

Model Supervision and Evaluation

Evaluation is central to AI engineering. Traditional machine learning placed a larger focus on data labeling and model training, and very few models were successfully deployed at scale. With LLMs, companies can speed up the prototyping process and start iterating much earlier.

However, evaluating LLM outputs is particularly tricky (e.g., a lack of training data and differences in distribution between the real world and training data), so only a few companies are currently doing evaluation and human feedback at scale (beyond just using “thumbs up/down”). Most have ad hoc processes and write one-off scripts to update prompts and make spot checks on fixed inputs.

We believe this is a crucial problem that needs to be solved for LLM applications to be deployed at scale, and we look forward to meeting companies that are building model evaluation platforms covering model behavior, metrics, and errors.
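For context, the ad hoc approach described above often amounts to something like the sketch below: a fixed suite of prompts run through the model with crude substring checks. Here llm is a placeholder for any completion function; real platforms add LLM-as-judge scoring, regression tracking, and human review queues.

```python
# Bare-bones evaluation loop: run a fixed suite of inputs through the
# model and score outputs with simple spot checks.

EVAL_SUITE = [
    {"prompt": "What is the capital of France?", "must_contain": "Paris"},
    {"prompt": "Translate 'hello' to Spanish.", "must_contain": "hola"},
]

def run_eval(llm) -> float:
    passed = 0
    for case in EVAL_SUITE:
        output = llm(case["prompt"])                        # placeholder LLM call
        if case["must_contain"].lower() in output.lower():  # crude spot check
            passed += 1
    return passed / len(EVAL_SUITE)                         # fraction passed

# score = run_eval(llm)  # alert if the score drops after a prompt/model change
```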

Security, Compliance, and Safety

Undoubtedly there are ethical concerns around toxicity (harmful/offensive content) and hallucinations (fabrication/imagination) from LLMs. Furthermore, enterprise customers are particularly worried about leakage of personally identifiable information (PII) and other sensitive data that may have been used in the pretraining of models.

We believe that enterprises, especially in regulated industries like healthcare and fintech, will need to adopt programmable guardrails (e.g., NeMo Guardrails by NVIDIA or Guardrails AI) and firewalls (e.g., Arthur AI’s Shield, which protects against malicious prompt injection and data leakage) to ensure they meet the necessary compliance standards. We anticipate a large market opportunity for an end-to-end generative AI safety and compliance platform.
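As a deliberately simplistic illustration of an output guardrail, the sketch below redacts PII patterns from a model response before it leaves the system. The regexes are illustrative only; products like NeMo Guardrails or Shield go far beyond pattern matching.

```python
# Toy output guardrail: scrub PII-looking strings from model responses.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_pii(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

def guarded_reply(llm, prompt: str) -> str:
    # Screen the model output before it is returned to the user.
    return redact_pii(llm(prompt))

print(redact_pii("Contact john@acme.com, SSN 123-45-6789"))
```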

Commoditizing the cloud and the migration to sky computing

Current cloud providers create silos for application and model developers through heavy vendor lock-in and custom software. This is a highly inefficient and undesirable situation, as companies cannot easily migrate to better software/hardware or reduce costs effectively. Furthermore, with the increased demand for high-end GPUs for model training, individual cloud providers are struggling to meet the full needs of their customers.

We believe in the vision of a democratized cloud and we hope to support companies building towards this goal.

A few areas we are particularly interested in:

Multi-cloud-compatible frameworks and platforms

Platforms and frameworks that abstract away the services provided by a cloud provider and allow applications to be built seamlessly on top of different clouds without any changes. Examples of open-source projects include cluster resource managers (Kubernetes), application packaging (Docker), big data execution engines (Apache Spark), and general distributed frameworks for AI/Python workloads (Ray). Furthermore, platforms like Cloud Foundry (a multi-cloud and on-premise application platform) and Red Hat’s OpenShift have been leading efforts to consolidate the different OSS efforts.

The goal is to be able to run any LLM or AI job on clusters across all the different cloud providers based on the highest GPU availability, with automatic failover, whilst being able to scale out easily whenever necessary.
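As a toy sketch of that scheduling policy, the snippet below ranks clouds by reported GPU availability and fails over on capacity errors. The provider interface is entirely hypothetical; systems like SkyPilot implement this idea in practice.

```python
# Toy multi-cloud scheduler: try providers in order of reported GPU
# availability and fail over when provisioning fails.

def launch_job(job, providers) -> str:
    # Rank candidate clouds by currently available GPUs, highest first.
    ranked = sorted(providers, key=lambda p: p.available_gpus("A100"), reverse=True)
    for provider in ranked:
        try:
            cluster = provider.provision(gpus=8, gpu_type="A100")  # hypothetical API
            return cluster.run(job)
        except RuntimeError:
            continue  # capacity error: fail over to the next cloud
    raise RuntimeError("no provider had capacity for the job")
```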

Seamless Data transfer across clouds

Data transfers are currently extremely costly (up to $14 for a 100GB dataset, due to high egress fees) and slow (AWS S3 cp can be as slow as 20MB/s). Previous systems were designed when datasets predominantly lived in a single region of a single cloud.

Seamless and efficient data transfer between regions of the same cloud provider (e.g., AWS us-east-1 to AWS us-west-2) and across different cloud providers (e.g., AWS us-east-1 to GCP us-central1) could be transformative for model training and serving.

Access to distributed & underutilized GPUs

We are looking for platforms and networks for running, training, and fine-tuning models on decentralized or underutilized clusters of high-end GPUs (like Together and Gensyn), as well as on-demand GPU providers/marketplaces (like Vast.ai, CoreWeave, and FluidStack) that facilitate Docker-based container deployments.

We at Race Capital are actively investing in AI enterprise applications and the underlying infrastructure that powers this industry. If you’re a founder working in this market, we would love to chat with you.

Get in touch at deals@race.capital!
