Introducing the Titan Takeoff Inference Server 🛫

TitanML
3 min read · Jul 17, 2023


Super fast inference of LLMs!

Experience unprecedented speed in inference of large language models (LLMs) — even on a CPU. Just last week, we showcased a Falcon 7B operating with real-time inference on a standard CPU (🤯). Our demonstration caught the attention of data scientists and ML engineers who were astounded at the feasibility of such a process. They wanted to achieve this kind of memory compression and speed-up for themselves!

Today, we introduce the Titan Takeoff Inference Server!

Our mission with the Titan Takeoff Inference Server is to make it remarkably straightforward to perform rapid real-time inference, even with large open-source language models. We’ve incorporated features that allow you to experiment with these models swiftly — it’s the fastest way to evaluate models on your preferred hardware!

Use cases

The Titan Takeoff Server opens up new use cases by making language models more accessible. Real-time inference on low-cost, readily available devices will drastically change the landscape of LLM-powered applications. As the cost of fine-tuning comes down, the capabilities of small models will only improve over time.

Here are just a few app examples that our team has built on top of the Takeoff server over the last few weeks:

  • An automated technical article summarisation tool.
  • A writing assistant, designed to identify negative writing habits and correct them on the fly.
  • A knowledge graph extraction tool for news articles.
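To give a flavour of how an app like the summarisation tool above can be built, here is a minimal sketch of the prompt-building step. The prompt template and function name are our illustrative assumptions, not the app's actual implementation; in practice you would tune the wording per model.

```python
def build_summary_prompt(article: str, max_sentences: int = 3) -> str:
    """Wrap an article in an instruction-style prompt for summarisation.

    The template below is an illustrative assumption; adjust it for the
    instruction format of whichever model the Takeoff server is serving.
    """
    return (
        f"Summarise the following technical article in at most "
        f"{max_sentences} sentences.\n\n"
        f"Article:\n{article}\n\n"
        f"Summary:"
    )
```

The resulting string is then sent to the running Takeoff server like any other prompt, so the same app works unchanged when you swap in a different model.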

These are just the tip of the iceberg, showcasing applications that demand swift and accurate inference built on the robust TitanML inference and fine-tuning infrastructure.

Performance Benchmarks

We have benchmarked the inference server on GPU and CPU. We have seen speeds up to 10x faster with 4x lower memory requirements compared to running the base model implementation 🤯.

We have a lot of work lined up to improve these benchmarks even more, so do stay tuned!

Getting started is a Breeze

You can jump-start your journey with Titan Takeoff by creating a free TitanML account. Then, you’re just a few lines of code away from unlocking its power:

# install the local Python package
pip install titan-iris

# launch the Takeoff server with your model of choice
iris takeoff --model tiiuae/falcon-7b-instruct --device cpu
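Once the server is running, you can query it from any language over HTTP. The sketch below uses only the Python standard library; note that the endpoint path, port, and JSON schema are assumptions for illustration — check the documentation for the exact API.

```python
import json
import urllib.request


def build_payload(prompt: str) -> bytes:
    """JSON body for a generation request (schema is an assumption)."""
    return json.dumps({"text": prompt}).encode("utf-8")


def generate(prompt: str, url: str = "http://localhost:8000/generate") -> str:
    """Send a prompt to a locally running Takeoff server and return the
    generated text. Endpoint and response shape are illustrative guesses."""
    req = urllib.request.Request(
        url,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["text"]
```

Because the server speaks plain HTTP, the same pattern works from a notebook, a backend service, or a cURL one-liner.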

Check out our comprehensive documentation, join our vibrant Discord community, and don’t miss our end-to-end demo:

Some first project ideas…

As well as good generalist models such as the Falcon 7B Instruct model, there are a number of models designed for specific use cases that you could try…

  • Build chatbots with models like Vicuna (lmsys/vicuna-7b-v1.3)
  • Create summarisers using Bart (facebook/bart-large-cnn)
  • Develop locally-run coding assistants with models like CodeGen (NumbersStation/nsql-2B)

We can’t wait to hear about what you build with the Takeoff inference server!

About TitanML

TitanML enables machine learning teams to effortlessly and efficiently deploy large language models (LLMs). Its flagship product, the Takeoff Inference Server, is already supercharging the deployments of a number of ML teams.

Founded by Dr. James Dborin, Dr. Fergus Finn and Meryem Arik, and backed by key industry partners including AWS and Intel, TitanML is a team of dedicated deep learning engineers on a mission to supercharge the adoption of enterprise AI.

Our documentation and Discord community are here to support you.

A quick note about licensing — the Takeoff server is free to use in personal/academic projects (please credit us if you write it up publicly! 😉) — message us at hello@titanml.co if you would like to explore using the inference server for commercial purposes.

Written by Meryem Arik, edited with love by LLMs❤️

