BricksAI founders’ diary: Part 1
Building a product for GenAI developers
My name is Donovan, and I'm a cofounder of BricksAI. In this series, I'll document our startup journey as it unfolds.
Unlike an Elon Musk biography or a Paul Graham blog post, this series will be honest about both our successes and our mistakes. By doing so, I hope to give you a glimpse into the startup world without the survivorship bias.
In this post, I’ll write about our journey from ideation to MVP, and our vision for the company.
Let’s start from the beginning.
How BricksAI started
When we first started Bricks, it was an open-source tool that converted Figma designs into code. To keep code generation fast while still producing good output, we used algorithms to generate “good-enough” code first, then used LLMs (large language models) to refactor it into something more maintainable.
We later gave up on that first idea, but we came away with a big realization — in order to use GenAI in production, we had to build a ton of infrastructure around it.
We would have to build a monitoring system, the logic to handle retries and validation, a system to evaluate our LLM’s performance, and more. Calling OpenAI was maybe 5% of the work; making sure it worked properly in production took up the other 95%.
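To make that 95% concrete, here's a minimal sketch of the kind of wrapper every team ends up writing around a single LLM call: retries with exponential backoff plus basic output validation. This is illustrative only — the model name, retry policy, and validation rule are placeholder assumptions, not our actual code.

```python
import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def call_llm_with_retries(prompt: str, max_attempts: int = 3) -> str:
    """Call the LLM, retrying transient failures and validating the output."""
    last_error: Exception | None = None
    for attempt in range(max_attempts):
        try:
            response = client.chat.completions.create(
                model="gpt-3.5-turbo",  # placeholder model choice
                messages=[{"role": "user", "content": prompt}],
            )
            text = response.choices[0].message.content or ""
            if text.strip():  # minimal validation: reject empty completions
                return text
            last_error = ValueError("empty completion")
        except Exception as exc:  # rate limits, timeouts, 5xx errors, ...
            last_error = exc
        time.sleep(2 ** attempt)  # exponential backoff before retrying
    raise RuntimeError(f"LLM call failed after {max_attempts} attempts") from last_error
```

And that's before you add logging, cost tracking, or any evaluation of whether the output is actually good.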
As we talked to more developers, we found that this was the case everywhere. People have to build “wrappers” around generative AI services before they can use them in an enterprise or production setting. We thought this was hugely inefficient because (1) everyone was building the same wrappers, and (2) it distracted people from building actual features that drive business value.
Hence, we decided to dive into the world of GenAI developer tooling.
The MVP
Obviously, trying to do everything at once is suicide for a startup. We needed to do one thing, and do it really well.
After speaking to more LLM users, we found a few recurring themes:
- Companies are sharing and using the same LLM credentials (e.g. OpenAI API key) everywhere
- Companies do not know where their LLM spend is going
- Companies are worried their LLMs are being misused
After consolidating our insights, we concluded that the issue boiled down to a lack of access control and monitoring for LLMs. People needed a simple way to control and monitor how each team/project/user used their LLMs.
Within a few weeks, we put together an MVP and launched on Hacker News. We reached the front page (albeit with a clickbaity title), and our open-source repo gained 600 GitHub stars in a day.
The demo
The product was simple. We let developers connect their LLM accounts (e.g. OpenAI, Azure OpenAI, Anthropic) to us, then create proxy API keys that come with custom limits (e.g. expiry date, rate limit, spend limit). That way, you could distribute a proxy API key to each user/project and be sure that no single person/project could use up your entire account’s limit.
We also showed usage metrics per proxy API key, such as cost, number of tokens, number of requests, and average latency, so you’d know right away if there was any unusual activity on any of your keys.
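For illustration, here's a hedged sketch of how a proxy key might be used, assuming the gateway exposes an OpenAI-compatible endpoint. The base URL, key format, and model name below are placeholder assumptions rather than our documented API.

```python
from openai import OpenAI

# Hypothetical setup: point the OpenAI SDK at the BricksAI gateway and
# authenticate with a per-team proxy key instead of the real OpenAI key.
# The gateway enforces the key's expiry date, rate limit, and spend limit,
# and records usage metrics (cost, tokens, requests, latency) per key.
client = OpenAI(
    api_key="bricks-proxy-key-team-alpha",      # placeholder proxy key
    base_url="https://gateway.example.com/v1",  # placeholder gateway URL
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello from team alpha"}],
)
print(response.choices[0].message.content)
```

Since each team or project gets its own key, exceeding a limit only blocks that key, not the shared account.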
Here is a quick video demo:
The business model
Since we had no idea how best to charge for our product, and we knew that pricing is an iterative process anyway, we figured the best way to start was to copy an existing product. We ended up with a pricing model similar to that of Kong, another API gateway product:
- To generate interest and trust, we would open-source our entire backend under the MIT license.
- Then, we would charge for (1) our frontend, (2) priority support, and (3) hosting (both cloud and self-hosted).
The thinking was that we’d attract people with our open-source project, then charge for time and convenience.
The Vision
Obviously, our functionality is very limited today. But we do have a vision for our product.
The future of GenAI will be modular. There will be a ton of LLMs, LVMs (large vision models), LMMs (large multi-modal models), and vector databases, and people will have the freedom to swap things around.
However, people will always need a system to make sure everything works well together. No matter what GenAI service people use, they’ll always need features like observability, access control, data security, error handling, and more.
Our vision at BricksAI is to become the core of any GenAI stack. We want to become the glue that holds everything together. We want to become the go-to starting point that people use to start building GenAI applications. Will we get there? We’ll see!
What now
BricksAI is live today. Try us out here! We’d love as much feedback as possible.
Now we’re trying to put our product into the hands of as many people as possible. Is our hypothesis correct? Are we solving the right problem? We have no idea.
But I think that’s what makes entrepreneurship exciting.
’Til next time!