Principles for AI product design

David Peterson
Published in Angular Ventures
3 min read · Feb 28, 2023


There’s a famous saying that customers don’t want a drill, they want a hole in the wall.

Given the out-of-the-box power and versatility of LLMs, this aphorism has gained new meaning. Might we be entering an era where most software will get much closer to producing the hole itself? In a world where AI can do a lot of what humans can do, how will software evolve?

To help think through what products might look like in our AI-powered future, it’s worth recalling one of the most powerful and ubiquitous AI-powered products of our past: Google’s conversion optimizer.

Back in the early 2000s, paid marketing was an entirely different beast than it is today. Marketing teams spent much of their time figuring out how much to bid. Go far enough back in Search Engine Land and you’ll see what life was like.

Then, in late 2007, Google’s conversion optimizer launched. And in what may have been one of the first jobs replaced by artificial intelligence, paid marketers suddenly didn’t need to bid anymore. Instead, they refocused their efforts on two things:

First, data. The more relevant data marketers could pipe back into Google, the better the conversion optimization worked. At the beginning, this meant sending back conversion data. Eventually, this started to include product usage data as well (and anything else that was seen to improve performance).

Second, creative. The more creative iterations marketers could provide to the system, the more likely they were to find a new winner and boost performance. Top performing marketing teams were creative-production machines, testing new ideas on a weekly or daily basis.

Yes, the conversion algorithm itself was a black box, but that didn’t stop paid marketers from reorienting themselves to do whatever they could to improve its performance at the margins.

So, let’s say you’re building a product with AI at its core. What does the example of Google’s conversion optimizer teach us about how you might design it?

First, build a proprietary data advantage. If you recall, Google launched conversion tracking long before launching conversion optimizer, so Google had all the conversion data they needed to build the initial model. Like Google, you need some sort of internal data to bootstrap the model at the start. It may be the case that an out-of-the-box LLM is good enough to get you started, but some sort of proprietary data will likely be key to long-term differentiation.

Second, enable your users to proactively improve the model. Just as Google let marketers pipe back every type of “conversion” they wanted, you should empower your users to feed external performance data back into the model. This has the added benefit of making your product stickier: not only will the model’s performance improve, but improving it will become part of the user’s job!
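To make this concrete, here’s a minimal sketch of what that feedback loop might look like in code. Everything here (the names `OutcomeEvent`, `FeedbackStore`, the field names) is an illustrative assumption, not a real product or API: users report real-world outcomes tied to a model output, and those outcomes accumulate into a dataset you can use for retraining or evaluation.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical sketch: a tiny feedback API your users could call to report
# real-world outcomes ("conversions") back into your model pipeline.
# All names are illustrative assumptions, not from any specific product.

@dataclass
class OutcomeEvent:
    prediction_id: str   # which model output this feedback refers to
    outcome: str         # e.g. "converted", "churned", "ignored"
    value: float = 0.0   # optional business value of the outcome

class FeedbackStore:
    """Collects user-reported outcomes for later retraining or evals."""

    def __init__(self):
        self.events = []

    def record(self, event: OutcomeEvent) -> None:
        self.events.append(event)

    def export_training_rows(self) -> list:
        # One JSON line per event, ready to join back against model inputs.
        return [json.dumps(asdict(e)) for e in self.events]

store = FeedbackStore()
store.record(OutcomeEvent("pred-123", "converted", 49.0))
store.record(OutcomeEvent("pred-124", "ignored"))
rows = store.export_training_rows()
```

The point isn’t the specific schema; it’s that the reporting surface belongs to the user. The richer the outcomes they choose to send back, the better the model gets for them.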

Third, give your users additional levers to pull. Just as marketers pivoted to extreme creative experimentation to improve the model’s performance, think about which additional variables your users can iterate on. By giving users tools to improve the model’s performance through experimentation, you create users who define themselves by how well they can extract the best performance from your system, improving stickiness even further. (As we’ve learned in the past, customer-built products create power users and evangelists.)
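One way to sketch such a lever, under the assumption (mine, not the author’s) that the “creative” being tested is something like a set of prompt templates: let users register their own variants and let a simple epsilon-greedy loop route traffic toward whichever variant they report as performing best. All names here are hypothetical.

```python
import random
from collections import defaultdict

# Hypothetical sketch: users register their own "creative" variants
# (e.g. prompt templates) and the system routes traffic to the best one.
# Names are illustrative assumptions, not a real API.

class VariantExperiment:
    def __init__(self, variants):
        self.variants = list(variants)
        self.trials = defaultdict(int)     # how often each variant ran
        self.successes = defaultdict(int)  # how often it "won"

    def choose(self, rng=random):
        # Epsilon-greedy: explore 10% of the time, otherwise exploit
        # the variant with the best observed success rate so far.
        if rng.random() < 0.1 or not any(self.trials.values()):
            return rng.choice(self.variants)
        return max(
            self.variants,
            key=lambda v: self.successes[v] / max(self.trials[v], 1),
        )

    def report(self, variant, success):
        self.trials[variant] += 1
        self.successes[variant] += int(success)

exp = VariantExperiment(["prompt_a", "prompt_b"])
exp.report("prompt_a", True)   # user reports variant A worked
exp.report("prompt_b", False)  # user reports variant B did not
best = exp.choose(random.Random(0))
```

The design choice that matters is that users supply the variants and the outcome signal; the system just keeps score. That’s what turns experimentation into a skill users can get good at.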

Given how versatile LLMs are, it’s tempting to imagine a world where these models replace humans entirely…where they’re able to simply produce a “hole in the wall.” But, just as with Google’s conversion optimizer, I think the reality will be much more nuanced than that.