Our Investment in Parcha: AI Agents for the Enterprise

An interview with founders AJ Asver and Miguel Rios-Berrios

steve jang
kindred ventures
9 min read · Aug 10, 2023


Earlier this week, our pre-seed and seed investment in Parcha, a new AI startup founded by AJ Asver and Miguel Rios-Berrios, was announced to the public and, most importantly, to customers. You can read more about their $5 million seed round and their vision for the company directly from the founders’ blog here and in a Fortune Magazine story here.

At Kindred Ventures, we’ve had the thrill and honor of incubating and pre-seeding their company and helping AJ and Miguel develop their ideas around AI from day zero. They had previously worked together at Brex in senior product and engineering roles, helping Brex grow into a multi-billion-dollar company. As a sounding board and instigator, we walked through a set of idea mazes that all originated from their experiences at incredible companies such as Google, pre-Elon Twitter, the fintech platform Brex, and Kindred Ventures portfolio company Coinbase. They possess a unique ability to ideate and prototype-engineer in real time across several different threads, “crazy fast,” as perfectly described by Brett Gibson from Initialized Capital, our co-investor in the seed round.

We asked AJ and Miguel a few questions about their history with AI and product development and some of their insights as founders. Please enjoy the interview Q&A below!

First off, what is Parcha’s mission and vision?

AJ: Parcha’s mission is to reduce manual workflows and empower businesses to scale using AI. Our vision is that human and AI agents are able to work together seamlessly, eliminating the mundane aspects of work.

I think direct, critical, even painful experiences with problems are invaluable in gaining insights into ideal products and solutions. Tell me more about some of the problems you’ve encountered over the course of your respective careers, perhaps at Twitter and Brex for you, Miguel, and Google, Coinbase, and Brex for you, AJ?

Miguel: The toughest challenge I’ve faced, and where I’ve learned from my past mistakes, is striking the balance between building quickly and building the “right thing”. My past experiences in research (both in academia and at Twitter) instilled in me a habit of taking time to construct solid and comprehensive projects rather than fast and imperfect versions that we could test and get feedback on. However, during my tenure at Brex, the fast-paced growth taught me the necessity of delivering immediate value without wasting time on prolonged development. I noticed that the most successful people I worked with at Brex worked in steps: they always aimed for the larger vision but also unlocked value for the business with urgency.

AJ: At Coinbase and Brex, I experienced the exact challenges we’re now trying to solve with Parcha: manual workflows across money movement, onboarding, risk management, and underwriting that became a bottleneck to growth. At Brex, for example, we once had a backlog of several thousand business applications to process, which required the whole company to spend a week assisting with compliance reviews. This was hugely challenging given the aggressive growth targets we had set for onboarding.

You’ve both been going down the AI rabbit hole for some time now. What are some of the specific ways you’ve worked with machine learning, computer vision, robotic automation, and so on in the past?

Miguel: At Twitter, I was privileged to work on their massive and unique datasets of people, news, events, and more, from a very early stage. As an IC, I worked on real-time content understanding: detecting events such as earthquakes very quickly from tweets and their context, for instance, and clustering users at scale based on their behavior. Later, as data science lead, I managed a team of skilled data scientists who worked on large-scale experiments and causal inference; addressing health and misinformation on the platform; and product data science. At Brex, I contributed to one of the early cases of manual workflow automation: automating the process of understanding financial information extracted from bank statement PDFs. I also led the teams applying machine learning to credit underwriting, fraud detection, and transaction categorization, among other areas.

AJ: My experience with AI started at Google. As a PM on the Google Photos team, I helped bring the first computer-vision-powered AI features to photo editing, including face recognition and automatic photo enhancements. At Coinbase, I led the Data product team, which included ML efforts across the company; that’s where I learned more of the technical aspects of machine learning.

The phrase “AI agents” could mean many things, given the many different ways AI can be applied in the enterprise. How do you define it? And what would make AI enterprise-grade?

AJ: What makes an AI Agent unique compared to a more traditional workflow automation tool is its ability to autonomously carry out a task while dynamically reacting to the information it learns at every step, much like a human does. What makes Parcha’s AI Agents especially powerful is that they can learn to carry out these tasks using the same policies, procedures, and tools already used by humans.
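To make that distinction concrete, here is a minimal, illustrative sketch of the kind of agent loop AJ is describing. The names, policy text, and tools below are hypothetical and are not Parcha’s implementation; in a real system, the decision function would be an LLM prompted with the policy plus everything observed so far, rather than a hard-coded stub.

```python
# Illustrative agent loop (hypothetical sketch, not Parcha's code).
# A traditional workflow tool runs a fixed sequence of steps; an agent instead
# decides the next action at every step based on the policy it was given and
# the observations it has accumulated so far.

from dataclasses import dataclass, field

@dataclass
class AgentState:
    policy: str                                    # the written procedure the agent follows
    observations: list = field(default_factory=list)

def choose_next_action(state: AgentState) -> str:
    """Stand-in for an LLM call that reads the policy and all observations
    so far, then returns the next action (or 'done')."""
    if not state.observations:
        return "fetch_business_registration"
    if "registration_ok" in state.observations or "escalated" in state.observations:
        return "done"
    return "escalate_to_human"

def run_tool(action: str) -> str:
    """Stand-in for the same tools a human analyst would use."""
    return {
        "fetch_business_registration": "registration_ok",
        "escalate_to_human": "escalated",
    }.get(action, "unknown")

def run_agent(policy: str) -> AgentState:
    state = AgentState(policy=policy)
    while True:
        action = choose_next_action(state)         # chosen dynamically at every step
        if action == "done":
            return state
        state.observations.append(run_tool(action))

if __name__ == "__main__":
    result = run_agent("Verify the applicant's business registration per KYB policy.")
    print(result.observations)                     # ['registration_ok']
```

The point of the loop is that the sequence of tool calls is not fixed in advance; swapping the stub for a model call is what lets the agent react to whatever it learns mid-task.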

Miguel: When it comes to being “enterprise-grade”, we’re focused on areas such as data privacy and security from the outset. For instance, we’re creating secure, individualized environments for each of our clients, ensuring that their data is only accessible by them. We’ve also initiated the SOC 2 compliance process early on. Our goal is to make sure our product is equipped to serve both large and small organizations effectively, from the get-go.

To start, Parcha is focused on applying AI to the fintech and financial services sectors. What are some other areas with similar compliance and operational complexity that could be ripe for acceleration through AI?

AJ: We see any regulated industry that interfaces with legacy systems as ripe for disruption, given the manual workflows needed to carry out complex procedures and policies. Good examples outside of fintech are insurtech, healthcare, and logistics (e.g., Uber, DoorDash, etc.).

Miguel: There are considerable similarities in the controls required to serve customers in these sectors. In terms of product functionality, AI agents can speed up any operational workflow in which a human executes a list of tasks with tools, guided by a set of instructions and expertise. Security, customer support, and revenue operations are areas that spring to mind.

Let’s talk a little about LLMs, compute, and public/private datasets. I would guess roughly that LLMs would be 10x more accurate and useful if they had access to private proprietary datasets, but that comes with many serious issues including privacy, ownership attribution, licensing, economics, and regulation. Tell us about your view of the state of LLMs today and your thoughts on how enterprises should think about their proprietary data in that context?

AJ: First of all, it’s worth noting that the state of LLMs is changing every day. Just three months ago, most people would not have believed that we would already have open-source LLMs as capable as GPT-3.5. This matters because the pace of change makes it really hard to predict which constraints will still exist a year from now, including the constraints around data privacy and proprietary data integrations. Today there is a tradeoff between the high performance of a generic but large foundational model and the high accuracy of a fine-tuned but smaller open-source model. For less complex use cases like chatbots and knowledge retrieval, if data privacy is a priority, enterprises should either deploy a private instance of a state-of-the-art model, like a GPT-4 Azure instance, or, if they have the ML resources, deploy their own version of an open-source model.

Miguel: As we look forward a year from now, given the rate of innovation we’re witnessing, I expect we’ll see highly effective, commercially available, open-source LLMs that can be deployed and fine-tuned anywhere. This will allow businesses and individuals to have their own models which they can train and use, without the need to share their proprietary data with third parties. However, maintaining a proprietary model incurs costs related to infrastructure and resources, and isn’t without risks. Companies will have to carefully weigh this against the security advantages offered by third-party, secure LLM solutions.

Yes, I’m very excited about the many permutations of fine-tuned language models made possible by open-source LLMs. Switching gears, what are some counterintuitive things you have learned in the process of starting Parcha Labs so far?

AJ: When you look at the AI space as an outsider, you might assume you need years of experience training models in order to build an AI product. This is simply not true. While Miguel and I have extensive experience working with ML models in past roles, we’ve learned a lot of the skills needed to build a production-ready AI Agent from scratch through experimentation and by staying up to date on the latest developments in the open-source community. AI is very approachable as long as you are curious, willing to experiment, and able to learn quickly. That’s why we’re mostly hiring generalist engineers and teaching them how to be great AI engineers.

Miguel: Having come from a research background to a relatively large company and then a smaller one, I initially found it hard to stomach the idea of quickly delivering something imperfect. There were several occasions when I felt the need to delay a milestone because a demo or prototype wasn’t polished enough. I’ve learned that delivering anything, however unpolished, to a design partner is much more valuable than waiting until it’s “perfect”. The amount of feedback we’ve received, coupled with our self-imposed forcing function of delivering quickly, has allowed us to iterate very fast.

Another realization was my initial tendency to hire for every need — be it a “Head of AI” or someone 100% dedicated to our Chrome extension. During the first month, I spent half my time interviewing potential candidates and attending events. Eventually, I minimized this to almost zero and instead started to use that time to code. The boost in execution pace was very visible. Now, I’ve worked on most of our codebase and also helped increase our runway. That said, we are now hiring :) https://www.parcha.ai/jobs

It’s been fun, but also educational, working with you from the very beginning of your ideation process and ultimately helping you create Parcha Labs. Tell us about some of the other ideas you both think should exist. What were some of your favorites from your notepad?

Miguel: The initial concept we explored was aimed at scaling “experts”. The goal was to gather content (books, podcasts, videos, notes, and so on) about a person with unique expertise and use it to create an AI version of themselves that they could offer as a service. For instance, top-level executive coaches can only manage a few clients at a time, but an AI version of themselves could potentially serve thousands of leaders globally. We still find the idea of putting this technology directly in the hands of the experts intriguing, even though we decided it wasn’t the right product for us to build.

AJ: Another idea is in the consumer space, which has seen very little innovation in the past decade. We think there’s a massive opportunity in AI-generated personalized content. For example, imagine if every morning you received a personalized 20-minute podcast that used generative AI and voice synthesis to give you relevant content based on the topics you’re most interested in.

Founders have an immense impact on the DNA of a startup over its lifespan. What would you say your complementary superpowers are? What do you look for in your early team members?

Miguel: Our main strength lies not in the complement of our skill sets but at their intersection. AJ is not only great at building product and selling, he’s also very technical. We frequently code together and discuss technical challenges pretty much daily. I also contribute to product development, planning proofs of concept, working with our design partners, and drafting product specs. Our common goal is to increase Parcha’s probability of success. Some weeks, we both sit and write code; other weeks, I may need to don my product hat while AJ focuses on selling. I see this optionality as our superpower.

AJ: Regarding early team members, we prioritize attitude and drive above all else. We’re building something that’s very hard, yet achievable. We tinkered with the idea of having a core value around hope — hope that what we’re building will work, that we’re fully committed to our vision, and there’s no Plan B. Eventually, we took inspiration from the show Ted Lasso and adopted the value “Believe”. We have the yellow and blue “Believe” sign taped to a wall in our office as a reminder.

Haha, I love that. You mentioned Ted Lasso in a fundraising interview. Thanks for doing the interview, AJ and Miguel.
