Your AI Team is Slowing Down Your Company

Eden Shochat
Published in Aleph
Jul 22, 2024

There’s a pervasive belief that building a dedicated AI group is the path to leveraging the power of artificial intelligence. My experience, however, points to a different conclusion. Since Large Language Models (LLMs) perform better than most, if not all, use-case-specific machine learning models, dedicated AI teams often slow down progress rather than accelerate it.

Why are LLMs different? Unlike previous machine learning techniques that required deep, specialized knowledge to implement, LLMs are more accessible and can be leveraged for basic use cases with simpler techniques like prompting. This lowers the barrier to entry for many companies, making a centralized, specialized AI team less critical for initial adoption.
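
To make that concrete, here is a minimal sketch of the kind of task that used to require a purpose-built classifier, handled with a single prompt. The OpenAI Python client, the model name, and the ticket-routing use case are illustrative assumptions rather than a recommendation; the point is how little ML-specific work is involved.

```python
# Illustrative sketch: routing support tickets with a prompt instead of a
# trained, use-case-specific text classifier. The OpenAI client, model name,
# and ticket-routing task are assumptions for illustration; any hosted or
# open LLM would work the same way.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def route_ticket(ticket_text: str) -> str:
    """Return one of: billing, bug, feature_request, other."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: swap in whichever model you use
        temperature=0,
        messages=[
            {"role": "system",
             "content": ("Classify the support ticket into exactly one of: "
                         "billing, bug, feature_request, other. "
                         "Reply with the label only.")},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content.strip()

print(route_ticket("I was charged twice for my subscription this month."))
# expected output: billing
```

A few years ago this would have meant collecting labeled data, then training, evaluating and hosting a model; today a product engineer can sketch it in an afternoon and iterate from there.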

I explain more in this short video, or you can keep reading.

Why Your AI Org Structure Matters

Organizational design might not be the most exciting topic, but I’m passionate about its impact. It is more than just a chart on a wall; it’s the very structure that determines how your company operates.

Org design is all about trade-offs:

  • Isolation provides depth and the chance to find a new global maximum (a moat)
  • Federation provides speed and the chance to find many new local maxima

This isn’t the first time such trade-offs have created challenges. As a co-founder of face.com, I had a front-row seat to Facebook’s “mobile crisis” over a decade ago. Facebook had a dedicated mobile team tasked with replicating desktop features for mobile. Though this seemed like a logical answer to the unique challenges of mobile engineering at the time, it actually created a significant bottleneck. Unlike functional teams such as Photos and Timeline, the mobile team had no visibility into the business impact of its work, leading to inefficient prioritization and lesser impact.

Think about notifications. When tagging was added to the desktop version, “you have been tagged” notifications became one of the key sources of traffic. Not having them in the iOS app until the mobile team had the bandwidth to add them was painful.

Today, I see a similar pattern emerging with AI. Companies create separate AI groups that often focus on what’s technically challenging or has easier access to data, losing sight of real business needs. Classic machine learning problems, like matching supply and demand, become the focus, while more impactful opportunities get sidelined.

Consider Windward, a company that tracks global shipping. A typical AI team might have focused on building a container-arrival forecasting model. Windward, however, saw a greater opportunity in calculating contract penalties for delayed arrivals, a solution with much higher business impact.

How Your Product Team Can Leverage AI

The key observation is that even though LLMs are the bleeding edge of AI, unlike previous AI techniques most teams don’t need deep knowledge of how they work to generate impact. That said, not knowing what LLMs can do beyond basic prompting keeps Product Managers and engineering leaders from taking full advantage of their capabilities.
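
For illustration, here is a minimal sketch of one step beyond free-text prompting: constraining the model to structured JSON so its output plugs directly into an existing product workflow. The schema, field names, and model are assumptions for illustration.

```python
# Illustrative sketch: one step beyond free-text prompting, asking the model
# for structured JSON that existing product code can consume directly.
# The schema, field names, and model are assumptions for illustration.
import json
from openai import OpenAI

client = OpenAI()

def summarize_feedback(feedback: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # assumption
        response_format={"type": "json_object"},  # JSON mode: output is valid JSON
        temperature=0,
        messages=[
            {"role": "system",
             "content": ("Summarize the customer feedback as a JSON object with keys "
                         "'sentiment' (positive | neutral | negative), "
                         "'topics' (list of short strings), and "
                         "'summary' (one sentence).")},
            {"role": "user", "content": feedback},
        ],
    )
    return json.loads(response.choices[0].message.content)

print(summarize_feedback("The new dashboard is great, but exports still time out."))
```

Structured output is just one example; the broader point is that a product engineer who knows these options can wire LLM output straight into existing features without waiting on a central team.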

Integrating AI into existing workflows does pose certain challenges, like:

  • Knowledge silos: AI engineers often lack deep understanding of product-specific business problems
  • Duplication of effort: Separate teams can lead to redundant work and inconsistent implementation of AI solutions

In my experience, however, certain organizational design approaches effectively address these challenges and capture the speed that comes with federating knowledge. These include integrating AI-informed engineers and product managers into product groups, fostering direct collaboration, and facilitating knowledge transfer. Additionally, establishing an AI guild promotes knowledge sharing, standardizes best practices, and supports infrastructure development.

Why You Should Embed AI Engineers Into Product Teams

By embedding AI-informed engineers within product groups, companies can achieve significant advantages:

  • Faster response: Direct collaboration and aligned priorities lead to quicker development and implementation of AI solutions
  • Increased velocity: Organizations can implement AI solutions more efficiently and at a faster pace, driving quicker time-to-value
  • Better focus: AI-informed engineers gain a deeper understanding of the specific business problems they need to solve, becoming an effective interface to the AI-specific group for problems that require deeper AI know-how

Now, it’s important to acknowledge that some highly complex AI projects might still require dedicated, specialized teams. You should have an AI team if its purpose is to create a differentiator on top of LLMs that will become an unfair advantage for the company. That requires internal learning cycles and expertise to figure out what the differentiator is and how to approach it.

You want to give that team the quiet and focus to understand the technological edge, rather than investing in company-wide education.

If your company is developing foundation models, fine-tuning models on a unique dataset, or has unique AI cost-structure needs, it makes sense to have a team focused entirely on that. However, even in these cases, close collaboration with the embedded AI engineers through the guild structure is crucial to ensure alignment with business needs and efficient implementation.

Empower Your Product Teams

It’s clear that LLMs offer companies a unique opportunity to embrace AI. Instead of defaulting to dedicated AI teams, which can create silos and slow down adoption, companies should focus on empowering their existing product groups with the knowledge and tools to leverage LLMs effectively. This integrated approach will lead to faster, more impactful AI implementation and ultimately, a more successful AI-driven future.

— — —

Shout out to Uri Eliabayev and Oren Ellenbogen, who read and commented on early drafts of this post. It was fun to collaborate with org-structure and AI geeks in the ecosystem. Thank you!
