Why Gen AI Will Transform the Workflows of Data Science & Machine Learning

Duncan Gilchrist · Published in Delphina · 5 min read · Dec 13, 2023

We’re Jeremy Hermann and Duncan Gilchrist, and we’re all about ML and data products. Today’s post explains what we’re up to at our new venture, Delphina.

Jeremy Hermann and Duncan Gilchrist

Generative AI is poised to transform nearly every industry and discipline — including data science itself.

The recent, headline-grabbing advancements in large language models (LLMs) and deep learning algorithms have created an unprecedented appetite for investing in machine learning and predictive AI. According to Deloitte, 94% of business leaders agree that AI is critical to success over the next five years.

But data scientists and machine learning engineers (MLEs) know the truth: no amount of hype will speed up the painstaking process of preparing and analyzing data and building ML models.

Their days are full of manual data wrangling, trial and error modeling, and frustrating bottlenecks. And there’s a huge talent gap — right now, there aren’t enough data scientists and ML experts in the world to meet the demands of every ambitious CEO hustling to gain a competitive edge.

Over the last 10 years, common ML business use cases have emerged — forecasting, personalization, fraud detection, and pricing, to name a few. These should be cut and dried. But even though clear best-in-class approaches have been tested, data scientists within every organization have to reinvent the wheel to do this painstaking work.

Or worse, the talent just isn’t there to begin with.

ML moves slowly — even at Uber — but leaves an impact like a crater

We know this firsthand from our time at Uber. Duncan led data science teams — first in the Ridesharing Marketplace, and then at Uber Eats — and Jeremy led the ML platform team that built Michelangelo, Uber’s groundbreaking ML infrastructure.

Uber had hundreds of data and applied scientists and thousands of engineers to build its sophisticated infrastructure and enable ML to flourish across the business. Our colleagues included superstar academics like Garrett van Ryzin and Peter Frazier, a rotating cast of consulting professors from Stanford, and, in 2021, even 20% of the graduating class of the Harvard economics PhD program.

But even with this multi-million dollar, talent-stacked data team, ML was still really hard.

It was standard at Uber for a two-pizza team of engineers and data scientists to be focused on advancing just a few models. It might take a quarter or more for them to make a substantial update to an ML model.

For example, take Estimated Time of Arrival (ETA) modeling for Uber Eats. It would take months to build and test an obviously good idea, like incorporating a restaurant’s menu information into the model. That’s because working with the data itself was a painstaking process: we needed to sift through thousands of tables to identify which had relevant data, analyze it to understand where the signal was, build features to summarize the important parts, train and tune the model, and then work with partner teams to productionize.

Michelangelo, the ML platform Jeremy’s team built at Uber, handled the large scale training and productionization of models. It tied into Uber’s data systems and made it much easier to reliably ship models at Uber scale. But, Michelangelo never really simplified the model building workflow, leaving the company to rely on legions of data scientists and MLEs to iterate in more typical notebook environments.

ML was easy to screw up. Some bets wouldn’t pay off at all. Yet with over $100B in gross bookings flowing through the network, even small positive changes were highly valuable. A major model improvement from Duncan’s team would routinely deliver over $100m in annualized profit.

ML was worth doubling down on because for all its challenges, ML is transformational. It leaves an impact like a crater: profound and enduring.

Generated with Midjourney

Introducing Delphina: using GenAI to change the ML game

Recent advances like commercial feature stores (including Tecton, which Jeremy co-founded back in 2018) have made MLOps somewhat easier, but the modeling itself is still meticulous work. Across over a hundred conversations with companies from the Fortune 500 to startups, one truth rings clear: ML development is slow at the biggest companies, and next to impossible everywhere else.

Until now — thanks to generative AI.

With the latest LLMs, AI itself is now capable of handling many of the tasks involved in machine learning development. Models like GPT-4 and Claude 2 now have a human-like ability to understand context and make common-sense judgments, and can learn how to connect the dots between the data and the business.

For example, LLMs can understand what kind of information a table contains (like a product catalog). They can understand that the product information is relevant for a feed ranking model. They can write SQL to aggregate the data, integrate text data into a high-dimensional embedding, and even craft the code necessary to train the model.
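To make that concrete, here's a rough sketch (not Delphina's actual output) of the flavor of glue code an LLM can draft today: a join pulling a hypothetical product catalog onto impression logs, a low-dimensional embedding of the product descriptions, and a simple model fit on clicks. The table names, columns, and model choice are assumptions for illustration; in practice the generated SQL would run against real warehouse tables and a stronger embedding model would be used.

```python
import pandas as pd
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Hypothetical SQL an LLM might draft to pull the relevant columns
# (table and column names are assumptions for this sketch).
SQL = """
SELECT i.product_id, i.clicked, c.description
FROM impressions i
JOIN product_catalog c USING (product_id)
"""

# In-memory stand-ins for those tables so the sketch runs on its own.
catalog = pd.DataFrame({
    "product_id": [1, 2, 3, 4],
    "description": [
        "organic cold brew coffee",
        "wireless noise-cancelling headphones",
        "stainless steel water bottle",
        "decaf espresso pods",
    ],
})
impressions = pd.DataFrame({
    "product_id": [1, 2, 3, 4, 1, 2],
    "clicked":    [1, 0, 1, 0, 1, 1],
})
data = impressions.merge(catalog, on="product_id")  # pandas equivalent of the join

# Compress the free-text descriptions into a small dense embedding,
# then fit a simple click model on top of it.
embed = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2, random_state=0))
X = embed.fit_transform(data["description"])
y = data["clicked"]

model = GradientBoostingClassifier(random_state=0).fit(X, y)
print(model.predict_proba(X)[:, 1])  # predicted click propensity per impression
```

Each of those micro-decisions (which table to join, how to summarize the text, which model to start with) is exactly the kind of judgment call that used to require a data scientist's full attention.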

Now, it’s possible for AI to make judgments and micro-decisions about data — and automate much of the painstaking work that data scientists and engineers have had to do themselves. GenAI can’t replace a data scientist, but it can make them 10x faster at the hardest parts of ML model building and tuning.

That’s why we created Delphina: a data science copilot that will do for predictive AI what GitHub Copilot is doing for software engineering.

Delphina gives data scientists the best possible place to start and manage all their critical workflows, and helps them build better machine learning models, faster. The slow, error-prone, labor-intensive tasks that once took weeks or months can be completed with Delphina in minutes or hours.

By leveraging LLMs and following guidance from the modeler, Delphina accelerates tedious work across large parts of the ML workflow: identifying and preparing relevant data, training models, and deploying pipelines.

The Delphina workflow

Delphina makes it dramatically faster and simpler for every enterprise — even those without dedicated AI teams — to productize common ML use cases. That means significant business impact and improved consumer experiences, from better fraud protection to enhanced personalization.

Join us on our mission to help the world get better at AI

We’re excited to announce we’ve raised $7.5m in seed funding from some of the savviest investors in AI and ML to make this vision a reality.

Thanks to our VC leads at Costanoa Ventures and Radical Ventures, and to the over 20 prominent angels who believe in what we're building, including Fei-Fei Li (Professor of Computer Science at Stanford), Lukas Biewald (Co-founder & CEO, Weights & Biases), Alison Rosenthal (early Facebook), and Guido Imbens (Nobel Laureate & Stanford Professor).

Curious about our mission? We’re hiring, so check out our open jobs.

Or if you’re interested in taking Delphina for a spin at your company, don’t hesitate to write us at info@delphina.ai.
