A Glimpse Under the Hood of Adobe’s AI and ML Innovations: Adobe Sensei ML Framework

Divya Jain
Nov 1 · 7 min read

Adobe Sensei is our AI and machine learning technology, powering intelligent features across the Adobe ecosystem. From Stock search to Content-Aware Fill in Photoshop to Auto Reframe in Premiere Pro, users across Adobe’s different suites are experiencing the benefits of features powered by Adobe Sensei. Let’s take a closer look at the framework that powers this technology.

Adobe Sensei ML Framework is built with one goal in mind: democratizing AI at Adobe. In other words, it means growing an ecosystem that makes it easier to build production-worthy machine learning features and take them to market quickly. It is tackling the challenge of ML at scale. In August, we looked at the internals of the framework focused on delivering personalized experiences through Data Science Workspace (DSW). DSW is Adobe’s tool for quickly bringing content understanding and intelligence to users, helping them access and unlock insights from data more effectively.

Before we jump into the details of the framework, and how it’s being used in other exciting ways at Adobe, let us take a quick look at one of the features powered by it: Smart Crop.

Adobe Sensei Smart Crop uses artificial intelligence to determine the main subjects in a photo of a woman playing frisbee with her dog in a park and automatically crops them into focus.

Adobe Sensei ML Framework

We are seeing a rise of internal AI platforms at many different companies: Uber has Michelangelo, Facebook has FBLearner, and Adobe has Adobe Sensei. The rise of these platforms is driven by company-specific needs that cannot be fulfilled by general-purpose platforms; the rigor and workflow needed to sustain each company’s scale calls for an in-house machine learning framework.

Adobe has unique scale requirements. Along with the sheer volume of users and assets that we have, scale expands along a number of different dimensions:

  • The range of content types is not limited to images and videos, but rather spans a complex portfolio of content types like PDF, PSD, AE, etc.
  • Adobe’s product suite is supported on a wide range of user devices and supporting ML on different kinds of hardware with the same expectations poses its own challenges. How do you run an algorithm effectively on a CPU when it was trained for a high compute GPU?
  • Adobe products have a diverse set of users. Making features that scale from novices to experts in tools like Photoshop is not an easy task. It requires looking at machine learning from a different perspective: the AI must grow and change as its users change over time.
  • Given our commitment to innovation, we are always pushing the envelope of research and need to make sure that we are always working with the latest infrastructure. We never want to box ourselves into one technology. Working with multiple different technologies at the same time and bringing them together as one, has its own challenges.
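One common answer to the CPU-versus-GPU question raised above is weight quantization: shrinking a model trained in high-precision floats down to 8-bit integers so it runs efficiently on commodity CPUs. The sketch below is purely illustrative (it is not the Sensei implementation) and shows the core idea on a single weight vector:

```python
# Illustrative sketch: 8-bit weight quantization, one common way to make a
# GPU-trained model cheap enough to run on a CPU. Not the Sensei implementation.

def quantize_int8(weights):
    """Map float weights to int8 values with a single per-tensor scale."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Approximate the original floats from the int8 values."""
    return [q * scale for q in quantized]

weights = [0.52, -1.27, 0.03, 0.98]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored weight is within half a quantization step of the original.
```

In practice this trades a small amount of accuracy for a large reduction in memory and compute, which is exactly the kind of trade-off on-device and CPU deployments have to make.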

The three pillars of the Adobe Sensei ML Framework

Given the specific needs of our company around ML, we took up the big task of creating a ML platform that supports and solves at scale. We created a three-part framework to do this, with each pillar coming together to solve the problem end-to-end. The three pillars of this framework are:

  • Sensei Training Framework
  • Sensei Inference and Content Processing Framework
  • Sensei On-Device Framework

A deeper look at the Adobe Sensei Training Framework

The goal of our training framework is to get researchers working in AI and ML started in the minimum amount of time, while giving them the flexibility to use the tools they are used to. The framework provides researchers with SDKs and templates for all popular ML frameworks like TensorFlow, PyTorch, etc., so they can start working immediately without worrying about setting up environments. Researchers start by choosing the framework and the type of compute they need, and the Training Framework provisions it for them in an instant.
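The provisioning step described above can be pictured as a small request-validation flow. The names here (`provision_environment`, the framework and compute lists) are hypothetical stand-ins, not the real Sensei API:

```python
# Hypothetical sketch of a researcher's provisioning request. The function
# and option names are illustrative, not the actual Sensei Training Framework API.

SUPPORTED_FRAMEWORKS = {"tensorflow", "pytorch"}
SUPPORTED_COMPUTE = {"cpu", "gpu", "multi-gpu"}

def provision_environment(framework, compute):
    """Validate the researcher's choices and return a ready environment spec."""
    if framework not in SUPPORTED_FRAMEWORKS:
        raise ValueError(f"unsupported framework: {framework}")
    if compute not in SUPPORTED_COMPUTE:
        raise ValueError(f"unsupported compute: {compute}")
    return {"framework": framework, "compute": compute, "status": "ready"}

env = provision_environment("pytorch", "gpu")
```

The point is that the researcher only expresses *what* they need; the framework owns the details of standing the environment up.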

Once the environment is ready, the first big step in solving any machine learning problem is data. Most of the time is spent getting data into a form the model can be trained on. The framework provides common datasets to researchers in ready-to-use forms, so they don’t have to spend time creating them from scratch. It also provides workflows that bring in new datasets and run distributed Spark jobs for data processing and cleaning.
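To make the cleaning step concrete, here is a minimal sketch of the kind of transformation such a job performs, dropping records with missing labels and de-duplicating by asset id. At Adobe scale this would run as a distributed Spark job; the pure-Python version below only illustrates the logic:

```python
def clean_records(records):
    """Drop records with missing labels and de-duplicate by asset id --
    the kind of step a distributed Spark job would run at scale."""
    seen, cleaned = set(), []
    for rec in records:
        if rec.get("label") is None:
            continue  # unusable for supervised training
        if rec["asset_id"] in seen:
            continue  # duplicate asset
        seen.add(rec["asset_id"])
        cleaned.append(rec)
    return cleaned

records = [
    {"asset_id": "a1", "label": "cat"},
    {"asset_id": "a1", "label": "cat"},   # duplicate
    {"asset_id": "a2", "label": None},    # missing label
    {"asset_id": "a3", "label": "dog"},
]
cleaned = clean_records(records)
```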

The other aspect of the modeling framework is running experiments to train a model. Supporting Adobe’s full research organization, which runs hundreds of training experiments at the same time, is no easy task. Along with managing the cost of GPU machines, we needed a training framework that gave researchers the ability to track and reproduce their experiments, without worrying about the changes made between experiment runs. We created workflows that save information about each experiment run and help researchers compare results across experiments. This is accomplished by encapsulating all the information about an experiment in an entity called an “engine.”
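A toy stand-in for such an “engine” might look like the following. The class and field names are invented for illustration; the idea is simply that bundling model, hyperparameters, and dataset version into one entity makes a run reproducible and comparable:

```python
import hashlib
import json

class Engine:
    """Toy stand-in for the 'engine' entity: everything needed to reproduce
    and compare an experiment run, bundled in one place. Illustrative only."""

    def __init__(self, model_name, hyperparams, dataset_version):
        self.spec = {
            "model": model_name,
            "hyperparams": hyperparams,
            "dataset": dataset_version,
        }
        self.metrics = {}

    def fingerprint(self):
        """Stable id derived from the run configuration, so two runs with
        identical configs can be recognized as comparable."""
        blob = json.dumps(self.spec, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()[:12]

    def log_metric(self, name, value):
        self.metrics[name] = value

a = Engine("smart-crop", {"lr": 1e-3}, "v1")
b = Engine("smart-crop", {"lr": 1e-3}, "v1")
# Identical configurations yield identical fingerprints.
```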

Looking to the future, we are developing more standardized evaluation and visualization tools to make it easier for researchers to find common ways of evaluating the models. We are also looking at emerging technologies like AutoML for hyperparameter tuning and Neural Network architecture selection.
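The simplest form of the hyperparameter tuning that AutoML tooling automates is a search over a configuration space. The sketch below uses random search with a toy objective (both invented for illustration); real AutoML systems layer smarter strategies such as Bayesian optimization on the same loop:

```python
import random

def random_search(objective, space, trials=20, seed=0):
    """Minimal hyperparameter search of the kind AutoML tooling automates.
    `space` maps each hyperparameter name to its candidate values."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(trials):
        params = {name: rng.choice(values) for name, values in space.items()}
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective: a stand-in for validation accuracy that prefers lr=0.01, 2 layers.
def objective(params):
    return -abs(params["lr"] - 0.01) - abs(params["layers"] - 2)

space = {"lr": [0.1, 0.01, 0.001], "layers": [1, 2, 3]}
best, score = random_search(objective, space)
```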

A flow chart highlighting the Adobe Sensei Training framework and how it interacts with the Adobe Experience Platform.

Inference and Content Processing Framework: Dealing with unique scaling challenges

Once a model is trained, it is ready to be deployed; however, deploying a model for a product integration presents many challenges — it needs to be scaled properly, all the environment details need to be replicated in production, and the right monitoring and alerting need to be in place. All of these challenges lead to a long tech-transfer time between when the model is ready and when it becomes part of the product.

For Adobe Sensei ML Framework, we created an infrastructure that is cloud agnostic and can scale up and down as needed. It also provides capabilities to configure the right access controls, rate-limiting policies, and more. This is done by extending the ‘engine’ to include a service specification file, generated by the framework based on the researcher’s choices. This makes it possible to deploy the model in online and offline modes with just a few clicks.
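A service specification of this kind might capture scaling limits, rate limits, and monitoring in one declarative blob. The shape below is hypothetical — field names are invented to illustrate what the framework could generate from a researcher’s choices:

```python
# Hypothetical sketch of generating a per-engine service specification.
# All field names are illustrative, not the real Sensei spec format.

def service_spec(engine_name, mode, max_replicas=2, rate_limit_qps=50):
    """Derive a deployment spec from the researcher's deployment choices."""
    if mode not in {"online", "offline"}:
        raise ValueError(f"unknown mode: {mode}")
    return {
        "engine": engine_name,
        "mode": mode,
        "scaling": {"min_replicas": 1, "max_replicas": max_replicas},
        "rate_limit_qps": rate_limit_qps,
        "monitoring": {"alerts": ["latency_p99", "error_rate"]},
    }

spec = service_spec("smart-crop-v2", "online")
```

Because the spec is generated rather than hand-written, every deployed engine gets scaling, rate limiting, and alerting by default instead of as an afterthought.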

Along with this, the ‘engines’ are configured so that they can be stacked together to build more complicated workflows and services. This required creating a special language to do the stitching. We soon realized that ML models don’t work in isolation; we also needed a lot of pure content processing in the same infrastructure, stitched together with the models. This led to developing more capabilities in the framework to support streaming and complex fan-in and fan-out scenarios.
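The essence of that stitching is function composition: each engine or content processor consumes the previous stage’s output. The sketch below is in the spirit of such a stitching language, with toy stand-in stages (it is not the actual Sensei DSL):

```python
# Illustrative sketch of stitching engines and content processors into one
# pipeline. The stages here are toy stand-ins, not real Sensei engines.

def stitch(*stages):
    """Compose stages left-to-right into a single callable pipeline."""
    def pipeline(value):
        for stage in stages:
            value = stage(value)
        return value
    return pipeline

decode = lambda asset: asset["pixels"]                 # pure content processing
resize = lambda pixels: pixels[:4]                     # pure content processing
classify = lambda pixels: {"label": "dog" if sum(pixels) > 2 else "other"}  # ML stage

analyze = stitch(decode, resize, classify)
result = analyze({"pixels": [1, 1, 1, 0, 0]})
```

A real stitching language additionally has to handle branching (fan-out), merging (fan-in), and streaming inputs, which is where most of the engineering effort goes.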

One of the other key requirements that emerges when a new model is ready is the ability to re-index existing assets. Billions of assets need to be re-indexed every time a new analyzer comes in, so we needed a batch inference pipeline that optimizes resource usage without affecting the real-time pipeline. To solve this, we optimized at different levels — starting with the downloading and caching of assets, through memory and RAM optimizations — to successfully re-index 100 million assets in less than a day. The following figure shows the architecture of the current pipeline.
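Two of the ideas mentioned above — caching downloaded assets and processing them in batches — can be sketched in a few lines. Everything here is a simplified stand-in (in-memory cache, fake downloads, `len` as the analyzer), not the production pipeline:

```python
# Simplified sketch of a batch re-indexing loop with asset caching.
# The cache, downloads, and analyzer are all toy stand-ins.

from functools import lru_cache

@lru_cache(maxsize=1024)
def fetch_asset(asset_id):
    """Simulated download; caching avoids refetching an asset that appears
    more than once in the re-indexing workload."""
    fetch_asset.downloads += 1
    return f"bytes-of-{asset_id}"

fetch_asset.downloads = 0  # count real (non-cached) downloads

def reindex(asset_ids, analyzer, batch_size=2):
    """Run a new analyzer over existing assets in fixed-size batches."""
    index = {}
    for start in range(0, len(asset_ids), batch_size):
        for asset_id in asset_ids[start:start + batch_size]:
            index[asset_id] = analyzer(fetch_asset(asset_id))
    return index

# "a" appears twice but is downloaded only once thanks to the cache.
index = reindex(["a", "b", "a", "c"], analyzer=len)
```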

A flow chart highlighting Adobe’s Inference and Content Processing Framework.

Creating an on-device framework for machine learning

The third pillar of the framework is the On-Device Framework. One of the biggest challenges for Adobe and its suite of products is supporting the many different kinds of hardware that users rely on across the globe. Given data privacy concerns and latency requirements, on-device ML is a very important aspect of scaling ML at Adobe.

Since there are already hundreds of models deployed in the cloud, starting from scratch and building new on-device models is not possible. We needed a solution that makes these features available on device quickly. We are seeing innovations across the industry in this field, and we wanted a solution that uses the best possible stack while also eliminating the stress app developers face in having to understand its specific details. To accomplish this, we developed an SDK abstraction that runs on-device, encapsulating the best framework for each device while keeping the same API across devices. Based on the device, it automatically chooses the optimized framework.
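The abstraction can be pictured as a single model class whose backend is selected per device behind a uniform `predict` API. The class, backend names, and mapping below are invented for illustration — they are not the real SDK:

```python
# Illustrative sketch (not the real Sensei on-device SDK): one API surface,
# with the optimized runtime chosen per device type behind the scenes.

BACKENDS = {
    "ios": "coreml",
    "android": "nnapi",
    "desktop-gpu": "cuda",
    "desktop-cpu": "onnxruntime-cpu",
}

class OnDeviceModel:
    def __init__(self, model_name, device_type):
        self.model_name = model_name
        # Pick the best available runtime for the device; fall back to CPU.
        self.backend = BACKENDS.get(device_type, "onnxruntime-cpu")

    def predict(self, inputs):
        """Same call signature on every device; backend dispatch happens here."""
        return {"backend": self.backend, "outputs": len(inputs)}

model = OnDeviceModel("smart-crop", "ios")
```

App developers call `predict` the same way everywhere; only the hidden backend choice changes per device, which is what keeps the stack details out of their way.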

We are also working towards improving conversion and compression pipelines — this would automate the conversions of models for device-specific runtimes with the best performance and accuracy. Federated learning is also an important research topic, and we are already seeing some great results that will become part of the framework soon.

SDK that uses a specific h/w layer optimization and ML framework combo based on the device type.

Adobe Sensei ML Framework: Accelerating AI innovation

Adobe Sensei ML Framework is a complex, large-scale distributed system. Along with data scientists, machine learning engineers, distributed systems experts, device engineers, and software engineering experts, great product managers and program managers were needed to make it a reality. Our investment in building the Sensei ML Framework shows how Adobe is accelerating AI innovation throughout the company. The framework addresses the unique needs of our efforts to achieve AI innovation at scale; it makes it easier than ever for internal Adobe teams to build and productize AI and ML features and deliver the best experience to our customers.

Written by Divya Jain

Passionate about ML/AI, Startups, Innovative Tech, Women in Tech, Family and Friends. ML Director @adobesensei, but views here are mine

Adobe Tech Blog

News, updates, and thoughts related to Adobe, developers, and technology.
