Adobe Sensei Stories: Meet Brian Eriksson, Engineering Manager Creating the Creative Assistant of the Future
Personal assistant technology is everywhere: from your smartphone to your car to your home. These AI-powered assistants are there to make your life easier, helping you accomplish tasks more easily and quickly thanks to their responsive and predictive capabilities. But how can AI technology amplify your creativity and make your work life easier in the process? That’s the key question being asked on the Sensei & Search Agents team, where Engineering Manager Brian Eriksson is leading a team developing creative assistants for Adobe apps.
“Our goal is to bring insights from AI and content intelligence to aid the user during their design process,” said Brian. “At a high level, this assistant observes the user’s context (for example, what they are working on and what they have done in the past) and uses this to proactively recommend relevant content, tools, and workflows. We envision that the creative assistant will help with design inspiration, reduce repetitive tasks, and aid the user in onboarding onto a new application.” We asked Brian to share more about his work with Sensei Agents, and how a background in AI and ML led him to the next frontier in developing intuitive, creative workflows for the 21st century.
What is your approach to assistant technology, and how are you tailoring it to specifically help creative professionals?
Our initial Assistant implementation has been focused on Adobe Illustrator. We exploit application extensibility to surface a side panel inside the app that contains the Assistant and all of the relevant insights we extract from user behavior data and content understanding. These insights are displayed as ‘cards’: small summaries that allow the user to take some action (for example, change a set of colors or remove objects). To determine these insights, we analyze a stream of behavior events, like tool changes, commands, and document properties, and use AI to infer the most relevant insights to proactively surface to the user.
These insights can be thought of as chaining some form of content intelligence (like object recognition or color analysis) with application tools (like color replace or crop). The result is a powerful formula for analyzing assets and performing novel, relevant transformations.
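To make that chaining concrete, here is a minimal sketch of what one insight card might look like, written in TypeScript since the Assistant front-end is built in React.JS. Every type name, field, and the triggering heuristic below is a hypothetical illustration, not Adobe’s actual internal interface:

```typescript
// Hypothetical shape of an insight "card": chain content intelligence
// (color analysis) with an application tool (color replace).

// A behavior event streamed from the application.
interface BehaviorEvent {
  type: "toolChange" | "command" | "documentProperty";
  name: string;
  timestamp: number;
}

// Content-intelligence results for the asset under edit.
interface ContentAnalysis {
  dominantColors: string[];  // e.g. ["#1e90ff", "#ffd700"]
  detectedObjects: string[]; // e.g. ["logo", "text"]
}

// A card: a small summary of an insight plus the action it offers.
interface InsightCard {
  title: string;
  relevance: number; // used to rank cards in the side panel
  apply: () => void; // the application tool the card triggers
}

// Produce a color-variation card only when recent behavior suggests
// the user is working with color (a stand-in for the real inference).
function colorVariationCard(
  events: BehaviorEvent[],
  analysis: ContentAnalysis,
  replaceColors: (palette: string[]) => void,
): InsightCard | null {
  const recentColorWork = events
    .slice(-20) // look at the most recent events only
    .some((e) => e.type === "toolChange" && e.name.toLowerCase().includes("color"));
  if (!recentColorWork || analysis.dominantColors.length === 0) return null;

  return {
    title: "Try a color variation",
    relevance: 0.8,
    apply: () => replaceColors(analysis.dominantColors),
  };
}
```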
For example, changing colors inside Adobe Illustrator can be a time-consuming process. In addition, the assignment of colors for an asset should ideally follow rules and guidelines from design color theory (which colors should be used together, which colors clash, etc.). The Agents team has developed a feature inside the Assistant that proactively performs color variations, which (1) detects when the user could potentially use color variations, (2) analyzes the current asset under edit and determines the best reassignment of colors using design theory, and (3) allows the user to apply potentially thousands of color changes to the asset with a single click.
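The exact design-theory rules behind step (2) aren’t described here, but one simple, commonly used harmony-preserving transformation is to rotate every hue in a palette by the same angle, which keeps the relationships between colors intact. The sketch below is an assumption-laden stand-in for that step, not the actual Sensei model:

```typescript
// A simple stand-in for design-theory-driven color reassignment:
// rotating every hue by the same angle preserves the relationships
// between the palette's colors. Purely illustrative.

// Convert "#rrggbb" to [hue (degrees), saturation, lightness].
function hexToHsl(hex: string): [number, number, number] {
  const r = parseInt(hex.slice(1, 3), 16) / 255;
  const g = parseInt(hex.slice(3, 5), 16) / 255;
  const b = parseInt(hex.slice(5, 7), 16) / 255;
  const max = Math.max(r, g, b);
  const min = Math.min(r, g, b);
  const l = (max + min) / 2;
  if (max === min) return [0, 0, l]; // achromatic
  const d = max - min;
  const s = l > 0.5 ? d / (2 - max - min) : d / (max + min);
  let h: number;
  if (max === r) h = (g - b) / d + (g < b ? 6 : 0);
  else if (max === g) h = (b - r) / d + 2;
  else h = (r - g) / d + 4;
  return [h * 60, s, l];
}

// Convert [h, s, l] back to "#rrggbb".
function hslToHex(h: number, s: number, l: number): string {
  const a = s * Math.min(l, 1 - l);
  const f = (n: number) => {
    const k = (n + h / 30) % 12;
    const c = l - a * Math.max(-1, Math.min(k - 3, 9 - k, 1));
    return Math.round(255 * c).toString(16).padStart(2, "0");
  };
  return `#${f(0)}${f(8)}${f(4)}`;
}

// Rotate the whole palette; applying the result to every fill and
// stroke in the document is what turns one click into thousands of
// color changes.
function rotatePalette(palette: string[], degrees: number): string[] {
  return palette.map((hex) => {
    const [h, s, l] = hexToHsl(hex);
    return hslToHex((h + degrees) % 360, s, l);
  });
}

// Example: three candidate variations of a two-color palette.
const variations = [30, 90, 180].map((deg) =>
  rotatePalette(["#1e90ff", "#ffd700"], deg),
);
```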
Building a flexible Assistant platform that can handle multiple applications, different tools, different interfaces, and different user expectations is understandably a challenging task. There are three main parts to the Assistant:
- ML/AI back-end — The Assistant platform is deployed as a microservice architecture, where individual insight services can be stood up, updated, or removed with little change to the Assistant as a whole. The implementation of these services is up to the engineer, using Sensei’s ML framework.
- Front-end implementation and application interface — Funneling data to our ML/AI back-end requires listening to user behavior events and content changes inside the application, for those users who have opted in and want to share their behavior. We implement this functionality in React.JS (a minimal sketch of this funnel follows the list).
- Design — As the user operates a Creative Cloud application, their context is constantly changing; tools are being used, files are being opened and closed. A key challenge of the Assistant is to keep up with the pace of the user, offer temporally relevant suggestions, and remove suggestions that are out-of-date or no longer relevant to the user’s workflow. We require a design that is intuitive to the user and helps them build a mental model of what to expect from the Assistant, all while not being distracting.
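For the front-end piece mentioned above, a batching event funnel is one plausible shape. The sketch below assumes a hypothetical insight-service endpoint, event shape, and batch size; none of these are Adobe’s real interfaces, and the opt-in check reflects the behavior-sharing consent described in the list:

```typescript
// Hypothetical behavior-event funnel for the Assistant front-end.
// The event shape, batch size, and endpoint are all illustrative.

interface BehaviorEvent {
  type: string; // e.g. "toolChange", "command", "documentProperty"
  name: string;
  timestamp: number;
}

class BehaviorFunnel {
  private buffer: BehaviorEvent[] = [];

  constructor(
    private optedIn: boolean, // only opted-in users share behavior
    private endpoint: string, // URL of a hypothetical insight microservice
  ) {}

  // Called by the application's extensibility layer on each event.
  record(event: BehaviorEvent): void {
    if (!this.optedIn) return; // respect the user's opt-in choice
    this.buffer.push(event);
    if (this.buffer.length >= 25) void this.flush(); // ship in batches
  }

  // POST a batch to the back-end, which responds with ranked insights.
  private async flush(): Promise<void> {
    const batch = this.buffer.splice(0); // drain the buffer
    await fetch(this.endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(batch),
    });
  }
}
```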
This stack of AI microservice architecture, front-end implementation, and novel UI/UX can be considered a form of full stack AI — where the machine learning and intelligence insights directly modify the user experience. I see Adobe as a pioneer in this emerging space. This is the key difference from the standard challenges of developing back-end-only AI models, where the focus is only on gathering data, cleaning data, and model development/training. To build an Assistant, we must take the additional steps to bring those AI insights to life for the user with significant front-end and design work.
What kinds of unique challenges are you facing in developing Assistant technology for the creative world?
Other assistants are often solely focused on inferring user intent from a verbal command (just think of “Alexa, play…”). At Adobe, we differ in the need to additionally understand the current user context: what is currently on the artboard in Photoshop or Illustrator, what is the content of the video currently being edited in Premiere Pro, and so on. Our perspective is that understanding both user intent and user context will drive powerful Assistant experiences.
This, of course, adds requirements to the Assistant platform: it must derive user intent (a challenge to begin with) while at the same time using content intelligence to discover the user’s context. Luckily, at Adobe we have invested heavily in recent years in creating a wide selection of content intelligence APIs, courtesy of Adobe Sensei, that we can use to infer context.
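As a toy illustration of why both signals matter, the sketch below gates each suggestion on intent and context agreeing. The intent labels, context fields, and suggestion strings are hypothetical, and the real Sensei APIs are not shown:

```typescript
// Toy illustration: a suggestion only surfaces when inferred intent
// and analyzed context agree. All names here are hypothetical.

type Intent = "recolor" | "removeObject" | "crop" | "unknown";

interface AssetContext {
  detectedObjects: string[]; // from content intelligence on the asset
  dominantColors: string[];
}

function suggest(intent: Intent, context: AssetContext): string | null {
  // Intent alone (e.g. the user keeps reaching for color tools) is not
  // enough; the asset's content has to support the action too.
  if (intent === "recolor" && context.dominantColors.length > 1) {
    return "Explore color variations for this artwork";
  }
  if (intent === "removeObject" && context.detectedObjects.includes("background")) {
    return "Remove the background in one click";
  }
  return null; // intent and context disagree: stay quiet
}
```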
What got you so interested in assistant technology and how’d you get to this point in your career?
I started my career at the University of Wisconsin, focusing on machine learning and artificial intelligence. My PhD research centered on resolving portions of the Internet topology (i.e., the “map of the Internet”) using a limited number of observations. Oddly enough, this technology was also well suited to recommender systems and personalization problems, such as trying to infer user movie preferences.
This led me to join an entertainment company, Technicolor, working as a corporate researcher trying to find new applications for personalization algorithms. I worked on a variety of problems, including using wearable biometrics (e.g., heart rate, skin conductance) to infer an individual’s reaction to film content. Together with another researcher, I created an incubator company and ran around Hollywood pitching our technology, trying to disrupt how movie market research was performed.
When the market for this technology didn’t materialize as quickly as I would have liked, I took a role in management — helping direct innovative research in AI and data analytics. My team and I delivered models and algorithms that made film special effects production more efficient, improved yields in DVD/Blu-ray manufacturing, and powered conversational chatbots in consumer electronics.
After leading a research lab for several years, I wanted to get closer to shipping innovative technologies to a large population of users. The Adobe Sensei & Search team offered the ability to have massive customer impact while still focusing on innovative, cutting-edge technology. By joining the Sensei Agents team at its inception, I was able to build my team from scratch, assembling the right people to tackle the challenges of building an Assistant platform inside products with a devoted user base.
What is your best advice for anyone who wants to break into assistant technology?
Be flexible. By definition, a functioning assistant platform will require technology in every component of the stack: from novel UI/UX, to efficient front-end architecture, to the microservice fabric that routes data, to the actual microservices that compute the underlying ML/AI insights. On my team, every member has responsibilities across the stack. The last thing I want to hear is “I don’t want to do <X>. It isn’t in my area.” In the Assistant space, given the number of challenges and the speed of innovation, that is a luxury we don’t have.
What do you hope you and your team are achieving with Sensei Agents?
Our goal is to remove the barriers to creativity. The ability to create should not be limited by your access to software classes, or even by your capacity to invest time exploring a particular application. Not everyone has access to custom classes, and not everyone has the ability to set aside time to dedicate to learning.
If you have a creative vision, we want to couple Adobe Creative Cloud products with the creative assistant to bring that vision to life. Via the assistant, we expect the barrier to entry to drop, inspiration content to be more readily available, and the experience of users inside Adobe products to become more personalized and contextual to what they are currently working on.
For more on how Adobe is using cutting-edge AI and machine learning technology to revolutionize creative workflows, head over to the Adobe Sensei hub on our Tech Blog and check out Adobe Sensei on Twitter for the latest news and updates.