Adobe Sensei Stories: Meet Tracy King, Adobe Principal Scientist Working on the Future of AI and Search

Patrick Faller
May 21

At Adobe, finding new and innovative ways to harness Adobe Sensei, our artificial intelligence and machine learning technology, to help digital creatives is always a top priority. On the Adobe Sensei & Search team, that means finding applications for our technology that make it easier for those creatives to actually be creative, without having to worry about the more mundane tasks of digital content creation. One big focus for this team is ‘search and discovery’: how to make it easier to find that perfect asset, in services like Adobe Stock, without wasting a lot of time manually sorting through images.

Tracy Holloway King is a principal scientist on the Adobe Sensei & Search team, where she focuses on search and natural language processing (NLP) and works in conjunction with research and product teams. She admits to being a tad “over-enthusiastic” about search; she is deeply passionate about the ways AI can improve this basic but crucial aspect of our digital lives. We asked Tracy to share some of her work, her career journey, and some of her best advice for anyone looking to follow in her footsteps.

Tracy King takes part in an event at A9, a firm specializing in search technology.

What kind of challenges are you tackling in ‘search and discovery’?

For many of our use cases, we need to seamlessly integrate keyword and image search. Adobe Stock is a prime example of this. Customers may issue a one- or two-word query to get to the rough area they want, and then use image-based similarity to find variants of the result closest to what they like. So we look at how we can enable arbitrary combinations of keyword and image search (since different people prefer to express what they are looking for differently) and guide people through the large set of assets we have.
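
To make the idea concrete, here is a minimal Python sketch of one way such a hybrid ranking could blend the two signals. The catalog, the hand-made embeddings, and the alpha blending weight are all invented for illustration; a production system would use a trained vision model and a learned combination, not toy vectors.

```python
import numpy as np

# Toy catalog: each asset has keywords and a (pretend) image embedding.
# In a real system the embeddings would come from a vision model; these
# hand-made 3-d vectors exist purely for illustration.
ASSETS = {
    "blue_house.jpg":  {"keywords": {"blue", "house"}, "vec": np.array([0.9, 0.1, 0.0])},
    "white_house.jpg": {"keywords": {"white", "house", "blue", "door"}, "vec": np.array([0.2, 0.8, 0.1])},
    "blue_sky.jpg":    {"keywords": {"blue", "sky"}, "vec": np.array([0.8, 0.0, 0.5])},
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def hybrid_search(query_terms, anchor_image=None, alpha=0.5):
    """Blend keyword overlap with similarity to an anchor image.

    alpha weights keywords vs. image similarity; both signals live in
    [0, 1], so arbitrary combinations of the two are possible."""
    results = []
    for name, asset in ASSETS.items():
        kw_score = len(query_terms & asset["keywords"]) / max(len(query_terms), 1)
        img_score = cosine(ASSETS[anchor_image]["vec"], asset["vec"]) if anchor_image else 0.0
        results.append((alpha * kw_score + (1 - alpha) * img_score, name))
    return sorted(results, reverse=True)

# Start with a keyword query, then pivot to "more like this one".
print(hybrid_search({"blue", "house"}))
print(hybrid_search({"blue", "house"}, anchor_image="blue_house.jpg"))
```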

To do this, we need to better understand what is in the images and map this understanding to language. For example, if we just match the terms “blue” and “house” independently, in addition to finding images of blue houses we will also find images of white houses with blue doors or under blue skies. Even once we have only blue house images, we want ways for people to drill down further (houses on hills, houses at night, two-story houses), and ways to explore related items that they might not have thought of (just doors, house interiors, grey houses, illustrations as well as photos). And we have to do this across multiple languages.
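
A toy example of why independent term matching goes wrong, and how binding the attribute to the object it modifies fixes it. The structured tags below are hypothetical; in practice they would come from image understanding or caption parsing:

```python
# Naive matching treats "blue" and "house" as independent terms, so a
# white house with a blue door matches. Binding the color to the object
# it modifies filters those false positives out.
CAPTIONS = {
    "a.jpg": [("blue", "house")],
    "b.jpg": [("white", "house"), ("blue", "door")],
    "c.jpg": [("blue", "sky"), ("grey", "house")],
}

def naive_match(query_attr, query_obj):
    # Match if both words appear anywhere in the image's tags.
    return [img for img, pairs in CAPTIONS.items()
            if any(query_attr in p for p in pairs) and any(query_obj in p for p in pairs)]

def bound_match(query_attr, query_obj):
    # Match only if the attribute actually modifies that object.
    return [img for img, pairs in CAPTIONS.items() if (query_attr, query_obj) in pairs]

print(naive_match("blue", "house"))  # ['a.jpg', 'b.jpg', 'c.jpg'] -- two false positives
print(bound_match("blue", "house"))  # ['a.jpg']
```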

This is an area where image-based search can provide an advantage since the image is a language-independent resource that can help us learn about how customers refer to objects. For example, a photo with a dog in it may come back for queries for “dog” in English and “chien” in French; if we then have another photo that matches “dog,” we can hypothesize that it should also match “chien,” especially if we have image processing which tells us both images contain a “dog” in the abstract sense.
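
Here is a rough sketch of that propagation idea in Python. The click data and concept annotations are made up; the point is only the shape of the inference, where the image acts as a language-independent bridge between queries:

```python
from collections import defaultdict

# Observed query -> matched-image pairs (toy behavioral data).
MATCHES = [
    ("dog", "photo_1.jpg"), ("chien", "photo_1.jpg"),
    ("dog", "photo_2.jpg"),
]

# Image-understanding output: abstract concepts detected per photo.
CONCEPTS = {"photo_1.jpg": {"dog"}, "photo_2.jpg": {"dog"}}

def propagate(matches, concepts):
    """If two queries led to images sharing a concept, hypothesize that
    each query should also match the other images carrying that concept."""
    concept_to_queries = defaultdict(set)
    for query, image in matches:
        for concept in concepts[image]:
            concept_to_queries[concept].add(query)
    concept_to_images = defaultdict(set)
    for image, cs in concepts.items():
        for c in cs:
            concept_to_images[c].add(image)
    hypotheses = defaultdict(set)
    for concept, queries in concept_to_queries.items():
        for q in queries:
            hypotheses[q] |= concept_to_images[concept]
    return hypotheses

# "chien" is hypothesized to also match photo_2.jpg, which it never matched directly.
print(propagate(MATCHES, CONCEPTS))
```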

What about outside of large content repositories? How can AI improve search functions in our own apps?

We have numerous search use cases that involve smaller, personalized document collections: for example, your Lightroom photos, your Doc Cloud text documents, or your AEM assets. To provide high-quality search on these, we cannot depend on aggregated customer behavior, which is how the big search engines (e.g. Google, Bing, Amazon) get such good results, especially for popular queries.

Instead, we have to truly understand the documents (images, text) and the queries and match those. When we cannot find a match, we have to guide the customer to figure out what they were looking for (can they use a simpler query with fewer words, can they browse via structured data attributes, etc.). Conversely, when we find lots of matches, we have to figure out how to display them to the customer so that not only is the most likely result prominently displayed, but if it is not, they can drill down effectively to find what they need.
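
One simple version of that guidance is query relaxation: if the full query matches nothing, drop terms until something does, and surface the relaxed query as a suggestion. A minimal sketch, with a toy document collection:

```python
from itertools import combinations

DOCS = {
    "d1": {"sunset", "beach", "palm"},
    "d2": {"sunset", "mountain"},
    "d3": {"city", "night"},
}

def search(terms):
    # A document matches if it contains every query term.
    return [d for d, words in DOCS.items() if terms <= words]

def search_with_relaxation(terms):
    """Try the full query first; if nothing matches, progressively drop
    terms (largest subsets first) and return the relaxed query that worked."""
    for size in range(len(terms), 0, -1):
        for subset in combinations(sorted(terms), size):
            hits = search(set(subset))
            if hits:
                return set(subset), hits
    return set(), []

# No document has all three terms, so we fall back to a simpler query.
print(search_with_relaxation({"sunset", "beach", "boat"}))
# -> ({'beach', 'sunset'}, ['d1'])
```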

You also focus on natural language processing. How can advancements in NLP improve how we search for things?

I am working with the Adobe Sensei & Search team on plans for their NLP platform, especially on what core technologies we can provide so that product teams can focus on deeper, application-specific models, potentially feeding these models back to the platform. By providing these building blocks, not only can the researchers focus on the more complex components from the outset but the components they build are more likely to be interoperable.

For example, we can provide named entity recognizers for general (e.g. person names, place names) and Adobe-specific (e.g. Adobe product names, Adobe tools) entities, which in turn can be used in document summarization, sentiment analysis, topic modeling, etc.
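
As a rough illustration of layering custom entities on top of a general recognizer, here is a sketch using spaCy, a common open-source NLP library. The library choice is mine for illustration (the interview does not say what Adobe's platform is built on), and the labels and patterns are invented:

```python
import spacy  # pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")  # general NER: person names, places, etc.

# Layer product-specific entities on top of the general model with a
# rule-based matcher. These patterns are illustrative, not Adobe's list.
ruler = nlp.add_pipe("entity_ruler", before="ner")
ruler.add_patterns([
    {"label": "ADOBE_PRODUCT", "pattern": "Photoshop"},
    {"label": "ADOBE_PRODUCT", "pattern": "Lightroom"},
    {"label": "ADOBE_TOOL", "pattern": [{"LOWER": "healing"}, {"LOWER": "brush"}]},
])

doc = nlp("Tracy showed how the Healing Brush works in Photoshop at the San Jose office.")
for ent in doc.ents:
    print(ent.text, ent.label_)
# Downstream components (summarization, sentiment analysis, topic modeling)
# can all consume the same doc.ents annotations.
```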

People effortlessly and naturally use language to communicate with each other and, by extension, with machines; hence the phrase ‘natural language.’ NLP helps people be more effective and efficient when interacting with machines. It removes the grunt work in creating and consuming documents, for example by checking spelling and grammar during creation, by summarizing large document collections, by extracting sentiment, and by powering search. It also enables us to interact with machines as we would with human experts, guiding us through decision-making processes and helping us learn new skills (what item to buy, how to alter a photo). Keyword-based search is a proxy for these language-based interactions with human experts: advances in NLP will allow us to make search more natural.

Even without a perfect understanding of natural language by machines, we can take advantage of existing NLP to more rapidly search for information on the web, enable voice interactions with apps on our phones, or use machine translation to get the gist of a document written in another language. And NLP capabilities are expanding rapidly, raising the bar to human-like performance on existing tasks and enabling new ones.

How did you get to this stage of your career in AI?

I started off as a researcher at Xerox PARC, working in the Natural Language Theory and Technology group. I focused on symbolic methods for syntactic and semantic analysis and applications that could use these, such as machine translation (focusing on producing grammatical translations) and question answering (not just retrieval of passages but reasoning over those passages to create an answer). Our technology was used in a startup called Powerset that focused on semantic search.

Working with the startup, I realized that I was interested in how to take research and balance it with engineering realities, looking at what optimizations we could make to create a system that is fast enough to use at scale, and what NLP components can be used to solve customer problems. So, when Powerset was acquired by Microsoft, I was the first post-acquisition employee, managing a natural language understanding team focused on query processing for Bing. I became captivated by the idea of how people encode their intent in a couple of words, how we decode it to find relevant results, and how we then encode those as a search results page which the customer has to decode again. While we have made great strides, there are always ways to make search better: better document understanding, better query understanding, adding reasoning, providing navigation and dialog.

I then moved to eBay as a PM for the Search Science team focusing on query understanding. The eCommerce space is particularly interesting due to the strong mix of textual (e.g. product titles and descriptions) and structural (e.g. price, category, brand) data and the fact that customers are demonstrating their intent not just by clicks but by purchases. That someone is willing to spend their hard-earned money on a product shown in a search result is an amazingly strong signal. Even with this signal and with the textual and structural data, search engines still make fundamental mistakes, like showing men’s dress shoes for the query “black dresses.”

Amazon (A9) then approached me to build up a query understanding team that would work hand in hand with their ranking team. From there, I moved on to Amazon Sponsored Products (promoted Amazon products on the search results and product detail pages). This is partly a standard search problem, but you have to match both the shopper's and the advertiser's (seller's) intent, and instead of a standard search ranking algorithm you run an auction that takes into account both the bid and the relevance of the item.
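
A toy version of that scoring, ranking ads by bid times predicted relevance and charging a generalized-second-price-style amount, might look like the following. This is a textbook-style illustration, not Amazon's actual mechanism:

```python
from dataclasses import dataclass

@dataclass
class Ad:
    item: str
    bid: float        # advertiser's bid per click
    relevance: float  # model-predicted relevance/click probability in [0, 1]

def rank_and_price(ads):
    """Rank by bid * relevance, and charge each winner the minimum amount
    that would have kept its position (a generalized second-price flavor).
    Illustrative only; real ad auctions are considerably more involved."""
    ranked = sorted(ads, key=lambda a: a.bid * a.relevance, reverse=True)
    results = []
    for i, ad in enumerate(ranked):
        if i + 1 < len(ranked):
            runner_up = ranked[i + 1]
            price = runner_up.bid * runner_up.relevance / ad.relevance
        else:
            price = 0.0  # no competitor below; reserve prices omitted here
        results.append((ad.item, round(price, 2)))
    return results

ads = [Ad("tripod", 1.00, 0.30), Ad("camera bag", 0.50, 0.70), Ad("lens cap", 2.00, 0.05)]
print(rank_and_price(ads))
# High relevance can beat a higher bid: the camera bag (0.35) outranks the tripod (0.30).
```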

This is a fascinating space, but the role was Seattle-based and I wanted to return to the Bay Area, so I approached Adobe, which has the interesting customer and technical challenges discussed above. And here I am.

What’s your best advice for anyone wanting to pursue a career in AI and search?

Focus on building an in-depth understanding of the underlying problem you want to solve and the data and insights you can bring to that problem. The technologies used in AI and search are constantly changing. Understanding what your problem space requires will allow you to choose and invent the right technologies to solve it.

For more on how Adobe is using cutting-edge AI and machine learning technology to revolutionize creative workflows, head over to the Adobe Sensei hub on our Tech Blog and check out Adobe Sensei on Twitter for the latest news and updates.
