Building for Inclusivity: The Technical Blueprint of Pinterest’s Multidimensional Diversification

Pinterest Engineering
Pinterest Engineering Blog · Sep 20, 2023


Pedro Silva | Sr. ML Engineer & Inclusive AI Tech Lead; Bhawna Juneja | Sr. Machine Learning Engineer; Rohan Mahadev | Machine Learning Engineer II; Sujay Khandagale | Machine Learning Engineer II; Abhay Varmaraja | Machine Learning Engineer II

An example of a Pinterest feed, showcasing the platform's diversity of skin tones, body types, and hair patterns.

Pinterest’s mission as a company is to bring everyone the inspiration to create a life they love. “Everyone” has been the north star for our Inclusive AI and Inclusive Product teams. These teams work together to ensure algorithmic fairness, inclusive design, and representation are an integral part of our platform and product experience.

Our commitment is evidenced by our history of building products that champion inclusivity. In 2018, Pinterest announced the skin tone signal and skin tone ranges. In 2020, we announced the integration of skin tone ranges into Try On for Beauty. In 2021, we announced hair pattern search. In early 2023, we announced how we have been using our skin tone signal to shape our recommendations and increase skin tone representation across several surfaces. Now, we are expanding that work to also include body type representation in fashion-related results across search and closeup recommendations (also known as related feeds).

Body image and representation in the media, online and offline, have been part of the cultural dialogue for decades. For Pinterest, a visually inspiring platform with a mission to give everybody ideas fit for them, we saw an opportunity to tackle this issue head-on. We know from experience that building for marginalized communities makes the product work better for everyone. As a first step, we took on the challenge of building a visual body type signal that helps us surface diverse content and ensures our recommendations are more representative of various body types.

Signal Development and Indexing

The process of developing our visual body type signal begins with data collection. In this case, thousands of fashion Pins¹ publicly available on Pinterest were gathered to serve as the raw dataset. The aim is to identify unique patterns and characteristics within these images that can provide a basis for meaningful groupings. Bias-aware guidelines were established to ensure consistency in how these images are grouped. Additionally, we partnered with external organizations, such as the National Association to Advance Fat Acceptance (NAAFA) and Pinterest Creators, to help us understand the nuances of size representation. These external partnerships, along with our internal fashion specialists and labelers, were fundamental in helping us design the experience from both a technical and a human-centric perspective. The resulting structured dataset becomes the foundation for training and evaluating the machine learning model known as the body type signal.

To ensure an unbiased approach, we also leveraged our skin tone and hair pattern signals when building this dataset. This inclusion helps us create a model that is representative of diverse human attributes, giving us a more precise way to gauge and, where needed, mitigate biases across disparate segments to improve fairness and accuracy. With high-quality labeled data in hand, the next critical phase in the ML development cycle is training the model. Again building on previous work, we use our in-house, state-of-the-art transformer-based unified visual embedding as the basis for this model (as seen in Figure 1).

The overall architecture for Unified Visual Embeddings, consisting of one backbone convolutional neural network model consuming a variety of datasets including classification and metric learning across a set of loss and regularization functions. The embedding is consumed by a variety of customers across retrieval, as an input feature, and for fine-tuning domain-specific models, such as the skin tone and body type models.
Fig 1. The multi-task Unified Visual Embedding model which powers the body type signal
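As a rough sketch of the pattern described above, a domain-specific classification head can sit on top of a frozen visual embedding. Everything here is a hypothetical stand-in: the embedding dimension, the number of body type classes, and the pseudo-embedding function are illustrative assumptions, not Pinterest's actual model.

```python
import math
import random

EMBED_DIM = 16        # stand-in for the unified-visual-embedding dimension
NUM_BODY_TYPES = 4    # hypothetical number of body type classes

def embed_image(pin_id: str) -> list[float]:
    # Stand-in for the frozen transformer-based unified visual embedding:
    # a deterministic pseudo-embedding derived from the Pin id.
    rng = random.Random(pin_id)
    return [rng.gauss(0.0, 1.0) for _ in range(EMBED_DIM)]

class BodyTypeHead:
    """Lightweight classification head fine-tuned on labeled fashion Pins."""

    def __init__(self, dim: int, n_classes: int, seed: int = 0):
        rng = random.Random(seed)
        self.W = [[rng.gauss(0.0, 0.01) for _ in range(n_classes)]
                  for _ in range(dim)]
        self.b = [0.0] * n_classes

    def predict_proba(self, emb: list[float]) -> list[float]:
        # Linear layer followed by a numerically stable softmax.
        logits = [self.b[c] + sum(e * row[c] for e, row in zip(emb, self.W))
                  for c in range(len(self.b))]
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        z = sum(exps)
        return [e / z for e in exps]

head = BodyTypeHead(EMBED_DIM, NUM_BODY_TYPES)
probs = head.predict_proba(embed_image("pin_123"))
```

Because the backbone embedding is shared across tasks (skin tone, body type, and others), only the small head needs task-specific labeled data, which is what makes the expert-guided iteration described below practical.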

After initial training, we continue to hold sessions with internal and external experts for feedback and further human validation. Their input is invaluable for fine-tuning the ML model and improving its accuracy. This approach, alongside fairness evaluations, is fundamental to uncovering areas where the model may be underperforming. The development cycle is recurrent, with each iteration gradually improving the model's performance. This process will continue indefinitely to ensure we improve data coverage and quality and account for possible domain shifts.

Lastly, we index the signal on the content side as a discrete feature, associating every women's fashion Pin with the prevalent body type present in it. This lets us fetch the signal at serving time for our recommendations and use it to diversify various Pinterest surfaces.

Diversifying Search Results and Recommendations

Building on top of our previous work on multi-stage diversification in search and recommender systems, we leveraged the existing Determinantal Point Process (DPP) algorithm to enable diversification at the ranking stage, but this time using both skin tone and body type signals.

Since DPP takes into account both the utility scores from ranking models and similarity scores along the diversification dimensions, we can balance their trade-off and tune it appropriately for different surfaces and use cases. With multiple diversity dimensions, DPP can be operationalized with a joint similarity matrix that accounts for the intersectionality between dimensions. A simpler option, which also offers more flexibility in how similarity between items is defined, is to add a new diversity term per dimension to the weighted sum between the utility term and the (now several) diversity terms used to solve the DPP optimization. Given this flexibility, we used the latter approach in search and closeup recommendations.
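The weighted-sum formulation can be illustrated with a greedy re-ranker, a common way to approximately solve this kind of diversification objective. The binary match similarity, the per-dimension weights, and the attribute names below are illustrative assumptions, not Pinterest's actual implementation:

```python
def rerank(utilities: list[float], attrs: dict[str, list],
           weights: dict[str, float], k: int) -> list[int]:
    """Greedily select k items maximizing
    utility(i) - sum_d w_d * max_similarity(i, selected, d).

    attrs maps each diversity dimension (e.g. "skin_tone", "body_type")
    to the discrete attribute value of every candidate; similarity is a
    simple 1/0 match against already-selected items.
    """
    selected: list[int] = []
    remaining = set(range(len(utilities)))
    while remaining and len(selected) < k:
        best, best_score = -1, float("-inf")
        for i in remaining:
            penalty = 0.0
            for dim, w in weights.items():
                sim = max((1.0 if attrs[dim][i] == attrs[dim][j] else 0.0
                           for j in selected), default=0.0)
                penalty += w * sim
            score = utilities[i] - penalty
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        remaining.remove(best)
    return selected

order = rerank(
    utilities=[1.0, 0.9, 0.8],
    attrs={"skin_tone": ["a", "a", "b"], "body_type": [0, 0, 1]},
    weights={"skin_tone": 0.5, "body_type": 0.5},
    k=3,
)
# Item 2 jumps ahead of item 1: it differs from item 0 on both dimensions,
# so its diversity penalty is zero despite the lower utility score.
```

Because each dimension contributes its own weighted term, a surface can tune how strongly skin tone versus body type diversification trades off against ranking utility, which is the flexibility the weighted-sum option buys over a joint similarity matrix.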

In search, we introduced this technique for women's fashion and wedding-related results, adding a new body type objective to our existing DPP Blender Node, which re-ranks the top search results to optimize for diversity objectives. In an A/B experiment for US users issuing fashion-related queries, we saw a 454% improvement in the representation of all body types and a statistically significant impact on some search engagement metrics, such as click-throughs. To further enhance body type diversification in search, we also improved retrieval diversity. We leveraged the Strong-OR logic, previously added for skin tone diversification, to surface content with more diverse body types during candidate generation. Improving Strong-OR for body type diversity also means we surface more Pins with all visible skin tones; accordingly, we observed a statistically significant increase in the representation of all skin tones in the top recommendations².
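The post doesn't detail the Strong-OR implementation, but the idea can be sketched as a candidate fetch that guarantees each diversity value a minimum share of the returned set before filling the rest in retrieval order. The function name, quota scheme, and data below are hypothetical:

```python
from typing import Callable

def strong_or_fetch(candidates: list[str], get_value: Callable[[str], str],
                    values: list[str], min_frac: float, k: int) -> list[str]:
    """Hypothetical Strong-OR-style fetch: each diversity value in `values`
    contributes at least min_frac of the k returned candidates (when enough
    matching candidates exist); remaining slots follow retrieval order."""
    quota = max(1, int(min_frac * k))
    chosen, seen = [], set()
    # First pass: satisfy the per-value quota in retrieval (score) order.
    for v in values:
        for c in [c for c in candidates if get_value(c) == v][:quota]:
            if c not in seen:
                chosen.append(c)
                seen.add(c)
    # Second pass: top up with the best remaining candidates.
    for c in candidates:
        if len(chosen) >= k:
            break
        if c not in seen:
            chosen.append(c)
            seen.add(c)
    return chosen[:k]

candidates = ["p1", "p2", "p3", "p4", "p5", "p6"]   # in retrieval-score order
body_type = {"p1": "A", "p2": "A", "p3": "A",
             "p4": "B", "p5": "C", "p6": "B"}
results = strong_or_fetch(candidates, body_type.get, ["A", "B", "C"], 0.2, 5)
```

Without the quota, the top 5 would be dominated by type "A"; with it, every body type value is guaranteed representation in the candidate set handed to ranking, which is why retrieval-stage diversity compounds with the DPP re-ranking downstream.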

Likewise, in closeup recommendations we added an additional diversification objective to the existing DPP Node as the final step in our blending pipeline, before returning ranked results. Body type diversification in closeup recommendations takes place when the query Pin falls in the women's fashion or wedding interest categories. In this experiment we observed a 772% increase in the representation of all body types in the top recommendations. Furthermore, in the countries where we launched this approach, we observed a positive, statistically significant impact on some engagement metrics³.

Body type diversification has been rolled out on search and closeup recommendations within the United States, New Zealand, United Kingdom, Ireland, Canada, and Australia. This shift towards inclusive and saveable content leads to increases in relevance, engagement, and user value as people come back to act on the ideas that represent them.


Through many iterations with different inclusive signals, such as skin tone, hair pattern, and now body type, we continue to recognize the importance of building ML systems that prioritize inclusion and respect user privacy in our technical choices. Through this multi-disciplinary collaboration between engineering and teams spanning many organizations, we will continue to build on our foundation: adding more diversity signals, integrating them to diversify search results and recommendations, and expanding the inclusive product experience to more content and domains globally.

This work is the result of a cross-functional collaboration between many teams. Many thanks to Shloka Desai, Huizhong Duan, Travis Ebesu, Katie Elfering, Nadia Fawaz, Jean Garcia-Gathright, Kurchi Subhra Hazra, Kevin Bannerman Hutchful, Dmitry Kislyuk, Helene Labriet-Gross, Sergey Malyutin, Sudeep Paul, Chuck Rosenberg, Ivan Shpuntov, Ashudeep Singh, Yan Sun, Annie Ta, Catie Marques Teles, Yuting Wang, Jiajing Xu, David Xue.

¹Pinterest internal data, global, Q3 2023
²Pinterest internal data, US, Q2 2023, comparing pre-launch to post-launch.
³Pinterest internal data, US, IE, NZ, UK, CA, AU, Q3 2023.

To learn more about engineering at Pinterest, check out the rest of our Engineering Blog and visit our Pinterest Labs site. To explore and apply to open roles, visit our Careers page.