I’m a Data Scientist at Facebook, and I keep people safe

Yasho Kote
8 min read · Oct 11, 2021


I lead the data science team for Business Integrity at Facebook, and I have now been at the company for almost four years. Our team’s mission is to build trustworthy connections between people and businesses. When I first talked to Facebook, I was clear that this was the job I wanted for two reasons. One, I have spent most of my career in data science applied to tough problems like risk management at financial institutions. As a professional in this field, protecting the users of a platform as complex, vast, and important as Facebook was very appealing to me. Two, I was concerned about how social media could be misused to impact society, so I wanted to be part of the solution.

The journey has been nothing if not deeply fulfilling. Along with our partner teams, we have made a significant impact in areas ranging from securing political advertising to taking down scam operations that exploit vulnerable users. We have also helped build products like the Ad Library, which brings industry-leading transparency into ads, including political ads, on Facebook. In the process, we have developed innovative data science to help our partner teams enforce our Ads and Commerce policies. How do we identify, monitor, and mitigate our vulnerabilities around political advertising? How do we know which of the millions of ads on our platform are advertising potentially counterfeit goods, or indeed, dangerous ones? How can we tell which advertisers mislead users with fake COVID protection? These and others are major challenges in an environment where adversaries are constantly trying to beat our systems.

One of these challenges relates to the narrative in recent news about how Facebook profits from engagement. Working as closely as we do with the revenue-generating parts of our company, my team has developed a good understanding of how negative experiences impact our business. Our studies so far indicate that people engage less with ads following a negative experience with other ads, which in turn leads to lower revenue for us. Advertisers too have spoken loud and clear: they do not want their ads near content that violates our standards. Independent of our values and safety goals, our company knows that our business simply cannot succeed by prioritizing engagement and growth at the expense of positive experiences.

Our work is important and has been backed by strong investment over the last few years. Like my colleague Veronika’s team, my own has grown sevenfold since I joined Facebook, with partner teams such as engineering and operations seeing similar growth. As is typical of integrity work, our team’s impact lies in what we prevented, while the opportunity lies in what we didn’t. Therefore, our main metrics for measuring success in reducing negative experiences are all prevalence-based, i.e., they measure what we missed.

One of the biggest challenges in our work is that reasonable people can and do disagree on what is acceptable and what isn’t. People also disagree on how to strike the right balance between stifling legitimate activity and removing problematic content or actors. There are no perfectly accurate decisions, and we make incredibly difficult tradeoffs every day. Once we get past the allegations of omission or commission, these are the issues that need a robust public debate.

Here are some members of my team sharing their own motivations for why they work in Business Integrity. I am incredibly proud of them and all my colleagues for their work in keeping our platform safe.

Lindsay, Data Scientist

When I left my academic career to join Facebook as a data scientist, I wasn’t intending to stay. I wanted to work on interesting problems and figured Facebook would look good on my resume, so I might as well give it a try. When I started, I had no idea what I wanted to work on, so it was by chance that I ended up joining an integrity team trying to detect fraud. I was thrilled to discover that not only was the work intellectually stimulating, but the people I got to work with were brilliant and passionate about building a safe platform. But my eyes were truly opened when I attended my first Integrity Summit within Facebook. I met people from across the company and heard them tell their stories about how they were solving integrity problems, including the successes and missteps along the way. I knew I had found my tribe.

In 2018, I joined the team working to protect political ads from foreign interference. My team built the tools that verified the authenticity of advertisers before they could run ads about social issues, elections, or politics. Even though we knew this requirement would deter some advertisers from doing business with Facebook, we knew it was the right thing to do and at the end of the day, that was most important. There’s no question that the time I spent on election integrity was the most important work of my career (so far!). The gravity of our responsibility was not lost on me or my colleagues. I am proud to have been a small part of the company-wide effort to protect the 2020 US election. The opportunity to do such meaningful work is what’s kept me at Facebook for more than 5 years. And all those years after joining an integrity team on a whim, I can’t imagine doing anything else.

Patrick, Data Scientist

As a data scientist at Facebook, you get a large say in what you work on. Part of the process for coming on board as a new employee is picking the area you want to focus on. The area I picked was the political ads team within Business Integrity: identifying political ads purchased on the platform in order to ensure that they are allowed to run. The seed to join an integrity team had actually been planted during my on-site interview. While taking a break to eat lunch, I met a data scientist who was working on political misinformation. I remember the passion she brought to the area, and I was intrigued by the challenge of working on a problem that can have such a serious impact on billions of people around the world. The backlash from foreign interference in 2016 was at the forefront of my mind, and I was curious to work on the systems that would block it from happening again.

The systems my colleagues and I have worked on enable us not only to prevent foreign interference in elections around the world, but also to increase the transparency of political ads for advertisers that run ads in their own country. One of my favorite memories was working to identify potentially adversarial advertisers before the 2020 US election. It was a balancing act: we had to craft logic specific enough not to accidentally harm benign advertisers, yet general enough to catch as many adversarial advertisers as we could. Digging through the resulting ads we had taken down, it was incredible to find ads specifically targeting the US election and to know that our systems were preventing the same type of manipulation that had occurred years before. Political advertisement is undoubtedly a powerful tool. I am proud to work on the systems that allow this tool to support a nation’s own electoral process and prevent adversaries from creating harm.

Andrew, Data Scientist

When I started working in Business Integrity at Facebook in 2013, I was excited to work in a relatively new space (Facebook Ads) where there was the opportunity to help design new systems from a fairly basic starting point. There was a lot we needed to figure out. During my first weeks, the primary direction was something like, “Make sure users don’t see bad ads”. Within this wide-open mandate, we had basically one guiding principle: Users come first, well below that comes Businesses, and at the very bottom is Facebook. If we could execute at the top of that list, the rest would follow.

Over the last 8 years I’ve worked on many different projects, but protecting users has been the common theme. I’ve focused particularly on keeping outright scams off the platform. Currently, I work to understand, identify, and measure the most adversarial advertisers from the moment they begin to advertise. There are millions of potential products, and millions of ways to market them, and simply looking at an image and some text in an ad is often insufficient for us (or for users!) to differentiate a legitimate business from a scam. We use deeper, more behavioral signals from these advertisers to distinguish those who are actively working against our systems from those who might just be first-time advertisers looking to grow their new businesses.

The first-order consequence of this work is simply to prevent bad advertisers from gaining unjust profits, which is pretty motivating in its own right. More than that, though, we have the opportunity to instill trust and remove friction in the marketplace. If we do our jobs well, we can allow users to discover smaller, newer businesses that they might not otherwise encounter or engage with, without having to worry about whether these advertisers deserve their business. Trust is central to this vision; retaining the confidence of our users is still at the top of the list.

Harish, Data Scientist

I joined Facebook almost three years ago, eager to join the Business Integrity team and make social media safer for its users. With Facebook’s scale, at what other job could I possibly have a more positive impact on the world? Since joining, my passion for the job has only grown.

Some of the challenges I have worked on include reducing poor purchase experiences after clicking on an ad and preventing offensive content. One difficulty that plagues many of these problems is agreeing on an objective definition of the problem itself. How can the validity of all claims be assessed at massive scale without consulting an expert for each? How late can a purchased product arrive before it is deemed a bad experience? How do cultural norms help determine whether something is acceptable? These are all very difficult questions without a definitive answer, but we grapple with them. The team makes these decisions with the best intent and the best information available at the time.

We have received strong encouragement from our leadership to mitigate a broad array of these types of issues. An incident I witnessed illustrates this best. The team was presenting to senior leadership its approach to mitigating harm caused by the purchase of low-quality products after clicking on an ad. This was a hard problem to begin with. After all, how would Facebook even know what products are being purchased by users after interacting with an ad? Rather than giving us a pat on the back for having taken on a difficult problem, a senior leader challenged us, “Do only purchase related poor experiences matter? What about other poor experiences following a click on an ad? How about downloading malware? Or identity theft?” These questions pushed us to expand our research into the types of poor experiences people have, and to strategize how we could mitigate them. When the team went back with an expensive method to measure the problem, leadership approved a large budget that has continued to grow since. We have since made significant progress in mitigating these types of bad experiences.

I began my journey at Facebook with a vision to help the world, and I am happy that the company has given me the opportunity to realize it in my own small way. This is a journey which has no end, but it is this journey to help people that drives me every day.
