Role Models in AI: Stephanie Lampkin

AI4ALL Team
Published in AI4ALL
Jun 3, 2019 · 8 min read

In her role as technical founder and CEO of Blendoor, Stephanie Lampkin is passionate about leveraging augmented intelligence and people analytics to match a diverse pool of candidates with companies while keeping candidates anonymous in the process. This, she believes, can mitigate unconscious bias, giving qualified people more opportunities and creating better companies and economies as a result. Stephanie also collects data for BlendScore, a report published by Blendoor that rates tech companies on their recruitment, compensation, retention, and promotion of women and underrepresented minorities.

Stephanie started coding when she was 13. Her aunt Greta, who was studying computer science at the University of Maryland at the time, got her involved in the Black Data Processing Associates (BDPA), and the rest is history.

Learn more about why she started Blendoor, what inspired her passion for CS, augmented intelligence, and business, and what she believes people should be doing to create a more positive future for AI.

We interviewed Stephanie as part of AI4ALL’s Role Models in AI series, where we feature the perspectives of people working in AI in a variety of ways. Check back here for new interviews.

As told to Eunice Poon of AI4ALL by Stephanie Lampkin

EP: What prompted you to start Blendoor?

I started Blendoor three months after I had a pretty surprising interview experience with Google. I was living in New York at the time and had just graduated from MIT Sloan. I was interviewing for an Analytical Lead position at Google, and the hiring team came back to me with the consensus that I would be better suited for a Sales or Marketing position because they didn’t see me as quite technical enough for the role. Prior to this interview, I had been coding and exposed to CS from an early age, had earned an engineering degree from Stanford, had worked at Microsoft for five and a half years, and had just completed an MBA. Something about their feedback didn’t seem quite right to me, but I politely declined the Sales/Marketing position and moved on.

Later, I found out that Google at the time was only 2% African American, 3% Latino, and about 25% women. The narrative they were telling the media was that it was a pipeline problem: they just couldn’t find enough qualified women and people of color. This led to an aha moment for me — to create a platform that makes it really easy for both candidates and companies to connect so that companies can get access to a broader pool of diverse talent. That was how Blendoor came to be.

EP: What does a typical day look like for you as the founder and CEO of the company and how do you use AI in your work?

A typical day for me varies depending on what our current sprint or area of focus is. Right now I’m focusing on closing a small round of funding, so a typical day for me is following up on emails, maybe having a couple of meetings, and then also doing a little bit of data work. We’re also releasing our next version of BlendScore, which is a comprehensive diversity, equity, and inclusion rating for the top 200 tech companies. I’m building the algorithms and selecting all the data for that, so a significant part of my day is getting all that together.

At Blendoor, we’re basically leveraging what we call augmented intelligence and not artificial intelligence because we realized that we can’t completely remove recruiter decision making from the process. So my focus area as of late is to find ways we can leverage augmented intelligence for better people operations and decision making. I do this by figuring out areas in which we can provide users with information that can predict what candidates are a good fit for a role, or which company is a good fit for a candidate.

EP: How did you get interested in CS and how did you find yourself in the tech and business world?

I always tell people that I think a significant influence in my trajectory into tech was that I had early exposure to a computer scientist who looked like me, and not just a guy in a hoodie and flip flops.

My mom was homeless for a brief period while pregnant with me and ended up migrating up to DC from the South because her sister was living there at the time and studying CS at the University of Maryland.

Through my aunt, I took a very early liking to CS, because she always had the best gadgets. She also got me involved in a nonprofit called Black Data Processing Associates (BDPA), which had these summer coding camps that would culminate in a big hackathon. Back then it was called a “high school computer competition,” and we would compete in an all-day programming contest with other kids from around the country. I did that for three years, so by the time I was a junior in high school, I was a full stack developer. In my senior year, I took AP CS. All of that put me on track to get into Stanford. I got an engineering degree, had a software-related internship every summer, and was recruited by Microsoft through a National Society of Black Engineers database. I spent about five and a half years at Microsoft before going to MIT for grad school, where I got an MBA focused specifically on entrepreneurship and innovation because I knew that I wanted to take a shot at starting a tech company.

My first company coming out of MIT Sloan was Hoowenware, a travel management platform. The idea was to take out a lot of the stress and strain that comes from planning and organizing group travel. I wanted to help people streamline the planning, the logistics, and all the nightmarish things that happen when you’re trying to coordinate different people for travel. This is still something I’m passionate about pursuing, but it’s on hold at the moment because the travel market is a complicated one.

EP: What are some of the important things people should be doing to create a positive future for AI?

One of my favorite books is Weapons of Math Destruction by Dr. Cathy O’Neil. I think she’s done one of the best jobs I’ve seen of articulating how AI can harm the lives of people who historically haven’t been well served, and how AI stands to exacerbate those problems. I’ve dedicated a lot of my energy to promoting the importance of (1) ensuring that there is a considerable amount of diversity in the product development and design that goes into AI, and (2) doing a lot of the hard groundwork necessary to ensure our ML algorithms are being trained on diverse data sets.

One of the things that we’re trying to do with Blendoor is think very creatively about how we can get access to data, even data that may not be currently present on the web. I think a lot of companies, particularly here in Silicon Valley, are overly optimistic about how truly representative data that currently exists on the internet is. But there are a lot of people that are not represented on social media platforms. So I’m a big fan of tackling that problem.

We also have to be very intentional about leveraging demographic information instead of hiding it. I think the initial reaction to the negative outcomes that AI has on people of color, for example, is to create more race-blind algorithms. But I think that is the opposite of what needs to happen. I think we need to take into consideration things like race, age, gender, sexual orientation, and socio-economic background so that the algorithms are weighing those factors when they’re making certain judgments and evaluations.

Otherwise, if you try to apply the same standard model to all types of people, ultimately some people are going to be left out.

So I’m really bullish on this idea that we don’t need to be color blind or gender blind or sexual-orientation blind. If anything, we need to bake that into the algorithms that are driving a lot of the decision making.

EP: Who were your role models growing up? And who are your role models now?

While I was growing up, I was most inspired by the usual big tech nerds, like Bill Gates and the founders of Yahoo. But as I got older and a bit more exposed, that sort of changed. I started to realize that, okay, these guys actually don’t have rags-to-riches stories. So I started to explore the backgrounds of people who really started from nothing and pulled themselves up by their bootstraps.

Now, I’m a big fan of Serena Williams. She came from absolutely nothing. I am also drawn to Oprah Winfrey. I find myself drawing power from the stories of people who weren’t necessarily given all the resources they needed to succeed, but who made the best out of what they had.

EP: What advice do you have for young people who are interested in AI who might just be starting their career or academic journeys?

I would say, be very diligent and intentional about finding your niche. AI is still fairly new. It’s a term that is thrown around a lot, but there aren’t really any standards about who is truly leveraging AI and ethical AI, so there is still quite a bit of grey area there. As more students, aspiring engineers, designers, and policymakers go into this space, it’s going to be important to figure out what your niche is, what your specialty will be, and the impact that you want to make. Then, reverse engineer your career and your education from there.

I would also recommend taking advantage of as many free online resources as possible.

There is so much content out there, and I don’t think you need a Stanford, Berkeley, or MIT degree to be really successful in this. So figure out a way to get a “minimum viable education” in this space as you begin the journey.

About Stephanie

Stephanie Lampkin, TEDx speaker and former downhill ski racer, has been featured on the cover of The Atlantic and in Fortune’s 40 Under 40, MIT Technology Review’s 35 Under 35, and Forbes, to name a few. She is the founder and CEO of Blendoor, software that mitigates unconscious bias in hiring. With a 15-year career in the tech industry, founding two startups and working in technical roles at Lockheed, Microsoft, and TripAdvisor, Stephanie is now using her talents to build augmented intelligence and people analytics that help us see people better.


AI4ALL is a US nonprofit working to increase diversity and inclusion in artificial intelligence.