LinkedIn’s algorithm removed posts about inclusion initiatives: something is broken.

Sarah Cordivano · Published in DEI @ Work · 4 min read · Mar 12, 2024

Today on LinkedIn, I reposted an invitation for women founders to join “Female Founders Office Hours.” The LinkedIn algorithm automatically took it down because it was considered “discriminatory,” and LinkedIn threatened to restrict my account if I continue to post such “discriminatory” content. Obviously, this is an algorithm misbehaving. But it was humans who developed the algorithm and approved its rollout. What does this mean for us, and for automation technology? Full story and discussion below:

[Image: an alert from LinkedIn stating “Your post doesn’t comply with our policies.”]

Here’s the story:

🚩 LinkedIn’s automated systems flagged and removed a repost I made today because it violated their policy on “Job Discrimination.”

The repost concerned “Female Founders Office Hours,” originally posted by Luis Shemtov of Lunar Ventures. What is this event? “Started by the Playfair Capital team in 2019, FFOH has already brought together over 2,200 founders for 9,000 one-to-one mentoring and pitch meetings with over 180 investors across ten editions to date. Founders participating in the event have raised over £1.4bn.”

The event, as you can see, is not a job offer but an opportunity for founders to take part in volunteer-based one-on-one mentoring sessions with investors.

Here’s the discussion:

👀 Why is it interesting that this post was automatically flagged for job discrimination?

What can we learn from this situation?

A mentorship opportunity is not a job offer.

The post was not a job offer, though it was flagged for “Job Discrimination,” likely due to the text I added: “Great opportunity for Female Founders.” It advertised a great initiative to support female founders with mentorship. By the way, did you know that women receive a disproportionately small share of VC funding? VC capital for female co-founded startups was only 15.4% of total US VC funding in 2022 (sources: one, two, three).

This is not an isolated incident.

One other person commented that their post for a side-by-side mentoring program for women and refugees was also flagged and, when reposted, received fewer views than similar posts.

False positives require appeals.

We can see that this automatic flagging clearly produces false positives for discrimination. I appealed the decision, and it was reversed within 20 minutes; my guess is that a manual review by a human led to the reversal. The post is now available. But I suspect that many people whose posts are flagged would not go to the trouble of reading the justification for the flag and appealing the decision. Ultimately, this automatic flagging may result in less content promoting inclusion initiatives on LinkedIn.

What is the trigger for “discrimination”?

It’s not clear how sensitive the automatic flagging system is. Would it flag a post as discriminatory simply for suggesting that women apply for an opportunity that is open to everyone? Is that discrimination? Not in my view.

Interestingly, we can read LinkedIn’s policies on job discrimination and gender-based job discrimination. It’s obvious to me that my post did not qualify as gender-based job discrimination, but it’s unclear what triggers the flag (correctly, based on their policy, or incorrectly, due to keyword tagging). I honestly wonder what the policy would mean for a post that encourages People of Color to apply for a role. In my interpretation, simply encouraging people to apply does not indicate a preference, but I think it’s likely such posts would be flagged incorrectly.
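To make the keyword-tagging hypothesis concrete, here is a minimal sketch of how a purely keyword-based flagger behaves. Everything in it (the term lists and the rule) is my own invention for illustration; LinkedIn has not published how its system actually works.

```python
# Illustrative sketch only: NOT LinkedIn's actual system, which is unpublished.
# A naive keyword flagger: any protected-group term appearing alongside
# opportunity language trips the flag, with no notion of whether the post
# is a job offer at all.

PROTECTED_TERMS = {"female", "women", "male", "men"}
OPPORTUNITY_TERMS = {"opportunity", "apply", "hiring", "role", "join"}

def naive_flag(post: str) -> bool:
    """Flag any post mentioning a protected group alongside opportunity language."""
    words = {w.strip(".,;:!?\"'").lower() for w in post.split()}
    return bool(words & PROTECTED_TERMS) and bool(words & OPPORTUNITY_TERMS)

print(naive_flag("Great opportunity for Female Founders."))  # True: false positive
print(naive_flag("Hiring men only; women need not apply."))  # True: true positive
print(naive_flag("We are hiring a backend engineer."))       # False
```

The mentorship invitation and the genuinely exclusionary ad trip the same rule, because the rule never asks whether the post is a job offer in the first place. That is exactly the nuance a keyword match cannot capture.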

What guardrails are needed?

📌 AI-based automation certainly succeeds at flagging harmful and dangerous content, but can it understand the difference between a discriminatory job offering and a post promoting a mentorship initiative? What additional guardrails or training would it take to teach these systems nuance? Or is LinkedIn’s approach simply to cast a wide net and let the false positives be caught in the appeal process?
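One conceivable guardrail, sketched below purely as an assumption on my part (again, nothing here reflects LinkedIn’s real pipeline), is to pair the wide net with a context check: auto-remove only when a flagged post actually reads like a job ad, and route everything else to a human reviewer instead of removing it first and forcing an appeal.

```python
# Illustrative sketch only: all term lists and categories are invented.
# Wide-net flagging plus a crude context check, with a human in the loop
# for ambiguous hits instead of automatic removal.

PROTECTED_TERMS = {"female", "women", "male", "men"}
OPPORTUNITY_TERMS = {"opportunity", "apply", "hiring", "role", "join"}
JOB_TERMS = {"hiring", "salary", "position", "full-time", "job"}

def naive_flag(post: str) -> bool:
    """Broad first pass: protected-group term plus opportunity language."""
    words = {w.strip(".,;:!?\"'").lower() for w in post.split()}
    return bool(words & PROTECTED_TERMS) and bool(words & OPPORTUNITY_TERMS)

def looks_like_job_posting(post: str) -> bool:
    """Crude context check: does the post read like a job ad?"""
    text = post.lower()
    return any(term in text for term in JOB_TERMS)

def moderate(post: str) -> str:
    """Cast a wide net, but keep a human in the loop for ambiguous hits."""
    if not naive_flag(post):
        return "allow"
    if looks_like_job_posting(post):
        return "remove"       # flagged AND reads like a job ad: auto-remove
    return "human_review"     # flagged but not a job ad: hold for review

print(moderate("Great opportunity for Female Founders."))  # human_review
print(moderate("Hiring: male candidates only."))           # remove
```

The net stays just as wide, but the default for ambiguous cases flips from “remove and let the author appeal” to “hold for human review,” which shifts the cost of false positives off the user.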

Humans build these tools.

Of course, you can say AI tools won’t get it right 100% of the time, and they likely save a lot of manual effort that can be spent elsewhere. But remember: it’s humans who developed these algorithms and approved their rollout. Humans were involved in testing the algorithms and setting the keywords for flagging. Ultimately, it’s the responsibility of LinkedIn and its developers to build tools that enforce their policies without causing unnecessary damage and harm.

[Photo by Michael Dziedzic on Unsplash: metal bars lit with a purple-to-red gradient.]

Conclusion

I don’t have answers, but I wanted to bring visibility to this issue and start a discussion about why this happens and why more work needs to be done. If you are interested in this topic, I can recommend the following books and articles about how technology can amplify inequity in society:

And if you are a startup building technology and curious about how to consider DEI from the beginning of your company’s journey, check out this blog post: How to approach Diversity, Equity and Inclusion for your startup.

Note: this post originally appeared in a shortened form as a LinkedIn post. I recommend you also check that out as there are some great comments regarding AI concerns and implementation.
