In May 2025, Cindy Gallop and Jane Evans noticed a big drop in the reach of their LinkedIn content. They hadn't changed their posting cadence or subject matter, so they began to question the integrity of the algorithm. A growing number of women were reporting similar observations in comments. How could someone with over 136,000 followers be reaching only a hundred or so people, many of whom had clicked the bell icon to ensure visibility?
In July 2025, Matt Lawton, out of sheer curiosity, proposed an experiment to Cindy and Jane: how would the same post perform if published by all of them? Matt's colleague Steve, in Los Angeles, agreed to participate as well, so a date and time were set and the content published.
In the same window of time, Cindy's post reached 0.6% of her followers, Jane's reached 8.6% of her followers, Steve's reached 51% of his followers and Matt's reached 143% of his.
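A reach figure above 100% is possible because reach is presumably being calculated as impressions divided by follower count, and a post can be shown to people outside the author's own network. A minimal sketch of that assumed arithmetic, using invented numbers rather than the actual test data:

```python
# Assumed calculation: reach expressed as impressions over follower count.
# Figures above 100% simply mean a post was shown to more people than the
# author has followers (e.g. via reposts or non-follower feeds).
def reach_percentage(impressions: int, followers: int) -> float:
    """Return impressions as a percentage of follower count."""
    return impressions / followers * 100

# Invented numbers for illustration only -- not the figures from the test.
print(round(reach_percentage(816, 136_000), 1))   # 0.6   -> barely visible even to followers
print(round(reach_percentage(4_290, 3_000), 1))   # 143.0 -> travelled beyond the author's network
```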
The test was far from scientific and couldn't prove anything for certain, but it intrigued a lot of people. Matt's post sharing the results attracted over 259 reposts, 473 comments and 1,436 reactions from more than 69,300 impressions. Jane discussed the test in a webinar series called ‘Wrangling the Algorithm’, hosted via her thought leadership website, the7thtribe.com.

A second test was planned and executed in August. It attempted to include people of colour in order to test for the racial bias that global majority LinkedIn users had reported. The test involved 34 users, and a 10-page report was published which included the finding that “the UK data indicates women reached only 19% of their followers vs 47% achieved by men.”
Cynics were right to point out that there were still many variables that could have been influencing the results: the number and profile of followers, location, the level of user activity before and after publication, and whether the algorithm began to punish what it perceived to be duplicated content as the timezone window for posting elapsed.
LinkedIn isn’t intended to be just a regular social network where your sister posts cat videos and your uncle learns how to play better bunker shots. LinkedIn is a platform for economic and career opportunity; a place to advance your professional profile, access investors and funding, connect with peers and be recommended for jobs that aren’t yet advertised. The algorithm should reward thoughtful, quality content in the same way for everyone, regardless of gender or race.
Nobody really knows how LinkedIn’s complex algorithms work, but it is commonly accepted that algorithms typically have a built-in bias* and that social norms often compound it.
Some people have also expressed concern that Microsoft (LinkedIn’s owner) may have adopted the Trump Administration’s policy of banning hundreds of words, including ‘women’ and ‘transgender’, as reported in the New York Times on March 7th 2025.
In October 2025, Martyn Redstone explained the presence of bias with helpful clarity:
“The algorithm isn’t coded to IF (gender = ‘female’) THEN (demote_post). That’s not how this works. Instead, the algorithm is coded to IF (content = ‘high-quality professional’) THEN (promote_post). The problem is how the machine learned to define “high-quality professional.” It learned from historical data, and that data reflects a world of existing, systemic, and often unconscious biases. The algorithm has, in effect, learned a narrow, historically male-centric model of what “professional” looks like. This is proxy bias: the algorithm isn’t penalizing the gender; it’s penalizing neutral characteristics that are correlated with gender.”
Martyn also outlines the challenge LinkedIn may face in complying with UK and EU regulation, since systems like this can be seen to fuel discrimination despite their neutral intent. The algorithm may be trained to reward “hard” business topics posted more commonly by men; it relies on professional language that’s known to be heavily gendered; and it may penalize non-linear career patterns that affect more women than men.
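To make the proxy-bias mechanism concrete, here is a minimal, purely illustrative sketch. It is not LinkedIn's actual model or data: the features, labels and numbers are all invented. A classifier trained on historical "high-quality professional" labels never sees gender, yet because those labels happen to reward proxy features, such as "hard" topics and a particular language style, it ends up scoring otherwise identical posts differently.

```python
# Illustrative sketch of proxy bias -- not LinkedIn's model or data.
# Gender is never a feature, but the historical labels reward features
# that correlate with it, so the learned model reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2_000

# Invented proxy features: "hard" business topic and "male-coded" phrasing.
hard_topic = rng.integers(0, 2, n)
male_coded_language = rng.integers(0, 2, n)

# Invented historical labels: past engagement favoured those same features,
# so the training data already encodes the old bias.
label_high_quality = (
    0.6 * hard_topic + 0.3 * male_coded_language + rng.normal(0, 0.2, n)
) > 0.5

X = np.column_stack([hard_topic, male_coded_language])
model = LogisticRegression().fit(X, label_high_quality)

# Two equal-effort posts that differ only in the proxy features.
post_a = [[1, 1]]  # hard topic, male-coded phrasing
post_b = [[0, 0]]  # "soft" topic, neutral phrasing
print("promotion score A:", model.predict_proba(post_a)[0, 1])
print("promotion score B:", model.predict_proba(post_b)[0, 1])
```

The point is the one Martyn makes: the bias lives in the training data and the proxies, not in any explicit rule about gender.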
In September 2025, a third test was coordinated by Jane Evans and Matt Lawton, seeking to address many of the variables perceived to be undermining the previous tests. Users were invited to buddy up in male-female pairings based on sharing the same location, having similar follower numbers and being willing to avoid posting for 24 hours either side of their experimental post, which needed to be identical. The pairings were invited to work together to create their post and to publish it at the same time.
Sadly, a lot of women reported being unable to convince men to participate with them in this test. We ended up with just 9 pairs of participants reporting their data, and the results, based on averages, contradicted previous findings.
This website is intended to help run this third test (Test A) at scale, so the level of bias in the LinkedIn algorithm can be assessed more confidently. We’re also exploring the impact of topic and language by running the 7th Tribe Pattern Recognition Test (referred to as Test B). If you are curious, we’d urge you to read how both tests work and register now to contribute to the data that’s being used to make LinkedIn more accountable.
You can also use one of the Fairness in the Feed assets to post about this campaign on LinkedIn.
*Here are four citations to support this claim:
https://www.theguardian.com/technology/2025/aug/11/ai-tools-used-by-english-councils-downplay-womens-health-issues-study-finds
https://link.springer.com/article/10.1007/s00146-023-01675-4
https://aclanthology.org/2023.emnlp-main.525.pdf
https://www.pnas.org/doi/pdf/10.1073/pnas.2204529119