Have you been applying to jobs day and night without hearing anything back? The job market is in turmoil, but there’s something worse pulling women down. LinkedIn’s AI algorithms are biased against women, and the data bears this out. Women have been running an experiment: changing their gender to male on the platform and watching their engagement skyrocket. Some are reporting increases as high as 818 per cent in post impressions, all from ticking a different box. It sounds absurd, doesn’t it? It is, and it reveals something deeply uncomfortable about how algorithmic bias works in 2025.
Why is LinkedIn’s algorithm biased?

When copywriter Megan Cornish changed her gender to male on LinkedIn and asked ChatGPT to rewrite her posts in a more “agentic” male voice, her post views jumped 400 per cent within a week. Lucy Ferguson saw her impressions rise by 818 per cent after changing her name for just 24 hours. The pattern is striking. Business consultant Cindy Gallop has been documenting these cases for months, finding that two male participants with a combined following of just 9,400 saw their posts achieve significantly more reach than two female participants with over 154,000 followers combined. That’s not a marginal difference. That’s a systemic advantage.
LinkedIn denies everything. Sakshi Jain, the platform’s head of responsible AI, insists that its algorithms “do not use demographic information (such as age, race, or gender) as a signal to determine the visibility of content.” And they’re telling the truth. So how, and why, is this discrimination happening?
The invisible hand of proxy bias
Here’s where it gets interesting. The algorithm doesn’t need to explicitly discriminate against women to disadvantage them. This is what researchers call “proxy bias,” and it’s far more insidious than old-fashioned discrimination because it’s harder to spot and nearly impossible to prove. The algorithm is trained to recognise “high-quality professional content.” The problem is that it learned what “high-quality professional” looks like from historical data, and that data reflects a deeply sexist world.
Research has shown significant gender bias in LinkedIn profiles across most demographic groups when analysing technical positions. The differences weren’t just in which skills women listed, but in how they described themselves. In the United States alone, women list 11 per cent fewer skills on their profiles than men in similar occupations and at similar experience levels.
The algorithm picks up on these patterns. It may favour “hard” business topics like tech, finance, and sales over “soft” topics like diversity and inclusion, workplace culture, or burnout, subjects that women discuss more often. It likely rewards “agentic” language (words like “executed,” “strategic,” “leader”) over “communal” language (“collaborative,” “supportive,” “helpful”). That’s the same mechanism behind Amazon’s notorious AI recruiting tool, which, as reported in 2018, had taught itself to downgrade CVs containing the word “women’s” and to penalise graduates of all-women’s colleges.
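To see how proxy bias can creep in without any demographic inputs, here is a minimal, hypothetical sketch in Python using scikit-learn. It is not LinkedIn’s model (which is not public); the posts and engagement labels are invented. The point is that when historical engagement favoured agentic wording, a simple text classifier trained on that history learns to score communal wording lower, even though gender never appears as a feature.

```python
# Minimal, illustrative sketch of proxy bias -- invented data, not LinkedIn's actual system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical posts and whether they got high engagement.
# The labels encode a biased history: agentic wording did better.
posts = [
    "Executed a strategic turnaround as team leader",
    "Led an aggressive push that drove record sales",
    "Negotiated and closed the deal decisively",
    "Grateful for a supportive collaborative team effort",
    "Helped colleagues through a tough quarter together",
    "Proud of our inclusive helpful workplace culture",
]
high_engagement = [1, 1, 1, 0, 0, 0]  # biased historical outcomes, not a measure of quality

# Train a toy "feed ranking" model on word counts alone -- no demographic features at all.
vectoriser = CountVectorizer()
X = vectoriser.fit_transform(posts)
model = LogisticRegression().fit(X, high_engagement)

# Two new posts describing the same achievement in different registers.
new_posts = [
    "Executed a strategic product launch as team leader",         # agentic phrasing
    "Grateful for a collaborative and supportive product launch",  # communal phrasing
]
scores = model.predict_proba(vectoriser.transform(new_posts))[:, 1]
for post, score in zip(new_posts, scores):
    print(f"predicted engagement {score:.2f}: {post}")

# The communal version scores lower even though gender was never an input:
# the wording itself acts as the proxy.
```

Run on this toy data, the agentic phrasing gets a noticeably higher predicted engagement score. At the scale of a real feed, small scoring gaps like this compound into exactly the kind of reach differences women are reporting.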
When changing your gender becomes career advice

Women are literally being advised to present as men to succeed professionally. It’s 2025, and we’re back to using male pseudonyms, as the Brontë sisters did in 1847. Some might argue this is about user behaviour, not the algorithm; that audiences simply engage more with men. But that only shifts the bias, it doesn’t excuse it, because the algorithm learns from and amplifies whatever its users do. A well-known example is a 2012 study in which 127 science faculty evaluated identical application materials, one under the name “John” and the other “Jennifer.” Despite the applications being identical, “Jennifer” was consistently rated as less competent, seen as a poorer mentoring or hiring prospect, and offered a starting salary about 12 per cent lower than “John’s.”
The uncomfortable truth is that even if LinkedIn’s algorithms aren’t explicitly discriminatory, they’re amplifying real-world biases at scale. And as Cindy Gallop puts it: “Algorithmic suppression of women’s voices = economic oppression of women. Algorithmic prioritisation and elevation of men’s voices = economic advantage to men.” This matters because LinkedIn isn’t just social media; it’s where deals are made, jobs are found, and professional reputations are built. If women’s voices are systemically dampened, that’s not just unfair. It’s economically damaging.
Some argue that biased AI might still be better than biased humans. Perhaps. But that’s a remarkably low bar. The point of AI was to remove human prejudice from decision-making, not industrialise it at scale.
In the meantime, women will keep doing what they’ve always done: working twice as hard for half the recognition, finding creative workarounds, and calling out the bullshit when they see it. Because apparently, in 2025, the most effective career advice for women is still: be more like a man. Even if that means literally pretending to be one.