What Is Algospeak?

Self-Censorship in Social Media

You’ve probably encountered a social media post that seemed cryptic – like someone was talking about “watermelons” or using 🍉 emojis in a context that didn’t make sense. It might have made you pause and wonder, What are they really trying to say?

That’s algospeak in action.

Algospeak is a portmanteau of “algorithm” and “speak.” It refers to the coded language, euphemisms, and creative phrasing that social media users adopt to avoid triggering content moderation algorithms, replacing flagged or disfavored words with seemingly innocent ones. It’s a way for creators to talk about sensitive or controversial topics, such as mental health, politics, or current events, without risking their posts being taken down or their accounts being penalized (Narvas, 2025).

Why Algospeak Exists

Platforms like TikTok, Instagram, and YouTube use automated moderation systems to enforce community guidelines. These systems scan for words and phrases that are flagged as inappropriate, sensitive, or not brand-safe. While the intention is to protect users and maintain advertiser-friendly environments, these algorithms often lack consistency and nuance: they tend to remove or suppress content that uses words deemed unsafe, regardless of context.

A post about mental health, reproductive rights, or political conflict can get flagged simply for using specific keywords. In such cases, users turn to algospeak to get past the algorithmic censors and still get their message across to their audience.
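To see why context-free keyword matching causes this problem, here is a deliberately simplified sketch of a blocklist-style filter. It is not how any particular platform’s moderation actually works; the blocklist, function name, and example posts are invented for illustration, and real systems rely on far more sophisticated machine-learning classifiers.

```python
# Hypothetical, deliberately naive keyword filter for illustration only.
# Real platforms use ML classifiers, but the failure mode is similar:
# matching on words rather than meaning.

BLOCKLIST = {"suicide", "kill"}  # invented example terms

def is_flagged(post: str) -> bool:
    """Flag a post if it contains any blocklisted word, ignoring context."""
    words = post.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)

# A supportive mental-health post gets flagged...
print(is_flagged("If you are struggling with suicide, please reach out."))    # True

# ...while the same message written in algospeak slips through.
print(is_flagged("If you are struggling with unaliving, please reach out."))  # False
```

The filter has no notion of intent: a post offering support is treated the same as one promoting harm, and a trivial word swap is enough to evade it.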

This means that even thoughtful, nuanced discussions around serious topics can be flagged simply for using the “wrong” word. (Some of the substitute terms discussed below may have already fallen out of fashion, or may have already been flagged by content filters themselves.)

Why People Use Algospeak

Users employ algospeak as a workaround, a way to communicate while staying under the algorithm’s radar. It’s a form of digital self-censorship that lets creators preserve their message without sacrificing reach or risking penalties such as:

  • Having their posts taken down

  • Shadow bans (where content is suppressed without the creator’s knowledge)

  • Outright account suspensions or bans

  • Demonetization

Examples of Algospeak on TikTok

On TikTok, the use of algospeak goes far beyond emojis. Creators often replace potentially flagged words with harmless-sounding alternatives or intentional misspellings to slip past the platform’s content moderation algorithm. Steen et al. (2023) documented a number of phrases and codes commonly used by TikTok creators, such as “unalive” in place of “kill” or “die” and “seggs” in place of “sex.”

One well-known example is the use of the watermelon emoji on TikTok and Instagram. At first glance, it might seem like someone is just talking about fruit. But for many users, especially in recent years, 🍉 has become a symbol of support for Palestine. 

The association isn’t random. The red, green, white, and black colors of a watermelon closely resemble those of the Palestinian flag. By using the emoji, users can signal support or comment on the situation without triggering automated content moderation or risking their posts being removed.

The Political Side of Algospeak

Interestingly, this kind of workaround isn’t needed when users discuss the Russian invasion of Ukraine. That asymmetry points to a simple conclusion: content moderation doesn’t happen in a political vacuum. Decisions around what’s considered sensitive, controversial, or “brand unsafe” often reflect broader geopolitical interests, government influence, platform ownership, and corporate priorities.

A 2023 Al Jazeera report revealed that Twitter under Elon Musk increased its compliance with government censorship requests, especially from countries with poor human rights records (Hale, 2023). Meanwhile, Meta has been accused by human rights groups of suppressing voices in support of Palestine (Younes, 2023).

This environment creates fertile ground for algospeak. It enables users to talk around banned topics without being flagged—but it also means critical conversations are being pushed underground. When users must rely on euphemisms to talk about their own identities or lived experiences, the line between moderation and censorship becomes dangerously thin.

Is Using Algospeak Ethical?

Whether algospeak is ethical depends on how you frame the issue.

On one hand, it’s a form of linguistic creativity – a way for users to express themselves under restrictive conditions. In many cases, algospeak is used to advocate for human rights, raise awareness about injustice, or create safer spaces for discussing personal struggles. It’s not unlike how Swardspeak (or “bekinese”) emerged in the Philippines during the ‘60s and ‘70s, affording LGBTQIA+ folks a means to communicate safely and assert identity in a hostile environment.

On the other hand, critics argue that relying on coded language can obscure meaning, confuse outsiders, or even enable misinformation to spread under the radar. But this critique often misses the point: algospeak is a response to overly rigid, profit-driven systems, not the cause of the problem.

The Bigger Picture: Moderation, Power, and Profit

Automated moderation lacks nuance. Algorithms are not equipped to understand tone, context, or cultural sensitivity. They’re trained to identify and remove risk, and they do so with blunt force.

Historically, platforms have presented themselves as neutral and apolitical. But their moderation policies – and the enforcement of those policies – tend to reflect the interests of advertisers, investors, and even governments. What’s deemed “sensitive content” is shaped by what’s considered risky to advertisers and governments, not necessarily by what’s harmful to users. In this sense, “platform moderation” can be weaponized to suppress dissent, marginalized voices, or politically inconvenient narratives.

So when users turn to algospeak, it’s not just about staying online. It’s about resisting a system that prioritizes profit over context, visibility over truth.

Does It Work?

Sometimes. Algospeak is an arms race. As moderation algorithms become more advanced, they also learn to recognize new euphemisms. In turn, users invent new ones. This cycle leads to constant linguistic churn, where the language of resistance is always changing.
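The same kind of toy filter sketched earlier can illustrate the churn: once moderators “learn” a known euphemism, creators simply coin another. The euphemisms shown are among those documented by Steen et al. (2023), but the filter and its substitution table are invented purely to show the dynamic.

```python
# Hypothetical illustration of the moderation arms race.
# The filter is periodically updated to recognize known euphemisms,
# so users move on to newer ones it has not seen yet.

KNOWN_EUPHEMISMS = {
    "unalive": "kill",  # generation 1: already recognized by the filter
    "seggs": "sex",     # generation 1: already recognized by the filter
}

BLOCKLIST = {"kill", "sex"}

def normalize(post: str) -> str:
    """Map known euphemisms back to the terms they stand in for."""
    words = [KNOWN_EUPHEMISMS.get(w, w) for w in post.lower().split()]
    return " ".join(words)

def is_flagged(post: str) -> bool:
    return any(word in BLOCKLIST for word in normalize(post).split())

print(is_flagged("talking about seggs ed today"))  # True: euphemism now recognized
print(is_flagged("talking about s3ggs ed today"))  # False: newer spelling not yet learned
```

Every time the filter’s vocabulary grows, the community’s vocabulary shifts again, which is exactly the linguistic churn described above.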

According to Steen et al. (2023), algospeak often works best when combined with other tactics like video editing tricks, alternate spellings in captions, or burying keywords deep in hashtags.

But it’s not foolproof. Algorithms are inconsistent, and moderation decisions are unevenly applied. Some posts still get flagged. Others slip through the cracks.

What’s the Future of Algospeak?

Algospeak isn’t going anywhere. As long as algorithmic moderation remains the norm, users will continue to adapt how they speak. But that adaptation comes at a cost: clarity, authenticity, and freedom of expression.

Ultimately, algospeak is a symptom, not the disease. The real issue lies in how content moderation is designed and enforced. If platforms want to support meaningful, inclusive dialogue, they’ll need to invest in better tools—ones that prioritize context over keyword suppression, and community safety over corporate risk aversion.

References

Hale, E. (2023). Twitter fulfilling more government censorship requests under Musk. Al Jazeera. Retrieved April 14, 2025, from https://www.aljazeera.com/economy/2023/5/2/twitter-fulfilling-more-government-censorship-requests-under-musk

Narvas, J. M. (2025). Decoding Algospeak on Social Media. https://janisnarvas.com/decoding-algospeak/ 

Steen, E., Yurechko, K., & Klug, D. (2023). You can (not) say what you want: Using algospeak to contest and evade algorithmic content moderation on TikTok. Social Media + Society, 9, 20563051231194586. https://doi.org/10.1177/20563051231194586 
