
The Algorithm Will See You Now


Imagine you’re in the exam hall. The room is silent, the audience waiting in anticipation. You step up to the podium, ready to deliver a moving speech about how much you love Misc. (and rightly so), only to realise that the microphone isn’t working. No one can hear you. You don’t know whether it’s a technical glitch or whether someone unplugged the mic, but either way it’s too late: the audience has left, and you deliver your speech to the abyss. Now shift that scene to the online world, and you have just experienced shadowbanning.


Shadowbanning has become one of the strangest and most quietly powerful forms of online control. It isn’t a ban, and it isn’t a warning. There’s no message telling you you’ve crossed a line, no one telling you your mic is muted on Zoom. Your posts remain visible to you but, without your ever being informed, almost no one else sees them. In a time when debates about free speech feel more relevant than ever, there’s something deeply unsettling about a form of censorship that can turn down your mic silently, algorithmically, and without acknowledgement.


What differentiates this type of censorship from anything that came before is that it is seemingly blameless. Shadowbans are enacted not by a person but by an algorithm: an enormous, constantly evolving machine-learning system trained on extensive datasets. Crucially, these systems are built by teams of engineers making thousands of small design decisions: which datasets to use, what counts as “sensitive”, which patterns advertisers dislike, how strictly to detect “harmful” speech. None of these choices is neutral. The final model doesn’t understand context, humour, satire, or culture. It only understands correlations. It learns that certain keywords tend to appear in “problematic” posts and, recognising patterns that resemble past violations, tries to keep content “clean” by overcorrecting, punishing anything that fits the statistical profile of trouble. Whether the content is actually harmful is irrelevant — the algorithm is designed to minimise controversy for its Silicon Valley creator, freedom of speech be damned.
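
For the technically curious, here is a toy sketch of that dynamic: a tiny bag-of-words classifier, built in Python with scikit-learn on a four-post dataset invented for this article, which learns only that the word “protest” correlates with past violations and duly flags a perfectly peaceful post. No platform’s real system is anywhere near this simple; the point is to show correlation standing in for understanding.

```python
# Toy illustration only: an invented dataset and a deliberately
# simplistic model, not any platform's actual moderation system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# In this made-up training data, "protest" happens to co-occur
# with the posts that were labelled as violations (1).
posts = [
    "join the violent protest and bring weapons",         # violation
    "protest turns violent as crowds clash with police",  # violation
    "lovely morning for a walk in the park",              # fine
    "my cat fell asleep on my keyboard again",            # fine
]
labels = [1, 1, 0, 0]

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(posts)
model = LogisticRegression().fit(X, labels)

# A benign post inherits the statistical guilt of the word "protest":
# every other word in it is unseen, so only "protest" carries weight.
benign = ["peaceful protest about student rights this saturday"]
p_violation = model.predict_proba(vectoriser.transform(benign))[0][1]
print(f"P(violation) = {p_violation:.2f}")  # above 0.5, despite harmless intent
```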


This is where algorithmic censorship becomes particularly slippery. The models that rank and remove content are trained on huge datasets in which certain words are disproportionately associated with violations. This means a post using a word like “violence”, “sex”, or “protest”, or a phrase like “mental health” or “self-harm”, can be flagged, down-ranked, or quietly suppressed. Technically speaking, content moderation systems have no concept of intention. They simply learn that certain words produce “bad” outcomes according to the training data, and so those words become radioactive. An educational post about sexual health can be treated like explicit content, a political critique can get lumped in with hate speech, and an activist organising a protest can be mistaken for an extremist. The censorship happens before a human ever sees the post, which simply vanishes into algorithmic limbo.
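
To caricature the mechanics (and it is a caricature: the word list, the penalty, and the function below are all invented for illustration), an intention-blind filter charges an educational post and a harmful one the same price for the same word:

```python
# An invented, intention-blind keyword filter. The word list and the
# halving penalty are assumptions for illustration, not real rules.
RADIOACTIVE = {"violence", "sex", "protest", "self-harm"}

def visibility_multiplier(post: str) -> float:
    """Down-rank a post for every flagged word, regardless of context."""
    words = post.lower().replace(",", " ").split()
    hits = sum(1 for word in words if word in RADIOACTIVE)
    return 0.5 ** hits  # each hit halves the post's reach

# A sexual-health clinic and an explicit post pay the same toll.
print(visibility_multiplier("free clinic offering sex education for students"))  # 0.5
print(visibility_multiplier("photos of my dog at the beach"))                    # 1.0
```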


As awareness of shadowbanning increases, people are finding ways around it. Many creators have taken to substituting flagged words with emojis, replacing vowels with asterisks, or inventing entire codes of euphemisms to avoid suppression (“unalive” for “kill”, a blood-drop emoji for “period”, “seggs” for “sex”, and so on). No platform publishes a list of the “taboo” topics its algorithm tends to flag, which has led people to self-censor a wide variety of words just in case. This self-policing is perhaps the most dystopian part: the feeling of being in a digital panopticon, where every word might be algorithmically incriminating. When online users see anatomical terms or sexual identities being asterisked into oblivion, it perpetuates the narrative that these are things that shouldn’t be talked about, that is, if the post makes it to your feed at all.
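
In code terms, this workaround is little more than a find-and-replace. The toy function below uses only the euphemisms quoted above, and is meant to show how mechanical the self-censorship has become rather than to serve as any kind of real evasion guide:

```python
# A sketch of "algospeak" substitution, using only the euphemisms
# mentioned above; the mapping is illustrative, not a real guide.
ALGOSPEAK = {
    "kill": "unalive",
    "sex": "seggs",
    "period": "🩸",
}

def self_censor(post: str) -> str:
    """Swap flagged words for euphemisms before posting."""
    return " ".join(ALGOSPEAK.get(word, word) for word in post.split())

print(self_censor("the villain tries to kill the hero in episode two"))
# -> "the villain tries to unalive the hero in episode two"
```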


What’s even more troubling is that platforms rarely admit this. Instead of “shadowbanning”, they use palatable phrases like “reduced discoverability”, “limited distribution”, or “content sensitivity classification”. No matter what you call it, the effect is identical: your speech is technically allowed, but practically invisible. You’re speaking, but the algorithm has turned down the volume. From the platform’s point of view, this is ideal. A ban is messy, dramatic, and opens them to backlash, while soft suppression is tidy and deniable. Platforms argue that moderation is necessary to protect users, but their method of doing so merely creates a form of censorship that is neither accountable nor contestable. 
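
A crude sketch makes the distinction concrete. In the invented ranking function below (the field names, scores, and posts are all assumptions), a “sensitive” post is never deleted; its feed score is simply multiplied toward the bottom of everyone else’s timeline:

```python
# "Reduced discoverability" as a toy ranking rule: nothing is removed,
# risky posts just sink. All names and numbers here are invented.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float   # predicted engagement, 0..1
    sensitivity: float  # classifier's risk score, 0..1

def feed_score(post: Post) -> float:
    # The post stays up; its score quietly collapses instead.
    return post.engagement * (1.0 - post.sensitivity)

feed = [
    Post("event poster: campus protest this friday", 0.9, 0.8),
    Post("ten photos of my breakfast", 0.4, 0.0),
]
for post in sorted(feed, key=feed_score, reverse=True):
    print(f"{feed_score(post):.2f}  {post.text}")
# The breakfast (0.40) now outranks the protest poster (0.18).
```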


This phenomenon affects everyone in the digital world. Students, societies, activists, and minority communities often rely heavily on social media to make their voices heard. When an algorithm decides that certain political terms are “controversial”, or that Irish-language posts resemble spam, or that event posters look like bot-generated content, entire conversations can be stifled before they even begin. During protests or referendums, for example, posts with specific keywords can experience sudden drops in reach simply because the algorithm is dialling up its sensitivity. No one explains or even confirms this. You’re left guessing whether you’ve been penalised, whether you’ve fallen foul of random ranking noise, or whether the system simply decided your words were too risky to distribute.


This creates a strange psychological pressure, a sense that the algorithm is always monitoring, always judging, always reacting to signals you can’t see. People start changing their language — avoiding certain words, censoring themselves, softening political statements — not because they fear punishment, but because they fear invisibility. The algorithm becomes a silent editor, shaping what gets said and what gets abandoned before it’s even typed. 


This brings us to the uncomfortable truth at the centre of algorithmic censorship: your speech doesn’t need to be deleted to be controlled, it need only be hidden. The modern-day public squares of Instagram, TikTok, and Twitter are governed by unseen moderators who decide who gets to be town crier on any given day. These systems prioritise whatever creates the least friction for the platform — whatever keeps engagement high and advertisers comfortable — not what is meaningful for society.


In a world where the right to free speech is increasingly fraught, the question becomes: what does freedom of speech mean when the very infrastructure of public conversation is mediated by opaque algorithms? When a machine, rather than a moderator or community, decides which words are too risky and which ideas are too inconvenient to surface? When your right to speak is intact, but your ability to be heard depends on whether you’ve used a word the algorithm has been taught to punish?


Shadowbanning shows that in the digital age, censorship isn’t always loud, dramatic, or ideological. Sometimes it’s statistical, accidental, or the result of a machine learning the wrong lesson from the wrong data. Whatever its cause, the effect is the same: speech without reach, expression without audience, and a public sphere shaped by systems we aren’t allowed to see or understand. 


So as you stand in the empty exam hall, crumpled speech in hand, it hits you: perhaps the real threat to free expression isn’t the platform removing your voice — it’s the algorithm deciding it’s not worth amplifying.
