Substack, the popular newsletter platform, faces renewed criticism over its refusal to ban purported extremists. But the company remains committed to letting readers choose what information to consume rather than having paternalistic guardians of morality police content for them.
The latest round stems from an Atlantic article titled “Substack Has a Nazi Problem.” It highlighted authors on the site who allegedly promote white supremacist ideology using neo-Nazi imagery. In response, a coalition named Substackers Against Nazis formed urging the company to stop “platforming and monetizing Nazis.”
Of course, Substack prohibits illegal content like threats of violence. But offensive opinions that don’t directly incite harm have always been allowed. Substack sees itself as a neutral conduit for writers to reach opted-in audiences rather than a governor of acceptable rhetoric. Reader preferences dictate which content thrives rather than top-down diktats.
This purist free speech ethos frustrates advocates of aggressive content moderation. They blast Substack for providing any visibility to views they deem unacceptable. But Substack contends censorship often backfires by driving bigoted communities underground while allowing them to paint themselves as persecuted truth-tellers. Hate tends to fester in dark corners hidden from public scrutiny and counterargument.
Banning extremists also frequently starts down a slippery slope where social media platforms expand restrictions further and further at the behest of complaining interest groups. Sex workers, LGBT creators, critics of police, abortion advocates, and more get systematically marginalized once tech companies appoint themselves morality arbiters.
Not to mention that widening the scope of “forbidden” content requires enormous person-hours of monitoring, investigation, and subjective judgment calls around inflammatory rhetoric. Substack has neither the resources nor the inclination to police microaggressions and problematic speech at the expense of its core functions.
As their argument goes, randomly encountering fringe Nazi propaganda on Substack ranges from unlikely to impossible for most readers. You must specifically seek it out, because creators have total curation power over their own content and recommendation algorithms are nonexistent. Hyperbolic Nazi accusations also get flung at mainstream politicians and activists routinely, muddying exactly who qualifies as beyond the pale.
Substack’s founders have directly addressed controversies around their minimal moderation before. They wrote in 2020, “In most cases, we don’t think that censoring content is helpful, and in fact it often backfires. Heavy-handed censorship can draw more attention to content than it otherwise would have enjoyed…”
Their statement went on to reject positioning Substack’s employees as “moral police” and suggested alternatives exist for those seeking more speech restrictions. Recent comments from executives indicate no deviation from this mindset despite external pressures.
Many popular Substack authors across the political spectrum also released a letter supporting the status quo around expression. Signatories included high-profile writers like Edward Snowden, Bari Weiss, Freddie deBoer, and Michael Moynihan. They noted that, thanks to Substack’s opt-in model, feeds won’t randomly surface extremist perspectives the way social networks do. Individual writers can also tailor their own comment policies however they choose.
While objectionable content inevitably arises in systems protecting open discourse, Substack believes readers can navigate information and misinformation through critical thinking rather than centralized gatekeeping. The company’s employees should focus on empowering users through technological innovation rather than attempting to shield them from uncomfortable ideas.
If societal progress depends on countering bigotry with truth rather than suppression, then Substack intends to provide a platform for those battles to unfold.