In late 2024, UNESCO shared that 71% of young people globally access Comprehensive Sexuality Education (CSE) information online – so why are the educators, sexperts, and content creators sharing digital sex ed materials online talking about ding-dongs, nip-nops, follicle friends, and sucking cork?! You might more easily recognise what these words refer to if the correct terms were used: penis, nipples, pubic hair, and oral sex. All of these words are normal, anatomically correct, and describe the topic clearly – essential qualities for talking about intimate matters of the body – yet if you post content mentioning these words on TikTok, Instagram, and many other platforms, or even in emails, you risk being (shadow)banned or having your account deleted without warning.
The answer to why this happens lies in Automated Content Recognition (ACR) – or, as we like to call it, digital suppression and censorship – which Big Tech applies at scale. ACR relies on Artificial Intelligence (AI), combining Machine Learning (ML) methods that learn over time what is and is not ‘appropriate’ to share, Natural Language Processing (NLP) to understand the language and text added to posts, and image recognition to analyse visuals posted online. These techniques are applied to the very platforms that 71% of young people use to access CSE information. The phenomenon does not stop at CSE materials; content featuring LGBTQIA+ topics, abortion rights and healthcare, reproductive rights and health(care), maternal and post-partum care, menstrual health, menopause products, sexual wellness, sex workers, gender-affirming care – basically anything that mentions sex, reproductive organs, or bodily autonomy – is flagged, blocked, and censored by ACR mechanisms on social media platforms, digital advertising platforms, search engines, email service providers, and other digital services. Frequently, such SRHR content is miscategorised as ‘mature’ or ‘pornographic’ and removed, eliminating access to essential CSE, abortion, and reproductive health and rights information online.
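To make the mechanism concrete, here is a deliberately simplified sketch of keyword-based text filtering. This is a toy model written for illustration – not any platform’s actual system, and the blocklist is hypothetical – but it shows the basic failure mode: clinically correct terms trip the filter, while coded AlgoSpeak slips past.

```python
# Toy illustration of keyword-based text moderation (NOT any
# platform's real system). Real ACR pipelines use trained ML
# classifiers, but the failure mode is similar: correct anatomical
# terms are flagged while coded AlgoSpeak variants are not.

BLOCKLIST = {"sex", "penis", "nipples"}  # hypothetical moderation list

def is_flagged(post: str) -> bool:
    """Flag a post if any of its words appears on the blocklist."""
    tokens = post.lower().replace(",", " ").replace(".", " ").split()
    return any(token in BLOCKLIST for token in tokens)

print(is_flagged("Comprehensive sex education saves lives"))    # True
print(is_flagged("Comprehensive s*ggs education saves lives"))  # False
```

The second post carries exactly the same meaning to a human reader, yet the naive matcher lets it through – which is precisely why creators resort to coded spellings, and why platforms and creators end up in a constant cat-and-mouse game.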
For SRHR practitioners, pleasure advocates, and sexuality educators, sharing information online is a crucial aspect of their work and an uphill battle. It forces creators to build private, login-only platforms like Frida uncensored to share intimate health content, and to use AlgoSpeak – the coded language used by content creators and those sharing CSE and SRHR content online – to navigate digital censorship, content take-downs, and shadowbanning. At Share-Net International, five members of the Digital Rights for SRHR Community of Practice addressed this very topic at RightsCon 2025, the world’s leading conference for human rights in the digital age, hosted by digital rights organisation Access Now.
The session – It’s that f0rkin’ horny AI again: AlgoSpeak and the evolution of how we talk about Sex online – explored how algorithmic censorship silences SRHR content, reinforces and promotes stigma, fuels misinformation, and disproportionately impacts marginalised communities. Rhian Farnworth, SRHR Advisor and Digital Rights Specialist at KIT Institute, opened the panel by unpacking the history of online sex censorship and its relation to Big Tech suppression today. The online suppression of sex-related communications goes far back, rooted in the American Radio Act of 1927 – the building block of the Communications Act of 1934 and, subsequently, the Telecommunications Act and Communications Decency Act of 1996, some of the first legislation to regulate the internet. The Communications Decency Act strictly “prohibited posting ‘indecent’ or ‘patently offensive’ materials in a public forum on the Internet, including web pages, newsgroups, chat rooms, or online discussion lists”1, while the Communications Act banned ‘obscene, lewd, lascivious, filthy, [or] indecent’ materials from being shared. Since then, new laws, acts, and regulations have been introduced regularly, taking us to the regulated digital spaces we know today.
Shannon Mathew of Share-Net International discussed creative resistance to censorship on- and offline, exploring the creative and joyful methods Netizens use to bypass it. Quoting Chinese-American artist and engineer Xiaowei Wang of the Future of Memory Project – “I think poetry and humor allow us to constantly subvert power” – Shannon elaborated on poetry, humour, and memes as powerful tools for fighting censorship and the oppressive systems shutting down free speech, sharing examples such as the creative formats and designs used for Tarot card readings on YouTube. Linking back to circumventing censorship, we can also take lessons from Turkish TV dramas, which use phrases like ‘I’m falling into grapes’ to indicate drinking wine – a nice example of another medium using AlgoSpeak!
Drawing on real-life examples of sex-education censorship, Al Albertson, Co-Director of Queer Sex Ed Community Curriculum (QSEDCC), shared the difficulties of sharing sex-education content. QSEDCC has been repeatedly shadowbanned and had content removed and blocked, resulting in their curriculum reaching few people, the delegitimisation of their work as it is pushed offline, restrictions on content that platforms deem ‘inappropriate’, and deep feelings of isolation among their creators.
RNW Media’s Media Innovation Lead, Surabhi Srivastava, discussed the work RNW is doing with digital content creators like Love Matters Kenya and with digital sex education programmes in Georgia. Both programmes received multiple content violation warnings, and much of their content was removed from platforms including TikTok, Instagram, and Facebook. Beyond being deeply disappointing, such restrictions leave content creators burnt out and push them into measures like self-censorship to stop their content from being removed.
But what are the mechanics behind these examples of censorship of sex ed and SRHR information? The team’s resident Data Scientist and Developer, Tabi Trahan of inroads, explained the basics of ACR, offering insights into how the automated, programmatic suppression of SRHR content is orchestrated from a technical perspective.
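We can’t reproduce Tabi’s full walkthrough here, but one piece of the mechanics – shadowbanning – can be sketched in a few lines. Everything below is invented for illustration (the term list, the scoring function, the threshold); real systems use trained classifiers. The structural point it demonstrates is that a shadowbanned post is never deleted: its distribution is silently throttled, so the creator receives no warning and has nothing to appeal.

```python
# Toy sketch of shadowbanning via automated scoring. All terms,
# scores, and thresholds are invented for illustration; the key
# structure is that content is not removed, its reach is quietly cut.

SENSITIVE_TERMS = {"abortion", "menstrual", "contraception"}  # hypothetical

def moderation_score(text: str) -> float:
    """Fraction of words that match the sensitive-terms list."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = sum(w in SENSITIVE_TERMS for w in words)
    return hits / max(len(words), 1)

def distribution_weight(text: str, threshold: float = 0.05) -> float:
    """Posts scoring above the threshold get near-zero reach,
    with no takedown notice and no notification to the creator."""
    return 0.01 if moderation_score(text) > threshold else 1.0

print(distribution_weight("Where to access safe abortion care"))  # 0.01
print(distribution_weight("Our favourite recipes this week"))     # 1.0
```

Because the post technically stays online, creators often only discover the suppression through collapsing view counts – one reason the experiences of QSEDCC and RNW Media’s partners above are so hard to document and contest.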
Clearly, there is a systemic issue of digital sex-education and SRHR censorship – but what can content creators do to fight back and get SRHR content out there, and what workarounds and tools can we utilise to combat digital suppression? In response, we launched AlgoSpeak.net, a living, crowd-sourced dictionary of AlgoSpeak terminology used by content creators, and an information source on AlgoSpeak and the history of coded language. The site, which sparked in-depth discussion in the RightsCon session, serves as a hub for those wishing to decipher AlgoSpeak, learn what its terminology means, and help document evolving AlgoSpeak terms by submitting entries to the dictionary.
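The substitution idea at the heart of such a dictionary is simple enough to sketch. The function below is our own illustrative toy, not AlgoSpeak.net’s implementation; the term pairs come from the examples at the top of this article, plus the widely used ‘seggs’ coinage.

```python
# Illustrative sketch of AlgoSpeak substitution: swap flagged terms
# for coded equivalents. These pairs come from this article;
# AlgoSpeak.net crowd-sources a much larger, evolving dictionary.

ALGOSPEAK = {
    "oral sex": "sucking cork",        # multi-word terms first, so the
    "pubic hair": "follicle friends",  # shorter "sex" entry below
    "penis": "ding-dong",              # doesn't clobber them
    "nipples": "nip-nops",
    "sex": "seggs",
}

def encode(text: str) -> str:
    """Replace each flagged term with its AlgoSpeak equivalent."""
    for term, coded in ALGOSPEAK.items():
        text = text.replace(term, coded)
    return text

print(encode("a lesson on pubic hair and nipples"))
# -> "a lesson on follicle friends and nip-nops"
```

The ordering of the dictionary matters: multi-word terms must be substituted before their sub-strings, which is one small example of why maintaining a shared, evolving dictionary is a community effort rather than a one-off list.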
After the session, many questions and discussions about AlgoSpeak.net were raised, bringing rich perspectives and experiences to the table and pointing to directions for AlgoSpeak.net’s future.