Against Harmful Digital Satire
Free Speech, Digital Algorithms, and Queer Harm in Yevstifeyev v. Russia
In a previous piece for EJIL: Talk, I examined the European Court of Human Rights’ (ECtHR) judgment in Yevstifeyev and Others v. Russia, where the Court addressed two applications. The first concerned homophobic verbal assaults by a politician, which the Court rightly found to violate Articles 8 and 14 of the Convention. The second application, Petrov v. Russia, dealt with a satirical video depicting a ‘Gay hunt,’ which the Court held did not cross the ‘threshold of severity’ to trigger Convention protection. In my piece, I criticised the Court’s reasoning in Petrov, arguing that the judgment reflected an inconsistent application of the severity threshold, a problematic privileging of satirical context over violent content, and an inadequate consideration of collective harm to the queer community.
In response, a recent piece at Völkerrechtsblog has defended the Court’s reasoning in Petrov, contending that the video constituted protected satire and that the Court correctly applied the ‘reasonable reader’ standard while discounting audience hostility. This reply critiques that defence. While the defence raises valid concerns regarding doctrinal coherence and digital platform complexities, it ultimately overlooks how digital satire, when intertwined with harmful and homophobic tropes and symbolic violence, can perpetuate ‘structural harm’ and ‘discrimination’ under Article 14 of the Convention. This reply revisits three aspects: the misapplication of the ‘reasonable reader’ standard, the privileging of ‘intent’ over ‘effect’, and the misconstruction of the threshold of severity.
Not just ‘Digital Satire’
Before engaging with doctrinal standards, it is critical to examine the nature of the speech in Petrov. While cloaked in digital satire, the video bears all the hallmarks of what scholars define as dangerous speech: expression that increases the risk that its audience will condone or participate in violence against another group. In Petrov, the ‘Gay Hunt’ video dehumanises queer individuals through staged killings, slurs, and caricatures, turning satire into symbolic violence that normalises harm. To treat such expression as mere satire ignores its role in encouraging hostility, particularly in Russia, where anti-queer prejudice is already normalised. The power of satire lies in its ability to render harmful narratives more palatable (see Godioli, Young, & Fiori). As Roman Zinigrad argues, when hate and harm are presented humorously, they become more likely to be accepted, even by those who would reject the same content if expressed seriously. The ECtHR’s failure to engage with this legitimising function of humour, its ‘digestibility’, renders its reasoning in Petrov especially problematic.
Consider a hypothetical scenario where a video titled ‘Jew Hunt’ is released on social media in which actors dressed as Nazi soldiers jokingly capture and execute Jewish characters, mocking the victims as part of an alleged parody of antisemitic regimes. Even if the creators claimed the intent was to ridicule historical bigotry, such a video would rightly trigger condemnation, outrage, and criminal liability under hate speech and Holocaust-related laws in multiple jurisdictions. The reason is simple: some narratives, even when presented as satire, carry such a historically violent and dehumanising charge that humour cannot sanitise them. These laws target not only the denial of atrocities but also their trivialisation, glorification, or approval, and a parody of persecution can fall squarely into that category. The digital satire in the ‘Gay Hunt’ video follows this template, yet it received judicial indulgence under the guise of parody.
The ECtHR itself has recognised that humour can constitute symbolic violence when it perpetuates harmful stereotypes. In Canal 8 v. France, the Court upheld financial penalties against a broadcaster whose sketches, though humorous, stigmatised queer individuals and trivialised sexual harassment, particularly because unsuspecting individuals were “used” without consent. Similarly, in Féret v. Belgium, the Court held that xenophobic jokes during an election campaign could provoke public contempt and hate, while in Sousa Goucha v. Portugal, it emphasised the private status of those depicted and the importance of voluntary participation. Together, these cases demonstrate that the ECtHR weighs the real-world harms and risks of incitement behind ostensibly comedic content. By contrast, in Petrov, the ECtHR treated the ‘Gay Hunt’ video primarily as satire, downplaying the symbolic violence and risk of incitement.
The Reasonable Reader Standard: Misapplied and Under-Theorised
Supporters of the Petrov judgment argue that the Court implicitly invoked the ‘reasonable reader’ standard, derived from cases like Sousa Goucha and Verlagsgruppe v. Austria, to assess whether the satirical video incited hatred. Yet the invocation in Petrov is at best implicit and at worst doctrinally unmoored. Unlike Sousa Goucha, where the satire was directed at a public figure and the Court carefully weighed the broadcast’s social function, Petrov involved a vulnerable minority group historically (and contemporarily) subject to marginalisation and violence across European states. The absence of any express discussion of how a ‘reasonable viewer’ in Russia, a country with institutionalised homophobia, would interpret satire that depicts the killing of a gay man renders the Court’s analysis dangerously superficial.
Crucially, the digital nature of the dissemination requires a recalibration of the ‘reasonable reader’ framework. One central argument of the supporters of the Petrov judgment, that the audience was non-identifiable due to online circulation, overlooks how, in digital ecosystems, even heterogeneous exposure leads to foreseeable audience clusters. Instagram’s algorithms, like those of other major platforms, foster ideological micro-communities (see here and here). A post that engages with homophobic tropes is unlikely to circulate randomly; it is more likely to be shared within networks where prejudicial or exclusionary attitudes toward the queer community already find resonance, thereby heightening the risk of discriminatory interpretation and endorsement.
In Féret, the ECtHR held that speech targeting members of a ‘less informed public’, though not named, was still identifiable, and the speaker could foresee its likely impact. This principle translates powerfully to digital speech, where hashtags, follower networks, reposts, and comment cultures render the audience predictable, even if not individually named. In this light, the legality of the original post cannot be assessed in isolation from its foreseeable digital trajectory. The Court’s neglect of this foreseeability principle in Petrov is a doctrinal and empirical gap. Moreover, digital speech is not only received by primary viewers but is continuously redistributed, reframed, and meme-ified by secondary users. This spread is not accidental but built into digital communication, which means that satirical hate content is likely to evolve into harsher forms once posted. What matters is not only the initial 120,000 views but also the predictable afterlife of the content in algorithm-driven echo chambers that fuel anti-queer violence.
The argument that Instagram provides an ideologically fragmented space where satire is unlikely to reinforce bigotry misconstrues how hate surfaces digitally. As shown in Petrov, the video generated hundreds of affirming comments endorsing homophobic violence. Zinigrad notes that this reception magnifies ‘harm’ by legitimising bigotry, especially where state narratives themselves reinforce discrimination. The presumption that digital platforms diffuse meaning, rather than concentrate bias, is a fiction unsupported by digital media research (see here, here, and here).
Intent v. Effect: An Overcorrection of Precedent
The defence of the Court’s judgment places excessive emphasis on the satirical intent of the video creator, citing Jersild v. Denmark, where a journalist was exonerated for broadcasting racist speech as part of critical reporting. However, Jersild involved explicit editorial distancing and a journalistic framework. In Petrov, the speaker was a comedian, not a journalist, and the violent fantasy against the queer community was not marked as parody in a way that would negate its harmful effect. This distinction is crucial because the ECtHR has consistently held that intent is not determinative when evaluating hate speech. In Erbakan v. Turkey, the Court affirmed that even religiously framed speech with political aims may be restricted if it incites division. Similarly, in Soulas v. France, the Court upheld convictions despite the authors’ claim that their xenophobic writings were meant to provoke debate, not hatred.
In Belkacem v. Belgium, the Court upheld criminal sanctions against a video that incited hatred against non-Muslims, rejecting the defence that it was merely polemical. Likewise, in Perinçek v. Switzerland, the Court acknowledged that expression denying historical atrocities, even under the guise of political discourse, could provoke deep social harm and be limited accordingly. This logic applies with equal force to satirical denial or parody of queer suffering. In the digital context, intent becomes even less reliable as a safeguard. Digital speech is fast-moving and decontextualised, travelling across varied interpretive lenses. Thus, a jurisprudence that privileges intent over effect is particularly ill-suited for digital platforms. The fact that the video was framed as satire does not dilute its real-world resonance as dangerous digital speech.
Threshold of Severity: Conceptual Inconsistency and Selective Application
The ECtHR has long recognised that negative stereotyping of a group can, under certain circumstances, implicate Article 8. In Aksu v. Turkey, the Grand Chamber held that such stereotyping may impact an individual’s ‘private life’ by undermining their self-worth and social standing. In Budinova and Chaprazov, and Behar and Gutman, the Court reiterated that public statements stigmatising vulnerable communities may meet the threshold of severity even when not personally directed. In Petrov, however, the Court found that the satirical video, though containing homophobic language and imagery, did not meet this threshold. It justified this conclusion on the basis that the video was not targeted at the applicant, that it formed part of a broader political debate, and that it parodied state-sponsored homophobia rather than promoting it. However, this rationale ignores how digital satire, especially when violent, can validate offline harm. The video’s indirect dissemination, through downloads, shares, and algorithmic visibility, amplified its reach among those predisposed to act on its message. The 714 comments were not abstract digital chatter; they were proof of uptake, alignment, and potential mobilisation.
The ECtHR’s contrasting approach in Canal 8 is instructive. There, the Court recognised that humour may constitute symbolic violence when it perpetuates stereotypes and normalises discriminatory behaviour. It upheld domestic sanctions, highlighting the importance of communication context, audience impact, and the absence of contribution to public discourse, all of which were equally present in Petrov, yet disregarded by the Court. The Court’s distinction between satire that parodies discrimination and speech that reinforces it becomes untenable when the satire adopts the very hateful language and imagery of persecution. In Lilliendahl v. Iceland, homophobic Facebook comments, despite being allegedly humorous, were found to justify criminal sanction. Likewise, in Women’s Initiatives Supporting Group v. Georgia, the Court faulted authorities for failing to prevent homophobic violence, emphasising that governments must not remain passive in the face of speech that risks enabling broader discrimination.
By contrast, the Petrov judgment adopts a ‘minimalist threshold’ and treats digital harmful satire as immunised from scrutiny regardless of context or consequence. This approach threatens to insulate future instances of coded hate speech that rely on irony, parody, or humour to veil discriminatory narratives. The result is a jurisprudence that undervalues digital speech’s virality, endurance, and harm.
ECtHR & Future of Digital Satire
In essence, the defence of Petrov adopts a conventional structure: intent-based analysis, a satire defence, and a vague notion of audience neutrality. But in digital spaces, this framework collapses. The digital space has fundamentally reshaped the way harmful content, especially satirical expression, operates, circulates, and inflicts harm. In this environment, satire is not a neutral or universally understood literary device. It is a form of expression that can be rapidly decontextualised, algorithmically promoted, and socially legitimised, especially when targeting already marginalised communities. In this regard, the ECtHR’s rationale in Petrov neglects how digital architecture intensifies ideological clustering, how dissemination pathways are foreseeable, and how digital satire can operate as a vector for hate. To treat digitally harmful satire as legally benign based on outdated assumptions of audience unpredictability is not just an error; it is jurisprudential negligence. It is not enough to ask whether a video intends to offend or provoke; we must also ask what it does in the world it enters. That, ultimately, is the ‘threshold of severity’ the ECtHR should have measured, but did not.

Sarthak Gupta is a lawyer currently serving as a Judicial Law Clerk-cum-Research Associate to Justice Sandeep Mehta at the Supreme Court of India. He is a Helton Fellow at the American Society of International Law and an editor at Columbia University’s Global Freedom of Expression.