Online Content Moderation – Current challenges in detecting hate speech

The EU has updated its laws and introduced policies to tackle illegal content online, notably through the Digital Services Act (DSA), which aims to regulate online content, including hate speech, more effectively. However, these changes are relatively recent, and uncertainties remain about how to better protect human rights online: how to combat online hate while safeguarding freedom of expression, and how to implement existing and newly developed laws efficiently.

This report aims to assess whether standard tools to address online hate speech, hereafter referred to as 'online hate', are effective, by examining manifestations of online hate that persist after social media platforms have applied their content moderation controls. It presents findings covering four social media platforms – Telegram, X (formerly Twitter), Reddit and YouTube. The platforms were selected based on their accessibility for research purposes, their popularity (i.e. audience reach) and the assumed magnitude of hate speech on them.