Online content moderation

Current challenges in detecting hate speech

Online hate speech is a growing problem in today's digitalised societies. Connecting with the world online can be a wonderful way to engage with others and bring us closer as a society. But the internet also has a darker side, as a space for hate and division. People use online platforms to insult and offend, to harm and threaten. Women, Black people, Jews and Roma are frequent targets of online hate speech. Online hate proliferates where human content moderators miss offensive content, and detection algorithms are themselves prone to errors: they may compound mistakes over time and can even end up amplifying online hate. This report presents the challenges in identifying and detecting online hate. Hate of any kind should not be tolerated, whether online or offline. The report discusses the implications for fundamental rights to support the creation of a rights-compliant digital environment.

CSD contributed to the report by carrying out the research in Bulgaria and producing background material for the comparative analysis.

The full report is available on the FRA website.