The virus of hate
Read the article by CEJI Communication Officer and Hate Speech Advisor Julia Mozer about hate speech during the Corona crisis, published in French in June 2020 in Signes des temps, the magazine of BePax.
Hate speech is a term we all know, or think we know, but it is more complex than it first appears. Research on the topic has grown in recent years as hate became more prominent online. The Coronavirus pandemic has not left communities unaffected: new types of hate speech have appeared and an unimaginable amount of misinformation has flooded social media – and despite all the efforts of policymakers, social media companies, CSOs and activists, the issue of hate speech persists. Why is that so, and what can we do about it? We will discuss this below.
What is hate speech?
Even though the term is used more and more, hate speech still lacks a universal definition: it is defined differently in international legislation, in national legislation, in the Terms and Conditions of social media companies, and in the circles of CSOs and activists. Since there is no single definition, it is better to view the variety of definitions as a spectrum: stricter, legal definitions sit at one end, while broader definitions, preferred by NGOs and victim support services, sit at the other.
An example of a broad definition is that of the Facing Facts Partnership, which understands hate speech as “any communication which is potentially harmful in a given context to an individual or group based on one or more of their characteristics. It may be illegal or legal according to local laws. We recognise the fundamental right to free speech and encourage positive and proportionate responses that balance free speech with the right to be protected from targeted abuse.”
Broader definitions tend to take a victim-centred approach by allowing the victim to decide whether a piece of content or certain speech is hateful. This is also key to why hate speech is hard to define universally: the harm it implies is perceived very subjectively, as those who do not belong to the targeted group are often unable to fully grasp it. This is natural: we all understand better the contexts and nuances we are familiar with.
Legal definitions are strongly encouraged for recording and advocacy purposes. Public authorities, prosecutors and the police tend to prefer these stricter definitions, which are less subjective but also give less space and agency to the victim. It is worth remembering, however, that stricter, legal definitions can omit a significant amount of hate speech that also needs to be tackled, because hate speech causes harm regardless of its legality.
One of the most important legal definitions stems from EU Framework Decision 2008/913/JHA, which defines hate speech as “public incitement to hate or to violence”. Importantly, the definition also covers “publicly condoning, denying or grossly trivialising crimes of genocide, crimes against humanity and war crimes”.
Whether hate speech is illegal or legal (falling into the ‘grey zone’, as it is often called), it does not exist in isolation. To become hateful, it often needs context: on its own, it may not convey the hateful message intended. A simple example is an image of an exploding bomb, which, when posted under an article about migration, can convey a hateful message. Context is also the reason why moderators with a different cultural background than the content they are reviewing can struggle to identify whether something is hateful. Finally, context can also be understood as the wider context of the online world, where hate speech intertwines with other phenomena.
Hate speech in the context of the web
Hate has always existed and it is unlikely ever to go away. However, with the rise of social media, hate speech has become much more visible and has been evolving into new forms and directions. When combined with other online harms, it becomes even more damaging. It often goes hand in hand with harassment and cyber-bullying, where individuals are targeted and repeatedly messaged with the same hateful content attacking one or more of their characteristics. These attacks can be particularly painful as they tend to target the essence of the individual, with repetition and escalation worsening the impact. Hate speech has also been an essential component of radicalisation, meticulously applied to create an “us versus them” narrative.
Finally, hate speech, implicit or explicit, has been part of misinformation, contributing to a hostile environment against certain groups and channelling fringe ideas into the mainstream. When we read false news about “dirty migrants bringing in diseases” or about an “international group making huge profits from this”, these statements play into our unconscious biases and, drip by drip, foster negative feelings towards a given group. Such messages contribute to a hostile environment, and for those who belong to a targeted community, they can increase the sense of insecurity and alienation. Ultimately, hate speech causes serious damage to the fabric of society by undermining our trust in each other.
Hate speech during the Corona crisis
The current Corona crisis has put our global communities in an unprecedented situation, impacting the lives of every single individual one way or another. Our online spheres are no exception. How does this pandemic impact hatred online, and where will it lead us?
As the virus and its spread intensified in Europe, with lockdowns introduced first in Italy and soon after in other countries, there seemed to be a pause in online hatred. It was as if the people who usually disseminate hatred were too busy figuring out how to act in the face of an unusual enemy, one invisible to the human eye and impossible to connect to a group of people. For a short period, there was space to breathe as we all tried to adapt to a series of new measures aimed at protecting everyone: it seemed that, for once, we were all in this together.
It did not last long.
The moment people realised the challenges posed by the crisis and their economic consequences, they rushed online to find someone to blame and an explanation for what was happening. Online misinformation spiked like never before, even as social media companies tried to accelerate their fact-checking activities and increase their transparency. Hate, as usual, lurked in the background. Old stereotypes were revived, blaming minority groups for the spread of the virus: from myths that Jews or Asian minorities are behind it, to accusations that Roma people and refugees are bringing in the virus and spreading it further in Europe. In a recent study, the SCAN project, in which CEJI is a partner, thoroughly examined current hate trends in light of the Corona crisis, investigating the different ways scapegoating appears across countries, often shifting the blame from one group to another, or targeting several groups simultaneously. Resurgent antisemitism and anti-elite sentiment have been intensifying across EU countries, along with a specific type of hatred: anti-Asian.
From the very beginning of the pandemic, Asian communities across the globe have been targeted by hate attacks[i], both verbal and physical, as people desperately looked for someone to blame in times of uncertainty. Chinese and Asian communities reported significantly more racist and hostile behaviour than before, leading to the birth of the hashtag #JeNeSuisPasUnVirus in France. As hate crime often goes unreported, it can safely be assumed that the real number of cases is even higher.
It is certain that the current health crisis will be followed by an even greater economic crisis, affecting millions across the continent. We know that, historically, economic depressions are a breeding ground for hatred and scapegoating; we can therefore expect to see a continuation and escalation of online hatred in the coming months, if not years.
What can we do?
First of all, it is essential to raise awareness across the whole spectrum of society about hate speech, related online phenomena and their harms, from youth all the way to older generations, who are the most likely to share misinformation online.
Civil society, which has a significant role in monitoring social media and in supporting victims, needs to be strengthened so that it can continue its vital work. There is a clear need for specialised training on the topic for law enforcement agencies, combined with training on unconscious bias, so that they can fulfil their role in handling reports of hate speech. And even while our health and economic systems stagger, policymakers need to prioritise fighting hate, as there will inevitably be a resurgence when the recession hits.
Finally, what can we do as individuals? Each of us has our own sphere of influence where we can make a change. We can start by educating ourselves, for example by completing one of the Facing Facts Online courses on hate speech, available in English, French, German and Italian. We can also encourage others not to be bystanders when they see a hateful comment online. The work might seem overwhelming, but we don’t need to be experts in countering hate speech; we can simply start by expressing solidarity with those who are being targeted, so that slowly, step by step, we strengthen our collective mental immunity and overcome the virus of hate.
[i] https://www.haaretz.com/us-news/.premium-the-jews-control-the-chinese-labs-that-created-coronavirus-1.8809635; https://www.timesofisrael.com/conspiracy-theory-that-jews-created-virus-spreads-on-social-media-adl-says/