What is the aim of this Code of Conduct?
Each of the IT companies (Facebook, Microsoft, Twitter and YouTube) that signed this Code of Conduct is committed to countering the spread of illegal hate speech online, and to having rules that ban the promotion of violence and hatred.
When they receive a request to remove content from their online platform, the IT companies will assess the request against their rules and community guidelines and, where applicable, national laws on combating racism and xenophobia. They then decide whether the content constitutes illegal online hate speech and whether it needs to be removed.
The aim of the Code is to make sure that requests to remove content are dealt with speedily. The companies have committed to reviewing the majority of these requests in less than 24 hours and to removing the content if necessary.
What is the definition of illegal hate speech?
Illegal hate speech is defined in EU law (Framework Decision on combating certain forms and expressions of racism and xenophobia by means of criminal law) as the public incitement to violence or hatred on the basis of certain characteristics, including race, colour, religion, descent and national or ethnic origin.
Will the Code of Conduct lead to censorship?
No. The Code of Conduct's aim is to tackle online hate speech that is already illegal. The same rules apply both online and offline: content that is illegal in the offline world should not be allowed to remain available in the online world.
The Code's aim is also to defend the right to freedom of expression. The results of a 2016 European survey showed that 75% of those following or participating in online debates had come across episodes of abuse, threat or hate speech aimed at journalists. Nearly half of these people said that this deterred them from engaging in online discussions. These results show that illegal hate speech should be effectively removed from social media, as it might otherwise limit other people's right to freedom of expression.
Isn't it for courts to decide what is illegal?
Yes, interpreting the law is and remains the responsibility of national courts. At the same time, IT companies have to act in line with national laws, in particular those transposing the Framework Decision on combating racism and xenophobia and the 2000 e-Commerce Directive.
When they receive a valid alert about content allegedly containing illegal hate speech, the IT companies have to assess it, not only against their rules and community guidelines, but, where necessary, against applicable national law (including that implementing EU law), which fully complies with the principle of freedom of expression.
Should one take down ‘I hate you'?
Not every offensive or controversial statement or content is illegal. As the European Court of Human Rights said, ‘freedom of expression ... is applicable not only to “information” or “ideas” that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State or any sector of the population'.
In the Code, both the IT Companies and the European Commission also stress the need to defend the right to freedom of expression.
Assessing what could be illegal hate speech includes taking into account criteria such as the purpose and context of the expression. The expression ‘I hate you' would not appear to qualify as illegal hate speech, unless it is combined with other statements, for example a threat of violence referring to race, colour, religion, descent or national or ethnic origin, among others.
What prevents government abuse?
The Code of Conduct is a voluntary commitment made by Facebook, Twitter, YouTube and Microsoft. It is not a legal document and does not give governments the right to take down content.
The Code cannot be used to make these IT Companies take down content that does not count as illegal hate speech, or any type of speech that is protected by the right to freedom of expression set out in the EU Charter of Fundamental Rights.
How did the Commission evaluate the implementation of the Code of Conduct?
The Code of Conduct is evaluated through a monitoring exercise set up in collaboration with a network of civil society organisations located in different EU countries. Using a commonly agreed methodology, these organisations test how the IT companies apply the Code of Conduct in practice. They do this by regularly sending the four IT companies requests to remove content from their online platforms. The organisations participating in the monitoring exercise then record how their requests are handled: how long it takes the IT companies to assess each request, how the IT companies respond to it, and the feedback they receive from the IT companies.