Since May 2016, Facebook, Twitter, YouTube and Microsoft have committed to combatting the spread of illegal hate speech in Europe through the Code of Conduct. The third monitoring round shows that the companies are now increasingly fulfilling their commitment to remove the majority of illegal hate speech within 24 hours. However, some challenges remain, in particular the lack of systematic feedback to users.
Google+ announced today that they are joining the Code of Conduct, and Facebook confirmed that Instagram would also do so, thus further expanding the number of actors covered by it.
Andrus Ansip, European Commission Vice President for the Digital Single Market, welcomed these improvements: "Today's results clearly show that online platforms take seriously their commitment to review notifications and remove illegal hate speech within 24 hours. I strongly encourage IT companies to improve transparency and feedback to users, in line with the guidance we published last year. It is also important that safeguards are in place to avoid over-removal and protect fundamental rights such as freedom of speech."
Vĕra Jourová, EU Commissioner for Justice, Consumers and Gender Equality, said: "The Internet must be a safe place, free from illegal hate speech, free from xenophobic and racist content. The Code of Conduct is now proving to be a valuable tool to tackle illegal content quickly and efficiently. This shows that where there is strong collaboration between technology companies, civil society and policy makers we can get results, and at the same time preserve freedom of speech. I expect IT companies to show similar determination when working on other important issues, such as the fight against terrorism or unfavourable terms and conditions for their users."
Since its adoption in May 2016, the Code of Conduct has delivered steady progress in the removal of notified illegal content, as today's evaluation shows:
- On average, IT companies removed 70% of all the illegal hate speech notified to them by the NGOs and public bodies participating in the evaluation. This rate has steadily increased, from 28% in the first monitoring round in 2016 to 59% in the second monitoring exercise in May 2017.
- Today, all participating IT companies fully meet the target of reviewing the majority of notifications within 24 hours, reaching an average of more than 81%. This figure has doubled since the first monitoring round and is up from the 51% of notifications assessed within 24 hours in the previous monitoring round.
While the main commitments in the Code of Conduct have been fulfilled, further improvements need to be achieved in the following areas:
- Feedback to users is still lacking for nearly a third of notifications on average, with response rates varying across IT companies. Transparency and feedback to users are areas where further improvements should be made.
- The Code of Conduct complements legislation fighting racism and xenophobia, which requires authors of illegal hate speech offences - whether online or offline - to be effectively prosecuted. On average, one in five cases reported to companies was also reported by NGOs to the police or prosecutors. This figure has more than doubled since the last monitoring report. Such cases need to be promptly investigated by the police. The Commission has provided a network for cooperation and for the exchange of good practices for national authorities, civil society and companies, as well as targeted financial support and operational guidance. About two thirds of the Member States now have in place a national contact point responsible for online hate speech. A dedicated dialogue between competent Member State authorities and IT companies is envisaged for spring 2018.
The Commission will continue to monitor regularly the implementation of the Code by the participating IT companies, with the help of civil society organisations, and aims to widen it to further online platforms. The Commission will consider additional measures if efforts stall or slow down.
The Framework Decision on Combatting Racism and Xenophobia criminalises the public incitement to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin. Hate speech as defined in this Framework Decision is a criminal offence also when it occurs online.
The EU, its Member States, social media companies and other platforms all share a collective responsibility to promote and facilitate freedom of expression in the online world. At the same time, all these actors have a responsibility to ensure that the internet does not become a safe haven for violence and hatred.
To respond to the proliferation of racist and xenophobic hate speech online, the European Commission and four major IT companies (Facebook, Microsoft, Twitter and YouTube) presented a "Code of Conduct on countering illegal hate speech online" in May 2016.
This third evaluation was carried out by NGOs and public bodies in 27 Member States, which issued the notifications. On 7 December 2016 the Commission presented the results of a first monitoring exercise to evaluate the implementation of the Code of Conduct. On 1 June 2017, the results of a second monitoring round were published.
On 28 September, the Commission adopted a Communication providing guidance to platforms on notice-and-action procedures to tackle illegal content online. The importance of countering illegal hate speech online and the need to continue working on the implementation of the Code of Conduct feature prominently in this guidance document.
On 9 January 2018, several European Commissioners met with representatives of online platforms to discuss the progress made in tackling the spread of illegal content online, including online terrorist propaganda and xenophobic, racist illegal hate speech, as well as breaches of intellectual property rights (see joint statement).