On January 1, 2018, the so-called NetzDG came into force in Germany: one of the first laws in the world to require large social media platforms to promptly remove posts containing disinformation and so-called hate speech, a broad category that includes expressions of hatred and calls for violence against people from historically marginalized minorities and groups. Since then, several other countries have begun to consider the need for similar rules, and the European Parliament has approved a new law on the moderation of dangerous content that should replace the NetzDG in Germany when it enters into force.
The main purpose of the law was to make sure that all social networks with more than two million users complied with Germany’s libel, hate speech and threat laws, which are among the toughest in the world and which were written after the experience of World War II and Nazism. According to the German Justice Minister at the time, Heiko Maas, social media platforms – and in particular Facebook and Twitter – were not doing enough to remove content deemed illegal under German law.
Already at the time, the law was heavily criticized by digital rights and freedom of expression organizations, because it was written in very vague terms and required social networks to remove “clearly illegal” content within 24 hours. This, according to experts, could lead platforms to preemptively censor an excessive amount of content in order to avoid fines, which could reach a maximum of 50 million euros. And it could inspire countries more authoritarian than Germany to pass similar laws, with the aim, however, of forcing platforms to censor political opposition.
The second fear materialized almost immediately: already in 2017 Russia passed a law on hate speech that explicitly drew on the German law, and in the following years Venezuela, the Philippines, Malaysia and Turkey did the same. The Electronic Frontier Foundation, a longstanding non-profit organization for the defense of freedom of expression, described the Turkish law as “the worst version of the NetzDG ever issued”. In Germany, however, contrary to what was feared, the law has not led to a situation of systematic censorship.
The only platform to have been fined so far has been Facebook. According to the German Federal Office of Justice, which on July 2, 2019 requested that the company be fined a total of 2.3 million euros, only a small part of the illegal content containing hate speech and disinformation that had been reported to Facebook had actually been removed from the platform. The fine was very small compared to the billions of euros in revenue that Facebook earns each year but, as the journalist Janosch Delcker wrote in Politico when reporting the news, it had “a symbolic weight, given that it marked the first case of a European country sanctioning an American social media giant for not being transparent about how it handles hate speech.” It is not clear whether Facebook has paid the fine or whether the German courts are still considering its appeal.
According to the German magazine Capital, the problem is not so much censorship as the fact that, contrary to what lawmakers hoped when writing the law in 2017, the NetzDG does not seem to have made much progress in solving, or even curbing, the problem of incitement to hatred online. On the contrary, in recent years Germany has seen several cases of real-world violence whose perpetrators had repeatedly shown signs of strong radicalization toward far-right ideals on their social media profiles: the murder of the politician Walter Lübcke in 2019, but also the attacks on the synagogue in Halle and in Hanau, in 2019 and 2020 respectively.
– Read also: Should we entrust content moderation to users?
To allow law enforcement agencies to more easily identify potential violations, at the beginning of 2022 the German parliament tried to amend the 2017 text to oblige platforms not only to remove hate speech, but also to communicate the IP address – i.e. the number that identifies an internet connection – of the people who had published it to the Federal Criminal Police Office, which would then pass the data to a unit dedicated specifically to illegal online content. The new law, however, was blocked by a German court after challenges from Meta and Google, which argued that the text was partially in violation of European legislation.
Meanwhile, also given the vagueness of the terms contained in the law, people who experience hate speech online continue to depend mainly on the platforms’ content moderation decisions, which are sometimes made by real people but are often made by algorithms that are still a long way from understanding the context and tone in which a comment is posted or a message is sent.
– Read also: It’s not easy to make the Internet a safer place for children