BERLIN — Social media companies operating in Germany face fines of as much as $57 million if they do not delete illegal, racist or slanderous comments and posts within 24 hours under a law passed on Friday.
The law reinforces Germany’s position as one of the most aggressive countries in the Western world at forcing companies like Facebook, Google and Twitter to crack down on hate speech and other extremist messaging on their digital platforms.
But the new rules have also raised questions about freedom of expression. Digital and human rights groups, as well as the companies themselves, opposed the law on the grounds that it placed limits on individuals’ right to free expression. Critics also said the legislation shifted the burden of responsibility from the courts to the providers, a criticism that led to last-minute changes in the bill’s wording.
Technology companies and free speech advocates argue that there is a fine line between what policy makers deem hate speech and what is legitimate freedom of expression, and social networks say they do not want to be forced to censor those who use their services. Silicon Valley companies also deny that they are failing to meet countries’ demands to remove suspected hate speech online.
“With this law, we put an end to the verbal law of the jungle on the internet and protect the freedom of expression for all,” said Heiko Maas, the justice minister, who championed the legislation. “We are ensuring that everyone can express their opinion freely, without being insulted or threatened.”
“That is not a limitation, but a prerequisite for freedom of expression,” he continued.
The law will take effect in October, less than a month after nationwide elections, and will apply to social media sites with more than two million users in Germany.
It will require companies including Facebook, Twitter and Google, which owns YouTube, to remove any content that is illegal in Germany — such as Nazi symbols or Holocaust denial — within 24 hours of it being brought to their attention.
The law allows the companies up to seven days to decide on content that has been flagged as offensive but that may not be clearly defamatory or inciting violence. Companies that persistently fail to address complaints by taking too long to delete illegal content face fines that start at 5 million euros, or $5.7 million, and could rise to as much as 50 million euros, or $57 million.
Every six months, companies will have to publicly report the number of complaints they have received and how they have handled them.
In Germany, which has some of the most stringent anti-hate speech laws in the Western world, a study published this year found that Facebook and Twitter had failed to meet a national target of removing 70 percent of online hate speech within 24 hours of being alerted to its presence.
The report noted that while the two companies eventually erased almost all of the illegal hate speech, Facebook managed to remove only 39 percent within 24 hours, as demanded by the German authorities. Twitter met that deadline in 1 percent of instances. YouTube fared significantly better, removing 90 percent of flagged content within a day of being notified.
Facebook said on Friday that the company shared the German government’s goal of fighting hate speech and had “been working hard” to resolve the issue of illegal content. The company announced in May that it would nearly double, to 7,500, the number of employees worldwide devoted to clearing its site of flagged postings. It was also trying to improve the processes by which users could report problems, a spokesman said.
Twitter declined to comment, while Google did not immediately respond to a request for comment.
The standoff between tech companies and politicians is most acute in Europe, where freedom of expression rights are less comprehensive than in the United States, and where policy makers have often bristled at Silicon Valley’s dominance of people’s digital lives.
But advocacy groups in Europe have raised concerns over the new German law.
Mirko Hohmann and Alexander Pirang of the Global Public Policy Institute in Berlin criticized the legislation as “misguided” for placing too much responsibility for deciding what constitutes unlawful content in the hands of social media providers.
“Setting the rules of the digital public square, including the identification of what is lawful and what is not, should not be left to private companies,” they wrote.
Even in the United States, Facebook and Google have taken steps to limit the spread of extremist messaging online and to prevent “fake news” from circulating. Those steps include using artificial intelligence to remove potentially extremist material automatically and banning news sites believed to spread fake or misleading reports from making money through the companies’ digital advertising platforms.