The new dilemma for Google and Facebook
Rush Limbaugh, the doyen of right-wing talk radio, credited Daesh with being Paddock’s ideological home, arguing that it was disguised by the liberal media because “for the American left, there is no such thing as militant Islamic terrorism.” Pat Robertson, the socially conservative activist and televangelist, said the shooting stemmed from the news media’s and liberal protesters’ “profound disrespect for our president” and other institutions.
On the other side of the American culture war, a CBS vice president and legal counsel, Hayley Geftman-Gold, said she was “not even sympathetic” to the victims because “country music fans often are Republican gun toters.” Unlike her right-wing opposites, she suffered for her opinion: she was fired.
Should any of these comments be the concern of the state? The general opinion, especially in the US, is that governments should stay out of it. For Washington, the anger or distress such remarks may cause must be endured in deference to the near-absolute right of free speech protected by the First Amendment to the Constitution.
But should we tolerate such verbal brutality? Do people have to suffer distress because of the voiced prejudices of others who often — as Limbaugh does — make a rich living from their display? There’s a growing faction saying no, and it has reached, at least in Europe, the stage of state action. The EU Justice Commissioner, Vera Jourova, has told social media giants such as Facebook and Twitter that they must eliminate both hate speech and fake news, or face legislation criminalizing them for not doing so. That’s a sweeping statement: unpacking what it might mean in practice takes us deep into an area that should be marked with signs saying: “Danger! Free speech in peril!”
Fake news is not the same as hate speech, but it can also be used to inflame social tensions. In Italy, the anti-trust chief Giovanni Pitruzzella has said that EU countries should create government-appointed bodies to remove fake news and even fine the media for violations. But how is fake news to be distinguished, by either artificial or human intelligence, from true news? It’s a delicate operation, since much news striving to be “true” contains false information, and much fake news has the ring of truth and would take careful investigation to disprove.
In Germany, a new law came into force this month penalizing digital platforms that allow themselves to be used for hate speech. Called, challengingly, the Netzwerkdurchsetzungsgesetz, NetzDG for short, it commands that Facebook and Twitter take down “blatantly illegal” hate speech within 24 hours or, if the offending material is less obviously illegal, within a week — on pain of a fine of up to 50 million euros. The problem with it, critics claim, is that it is imprecise about what constitutes hate speech. It merely points to the passage in the German Criminal Code that declares the “defamation of religions, religious and ideological associations” illegal. What is defamation? When is one person’s unbearable insult another’s opinion?
Lisa Feldman Barrett, professor of psychology at Northeastern University, argues that “there is a difference between permitting a culture of casual brutality and entertaining an opinion you strongly oppose. The former is a danger to a civil society (and to our health); the latter is the lifeblood of democracy.” Speech of the first kind, which “bullies and torments,” is “from the perspective of our brain cells … literally a form of violence.”
Put that way, it appears obvious: the speech that harms should be criminalized, and in parts of Europe it now is being. Facebook, Twitter and Google are under increasing state and public pressure to stop hosting material that causes not merely distress but, apparently, real damage to the brain. UK Prime Minister Theresa May spoke out at the UN last month, calling on the tech companies to go much further and faster in combating the dangerous messages they carry.
At a meeting with Google staff in London last week, I was told that the concerns of governments and the public were registered, and reform was on the way.
When I quoted the view of Fiyaz Mughal, head of the anti-extremist British advocacy organization Faith Matters, that tech companies were “not dealing with the problem” because their “bottom line is money,” I was assured this was not so. The communications behemoths’ default position of free-speech absolutism has been replaced, it was said, by a finer-grained examination of cause and effect, and of what could reasonably be done to address concerns.
It’s true that to juggle the demands of free speech and security is now one of the largest ethical and practical problems facing democratic states — and the tech corporations. And it’s also true that even if Mughal is right that the companies’ first care is the bottom line — for which corporation is that not true? — the large fines now being prepared for failing to reform would be a powerful incentive to change.
Yet in the course of this complex balancing act, between security and liberty, profit and regulation, there is the danger of substantial damage to the freedoms of speech and the news media which democracies have been able to safeguard for most of the past 70 years.
Liberals have a tricky task ahead, to address two different publics: one alarmed by hate speech and militant messages, the other by measures to stop them. Confusingly, these two publics are sometimes one.
• John Lloyd co-founded the Reuters Institute for the Study of Journalism at the University of Oxford, where he is senior research fellow.
Disclaimer: Views expressed by writers in this section are their own and do not necessarily reflect Arab News' point-of-view