Google takes aim at fake news ‘bad actors’
Now, in light of Russian interference in the 2016 US election and the recent Senate committee hearings in Washington, it’s all about politics.
The US-based tech company has admitted that the Kremlin-linked Internet Research Agency spent $4,700 on advertising as part of a misinformation campaign during the election. It also revealed that 1,108 Russian-linked videos uploaded to YouTube generated 309,000 views in the US.
Although Google has since launched initiatives to provide increased transparency and enhance security in light of the revelations, Brittin told reporters at Google’s European headquarters in Dublin that more needs to be done.
“Any misrepresentation is bad and we have continued to look at how we can improve our policies and transparency,” he said. “Any time there’s (an) electoral process we really want to make sure that our role in that is clean and legitimate and supportive in all of the things that you would expect. And we work hard to do that.”
According to Brittin, who is president of EMEA business and operations at Google, “bad actors” were attempting to use Google’s systems and platforms for “bad purposes”, and had been trying to do so for some time.
“We’ve constantly tried to put in place policies and controls and verifications to try to stop that from happening,” he said. “We’ve made some good progress and we obviously need to do more.”
In light of Russia’s suspected interference in the 2016 presidential election, Google has undertaken a deep review of how its ad systems are used. Although some changes have already been made to the company’s policies, a transparency report for election ads, due in early 2018, should shine more light on the topic.
The furor over political ads, however, is far from Google’s only problem. Concerns over privacy, tax evasion, ad fraud and brand safety have shadowed the company over the past few years. In March, for example, Brittin had to issue an apology to the advertising industry after brands found their ads appearing next to controversial content on YouTube.
All of which goes hand-in-hand with a discernible backlash against the tech industry. While Facebook has taken the most flak, Google stands accused of being too big and too powerful. It is an accusation that Brittin acknowledges.
“Because of the pace of change in how everyday people are using technology, communicating, accessing information, creating and sharing their own content, that change throws up a whole bunch of new questions for all of us,” said Brittin. “And what I want to make sure that Google does is [be] in the room when there’s a conversation about those things going on and we can explain what we do today. Because quite often that’s misunderstood or not researched that thoroughly.”
Brittin uses fake news as an example.
“Fake news has become a topical term — an umbrella term spanning everything from what people don’t like that’s written about them to genuinely misrepresentative stuff,” said Brittin. “So in a world where 140 websites about US politics come from a Macedonian village, that’s clearly misrepresentative and fake and we need to work hard to tackle that. Bad actors and anyone with a smartphone being able to create content is a challenge.
“We’ve tried to do two things in this category. We try to help quality content thrive, and we have tried to identify and weed out the bad actors. The amount of work we do on weeding out the bad actors is phenomenal and not that widely known.”
Google said it took down 1.7 billion ads for violating its advertising policies in 2016, a figure that represents double the amount taken down in 2015. It also removed over 100,000 publishers from AdSense and expanded its inappropriate content policy to include dangerous and derogatory content. It is also using artificial intelligence and machine learning tools to better detect suspicious content.
Meanwhile, projects such as the Digital News Initiative, a partnership between Google and publishers in Europe, are supporting high-quality journalism through technology and innovation.
“I think about three groups really: users, creators (in the broader sense, whether it’s entrepreneurs or journalists or content creators of videos or app developers), and advertisers,” said Brittin.
“And if we want the next five billion people to come online to have the benefits of the services and the content that we enjoy today, we need to make sure that that ecosystem continues to work well.
“The online world is just like the world. There are complexities and challenges and there are bad actors there too, and what we need to do as an industry is come together to make it as safe as we can do. We can’t always guarantee 100 percent safety, but what we can do is put in place rules and principles and practices and so on that help people to use this and navigate the highway safely.”
WhatsApp seeks to stem fake news ahead of Pakistan election
- Pakistan’s leading English-language daily listed ten tips on differentiating rumors from fact
- WhatsApp had come under pressure from Indian authorities to put an end to the spread of rumors
ISLAMABAD: The hugely popular WhatsApp messaging service began a week-long publicity campaign in Pakistan Wednesday offering tips to spot fake news, days before the country holds a general election.
“Together we can fight false information,” says the full-page ad in Dawn, Pakistan’s leading English-language daily, listing ten tips on differentiating rumors from fact.
“Many messages containing hoaxes or fake news have spelling mistakes. Look for these signs so you can check if the information is accurate,” it says.
“If you read something that makes you angry or afraid, ask whether it was shared to make you feel that way. And if the answer is yes, think twice before sharing it again.”
WhatsApp also announced it was rolling out a new feature in the country that lets recipients see whether a message is original or has been forwarded.
The company had bought full-page advertising in India on July 10 after a wave of lynchings in the country was linked to viral “fake news” about alleged child kidnappings spread via WhatsApp.
WhatsApp, owned by Facebook, had come under pressure from Indian authorities to put an end to the spread of rumors, which have caused the deaths of more than 20 people in the past two months.
Millions of people use WhatsApp in neighboring Pakistan, where rumors, false information and conspiracy theories are ubiquitous. Such messages spread quickly, with no real way for recipients to check their veracity.
Pakistan also has a history of mob violence, and videos such as that of the murder of Mashal Khan — a journalism student who was accused of blasphemy and killed by a mob in April 2017 — circulate rapidly.
Parliamentary elections are scheduled for July 25.