Google takes aim at fake news ‘bad actors’

Matt Brittin, Google’s head of Europe, the Middle East and Africa, believes more needs to be done to tackle the exponential growth of fake news. (Photo courtesy of Google)
Updated 07 November 2017

LONDON: Matt Brittin, Google’s head of Europe, the Middle East and Africa, is used to being in the spotlight. Extremist content, brand safety, corporate tax avoidance — he has publicly faced questioning about it all.
Now, in light of Russian interference in the 2016 US election and the recent Senate committee hearings in Washington, it’s all about politics.
The US-based tech company has admitted that the Kremlin-linked Internet Research Agency spent $4,700 on advertising as part of a misinformation campaign during the election. It also revealed that 1,108 Russian-linked videos uploaded to YouTube generated 309,000 views in the US.
Although Google has since launched initiatives to provide increased transparency and enhance security in light of the revelations, Brittin told reporters at Google’s European headquarters in Dublin that more needs to be done.
“Any misrepresentation is bad and we have continued to look at how we can improve our policies and transparency,” he said. “Any time there’s (an) electoral process we really want to make sure that our role in that is clean and legitimate and supportive in all of the things that you would expect. And we work hard to do that.”
According to Brittin, who is president of EMEA business and operations at Google, “bad actors” were attempting to use Google’s systems and platforms for “bad purposes”, and had been trying to do so for some time.
“We’ve constantly tried to put in place policies and controls and verifications to try to stop that from happening,” he said. “We’ve made some good progress and we obviously need to do more.”
In light of Russia’s suspected interference in the 2016 presidential election, Google has conducted a deep review of how its ad systems are used. Although some changes have already been made to the company’s policies, a transparency report for election ads due in early 2018 should shed more light on the topic.
The furor over political ads, however, is far from Google’s only problem. Concerns over privacy, tax evasion, ad fraud and brand safety have shadowed the company over the past few years. In March, for example, Brittin had to issue an apology to the advertising industry after brands found their ads appearing next to controversial content on YouTube.
All of which goes hand-in-hand with a discernible backlash against the tech industry. While Facebook has taken the greatest amount of flak, Google stands accused of being too big and too powerful. It is an accusation that Brittin acknowledges.
“Because of the pace of change in how everyday people are using technology, communicating, accessing information, creating and sharing their own content, that change throws up a whole bunch of new questions for all of us,” said Brittin. “And what I want to make sure that Google does is [be] in the room when there’s a conversation about those things going on and we can explain what we do today. Because quite often that’s misunderstood or not researched that thoroughly.”
Brittin uses fake news as an example.
“Fake news has become a topical term — an umbrella term spanning everything from what people don’t like that’s written about them to genuinely misrepresentative stuff,” said Brittin. “So in a world where 140 websites about US politics come from a Macedonian village, that’s clearly misrepresentative and fake and we need to work hard to tackle that. Bad actors and anyone with a smartphone being able to create content is a challenge.
“We’ve tried to do two things in this category. We try to help quality content thrive, and we have tried to identify and weed out the bad actors. The amount of work we do on weeding out the bad actors is phenomenal and not that widely known.”
Google said it took down 1.7 billion ads for violating its advertising policies in 2016, a figure that represents double the amount taken down in 2015. It also removed over 100,000 publishers from AdSense and expanded its inappropriate content policy to include dangerous and derogatory content. It is also using artificial intelligence and machine learning tools to better detect suspicious content.
Meanwhile, projects such as the Digital News Initiative, a partnership between Google and publishers in Europe, are supporting high-quality journalism through technology and innovation.
“I think about three groups really: users, creators (in the broader sense, whether it’s entrepreneurs or journalists or content creators of videos or app developers), and advertisers,” said Brittin.
“And if we want the next five billion people to come online to have the benefits of the services and the content that we enjoy today, we need to make sure that that ecosystem continues to work well.
“The online world is just like the world. There are complexities and challenges and there are bad actors there too, and what we need to do as an industry is come together to make it as safe as we can do. We can’t always guarantee 100 percent safety, but what we can do is put in place rules and principles and practices and so on that help people to use this and navigate the highway safely.”


Surge in anonymous Asia Twitter accounts sparks bot fears

Updated 22 April 2018

HONG KONG: It has been jokingly referred to as “Botmageddon.” But a surge in new, anonymous Twitter accounts across swathes of Southeast and East Asia has deepened fears the region is in the throes of US-style mass social media manipulation.
Maya Gilliss-Chapman, a Cambodian tech entrepreneur currently working in Silicon Valley, noticed something odd was happening in early April.
Her Twitter account @MayaGC was being swamped by a daily deluge of follows from new users.
“I acquired well over 1,000 new followers since the beginning of March. So, that’s approximately a 227 percent increase in just a month,” she told AFP.
While many might delight in such a popularity spike, Gilliss-Chapman, who has previously worked for tech companies to root out spam, was immediately suspicious.
The vast majority of these new accounts contained no identifying photograph and had barely tweeted since their creation.
But they all seemed to be following prominent Twitter users in Cambodia including journalists, business figures, academics and celebrities.
She did some digging and published her findings online, detailing how most of the accounts had been created recently, in batches, by unknown operators who worked hard to hide their real identities.
She wasn’t alone.
Soon prominent Twitter users in Thailand, Vietnam, Myanmar, Taiwan, Hong Kong and Sri Lanka noticed the same phenomenon: a surge in follows from anonymous, recently created accounts, adopting local-sounding names but barely engaging on the platform, as if lying in wait for someone’s command.
While Facebook has received the lion’s share of international opprobrium in recent months over allegations it has been slow to respond to people and state actors manipulating its platform, Twitter has also faced accusations that it has not done enough to rid its service of fake users.
Most bots are used for commercial spam. But they have been deployed politically in Asia before. During the 2016 Philippines presidential election, there was a surge of organized bots and trolls deployed to support the man who eventually won that contest, the firebrand populist Rodrigo Duterte.
And after Myanmar’s military last year launched a crackdown against the country’s Rohingya Muslim minority, there was a wave of accounts that cropped up supportive of the government on Twitter, a platform that until then had very few Burmese users.
With elections due in Cambodia, Malaysia, Thailand and Indonesia in the next two years, many hit by the Twitter follow surge in Asia are asking whether the Silicon Valley tech giants are doing enough to stop fake accounts before they are given their marching orders.
So far Twitter has found nothing untoward.
A spokesperson for the company said engineers were “looking into the accounts in question and will take action against any account found to be in violation of the Twitter Rules.”
A source with knowledge of the probe said they believe the accounts are “new, organic users” who were likely suggested prominent Twitter users across Asia to follow when they signed up.
“It’s something we’re keeping an eye on, but for now, it looks like a pretty standard sign-up/onboarding issue,” the source told AFP.
But many experts have been left unconvinced by such explanations.
“Are there really this many new, genuine users joining Twitter, all with the same crude hallmarks of fake accounts?” Raymond Serrato, an expert at Democracy Reporting International who has been monitoring the suspicious accounts, told AFP.
The issue of fake users is hugely sensitive for Twitter because a crackdown could severely dent its audience of roughly 330 million users, the company’s main selling point.
In a 2014 report to the US Securities and Exchange Commission, Twitter estimated some 5-8.5 percent of users were bots.
But Emilio Ferrara, a research professor at the University of Southern California, published research last year suggesting it could be double that: 9-15 percent.
Last week Pew Research Center released a report analyzing 1.2 million English-language tweets which contained links to popular websites. Two-thirds of the tweets came from suspected bot accounts.
Twitter Audit Report, a third-party company that scans people’s followers using software to estimate how many are fake, suggests as many as 16 million of Donald Trump’s 51 million followers are not real people.
Jennifer Grygiel, an expert on social media at Syracuse University, New York, said the 2016 US presidential election provided a blueprint for others to copy.
“Bad actors around the world have really followed the potential of social media to influence the political process,” she told AFP.
Twitter, she said, is a minnow compared with Facebook, which has more than two billion users. But it can still be influential because many prominent opinion formers such as journalists, politicians and academics have a major presence on the platform.
“If you can get information within this population, then you’ve scored,” she said.
Serrato, from Democracy Reporting International, said the fake accounts could still pose a threat even if they are currently inactive.
“The accounts can be used at a later date to amplify certain tweets, hijack hashtags, or harass people,” he said.
Grygiel used a more blunt metaphor.
“The risk is the accounts are sitting there like a cancer,” she said.