Outgoing Twitter worker responsible for Trump account outage

The masthead of US President Donald Trump's @realDonaldTrump Twitter account is seen on July 11, 2017. (@realDonaldTrump/Handout/File Photo via REUTERS)
Updated 03 November 2017


WASHINGTON: A Twitter employee on their last day with the company was responsible for taking down US President Donald Trump’s account, the social network said Thursday, as the president resumed tweeting after the 11-minute outage.
Visitors to @realDonaldTrump around 7:00 p.m. (2300 GMT) were greeted with the message “Sorry, that page doesn’t exist!”
Twitter initially said the account had been “inadvertently deactivated due to human error by a Twitter employee” and that it was “taking steps to prevent this from happening again,” but later indicated the deactivation was intentional and carried out by a departing worker.
“Through our investigation we have learned that this was done by a Twitter customer support employee who did this on the employee’s last day,” it said.
“We are conducting a full internal review,” it said on the official Twitter Government account.
The outspoken president has 41.7 million followers on his personal Twitter account, which he uses to blast out his most controversial and attention-grabbing comments, often in the form of early morning “tweetstorms.”
Trump has even used the social media site to announce policy, and surprised Pentagon chiefs in July by tweeting that transgender people would be barred from serving “in any capacity” in the US military, a ban that has since been blocked by a US court.
The outage sparked discussion of the security of Trump’s account, given the potentially dire consequences of messages falsely attributed to the president.
“It is shocking that some random Twitter employee could shut down the president’s account. What if they instead had tweeted fake messages?” Blake Hounshell, the editor-in-chief of POLITICO Magazine, wrote on Twitter.
He added: “Seriously, what if this person had tweeted about a fictional nuclear strike on North Korea?”
Many praised the temporary shutdown of Trump’s account, with users saying the unnamed employee responsible “deserves a medal” and that “not all heroes wear capes.”
“Trump’s Twitter deactivated for 11 min, and I suddenly thought I’d jumped back into the real timeline where things aren’t so damned absurd,” tweeted Star Trek actor turned social media personality George Takei.
But the temporary disappearance of the account — and the glee this prompted among the president’s detractors — drew fire from others.
“Liberals were celebrating for the 15 minutes that Trump’s Twitter disappeared, proving once again they love censorship and hate free speech,” one popular tweet read.
Trump’s official White House account, @POTUS, which has 20.9 million followers, was apparently not affected by the outage.
Trump did not tweet about his personal account’s vanishing act, but after it was restored he posted on other topics, including his party’s tax plan.


After Facebook scrutiny, is Google next?

Updated 21 April 2018


MENLO PARK, California: Facebook has taken the lion’s share of scrutiny from Congress and the media for its data-handling practices that allow savvy marketers and political agents to target specific audiences, but it’s far from alone.
YouTube, Google and Twitter also have giant platforms awash in more videos, posts and pages than any set of human eyes could ever check. Their methods of serving ads against this sea of content may come under the microscope next.
Advertising and privacy experts say a backlash is inevitable against a “Wild West” Internet that has largely escaped scrutiny until now. A steady barrage of new examples continues to emerge in which unsuspecting advertisers have had their brands associated with extremist content on major platforms.
In the latest discovery, CNN reported that it found more than 300 retail brands, government agencies and technology companies had their ads run on YouTube channels that promoted white nationalists, Nazis, conspiracy theories and North Korean propaganda.
Child advocates have also raised alarms about the ease with which smartphone-equipped children are exposed to inappropriate videos and deceptive advertising.
“I absolutely think that Google is next and long overdue,” said Josh Golin, director of the Boston-based Campaign for a Commercial-Free Childhood, which asked the Federal Trade Commission to investigate Google-owned YouTube’s advertising and data-collection practices earlier this month.
YouTube has repeatedly outlined the ways it attempts to flag and delete hateful, violent, sexually explicit or harmful videos, but its screening efforts have often missed the mark.
It also allows advertisers to avoid running ads on sensitive content, such as news or politics, that doesn’t violate YouTube guidelines but doesn’t fit a company’s brand. Those methods appear to have failed.
“YouTube has once again failed to correctly filter channels out of our marketing buys,” said a statement Friday from 20th Century Fox Film, which learned that its ads were running on videos posted by a self-described Nazi. YouTube has since deleted the offending channel, but the Hollywood studio says it has unanswered questions about how it happened in the first place.
“All of our filters were in place in order to ensure that this did not happen,” Fox said, adding it has asked for a refund of any money shared with the “abhorrent channel.”
YouTube said Friday that it has made “significant changes to how we approach monetization,” citing “stricter policies, better controls and greater transparency.” It noted it allows advertisers to exclude certain channels from ads. It also removes ads when it’s notified they are running beside content that doesn’t comply with its policies.
“We are committed to working with our advertisers and getting this right,” YouTube said.
So far, just one major advertiser — Baltimore-based sports apparel company Under Armour — has said it withdrew its advertising in the wake of the CNN report, though the pause lasted only a few days after it was first notified of the problem last week. After its shoe commercial turned up on a channel known for espousing white nationalist beliefs, Under Armour worked with YouTube to expand its filters to exclude certain topics and keywords.
On the other hand, Procter & Gamble, which had kept its ads off of YouTube since March 2017, said it had come back to the platform but drastically pared back the channels it would advertise on to under 10,000. It has worked on its own, with third parties, and with YouTube to create its restrictive list.
That’s just a fraction of the roughly 3 million YouTube channels in the US that accept ads, and the list is even more stringent than YouTube’s “Google Preferred” lineup, which focuses on the most-popular 5 percent of videos.
The CNN report was “an illustration of exactly why we needed to go above and beyond just what YouTube’s plans were and why we needed to take more control of where our ads were showing up,” said P&G spokeswoman Tressie Rose.
The big problem, experts say, is that advertisers lured by the reach and targeting capability of online platforms can mistakenly expect that the same standards for decency on network TV will apply online. In the same way, broadcast TV rules that require transparency about political ad buyers are absent on the web.
“There have always been regulations regarding appropriate conduct in content,” says Robert Passikoff, president of Brand Keys Inc., a New York customer research firm. Regulating content on the Internet is one area “that has gotten away from everyone.”
Also absent from the Internet are many of the rules that govern children’s programming on television sets. TV networks, for instance, are allowed to air commercial breaks but cannot use kid-oriented characters to advertise products. Such “host-selling” runs rampant on Internet services such as YouTube.
Action to remove ads from inappropriate content is mostly reactive because of a lack of upfront control over what gets uploaded, and it generally takes the threat of a mass boycott to get advertisers to demand changes, according to BrandSimple consultant Allen Adamson.
“The social media backlash is what you’re worried about,” he said.
At the same time, politicians are having trouble keeping up with the changing landscape, as was evident from how ill-informed many members of Congress appeared during the questioning of Facebook CEO Mark Zuckerberg earlier this month.
“We’re in the early stages of trying to figure out what kind of regulation makes sense here,” said Larry Chiagouris, professor of marketing at Pace University in New York. “It’s going to take quite some time to sort that out.”