Twitter makes money for first time ever, but problems remain
The company is still struggling to get people to sign up, despite the attention President Donald Trump’s no-holds-barred tweets have drawn to the service. One problem: anyone can read tweets without signing up. As a result, Twitter’s user base pales in comparison with Facebook and the Facebook-owned Instagram.
And that means fewer advertising opportunities.
Beyond that, Twitter has been dealing with policing hate speech and abusive comments, fake accounts and attempts by Russian agents to spread misinformation. Every time Twitter tries to respond to a problem, it’s either not good enough, or some other problem emerges.
“They are playing whack-a-mole with these problems,” said Michael Connor, whose Open Mic group helps investors push tech companies to address privacy, abuse and other issues. “They say they have the problem under control, but they don’t know what the problem is exactly.”
Add to that a revolving door of executives, including an influential chief operating officer leaving after Thursday’s earnings report.
Twitter said it had an average of 330 million monthly active users in the final three months of last year, unchanged from the previous quarter and below Wall Street’s estimate of 333 million. By contrast, Facebook has 2.2 billion and Instagram has more than 800 million.
Twitter hadn’t turned a profit until now because — competing with Facebook, Google and others for digital ad dollars — it didn’t attract enough advertising revenue to make up for its expenses. But it’s been cutting costs and focusing on new revenue streams, such as live video.
In some good news, the company grew revenue by 2 percent to $732 million in the final three months of 2017. That’s above the $687 million that analysts polled by FactSet were expecting. Its net income — a first — was $91 million, or 12 cents per share. Adjusted earnings were 19 cents, above analysts’ expectations of 14 cents.
After the results came out, the company’s stock jumped more than 17 percent in morning trading to $31.64, its highest level since 2015.
The quarter “was a breath of fresh air for investors that have patiently awaited for this turnaround story to manifest after years of pain,” said Daniel Ives, head of technology research at GBH Insights.
Nonetheless, Twitter has big challenges ahead. Connor said that while investors don’t want to micromanage Twitter, they at least want the company “to show that there is a level of management and governance on the senior level in place willing to address these issues.”
While Twitter is well-known, it remains difficult to use, which makes it hard for the company to explain to people why they need it. Twitter also has an “image problem,” Wedbush analyst Michael Pachter said in a recent research note, “as it has been slow to act on harassment and other hostile behavior.”
The company has enacted a slew of new policies, and Pachter says this renewed focus should help. But enforcing them will be a bigger hurdle.
Connor’s group recently helped two large Twitter and Facebook shareholders file resolutions asking the companies to take more responsibility for fake news, abuse and hate speech. The companies have not formally responded, though Twitter has introduced a slew of new measures to weed out abusive accounts and has said that it “cares deeply” about misinformation and its harmful effect on civic discourse.
Then there’s the issue of automated accounts made to look like real people. In the days after a New York Times report on the “shadowy global marketplace” of brands and celebrities buying fake retweets and followers, prominent Twitter users collectively lost more than a million followers, suggesting that Twitter either didn’t know about the problem or didn’t act until the exposé.
Fake accounts aren’t a new problem. Last June, Twitter said it had been “doubling down” on its efforts to weed out such accounts by “expanding our team and resources, and building new tools and processes.” It estimates that less than 5 percent of monthly active users are fake, but the Times referenced a report saying the figure could be as high as 15 percent.
One chief problem: new fake accounts keep popping up, and those behind them are getting smarter, so Twitter’s countermeasures haven’t made much of a dent.
Forrester Research analyst Erna Alfred Liousas said that while rival social networks such as Facebook deal with fake accounts, too, it may be “more elevated for Twitter” because there has been so much focus on its monthly user numbers. Anything that could jeopardize advertisers’ ability to see how many people they will reach, she said, “is going to cause concern.”
Another concern: Chief Operating Officer Anthony Noto, who announced his resignation last month, is leaving the company after Thursday’s earnings report. Noto, who was also finance chief until last July, served in an influential role at the company and led its venture into live video. Twitter said it is not replacing Noto and will instead split his duties among other executives.
“Now (that) he’s gone, who’s running the company?” Pachter said.
Technically, that’s CEO Jack Dorsey. But Dorsey splits his time between Twitter and the payments company Square, which he also heads.
Twitter has “less than Jack’s undivided attention,” Pachter said, adding that nonetheless Dorsey runs the company with a “benevolent autocracy” that leaves little room for innovation.
By contrast, Pachter said Facebook CEO Mark Zuckerberg “is not afraid if they alter his baby, his invention, to make it better,” even if in the end Zuckerberg may be the final arbiter.
Twitter declined to comment. But Dorsey said at a conference late last year that it’s “not about the amount of time I spend at one thing but how I spend the time and what we’re focused on.”
Facebook says it was ‘too slow’ to fight hate speech in Myanmar
YANGON: Facebook has been “too slow” to address hate speech in Myanmar and is acting to remedy the problem by hiring more Burmese speakers and investing in technology to identify problematic content, the company said in a statement on Thursday.
The acknowledgement came a day after a Reuters investigation showed why the company has failed to stem a wave of vitriolic posts about the minority Rohingya.
Some 700,000 Rohingya fled their homes last year after an army crackdown that the United States denounced as ethnic cleansing. The Rohingya now live in teeming refugee camps in Bangladesh.
“The ethnic violence in Myanmar is horrific and we have been too slow to prevent misinformation and hate speech on Facebook,” Facebook said.
The Reuters story revealed the social media giant for years dedicated scant resources to combating hate speech in Myanmar, which is a market it dominates and where there have been repeated eruptions of ethnic violence.
In early 2015, for instance, there were only two people at Facebook who could speak Burmese monitoring problematic posts.
In Thursday’s statement, posted online, Facebook said it was using tools to automatically detect hate speech and hiring more Burmese-language speakers to review posts, following up on a pledge made by founder Mark Zuckerberg to US senators in April.
The company said that it had over 60 “Myanmar language experts” in June and plans to have at least 100 by the end of the year.
Reuters found more than 1,000 examples of posts, comments, images and videos denigrating and attacking the Rohingya and other Muslims that were on the social media platform as of last week.
Some of the material, which included pornographic anti-Muslim images, has been up on Facebook for as long as six years.
There are numerous posts that call the Rohingya and other Muslims dogs and rapists, and urge that they be exterminated.
Facebook currently doesn’t have a single employee in Myanmar, relying instead on an outsourced, secretive operation in Kuala Lumpur – called Project Honey Badger – to monitor hate speech and other problematic posts, the Reuters investigation showed.
Because Facebook’s systems struggle to interpret Burmese script, the company is heavily dependent on users reporting hate speech in Myanmar.
Researchers and human rights activists say they have been warning Facebook for years about how its platform was being used to spread hatred against the Rohingya and other Muslims in Myanmar.
In its statement on Thursday, Facebook said it had banned a number of Myanmar hate figures and organizations from the platform.