Misinformation woes could multiply with ‘deepfake’ videos

An AFP journalist views an example of a “deepfake” video manipulated using artificial intelligence by Carnegie Mellon University researchers, at his desk in Washington, DC, on January 25, 2019. (AFP)
Paul Scharre, in his Washington, DC office on January 25, 2019, views a BuzzFeed video in which filmmaker Jordan Peele (right on screen) used readily available software to change what former president Barack Obama (left on screen) appears to say, illustrating how “deepfake” technology can deceive viewers. (AFP)
Updated 30 January 2019

  • Carnegie Mellon University researchers last year revealed techniques that make it easier to produce deepfakes via machine learning to infer missing data

WASHINGTON: If you see a video of a politician speaking words he never would utter, or a Hollywood star improbably appearing in a cheap adult movie, don’t adjust your television set — you may just be witnessing the future of “fake news.”
“Deepfake” videos that manipulate reality are becoming more sophisticated due to advances in artificial intelligence, creating the potential for new kinds of misinformation with devastating consequences.
As the technology advances, worries are growing about how deepfakes can be used for nefarious purposes by hackers or state actors.
“We’re not quite to the stage where we are seeing deepfakes weaponized, but that moment is coming,” Robert Chesney, a University of Texas law professor who has researched the topic, told AFP.
Chesney argues that deepfakes could add to the current turmoil over disinformation and influence operations.
“A well-timed and thoughtfully scripted deepfake or series of deepfakes could tip an election, spark violence in a city primed for civil unrest, bolster insurgent narratives about an enemy’s supposed atrocities, or exacerbate political divisions in a society,” Chesney and University of Maryland professor Danielle Citron said in a blog post for the Council on Foreign Relations.
Paul Scharre, a senior fellow at the Center for a New American Security, a think tank specializing in AI and security issues, said it was almost inevitable that deepfakes would be used in upcoming elections.
A fake video could be deployed to smear a candidate, Scharre said, or to enable people to deny actual events captured on authentic video.
With believable fake videos in circulation, he added, “people can choose to believe whatever version or narrative that they want, and that’s a real concern.”

Video manipulation has been around for decades and can be innocuous or even entertaining — as in the digitally aided appearance of Peter Cushing in 2016’s “Rogue One: A Star Wars Story,” 22 years after his death.
Carnegie Mellon University researchers last year revealed techniques that make it easier to produce deepfakes via machine learning to infer missing data.
In the movie industry, “the hope is we can have old movie stars like Charlie Chaplin come back,” said Aayush Bansal, one of the Carnegie Mellon researchers.
The popularization of apps that make realistic fake videos threatens to undermine the notion of truth in news media, criminal trials and many other areas, researchers point out.
“If we can put any words in anyone’s mouth, that is quite scary,” said Siwei Lyu, a professor of computer science at the State University of New York at Albany, who is researching deepfake detection.
“It blurs the line between what is true and what is false. If we cannot really trust information to be authentic it’s no better than to have no information at all.”
Representative Adam Schiff and two other lawmakers recently sent a letter to Director of National Intelligence Dan Coats asking what the government is doing to combat deepfakes.
“Forged videos, images or audio could be used to target individuals for blackmail or for other nefarious purposes,” the lawmakers wrote.
“Of greater concern for national security, they could also be used by foreign or domestic actors to spread misinformation.”

Researchers have been working on better detection methods for some time, with support from private firms such as Google and government entities like the Pentagon’s Defense Advanced Research Projects Agency (DARPA), which began a media forensics initiative in 2015.
Lyu’s research has focused on detecting fakes, in part by analyzing the rate of blinking of an individual’s eyes.
But he acknowledges that even detecting fakes may not be enough, if a video goes viral and leads to chaos.
“It’s more important to disrupt the process than to analyze the videos,” Lyu said.
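Lyu’s published detector is a trained neural network over eye regions, but the blink cue itself is easy to illustrate. The sketch below is a simple stand-in, not his method: it assumes six (x, y) landmarks per eye per frame, as produced by common facial-landmark detectors, and uses the eye aspect ratio (EAR) heuristic to count blinks. Early deepfake generators were trained largely on open-eyed photos, so their synthetic faces blinked unusually rarely compared with the roughly 15-20 blinks per minute typical of real people.

```python
# Illustrative sketch only (not Lyu's actual detector, which used a neural
# network over eye regions): estimate a subject's blink rate from per-frame
# eye landmarks and flag videos that blink far below the human baseline.
# Assumes six (x, y) landmarks per eye, e.g. from dlib or MediaPipe.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR for six (x, y) eye landmarks; drops toward zero as the eye closes."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blinks_per_minute(ears: list[float], fps: float, closed: float = 0.2) -> float:
    """Count closed-to-open transitions in a sequence of per-frame EAR values."""
    blinks, is_closed = 0, False
    for ear in ears:
        if ear < closed:
            is_closed = True
        elif is_closed:  # eye reopened after being closed: count one blink
            blinks += 1
            is_closed = False
    minutes = len(ears) / fps / 60.0
    return blinks / minutes if minutes else 0.0

# Real speakers blink roughly 15-20 times per minute; a rate near zero over a
# long clip would make the video suspicious under this heuristic.
```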
While deepfakes have been evolving for several years, the topic came into focus with the creation last April of a video appearing to show former president Barack Obama using a curse word to describe his successor, Donald Trump — a stunt by filmmaker Jordan Peele and BuzzFeed.
Also in 2018, a proliferation of “face swap” porn videos that used images of Emma Watson, Scarlett Johansson and other celebrities prompted bans on deepfakes by Reddit, Twitter and Pornhub, though it remained unclear if they could enforce the policies.
Scharre said there is “an arms race between those who are creating these videos and security researchers who are trying to build effective tools of detection.”
But he said an important way to deal with deepfakes is to increase public awareness, making people more skeptical of what used to be considered incontrovertible proof.
“After a video has gone viral it may be too late for the social harm it has caused,” he said.


Facebook targets fake news in Arabic-language media

Updated 19 February 2019

  • Social media giant reveals plans to roll out further initiatives across the Arab world
  • “We want to empower people to decide what to read, trust and share”

LONDON: Facebook has again found itself under scrutiny amid global efforts to stamp out fake news circulating on social media sites. Nashwa Aly, Facebook’s head of public policy for the Middle East and North Africa, spoke to Arab News about the company’s new Arabic-language fact-checking service.
Q: Has the fact-checking service in Arabic already started? If so, are there any results as to how many articles are being flagged as false?
A: Third-party fact-checking in Arabic rolls out this month, so there are no results to share yet. We recognize the implications of false news on Facebook and we are committed to doing a better job of fighting it. More than 181 million people use Facebook every month across the Middle East and North Africa (MENA), so this is a responsibility that we take very seriously, and we’re excited to see this launch through in partnership with AFP MENA.
Q: How many people will be working on it, and what volume of false stories do you expect to identify daily?
A: It varies by country, but AFP draws on the resources of multiple local bureaus, as well as centralized Arabic-speaking fact-checkers, to fact-check content.
Q: Why did Facebook choose to enter into this initiative? Is the fake news problem any worse in Arabic compared with other languages? Are there any specific issues in challenging this problem in Arabic compared with other languages?
A: This expansion with AFP, with whom we already have successful fact-checking partnerships across the Latin American and Asia Pacific regions, is a step forward in our efforts to combat Arabic-language misinformation, and we will continue to take steps to expand our efforts globally this year. This initiative is particularly important across MENA, given that misinformation is a major concern in the region.
The present challenges do not necessarily stem from the Arabic language. However, there are some challenges that can arise, such as how to treat opinion and satire. We strongly believe that people should be able to debate different ideas, even controversial ones. We also recognize that there can be a fine line between misinformation and satire or opinion. This can make it more difficult for fact-checkers to assess whether an article should be rated as “false” or left alone.
Q: It appears from the announcement that Facebook will not be actively removing “fake news” links identified under this initiative with AFP. Is that right, and if so, do you think the initiative goes far enough?
A: The way this will work is that when fact-checkers rate a story as false, we significantly reduce its distribution in News Feed — dropping future views on average by more than 80 percent. Pages and domains that repeatedly share false news will also see their distribution reduced, and their ability to monetize and advertise removed.
We also want to empower people to decide what to read, trust, and share. When third-party fact-checkers write articles about a news story, we show them in Related Articles immediately below the story in News Feed. We also send people and Page Admins notifications if they try to share a story or have shared one in the past that has been determined to be false.
Finally, to give people more control, we encourage them to tell us when they see false news. Feedback from our community is one of the various signals that we use to identify potential hoaxes. 
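Facebook has not published its ranking mechanics, so the toy sketch below (all names and numbers invented, apart from the roughly 80 percent figure Aly cites) only illustrates the demote-rather-than-delete approach she describes: a story rated false stays on the platform, but its ranking score, and with it future views, is cut sharply, and repeat-offender pages are penalized further.

```python
# Purely illustrative toy (none of these names come from Facebook's actual
# systems): a story rated false by a fact-checker is not removed, but its
# ranking score is cut so that future views drop by roughly 80 percent, and
# pages that repeatedly share false news are demoted further.
from dataclasses import dataclass

FALSE_RATING_DEMOTION = 0.2    # keep ~20% of the score (~80% fewer views)
REPEAT_OFFENDER_DEMOTION = 0.5  # hypothetical extra penalty for repeat offenders

@dataclass
class Story:
    base_score: float          # relevance score from the normal ranking model
    rated_false: bool = False  # set when a third-party fact-checker rates it false
    page_strikes: int = 0      # times the posting page has shared false news

def ranking_score(story: Story, strike_limit: int = 3) -> float:
    """Demote rather than remove: the story stays up with reduced distribution."""
    score = story.base_score
    if story.rated_false:
        score *= FALSE_RATING_DEMOTION
    if story.page_strikes >= strike_limit:
        score *= REPEAT_OFFENDER_DEMOTION
    return score

print(ranking_score(Story(base_score=1.0, rated_false=True)))  # 0.2
```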
Q: Facebook also entered into an initiative with the UAE National Media Council to fight fake news. Is it looking at other agreements in this field regionally, especially in Saudi Arabia?
A: The partnership with the UAE National Media Council and the launch of third-party fact-checking in Arabic, in partnership with AFP MENA, are both key steps in our efforts against false news, but our work is far from done. We plan to continue expanding our efforts this year, both globally and regionally.