Ng Han Guan, Associated Press
Computer screens display the fake tweets that online users could self generate at a Chinese website in Beijing, China, Thursday, Jan. 26, 2017. Online users were flocking to a new Chinese website that lets them generate images of fake tweets that look just like those sent by President Donald Trump’s distinctive personal Twitter account, replete with his avatar and a real-time timestamp.

SALT LAKE CITY — Americans say the spread of made-up news is a bigger problem for the country than terrorism, illegal immigration, racism or sexism, according to a new Pew Research Center poll.

Now, Twitter is turning to artificial intelligence to help solve the problem.

The social media company bought Fabula AI, a London-based startup that uses artificial intelligence to automatically detect fake news, Twitter's chief technology officer, Parag Agrawal, announced in a June 3 blog post. The financial terms of the deal between Twitter and Fabula have not been disclosed.

"This strategic investment ... will be a key driver as we work to help people feel safe on Twitter and help them see relevant information," Agrawal wrote.


The acquisition comes at a time when fake news and disinformation are frequently being deployed as weapons in political warfare. The U.S. government concluded that Russians led a systematic campaign to interfere with the 2016 presidential election using fake social media posts, advertising and videos that seized on divisive issues like race relations, immigration and gun rights. More recently, an edited video made House Speaker Nancy Pelosi appear impaired or drunk, a doctored image suggested Sen. Elizabeth Warren had a racist doll decorating her kitchen cabinet, and false stories have circulated claiming Sen. Kamala Harris had an affair with a married man and accusing South Bend, Indiana, Mayor Pete Buttigieg of sexual assault.

“The impact of made-up news goes beyond exposure to it and confusion about what is factual,” Amy Mitchell, director of journalism research at Pew Research Center, told the Guardian. “Americans see it influencing the core functions of our democratic system.”

But identifying fake news can be tricky, even for humans. The term has been used to describe everything from one-sided reporting that might be misleading to completely fabricated information spread intentionally for malicious reasons. Fake news could be an edited image or a real photo shared in the wrong context. It could be an article with one inaccurate sentence or an article with no truths at all.

Two years ago, Facebook founder Mark Zuckerberg wrote an open letter that said ending the spread of "fake news" would take "many years" because it would require the development of artificial intelligence "that can read and understand news."


Fabula, however, has taken a different approach, one that Agrawal calls "novel." The company's algorithms analyze how fake news spreads rather than the content itself.

"There is ... a mounting amount of evidence that shows that fake news and real news spread differently,” Michael Bronstein, a Fabula co-founder and professor of computing at Imperial College London, told TechCrunch. He pointed to a 2018 study by researchers at the Massachusetts Institute of Technology that found false news spreads "farther, faster, deeper and more broadly" than true news.

Another Fabula co-founder, Damon Mannion, defines fake news as "stories published on social media containing intentionally false information." To check for accuracy, Fabula relied on data from third-party fact-checkers like Snopes and PolitiFact.

Using machine learning — or computer programs that don't rely on explicit instructions but rather make inferences by analyzing patterns in data — Fabula can detect 93 percent of fake news within hours of dissemination, Bronstein told TechCrunch.

And in contrast to traditional machine learning techniques, Fabula's patented technology works on datasets that are extremely large and complex — think millions of tweets, likes and retweets every single day.
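Fabula's patented methods are not public. As a rough illustration of the propagation-based idea, a toy sketch might extract spread features such as depth, breadth and speed from a retweet cascade and flag unusually deep, fast ones; the function name and data layout below are invented for the example and are not Fabula's technique.

```python
# Illustrative only: a toy propagation-based signal. It looks at how a
# story spreads, not at what it says.

from collections import defaultdict

def cascade_features(retweets):
    """retweets: list of (child_user, parent_user, minutes_after_post).
    Returns crude spread features: depth, breadth and speed."""
    children = defaultdict(list)
    for child, parent, _ in retweets:
        children[parent].append(child)

    def depth(node):
        kids = children.get(node, [])
        return 1 + max((depth(k) for k in kids), default=0)

    n = len(retweets)
    duration = max((t for _, _, t in retweets), default=1) or 1
    return {
        "depth": depth("root"),   # how many hops the story travels
        "breadth": n,             # total reshares
        "speed": n / duration,    # reshares per minute
    }

# MIT's 2018 study found false news spreads farther, faster and deeper,
# so a deep, rapid cascade is a warning sign worth feeding a classifier.
cascade = [("a", "root", 1), ("b", "a", 2), ("c", "b", 3), ("d", "root", 4)]
feats = cascade_features(cascade)
print(feats)
```

In practice, Fabula is reported to apply machine learning directly to the propagation graph at scale, rather than to hand-picked features like these.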

"Our initial focus when Fabula joins the Twitter team will be to improve the health of the conversation, with expanding applications to stop spam and abuse and other strategic priorities in the future," Agrawal wrote.

Politics

Pew finds that Americans think differently about fake news depending on whether they identify as a Republican or Democrat. Sixty-two percent of Republican-leaning individuals say made-up news is a "very big problem," compared with 40 percent of Democratic-leaning people.

Atlantic writer David A. Graham postulates the gap may be due to differing definitions of "made-up news," which Pew defines as "news intended to mislead the public."

Richard Drew, Associated Press
The Twitter logo appears at the post where it trades, on the floor of the New York Stock Exchange, Friday, June 17, 2016.

"Republicans may well be responding not to out-and-out fakery, but to bias — real or perceived — in news coverage," Graham wrote. He attributes conservative ideas about journalism to a "decades-long campaign against the credibility of the mainstream press."

Indeed, 58 percent of Republicans said journalists are responsible for creating made-up news, compared to 20 percent of Democrats. However, people from both parties are more likely to say activists and politicians are to blame.

Some Republicans have criticized Twitter for removing the accounts of controversial right-wing figures including Alex Jones, Paul Nehlen and Milo Yiannopoulos. Twitter said these users violated its terms of use, but some have complained the accounts were banned because the social media site is prejudiced against conservatives.

Last month, the White House shared a 16-part survey on Twitter that asked participants if they've had their social media accounts censored due to their political views.

“Social media platforms should advance freedom of speech. Yet too many Americans have seen their accounts suspended, banned, or fraudulently reported for unclear ‘violations’ of user policies,” read the White House tweet.


Cheryl K. Chumley, online opinion editor for the Washington Times, said Twitter's acquisition of U.K.-based Fabula is further evidence of its liberal leanings.

"(Artificial intelligence) that comes from a global network of scientists who more likely than not lean left on the political scales, and who more likely than not only bounce ideas off their similarly leftist leaning colleagues and acquaintances, doesn’t bode well for free-loving Americans," Chumley wrote.

"This latest partnership is sure to bring a European-tied hammer on rhetoric deemed fake. Which in the eyes and minds of most in the world of science and technology, means — conservative," she added.

Twitter has long defended its political neutrality. However, it has struggled to determine what to do about white nationalists and supremacists, many of whom identify as conservative.

Last month, Twitter announced it would conduct research into how white supremacist groups use the platform, in part to decide whether they should be allowed to remain on the site. Does communicating on Twitter expose people to alternate viewpoints and make them more moderate, or does it popularize dangerous ideology?

In an interview with Vice’s Motherboard, Twitter's head of trust and safety, Vijaya Gadde, said, "Counter-speech and conversation are a force for good, and they can act as a basis for de-radicalization, and we've seen that happen on other platforms, anecdotally.”

Self-policing

While many hope social media companies will be able to solve the problem of fake news on their own with new technology, others say the basic structure of social media sites encourages the spread of disinformation and that further intervention is necessary.

"Our political conversations are happening on infrastructure — Facebook, YouTube, Twitter — built for viral advertising," Renee Diresta wrote for Wired. "The velocity of social sharing, the power of recommendation algorithms, the scale of social networks, and the accessibility of media manipulation technology has created an environment where pseudo events, half-truths, and outright fabrications thrive."

Misinformation now appears in the forms of fake polling, fake fundraisers and fake think tanks, according to the Guardian.

J. David Ake, Associated Press
This April 3, 2017, file photo shows U.S. President Donald Trump's Twitter profile on a computer screen in Washington. President Donald Trump claimed that Twitter removed “many people” from his account. But he appears to have actually gained followers since the beginning of October. According to the Internet Archive’s Wayback Machine, which collects snapshots of web pages over time, Trump had 54.8 million followers on Oct. 1. He had 55.3 million as of Friday, Oct. 26, 2018.

Twitter estimated that in the first quarter of 2019, fake or spam accounts represented fewer than 5 percent of its active user base. Facebook said it removed a record 2.2 billion fake accounts in its first quarter of 2019.

“The platform companies — Facebook, Twitter, Google — are alert to the fact that there’s a problem, and they have taken firm actions of self-policing,” Virginia Sen. Mark Warner, vice chairman of the Senate Intelligence Committee, told the Guardian. “But from a guardrails or rules-of-the-road standpoint, remarkably we’ve done nothing.”

Sarah Miller, the deputy director of Open Markets Institute, a nonprofit that advocates against corporate monopolies, told the Guardian that consumers cannot expect social media companies to self-regulate because their businesses are based on surveillance and data mining.

“It’s the fundamental business model of Facebook and Google to promote content that is sensationalistic and engaging, whether or not it is responsible content,” she said.

More than half of Americans have changed the way they use social media because of fake news, unfollowing certain people or news organizations, according to Pew. But while much of the public discussion around fake news has focused on social media, survey results showed those who get their news through social media encounter made-up news about as often as those who prefer other news pathways.


Warner, who is working on legislation to address these issues, told the Guardian there is bipartisan agreement that social media platforms must be reined in but a lack of consensus on how far to go.

Solutions could include regulating digital campaign advertisements sold by online companies, using identity validation and geolocation to more clearly distinguish bots from real people, and increasing transparency around how consumer data is used, the Guardian reported.

“I don’t think anybody’s fully figured this out,” Warner said.