CHAMPAIGN, Ill. — Political disinformation campaigns on social media threaten to sway outcomes, from U.S. elections to Hong Kong protests, yet they are often hard to detect.
A new study, however, has pulled back the curtain on one type of campaign called “astroturfing,” which fakes the appearance of organic grassroots participation while being secretly orchestrated and funded.
The study suggests that the key to uncovering such accounts lies not in finding automated “bots” but in specific traces of human coordination and behavior, says JungHwan Yang, a University of Illinois communication professor who is part of a global team of researchers on the project.
The starting point for their research was a court case in South Korea that identified more than 1,000 Twitter accounts used in an astroturf campaign to boost one candidate in the country’s 2012 presidential election. The campaign was run by South Korea’s National Intelligence Service, comparable to the U.S. Central Intelligence Agency.
This was possibly the first time such fake accounts had been firmly identified, and that was a gift to research on the topic, Yang said. “Many people are interested in this kind of research, but they don’t have the ground truth. They can’t tell which accounts are run by paid agents and which are not. In this case we had the data.”
By comparing the known astroturf accounts with the accounts of average users, of users who actively discussed politics on Twitter and of social influencers, the researchers uncovered patterns that helped them identify other astroturf accounts, Yang said.
Their findings were published online in the journal Political Communication, and the researchers discuss their work in an article for The Washington Post’s Monkey Cage blog on political science.
In subsequent research, they are finding similar results in analyses of other political disinformation campaigns, past and present, including Russia’s in the 2016 U.S. election and China’s in the recent Hong Kong protests.
Automated bots have gotten a lot of media coverage related to political disinformation campaigns, Yang said, but the researchers’ work in South Korea and elsewhere has found that only a small fraction of accounts appeared to be bots. That also matches past reports from “troll farm” insiders, he said.
Bots are easy and cheap to use, can spread a lot of messages and don’t need sleep, but many people reading them can tell they are automated, much as they recognize spam phone calls, Yang said. “If real people are behind the accounts and manage them manually, however, it’s really hard to detect whether the messages are genuine.”
With an identified set of astroturf accounts and system-level analysis of the data, the researchers could spot evidence of central coordination among accounts and messages. “When a person or group manages similar accounts in a very similar way for a certain goal, their behavioral activity leaves a trace,” Yang said.
Human nature leaves other traces too, he said. The organizer or “principal” in an activity usually wants things done a certain way, for instance, but the worker or “agent” seeks the path of least resistance while still meeting the organizer’s goals. Social scientists call it the “principal-agent problem.”
In an astroturf campaign on Twitter, that meant agents would often tweet or retweet the same message from multiple accounts they managed within a very short timeframe, or would frequently retweet messages from related accounts.
“The pattern we found in our data is that people try to reduce the amount of work to achieve their goal,” Yang said. “The fact is that it’s really hard to get a retweet (from an unrelated account), and it’s really hard to become a social influencer on a social media platform,” so workers looked for shortcuts, and researchers could see the traces.
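To illustrate the kind of trace Yang describes, the short Python sketch below (not the study’s actual pipeline) flags pairs of accounts that post identical text within a short window. The account names, messages and 60-second threshold are illustrative assumptions.

    # Minimal sketch of the co-tweeting trace described above: flag account
    # pairs that post identical text within a short time window. Toy data and
    # the 60-second window are assumptions, not the study's real parameters.
    from collections import defaultdict
    from datetime import datetime, timedelta
    from itertools import combinations

    tweets = [  # (account, text, timestamp)
        ("acct_a", "Candidate X will fix the economy", datetime(2012, 11, 1, 9, 0, 5)),
        ("acct_b", "Candidate X will fix the economy", datetime(2012, 11, 1, 9, 0, 40)),
        ("acct_c", "Lovely weather in Seoul today",    datetime(2012, 11, 1, 9, 2, 0)),
    ]

    WINDOW = timedelta(seconds=60)  # "very short timeframe" threshold (assumed)

    # Group tweets by exact text, then look for pairs posted close together.
    by_text = defaultdict(list)
    for account, text, ts in tweets:
        by_text[text].append((account, ts))

    co_tweets = defaultdict(int)  # (account pair) -> number of coordinated posts
    for text, posts in by_text.items():
        for (a1, t1), (a2, t2) in combinations(sorted(posts, key=lambda p: p[1]), 2):
            if a1 != a2 and t2 - t1 <= WINDOW:
                co_tweets[tuple(sorted((a1, a2)))] += 1

    print(dict(co_tweets))  # {('acct_a', 'acct_b'): 1}

Pairs that co-tweet repeatedly in this way form the edges of a coordination network, and densely connected clusters of such accounts are the kind of system-level pattern the researchers describe.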
The study also found that despite the people and resources committed to the South Korean astroturf campaign, the results were limited. “What we found is that even though they managed more than 1,000 accounts and tried to coordinate among themselves, they failed to get a response from the public. The retweet counts and the numbers of mentions were more similar to those of average users than of influencers,” Yang said.
The study’s results may be of limited value for individuals on social media trying to judge a single account’s authenticity, Yang said. One strategy is to look at the accounts that are following or retweeting a suspicious account, or examine whether those accounts are tweeting the exact same content at the same time, he said. But it’s still often difficult.
The value of their research is mostly at the system level, Yang said. He and his colleagues hope that social media companies, capable of analyzing the behavior of numerous accounts, can implement algorithms that block suspected astroturf accounts and campaigns.
###
Other co-authors on the paper were Franziska B. Keller, Hong Kong University of Science and Technology; David Schoch, University of Manchester; and Sebastian Stier, GESIS-Leibniz Institute for the Social Sciences in Cologne, Germany.
Media Contact
Craig Chamberlain
cdchambe@illinois.edu
217-333-2894
Original Source
http://news.
Related Journal Article
http://dx.