At least 1.4 million people on Twitter interacted with Russian propaganda during the 2016 presidential election -- double the number initially identified, according to a company blog post. Approximately 150 million Facebook users saw inflammatory posts created by the Internet Research Agency, according to a report from Engadget. Additionally, Facebook said it found approximately $50,000 in "potentially politically related ad spending" spread across approximately 2,200 ads. The Internet Research Agency's potential involvement with the fraudulent Facebook ad spending was first reported in September 2017 by both The New York Times and The Washington Post. The Internet Research Agency also reportedly has photo and video departments. Lyudmila Savchuk, a former Internet Research Agency worker, said her experience there corresponds with the allegations made by Mueller and his team.
Russian bots aren’t pro-Republican or pro-Democrat: they’re simply anti-American.
That’s the conclusion many are reaching in the wake of the indictments recently handed down by Special Counsel Robert Mueller against 13 Russian nationals and three Russian entities who allegedly carried out a sophisticated plot to wage “information warfare” against the United States.
Marat Mindiyarov, a former commenter at the Internet Research Agency, says the organization’s Facebook department hired people with excellent English skills to sway U.S. public opinion through an elaborate social media campaign.
His own experience at the agency makes him trust the U.S. indictment, Mindiyarov told The Associated Press. “I believe that that’s how it was and that it was them,” he said.
While much of the attention has focused on the 2016 U.S. presidential election and the role played in it by the Internet Research Agency, one of the defendants named in the indictment, Russian social media bots have also been detected sowing discord in the debate over the Parkland, Fla., shooting. Russian bots have reportedly been taking both sides in the debate.
Hamilton 68, a website built by the Alliance for Securing Democracy, has tracked Twitter activity from accounts purportedly involved in Russian disinformation campaigns, according to a Wired report. The accounts inserted themselves into hashtags surrounding the Parkland shooting and mentioned topics such as Parkland, gun control, shooter Nikolas Cruz and the NRA.
Other websites, such as Botcheck.me, have also seen an increase in Russian bot activity following the Parkland shooting, with accounts using phrases such as “school shooting” and “gun control” and hashtags such as #guncontrol and #guncontrolnow.
“We worked in a group of three where one played the part of a scoundrel, the other one was a hero, and the third one kept a neutral position.”
In an email to Fox News, Ash Bhat, co-creator of Botcheck.me, said the project’s analysis “found that a majority of tweets tagged with #mueller over the weekend [Fri. and Sat.] came from automated accounts.” For comparison purposes, the site also tracked #blackpanther (a hashtag surrounding a superhero movie) and “we found that only a single digit percentage were from these automated accounts.”
Bhat added that Botcheck.me uses machine learning to build a statistical model using inputs like date, frequency of tweets, bio, follower counts and other stats to determine whether the account is a bot or a person.
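The kind of feature-based scoring Bhat describes can be sketched in a few lines of code. The example below is a hypothetical illustration only: the feature names, weights and bias are invented for the sketch, and Botcheck.me's actual model would be trained on labeled account data rather than hand-set.

```python
import math

# Hand-picked illustrative weights -- NOT Botcheck.me's real model,
# which is learned from labeled examples of bots and humans.
WEIGHTS = {
    "tweets_per_day": 0.08,        # very high posting frequency suggests automation
    "followers_per_friend": -0.5,  # real people tend to have balanced ratios
    "has_bio": -1.2,               # a filled-in bio is weak evidence of a person
    "account_age_days": -0.002,    # older accounts are less likely to be bots
}
BIAS = -1.0

def extract_features(account):
    """Turn raw account stats (a dict) into the model's numeric features."""
    age = max(account["account_age_days"], 1)
    return {
        "tweets_per_day": account["tweet_count"] / age,
        "followers_per_friend": account["follower_count"] / max(account["friend_count"], 1),
        "has_bio": 1.0 if account["bio"] else 0.0,
        "account_age_days": float(age),
    }

def bot_probability(account):
    """Logistic score in [0, 1]: higher means more bot-like."""
    feats = extract_features(account)
    z = BIAS + sum(WEIGHTS[name] * value for name, value in feats.items())
    return 1.0 / (1.0 + math.exp(-z))
```

A new account tweeting hundreds of times a day with no bio would score near 1, while an old, moderately active account with a bio would score near 0; a production system would learn the weights from training data instead of asserting them.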
The site has found that bots will promote certain hashtags over others, including #memonday, which relates to the recently released Devin Nunes memo. “We theorize that this might be because it lets these networks frame the public debate around the events. For example, debating gun violence vs. debating mental illness,” Bhat told Fox News.
He also noted that @realdonaldtrump, @potus and @foxnews (the main Twitter handle for this website) are among the most tweeted-at accounts. @Realdonaldtrump and @potus are “usually in the top 3,” he said, while @foxnews moves around often in the top 10. Bhat added that CNN’s Twitter account also “tends to be in the top 10.”
“The most important principle of the work is to have an account like a real person. They create real characters, choosing a gender, a name, a place of living and an occupation. Therefore, it’s hard to tell that the account was made for the propaganda.”
Bigger than the election and the fight against it
The Internet Research Agency has also allegedly purchased online advertisements and created content for other contentious topics beyond the 2016 U.S. presidential election.
It reportedly used doctored videos to spread false reports about a supposed Islamic State attack on a chemical plant in Louisiana and a purported case of Ebola in the state of Georgia. Seeking to sow division and mistrust ahead of the U.S. election, the agency apparently whipped up a fake video of an African-American woman being shot dead by a white police officer in Atlanta.
The two primary social media companies subject to the influx of bot accounts and propaganda, Twitter and Facebook, are attempting to fight back, with varying degrees of success.
In September, the Jack Dorsey-led Twitter gave an update on how it is attempting to stop bots and misinformation on its platform. It said that it had built systems to identify suspicious log-in attempts, catching about 450,000 suspicious logins per day, using machine learning and automated processes. Thanks to the processes put in place, it saw a 64 percent “year-over-year increase in suspicious logins we’re able to detect,” but noted significantly more work needs to be done.
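One simple way systems like this flag suspicious logins is by watching for bursts of attempts from many distinct IP addresses in a short window. The sketch below is a minimal, assumed illustration of that idea; the class name, thresholds and window size are invented for the example and are not Twitter's actual criteria, which have not been published.

```python
from collections import defaultdict, deque

# Illustrative thresholds -- Twitter's real detection rules are not public.
MAX_ATTEMPTS = 5      # login attempts allowed per account per window
MAX_DISTINCT_IPS = 3  # distinct source IPs allowed per window
WINDOW_SECONDS = 60   # sliding-window length

class LoginMonitor:
    """Flags accounts seeing bursts of login attempts from many IPs."""

    def __init__(self):
        # account name -> deque of (timestamp, ip) pairs within the window
        self.attempts = defaultdict(deque)

    def record(self, account, timestamp, ip):
        """Record one login attempt; return True if it looks suspicious."""
        window = self.attempts[account]
        window.append((timestamp, ip))
        # Evict attempts that have aged out of the sliding window.
        while window and timestamp - window[0][0] > WINDOW_SECONDS:
            window.popleft()
        distinct_ips = {addr for _, addr in window}
        return len(window) > MAX_ATTEMPTS or len(distinct_ips) > MAX_DISTINCT_IPS
```

A handful of logins from one IP passes quietly, while a rapid burst from rotating addresses trips the flag; a real system at Twitter's scale would combine many more signals, including the machine-learned ones the company describes.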
Bhat said that it is “impossible to say whether an account is ‘Russian’ with the data publicly available,” adding that Twitter has access to IP logs and other information that has not been released publicly and could be used to determine an account’s origin.
Data has not yet been released…