Over the past 11 months, someone has created thousands of fake, automated Twitter accounts – possibly hundreds of thousands of them – to offer a stream of praise for Donald Trump.
In addition to posting adoring words about the former president, the fake accounts have ridiculed Trump’s critics from both parties and attacked Nikki Haley, the former South Carolina governor and UN ambassador who is challenging her former boss for the 2024 Republican presidential nomination.
When it came to Ron DeSantis, the bots aggressively suggested the Florida governor couldn’t beat Trump, but would make a great running mate.
As Republican voters size up their candidates for 2024, whoever created the bot network is looking to put a thumb on the scale, using online manipulation techniques pioneered by the Kremlin to sway the digital platform’s conversation about candidates while leveraging Twitter’s algorithms to maximize their reach.
The vast bot network was discovered by researchers from Cyabra, an Israeli tech company that shared its findings with The Associated Press. Although the identity of the people behind the network of fake accounts is unknown, Cyabra analysts have determined that it was likely created in the United States.
To identify a bot, researchers look for patterns in an account’s profile, its list of followers, and the content it posts. Human users typically post on a variety of topics, with a mix of original and reposted material, but bots often post repetitive content on the same topics.
That was the case with many of the bots identified by Cyabra.
“One account will say, ‘Biden is trying to take our guns; Trump was the best,’ and another will say, ‘January 6 was a lie and Trump was innocent,’” said Jules Gross, the Cyabra engineer who discovered the network. “These voices are not people. For the sake of democracy, I want people to know what’s going on.”
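The repetitive-posting pattern the researchers describe can be sketched as a simple heuristic. The sketch below is purely illustrative – the function names, the word-overlap similarity measure, and the threshold are assumptions for demonstration, not Cyabra’s actual method – but it shows the basic idea: accounts whose posts are unusually similar to one another look more like bots than like people.

```python
# Illustrative repetition heuristic (NOT Cyabra's real method):
# flag accounts whose posts are unusually similar to one another.
from itertools import combinations


def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two posts (0 = disjoint, 1 = identical)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)


def repetition_score(posts: list[str]) -> float:
    """Average pairwise similarity across an account's posts."""
    pairs = list(combinations(posts, 2))
    if not pairs:
        return 0.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)


def looks_automated(posts: list[str], threshold: float = 0.5) -> bool:
    # Human users tend to post varied content on many topics;
    # a high average similarity suggests scripted, repetitive posting.
    return repetition_score(posts) > threshold


bot_like = ["Trump was the best president", "Trump was the best leader",
            "Trump was simply the best"]
human_like = ["Game night was fun", "New recipe turned out great",
              "Traffic on I-35 is terrible today"]
print(looks_automated(bot_like), looks_automated(human_like))  # True False
```

Real detection systems combine many more signals – follower graphs, posting cadence, profile metadata – but repetitiveness of content is one of the simplest to check.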
Bots, as they are commonly known, are automated fake accounts that became notorious after Russia employed them in an attempt to meddle in the 2016 election. Even as big tech companies have improved their detection of fake accounts, the network identified by Cyabra shows they remain a potent force in shaping online political discussion.
The new pro-Trump network is actually three different networks of Twitter accounts, all created in huge batches in April, October and November 2022. In total, researchers believe hundreds of thousands of accounts could be involved.
The accounts all feature a name and a personal photo of the purported account holder. Some of the accounts posted their own content, often in response to real users, while others reposted content from real users, helping to amplify it further.
“McConnell… traitor!” wrote one of the accounts, in response to an article in a conservative publication about GOP Senate Leader Mitch McConnell, one of several Republican critics of Trump targeted by the network.
One way to assess the impact of bots is to measure the percentage of posts on a given topic generated by accounts that appear to be fake. For typical online debates, that percentage is often less than 10%. Twitter itself has stated that fewer than 5% of its daily active users are fake or spam accounts.
However, when Cyabra researchers examined negative posts about specific Trump critics, they found much higher levels of inauthenticity. Nearly three-quarters of the negative posts about Haley, for example, were traced to fake accounts.
The network also helped popularize a call for DeSantis to join Trump as his vice-presidential running mate — an outcome that would serve Trump well and allow him to avoid a potentially bitter clash if DeSantis enters the race.
The same network of accounts shared overwhelmingly positive content about Trump and contributed to an overall false image of his support online, the researchers found.
“Our understanding of what mainstream Republican sentiment is for 2024 is manipulated by the prevalence of online bots,” the Cyabra researchers concluded.
The three networks were uncovered after Gross analyzed tweets about various national political figures and noticed that many of the accounts posting the content had been created on the same day. Most of the accounts remain active, though they have relatively few followers.
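The same-day-creation signal that tipped off the researchers is easy to sketch. The snippet below is a hypothetical illustration – the function name and the batch threshold are assumptions, not the researchers’ actual tooling – showing how clustering accounts by creation date makes a bulk-registered batch stand out.

```python
# Hypothetical sketch of the batch-creation signal: count how many
# accounts were created on each date and flag dates with a spike.
# The threshold here is illustrative, not a real detection setting.
from collections import Counter
from datetime import date


def batch_creation_dates(created: list[date], min_batch: int = 3) -> set[date]:
    """Return creation dates shared by suspiciously many accounts."""
    counts = Counter(created)
    return {d for d, n in counts.items() if n >= min_batch}


# Five accounts registered the same day dwarf the organic one-offs.
accounts = [date(2022, 4, 12)] * 5 + [date(2021, 7, 3), date(2020, 1, 9)]
print(batch_creation_dates(accounts))  # the April 2022 batch stands out
```

On real data the organic baseline would be thousands of scattered dates, so a batch of accounts sharing a single registration day is a strong anomaly.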
A message left with a Trump campaign spokesperson was not immediately returned.
Most bots aren’t designed to persuade people, but to amplify certain content so more people see it, according to Samuel Woolley, a professor and disinformation researcher at the University of Texas whose latest book focuses on automated propaganda.
When a human user sees a hashtag or piece of content from a bot and reposts it, they are doing the network’s work for it, while also sending a signal to Twitter’s algorithms to further boost the spread of that content.
Bots can also succeed in convincing people that a candidate or idea is more or less popular than it really is, he said. A wave of pro-Trump bots can lead people to exaggerate his overall popularity, for example.
“Bots absolutely impact the flow of information,” Woolley said. “They are built to manufacture the illusion of popularity. Repetition is the main weapon of propaganda and bots are really good at repetition. They are really good at getting information right in front of people’s eyes.”
Until recently, most bots were easily identified by their clumsy writing or by account names that included nonsense words or long strings of random numbers. As social media platforms got better at detecting these accounts, the bots got more sophisticated.
So-called cyborg accounts are one example: bots that are periodically taken over by a human user who can post original content and respond to other users in human-like ways, making them much harder to spot.
Bots may soon become much sneakier thanks to advances in artificial intelligence. New AI programs can create realistic profile pictures and posts that look much more authentic. Bots that sound like a real person and deploy deepfake video technology can challenge platforms and users in new ways, according to Katie Harbath, a fellow at the Bipartisan Policy Center and former director of public policy at Facebook.
“Platforms have gotten so much better at fighting bots since 2016,” Harbath said. “But the ones that we’re starting to see now, with AI, they can create fake people. Fake videos.”
These technological advances likely ensure that bots have a long future in American politics – as digital foot soldiers in online campaigns and as potential problems for both voters and candidates trying to defend themselves against anonymous online attacks.
“There has never been more noise online,” said Tyler Brown, a political consultant and former digital director for the Republican National Committee. “How much of it is malicious or even unintentionally unfactual? It’s easy to imagine that people can manipulate this.”