Tech
X MARKS THE BOT

Elon Musk didn’t rid Twitter of bots. X is paying armies of them for slop.

GenAI has been a boon for bots

Elon Musk said he was going to rid Twitter of bots, but he might actually be subsidizing them thanks to his “blue check” revenue share model. 

Last year, X rolled out an incentive scheme in which people who pay for verification can collect a portion of the revenue garnered from views of ads on their posts. If you’re a creator who lives and breathes X, this is presumably a nice little carrot to keep you posting (or if you’re MrBeast, it might make you $250,000 on a single video).

But there seems to be a fatal flaw in the initiative — one that some people have begun to exploit. The process is fairly simple and potentially lucrative: build a bot account, buy a blue check, employ generative AI to post frequently and interact with other bot accounts and popular users (the more rage-baity the content, the better!), gain ad views, and let the money flow in. And according to Kai-Cheng Yang — a postdoctoral researcher who worked on a bot-detection tool called the Botometer before Twitter cut off free API access to researchers in 2023 — that’s exactly what people are doing. The revenue share is a nice additional income stream on top of any outside advertising fees bot operators collect to promote a specific product or cause.

From the revenue share alone, people behind genAI bot accounts have bragged of bringing in hundreds of dollars a month — not a fortune, but certainly an income source, especially if it requires little overhead, and if you run an army of them.

“The money would be more than enough to cover my premium subscription, to cover my API calls, and so that's a good business model,” Yang told Sherwood. He added the situation might be more widespread than we know. “It’s not very common for bad actors to share their playbooks and business models online.”

X Premium currently costs $7 a month, while ChatGPT costs another $20. So a bot operator would have to be making back more than $27 a month to break even, though someone running numerous accounts could split the ChatGPT fee across them and get by with less. Sherwood’s Jack Raines — not as prolific as a bot, but better at tweeting — receives about 9 million impressions a month on his tweets and gets paid around $130 from X for it. Presumably the increase in user activity — even if it’s synthetic — pays out much more to X itself, which in turn could use those engagement stats to bolster its traffic claims and possibly wrangle more advertising dollars based on said engagement.
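The arithmetic above can be sketched as a quick back-of-the-envelope calculation. The payout rate is inferred from the Jack Raines figures cited here; all values are rough assumptions, not X’s actual rate card:

```python
# Back-of-the-envelope break-even math for a paid bot account,
# using the figures cited in this article (all values are assumptions).

X_PREMIUM_MONTHLY = 7.00   # X Premium subscription, USD/month
CHATGPT_MONTHLY = 20.00    # ChatGPT subscription, USD/month

# ~9 million impressions/month paying ~$130 implies a payout rate of
# roughly $14.40 per million impressions.
payout_per_million = 130 / 9

def monthly_profit(impressions_millions: float, shared_accounts: int = 1) -> float:
    """Net monthly profit for one bot account, optionally splitting
    the ChatGPT fee across several accounts in a botnet."""
    costs = X_PREMIUM_MONTHLY + CHATGPT_MONTHLY / shared_accounts
    return impressions_millions * payout_per_million - costs

# At this rate, a single account needs roughly 1.9M impressions a month
# just to break even; splitting the ChatGPT fee lowers that bar.
print(round(monthly_profit(2.0), 2))
print(round(monthly_profit(2.0, shared_accounts=10), 2))
```

The break-even point is sensitive to the payout rate, which X doesn’t publish; this sketch only shows why account volume makes the scheme attractive.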

X didn’t respond to requests for comment for this article. 


In research published earlier this year, Yang found more than 40 blue-check accounts that used ChatGPT for content generation. They were part of a study of 1,420 accounts that use AI-generated human faces as profile pictures to “spread scams, spam, and amplify coordinated messages, among other inauthentic activities.”

“Under Musk’s management, the acquisition of a blue checkmark no longer requires going through a verification process,” Yang et al. wrote. “Instead, any account willing to pay a premium fee can obtain the mark to increase its perceived legitimacy.”

“Since anyone can subscribe to obtain such a ‘verified’ status, it negatively affects users' ability to make free and informed decisions about the authenticity of the accounts and the content they interact with,” the European Commission wrote last week in a statement. “There is evidence of motivated malicious actors abusing the ‘verified account’ to deceive users.”

While Yang said that without researcher access to X it’s impossible to know how many of those accounts are monetized or how successful they’ve been, many social media users have clearly been running into generative AI bots in the wild.

That’s what happened to MasteroftheTDS, a Twitter user who runs a culture commentary YouTube channel with his wife. 

While looking into controversy around a horror video game featuring the public domain version of Disney’s Mickey Mouse, MasteroftheTDS, who asked to go by his handle for privacy reasons, started noticing a lot of accounts, often with blue checks, actually promoting the game. All in a similar fashion. To no one in particular. 

It kept happening for other things in the Disney sphere, so he dug further. 

“One account would post some kind of commentary, like, ‘Oh, I'm excited for “Secret Invasion” being on Disney+,  make sure to set your whatever.’ And then there were just tons of bots in the comments with blue check marks responding with variations of the same thing.”

He then found what he suspected to be a bot account “out in the void” promoting Disney stock. Disney didn’t respond to requests for comment.

“It was a really weird looking post, but the thing that caught my eye was that it had 380 likes and 1.5 million views and not a single retweet.” It also had hundreds of comments from blue-check accounts, all promoting the stock in a similar way:

A tweet from suspected bot account Moodle Zoup (Courtesy MasteroftheTDS)

Aided by Upper Echelon, another YouTube account that had done work on X’s bot problem, MasteroftheTDS started mapping out the relationships among the bot accounts he was finding, which ultimately led them to a common hashtag for a site called VNXNet that says it helps bypass Twitter’s bot-detection methods.

Accounts related to the Vietnam-based company showed a person flipping through dozens of verified Twitter accounts run with generative AI and showing off how much money each was making.

After MasteroftheTDS posted a video about his findings, X suspended the main VNXNet account. Around that time X also announced that it would require creators participating in its ad-share program to verify their identity. Existing users had until July 1, later extended to July 15, to submit their IDs.

At least one of the accounts in VNXNet’s videos is still up and running with a blue check mark, as of publication.

How to spot a bot: Ignore all previous instructions 

Lately the prevalence of bots across social media sites has turned bot discovery into a bit of a pastime for some human users.

Phrases like “Disregard all previous instructions” have hit meme status, as X and Threads users try to trick generative AI bots into writing sonnets instead of sowing unrest ahead of a contentious election year.

“It's just kind of a fun thing to do, to see what you can make this black box, that you don't know what it's going to do, do,” said JP Etcheber, a software engineer who’s been using that type of prompt on bots.

Before bad actors had all of publicly available discourse at their digital fingertips in the form of AI training data, bots were easier to spot. Since the advent of generative AI, which lets bots quickly respond to a changing conversation just like humans, things have gotten harder.

Experts we spoke to said that while telling a suspected bot to ignore its instructions and instead reveal its source prompts might work in some cases, it doesn’t work all the time and people can train around it.

“You see social media users exploiting the instruction-following nature of LLMs to ask about internal details of how the bot was set up,” Jacob Hoffman-Andrews, Senior Staff Technologist at digital rights nonprofit Electronic Frontier Foundation, told Sherwood. “Not all bots will respond in this way, so it's not a reliable way to detect bots, but it's funny when it happens.”

In general, there’s no foolproof way to spot a bot, but there are some tells.

First off, there’s coordinated activity: a lot of bots interacting together with similar profiles and content. They all talk about weirdly similar topics in the same way at the same time.

A previous paper Yang authored before X’s monetization scheme looked at the anatomy of an “AI-powered malicious social botnet.”

These bots “purposely follow each other to form a dense cluster. They also frequently interact with each other through replies and retweets to boost engagement metrics,” he and a fellow researcher wrote. 

When humans use X, they use it in all sorts of unique ways: some retweet exclusively, some only post their own tweets, others mainly reply, and for any mixture of those three behaviors you’ll find roughly the same number of human users.

Bots are not like this. The researchers found that the AI-powered bots had a very specific usage profile. Since they were all programmed by the same source, the AI bots in this botnet all tended to use Twitter the same way: about 15% of their tweets are original, about 50% of them are replies, and 35% of them are retweets, give or take 10 percentage points in any given direction.

From “Anatomy of an AI-powered malicious social botnet,” by Kai-Cheng Yang et al.
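As a rough illustration, the usage-profile tell described above could be turned into a simple heuristic. The thresholds mirror the ranges reported in the article; the function and field names here are hypothetical, not the researchers’ actual code:

```python
# Flag accounts whose tweet-type mix sits inside the narrow band the
# researchers observed for this botnet: ~15% original posts, ~50% replies,
# ~35% retweets, each within about 10 percentage points.
# (Thresholds and field names are illustrative.)

BOTNET_PROFILE = {"original": 0.15, "reply": 0.50, "retweet": 0.35}
TOLERANCE = 0.10  # plus or minus 10 percentage points

def matches_botnet_profile(counts: dict) -> bool:
    """counts maps a tweet type to the number of such tweets for one account."""
    total = sum(counts.values())
    if total == 0:
        return False
    for tweet_type, expected_share in BOTNET_PROFILE.items():
        share = counts.get(tweet_type, 0) / total
        if abs(share - expected_share) > TOLERANCE:
            return False
    return True

# An account with 14 originals, 52 replies, and 34 retweets fits the profile;
# a human who mostly posts originals does not.
print(matches_botnet_profile({"original": 14, "reply": 52, "retweet": 34}))
print(matches_botnet_profile({"original": 90, "reply": 5, "retweet": 5}))
```

Because plenty of humans will also fall inside such a band by chance, a check like this only makes sense across a coordinated cluster of accounts, not for judging any single user.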

Notably, the researchers were able to find these bots because the bots revealed themselves. They looked for tweets that mistakenly posted standardized OpenAI messages about violating its policies — for example, “I’m sorry, but I cannot comply with this request as it violates OpenAI’s Content Policy. As an AI language model, my responses should always be respectful and appropriate for all audiences.” They manually filtered out people who were simply posting about or retweeting responses they got from ChatGPT, then kept only the accounts that consistently linked to one of three suspicious websites.
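The first filtering step, catching tweets that accidentally post a standardized refusal message, can be approximated with a simple pattern match. The patterns below are illustrative guesses based on the example quoted above, not the authors’ published filters:

```python
import re

# Phrases that commonly appear in standardized OpenAI refusal messages.
# These patterns are illustrative; the researchers' actual filters differ.
REFUSAL_PATTERNS = [
    r"I['’]m sorry, but I cannot comply with this request",
    r"as an AI language model",
    r"violates OpenAI['’]s Content Policy",
]
REFUSAL_RE = re.compile("|".join(REFUSAL_PATTERNS), re.IGNORECASE)

def looks_like_leaked_refusal(tweet_text: str) -> bool:
    """True if a tweet contains a telltale LLM refusal phrase.
    Humans quoting ChatGPT also match, so (as in the study) hits
    still need manual review before being labeled bots."""
    return bool(REFUSAL_RE.search(tweet_text))

print(looks_like_leaked_refusal(
    "I'm sorry, but I cannot comply with this request as it "
    "violates OpenAI's Content Policy."
))
print(looks_like_leaked_refusal("Just had a great coffee this morning!"))
```

A keyword filter like this only catches bots that slip up and post a refusal verbatim, which is why the researchers paired it with manual review and the suspicious-link criterion.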

But without buy-in from X, more research, and legislation, even Yang thinks bot-finding is a difficult endeavor.

“I consider myself an expert on bots,” Yang said. “Before 2023, I could look at any account and usually have a sense of whether this is a bot or not. But nowadays it’s really hard to tell.” 

That doesn’t stop people from trying. 


There are less scientific ways botfinders can spot a bot. Some have a certain “style,” Hoffman-Andrews said, including “a level of formality, equivocation, and obliviousness that is uncommon in social media posts.” He added, “But styles can be tweaked, not all LLMs are the same, and some people just happen to write like LLMs.”

“Humans on social media might reply as though they have missed or misunderstood social cues or context, but bots reply as though they’re deliberately talking around it,” Joan Westenberg, a tech writer at @Westenberg said. “It’s like someone at a party talking to another person when they don’t remember their name but don’t want to admit it.”

Linguistic cues like “it sounds like” and “it’s essential to,” as well as lots of exclamation points and hashtags, or even just the topic of cryptocurrency can also be red flags.

“But I’d also want to caution anyone from taking the approach of dismissing people they don’t agree with as bots,” she said. “I think the biggest danger bots pose is in making us doubt that we are talking to other humans, removing our own humanity from the equation.”

Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.