
Bot hunting is all about vibes

Christopher Bouzy is trying to stay ahead of the bots. As the person behind Bot Sentinel, a popular bot detection tool, he and his team continually update their machine learning models out of fear that they will become “stale.” The task? Sorting 3.2 million tweets from suspended accounts into two folders: “Bot” or “Not.”

To detect bots, Bot Sentinel’s models must first learn what problematic behavior is through exposure to data. By feeding the model tweets in two distinct categories (bot or not bot), Bouzy’s model can calibrate itself and, supposedly, find the very essence of what makes a tweet problematic.
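Bot Sentinel’s actual models and features are not public, but the setup Bouzy describes (labeled tweets in two piles, a model that learns to separate them) is standard supervised text classification. Below is a minimal sketch of that idea, assuming scikit-learn and a handful of hypothetical hand-labeled tweets; it is an illustration of the general technique, not Bot Sentinel’s pipeline.

```python
# Minimal sketch of supervised binary classification on labeled tweets.
# The tweets and labels below are hypothetical; real systems train on
# millions of examples and use far richer features than raw text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: tweet text paired with a "bot"/"not" label.
tweets = [
    "Click here for free followers!!!",
    "Just had a great coffee with an old friend.",
    "BREAKING: you won't believe this shocking secret",
    "Heading to the library to finish my thesis chapter.",
]
labels = ["bot", "not", "bot", "not"]

# TF-IDF features plus logistic regression: a common baseline for
# two-category text classification.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, labels)

# Score an unseen tweet; the model returns a label and class probabilities.
new_tweet = ["Win a FREE iPhone, just retweet and follow!"]
print(model.predict(new_tweet), model.predict_proba(new_tweet))
```

Whatever the specific model, the pattern is the same: the labels humans assign during training define what the system will later call a bot.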

Training data is the heart of any machine learning model. In the burgeoning field of bot detection, how bot hunters define and tag tweets determines how their systems interpret and classify bot-like behavior. According to experts, this may be more of an art than a science. “At the end of the day, it’s all about a vibe when you do the tagging,” says Bouzy. “It’s not just about the words in the tweet, the context matters.”

He is a bot, she is a bot, everyone is a bot

Before anyone can hunt bots, they have to find out what a bot is, and that answer changes depending on who you ask. The internet is full of people accusing each other of being bots over petty political disagreements. Trolls are called bots. People with no profile picture and few tweets or followers are called bots. Even among professional bot hunters, the answers differ.

Bot Sentinel is trained to identify what Bouzy calls “problematic accounts,” not just automated accounts. Filippo Menczer, a professor of computer science and informatics at Indiana University, says the tool he helped develop, Botometer, defines bots as accounts that are at least partially controlled by software. Kathleen Carley, a professor of computer science at Carnegie Mellon University’s Institute for Software Research, has helped develop two bot detection tools: BotHunter and BotBuster. Carley defines a bot as “an account that is run using fully automated software,” a definition that aligns with Twitter’s own. “A bot is an automated account, nothing more or less,” the company wrote in a May 2020 blog post about platform manipulation.

Just as the definitions differ, the results these tools produce do not always align. An account flagged as a bot by Botometer, for example, might come back as perfectly human on Bot Sentinel, and vice versa.

Some of this is by design. Unlike Botometer, which aims to identify automated or partially automated accounts, Bot Sentinel looks for accounts that engage in toxic trolling. According to Bouzy, you know these accounts when you see them. They can be automated or human-controlled, and they engage in harassment or disinformation and violate Twitter’s terms of service. “Just the worst of the worst,” says Bouzy.

Botometer is maintained by Kaicheng Yang, a doctoral candidate in informatics at Indiana University’s Observatory on Social Media, who created the tool with Menczer. The tool also uses machine learning to classify bots, but when Yang is training his models, he is not necessarily looking for harassment or terms of service violations. He is just looking for bots. According to Yang, when he labels his training data, he asks himself one question: “Do I believe the tweet is coming from a person or from an algorithm?”

How to train an algorithm

Not only is there no consensus on how to define a bot, there is no single clear criterion or signal that any researcher can point to that accurately predicts whether an account is a bot. Bot hunters believe that exposing an algorithm to thousands or millions of bot accounts helps it detect bot-like behavior. But the objective efficacy of any bot detection system is muddied by the fact that humans still have to make judgment calls about what data to use to build it.

Take Botometer, for example. Yang says that Botometer is trained on tweets from around 20,000 accounts. While some of these accounts self-identify as bots, Yang and a team of researchers manually classify most before the algorithm analyzes them. (Menczer says that some of the accounts used to train Botometer come from data sets from other peer-reviewed research. “We try to use all the data we have on hand, as long as it comes from a trusted source,” he says.)
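As a rough illustration of that workflow (pooling labeled accounts from several sources, then training a classifier on account-level signals), here is a hedged sketch. The file names, feature columns, dataset sizes, and the choice of a random forest are assumptions made for illustration, not Botometer’s actual pipeline or feature set.

```python
# Hedged sketch: combine manually labeled and published account datasets,
# then train and evaluate an account-level bot classifier.
# File names, columns, and features here are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Pool labeled accounts from several sources, keeping provenance so that
# only data from trusted sources is used for training.
frames = [
    pd.read_csv("self_identified_bots.csv").assign(source="self_identified"),
    pd.read_csv("manually_labeled_accounts.csv").assign(source="manual"),
    pd.read_csv("peer_reviewed_dataset.csv").assign(source="published"),
]
accounts = pd.concat(frames, ignore_index=True)

# Simple account-level features; real tools draw on many more signals.
features = ["followers_count", "friends_count", "tweets_per_day", "account_age_days"]
X = accounts[features]
y = accounts["label"]  # "bot" or "human", assigned by the labelers

# Hold out a test set to measure how well the labels generalize.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

However the features are chosen, the evaluation can only measure agreement with the human labels, which is exactly why the judgment calls behind those labels matter so much.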

