The tool, called BotOrNot, has been developed by Indiana University and analyses over 1,000 features from a user's friendship network, their Twitter content and temporal information, all in real time. (Agencies)
"We have applied a statistical learning framework to analyse Twitter data, but the 'secret sauce' is in the set of more than one thousand predictive features able to discriminate between human users and social bots, based on content and timing of their tweets, and the structure of their networks," said Alessandro Flammini, an associate professor of informatics and principal investigator on the project.
Using these features, along with examples of Twitter bots provided by Texas A&M University professor James Caverlee's infolab, the researchers are able to train statistical models to discriminate between social bots and humans.
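BotOrNot's actual feature set and models are not public. Purely as an illustration of the general approach the researchers describe, the toy sketch below (plain Python, with two hypothetical features such as tweet-timing regularity and follower ratio, and synthetic labels) trains a simple logistic-regression classifier on labeled examples to separate "bot" from "human" accounts.

```python
import math

def train_logistic(samples, labels, lr=0.1, epochs=2000):
    """Fit a tiny logistic-regression model with batch gradient descent."""
    n_feat = len(samples[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_feat
        grad_b = 0.0
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted bot probability
            err = p - y
            for i in range(n_feat):
                grad_w[i] += err * x[i]
            grad_b += err
        for i in range(n_feat):
            w[i] -= lr * grad_w[i] / len(samples)
        b -= lr * grad_b / len(samples)
    return w, b

def predict(w, b, x):
    """Return the model's estimated probability that account x is a bot."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical per-account features: [tweet-timing regularity, follower/friend ratio]
# Label 1 = bot, 0 = human. Synthetic toy data, not real BotOrNot training data.
X = [[0.9, 0.1], [0.8, 0.2], [0.85, 0.15],
     [0.2, 0.9], [0.1, 0.8], [0.15, 0.95]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(X, y)
```

The real system reportedly uses more than 1,000 features and a more sophisticated statistical learning framework, but the workflow is the same in outline: extract numeric features per account, train on labeled examples, then score new accounts.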
According to Flammini, the system is quite accurate.

"Part of the motivation of our research is that we don't really know how bad the problem is in quantitative terms," said Fil Menczer, the informatics and computer science professor who directs IU's Center for Complex Networks and Systems Research, where the new work is being conducted.
"Are there thousands of social bots? Millions? We know there are lots of bots out there, and many are totally benign. But we also found examples of nasty bots used to mislead, exploit and manipulate discourse with rumours, spam, malware, misinformation, political Astroturf and slander," Menczer said.
Flammini and Menczer said they believe these kinds of social bots could be dangerous for democracy, cause panic during an emergency, affect the stock market, facilitate cybercrime and hinder the advancement of public policy.
The goal is to support human efforts to counter misinformation with truthful information, researchers said.