
The Rise of Social Bots

EMILIO FERRARA, Indiana University
ONUR VAROL, Indiana University
CLAYTON DAVIS, Indiana University
FILIPPO MENCZER, Indiana University
ALESSANDRO FLAMMINI, Indiana University

The Turing test asked whether one could recognize the behavior of a human from that of a computer algorithm. Today this question has suddenly become very relevant in the context of social media, where text constraints limit the expressive power of humans, and real incentives abound to develop human-mimicking software agents called social bots. These elusive entities wildly populate social media ecosystems, often going unnoticed among the population of real people. Bots can be benign or harmful, aiming at persuading, smearing, or deceiving. Here we discuss the characteristics of modern, sophisticated social bots, and how their presence can endanger online ecosystems and our society. We then discuss current efforts aimed at detection of social bots in Twitter. Characteristics related to content, network, sentiment, and temporal patterns of activity are imitated by bots but at the same time can help discriminate synthetic behaviors from human ones, yielding signatures of engineered social tampering.

The rise of the machines

Bots (short for software robots) have been around since the early days of computers: one compelling example is that of chatbots, algorithms designed to hold a conversation with a human, as envisioned by Alan Turing in the 1950s [Turing 1950]. The dream of designing a computer algorithm that passes the Turing test has driven artificial intelligence research for decades, as witnessed by initiatives like the Loebner Prize, awarding progress in natural language processing.1 Many things have changed since the early days of AI, when bots like Joseph Weizenbaum's ELIZA [Weizenbaum 1966] —which employed a Rogerian approach to fool the interlocutor— were developed as proofs of concept or for delight.

Today, social media ecosystems populated by hundreds of millions of individuals present real incentives —including economic and political ones— to design algorithms that exhibit human-like behavior. Such ecosystems also raise the bar of the challenge, as they introduce new dimensions to emulate in addition to content, including the social network, temporal activity, diffusion patterns, and sentiment expression. A social bot is a computer algorithm that automatically produces content and interacts with humans on social media, trying to emulate and possibly alter their behavior. Social bots have been circulating on social media platforms for a few years [Lee et al. 2011; Boshmaf et al. 2011].

1 www.loebner.net/Prizef/loebner-prize.html

This work is supported by the National Science Foundation (grant CCF-1101743), by DARPA (grant W911NF-12-1-0037), and by the James McDonnell Foundation. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Corresponding author: E. Ferrara ([email protected])
Author's addresses: E. Ferrara, O. Varol, C. Davis, F. Menczer, and A. Flammini, Center for Complex Networks and System Research, School of Informatics and Computing, Indiana University Bloomington, 919 E. 10th Street, Bloomington, IN 47408, USA

arXiv:1407.5225v1 [cs.SI] 19 Jul 2014


Engineered social tampering

What are the intentions of social bots? Some of them are benign and, in principle, innocuous or even helpful: this category includes, for example, social bots that automatically aggregate content from various sources (like simple news feeds), and automatic responders to inquiries, increasingly adopted by brands and companies for customer care. Although these bots are designed to provide a useful service, they can sometimes be harmful, for example when they contribute to the spread of unverified information or rumors. A notable example is that of the Boston marathon bombing [Cassa et al. 2013], when false accusations spread widely on Twitter mostly due to bots automatically retweeting posts without verifying the facts or checking the credibility of the source.

With every new technology comes abuse, and social media are no exception. A second category of social bots includes malicious entities designed specifically to cause harm. These bots mislead, exploit, and manipulate social media discourse with rumors, spam, malware, misinformation, political astroturf, slander, or even just noise. This results in several levels of societal harm. For example, bots used in political astroturf artificially inflate support for a candidate; their activity can endanger democracy by influencing the outcome of elections. In fact, this might already have happened: during the 2010 U.S. midterm elections, a small set of social bots started a smearing campaign against one U.S. Senate candidate for Delaware, injecting thousands of tweets pointing to a pernicious website that reported unverified information, aimed at slandering this candidate and supporting his opponent [Ratkiewicz et al. 2011a]. A similar case was reported for the Massachusetts special election of 2010 [Metaxas and Mustafaraj 2012]. In the latter case, the target candidate lost the election, even if it is impossible to determine whether and to what extent this could be attributed to these bots. Sophisticated bots can generate personas that are more credible as fake followers, and thus harder for simple filtering algorithms to detect. They make for valuable entities on the follower market: recent allegations regarding the acquisition of fake followers have involved Mitt Romney,2 Newt Gingrich,3 Barack Obama,4 and other exponents of the international political scene.

More examples of the devastating effects of social bots are reported every day by journalists, analysts, and researchers from all over the world. A recent orchestrated bot campaign successfully created the appearance of a sustained discussion about a tech company. Automatic trading algorithms, now trained to use Twitter signals to predict the stock market [Bollen et al. 2011], picked up this conversation and started trading the company's stock heavily. This resulted in a 200-fold increase in market value, bringing the company's worth to 5 billion dollars.5 Analysts realized the orchestration behind this operation too late. This event, however, may not be unprecedented: on May 6, 2010 a flash crash occurred in the U.S. stock market, when the Dow Jones plunged over 1,000 points (about 9%) within minutes —the biggest one-day point decline in history. Researchers speculated wildly about the causes of the disaster, and trading interference due to social bots could not be ruled out [Hwang et al. 2012]. A similar effect on the stock market was recorded when, on April 23, 2013, the Syrian Electronic Army hacked into the Associated Press Twitter account and posted a false rumor about a terror attack on the White House, in which President Obama was allegedly injured.6

2 Fake Twitter accounts may be driving up Mitt Romney's follower number — www.theguardian.com/world/2012/aug/09/fake-twitter-accounts-mitt-romney
3 Update: Only 92% of Newt Gingrich's Twitter Followers Are Fake — gawker.com/5826960/update-only-92-of-newt-gingrichs-twitter-followers-are-fake
4 Pols have a #fakefollower problem — www.politico.com/story/2014/06/twitter-politicians-107672.html
5 The Curious Case of Cynk, an Abandoned Tech Company Now Worth $5 Billion — mashable.com/2014/07/10/cynk

The bot effect

These anecdotes illustrate a few of the harmful effects of social bots on our increasingly interconnected society. In addition to potentially endangering democracy, causing panic during emergencies, and affecting the stock market, social bots can harm our society in even subtler ways. A recent study demonstrated the vulnerability of social media users to a social botnet designed to expose private information, like phone numbers and addresses [Boshmaf et al. 2011]. This kind of vulnerability can be exploited by cybercrime and cause the erosion of trust in social media [Hwang et al. 2012]. Bots can also hinder the advancement of public policy, by creating the impression of a grassroots movement of contrarians [Ratkiewicz et al. 2011b], or polarize the political discussion by driving attention only to specific, biased sources [Conover et al. 2011]. They can alter the perception of social media influence, artificially enlarging the audience of some people [Edwards et al. 2014], or they can ruin the reputation of a company, for commercial or political purposes [Messias et al. 2013]. A recent study demonstrated that emotions are contagious on social media [Kramer et al. 2014]: elusive bots acting unnoticed in a population of unaware humans could easily manipulate them and alter their perception of reality, with unpredictable results. Indirect effects of social bot activity include the alteration of social media analytics, adopted for various purposes such as TV ratings,7 expert finding [Wu et al. 2013], and scientific impact measurement.8

For all these reasons, we deem it crucial to design advanced methods to automatically detect social bots (or to discriminate between humans and bots). The research in this direction has just started: some groups are trying to reverse-engineer social bots to understand their functioning [Freitas et al. 2014], while others are creating bots themselves [Hwang et al. 2012; Briscoe et al. 2014] to study the susceptibility of people to their influence [Boshmaf et al. 2011; Wagner et al. 2012]. The strategies currently employed by social media services appear inadequate to counter this phenomenon.

Act like a human, think like a bot

One of the greatest challenges of bot detection in social media is understanding what modern social bots can do. Early bots mainly performed one type of activity: posting content automatically. These bots were naive and easy to spot with trivial detection strategies. In 2011, James Caverlee's team at Texas A&M University implemented a honeypot trap that managed to detect thousands of social bots [Lee et al. 2011]. The idea was simple and effective: the team created a few Twitter accounts whose sole role was to post nonsensical tweets with gibberish content, in which no human would ever be interested. However, these accounts attracted many followers. Further inspection confirmed that the suspicious followers were indeed social bots trying to grow their social circles.

In recent years Twitter bots have become increasingly sophisticated, making their detection more difficult. Our replication of Caverlee's honeypot strategy —and more sophisticated variants— yielded only a handful of bots in 2013. The boundary between human-like and bot-like behavior is now fuzzier. For example, social bots can search the Web for information and media to fill their profiles, and post collected material at predetermined times, emulating the human temporal signature of content production and consumption —including circadian patterns of daily activity and temporal spikes of information generation [Golder and Macy 2011]. They can even engage in more complex types of interactions, such as entertaining conversations with other people, commenting on their posts, and answering their questions [Hwang et al. 2012]. Some bots specifically aim to achieve greater influence by gathering new followers and expanding their social circles; they can search the social network for popular and influential people and follow them or capture their attention by sending them inquiries, in the hope of being noticed [Aiello et al. 2012]. To acquire visibility, they can infiltrate popular discussions, generating topically appropriate —and even potentially interesting— content, by identifying relevant keywords and searching online for information fitting that conversation [Freitas et al. 2014]. After the appropriate content is identified, the bots can automatically produce responses through natural language algorithms, possibly including references to media or links pointing to external resources. Other bots aim at tampering with the identities of legitimate people: some are identity thieves, adopting slight variants of real usernames, and stealing personal information such as pictures and links. Even more advanced mechanisms can be employed; some social bots are able to "clone" the behavior of legitimate people, by interacting with their friends and posting topically similar content with similar temporal patterns.

6 Syrian hackers claim AP hack that tipped stock market by $136 billion. Is it terrorism? — www.washingtonpost.com/blogs/worldviews/wp/2013/04/23/syrian-hackers-claim-ap-hack-that-tipped-stock-market-by-136-billion-is-it-terrorism/
7 Nielsen's New Twitter TV Ratings Are a Total Scam. Here's Why. — defamer.gawker.com/nielsens-new-twitter-tv-ratings-are-a-total-scam-here-1442214842
8 altmetrics: a manifesto — altmetrics.org/manifesto/

Table I. Classes of features extracted by Bot or Not?

Network (112 features): Network features capture various dimensions of information diffusion patterns. We build networks based on retweets, mentions, and hashtag co-occurrence, and extract their statistical features. Examples include degree distribution, clustering coefficient, and centrality measures.

User (56 features): User features are based on Twitter meta-data related to an account, including language, geographic location, and account creation time.

Friends (208 features): Friend features include descriptive statistics relative to an account's social contacts (followees), such as the median, moments, and entropy of the distributions of their number of followers, followees, posts, and so on.

Timing (24 features): Timing features capture temporal patterns of content generation (tweets) and consumption (retweets); examples include the signal similarity to a Poisson process [Ghosh et al. 2011] and the average time between two consecutive posts.

Content (411 features): Content features are based on linguistic cues computed through natural language processing, especially part-of-speech tagging; examples include the frequency of verbs, nouns, and adverbs in the phrases produced by the account.

Sentiment (339 features): Sentiment features are built using general-purpose and Twitter-specific sentiment analysis algorithms, including happiness, arousal-dominance-valence, and emotion scores [Golder and Macy 2011; Bollen et al. 2011].
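To make the feature classes above more concrete, the sketch below shows how a couple of the timing and friend features could be computed from raw account data. The input formats, function names, and the specific statistics chosen are illustrative assumptions made for this example, not the actual Bot or Not? code.

```python
# Illustrative sketch of a few timing and friend features (see Table I).
# Input formats and the exact statistics are assumptions, not the real system.
from math import log2
from statistics import mean, median, pvariance


def timing_features(timestamps):
    """Timing statistics from a chronologically sorted list of POSIX tweet times."""
    gaps = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]
    if not gaps:
        return {"mean_intertweet_time": 0.0, "min_intertweet_time": 0.0}
    return {"mean_intertweet_time": mean(gaps), "min_intertweet_time": min(gaps)}


def friend_features(friend_follower_counts):
    """Descriptive statistics of the follower counts of an account's followees."""
    counts = list(friend_follower_counts)
    total = sum(counts)
    # Entropy of the normalized follower-count vector (one possible reading of
    # "entropy of the distribution" in Table I).
    entropy = -sum((c / total) * log2(c / total) for c in counts if c > 0) if total else 0.0
    return {
        "median_friend_followers": median(counts),
        "mean_friend_followers": mean(counts),
        "var_friend_followers": pvariance(counts),
        "entropy_friend_followers": entropy,
    }


if __name__ == "__main__":
    print(timing_features([0, 60, 65, 3600, 3660]))
    print(friend_features([10, 2500, 40, 40, 980]))
```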

Bot or Not?

At the beginning of 2014, we embarked on the design of a social bot detection framework for Twitter, called Bot or Not?. The main idea was to identify several classes of features that help recognize and separate bot-like from human-like behavior. Previous familiarity with Twitter bots [Ratkiewicz et al. 2011b] allowed us to isolate six classes of features, summarized in Table I. Overall, our system generates more than one thousand features used to learn human and bot prototypes.

To classify an account as either a social bot or a human, the model must be trained with instances of both classes. Finding and labeling many examples of bots is challenging.


Fig. 1. Classification performance of Bot or Not? for four different classifiers. The classification accuracy is computed by 10-fold cross validation and measured by the area under the receiver operating characteristic curve (AUROC). The best score, obtained by Random Forest, is 95%.

As a proof of concept, we used the list of social bots identified by Caverlee's team. We used the Twitter Search API to collect up to 200 of their most recent tweets and up to 100 of the most recent tweets mentioning them. This procedure yielded a dataset of 15 thousand manually verified social bot accounts and over 2.6 million tweets. Caverlee's list also contains legitimate (human) accounts. The same procedure resulted in a dataset of counterexamples with 16 thousand people and over 3 million tweets. We used this dataset to train the social bot detection model and benchmark its performance.
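For readers who want to reproduce this style of benchmark, the snippet below sketches a 10-fold cross-validated Random Forest evaluated by AUROC, as in Fig. 1, using scikit-learn. The feature matrix is synthetic and the variable names are our own; it only illustrates the evaluation protocol, not the actual Bot or Not? pipeline.

```python
# Sketch of the evaluation protocol of Fig. 1: 10-fold cross-validation of a
# Random Forest scored by AUROC. X and y are synthetic stand-ins for the real
# feature matrix (one row per account) and the bot/human labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 50))    # stand-in for the >1,000 real features
y = rng.integers(0, 2, size=1000)  # 1 = bot, 0 = human (random here)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
auroc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print(f"AUROC: {auroc.mean():.3f} +/- {auroc.std():.3f}")
```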

Bot or Not? achieves very promising detection accuracy (see Fig. 1). Some feature classes, like the user meta-data, appear more revealing, and they can be easily explained (see Fig. 2). Note that this performance evaluation is based on Caverlee's dataset from 2011; we are already aware of more recent social bots that cannot be reliably detected. Bots are continuously changing and evolving. Further work is needed to identify newer annotated instances of social bots at scale.

Borderline cases also exist, such as accounts (sometimes referred to as cyborgs) that mix human and bot behavior (e.g., when humans lend their accounts to bots), or hacked accounts [Zangerle and Specht 2014]: detecting these anomalies is currently impossible.

To make the detection system broadly accessible, we developed a Web-based application that interfaces with the Twitter API and retrieves the most recent activity of any account, to determine whether that account exhibits bot-like or human-like behavior. The Web interface, depicted in Fig. 3, allows one to inspect any active Twitter account. Data about that account and its contacts are collected and processed in real time. The classifier trained on all feature classes provides a likelihood score that the account is a social bot. For instance, in Fig. 3, we see that one of the authors is deemed less likely to be a bot than an algorithm developed by Sune Lehmann's team.9 The system also presents disaggregated scores according to models trained on each feature class independently.

Fig. 2. Subset of user features that best discriminate social bots from humans. Bots retweet more than humans and have longer user names, while they produce fewer tweets, replies, and mentions, and they are retweeted less than humans. Bot accounts also tend to be more recent.

Often, an account may be classified as a social bot according to some feature classes, but not according to others. This is due to the large heterogeneity of features exhibited by people —some may have bot-like features, for example their meta-data or friend information.
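A simple way to produce such per-class scores, sketched below, is to train one model per feature class and let each report its own bot likelihood for the inspected account. The feature-class column ranges, model choice, and data are assumptions made for illustration; this is not the actual Bot or Not? implementation.

```python
# Illustrative sketch of disaggregated per-class bot scores: one classifier per
# feature class, each scoring the same account. Column ranges and data are
# assumptions for the example, not the real feature layout.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FEATURE_CLASSES = {        # column slices per feature class (illustrative sizes)
    "network": slice(0, 20),
    "user": slice(20, 30),
    "timing": slice(30, 40),
}

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 40))
y_train = rng.integers(0, 2, size=500)   # 1 = bot, 0 = human (random here)
x_account = rng.normal(size=(1, 40))     # features of the inspected account

for name, cols in FEATURE_CLASSES.items():
    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X_train[:, cols], y_train)
    bot_score = model.predict_proba(x_account[:, cols])[0, 1]
    print(f"{name:>8} bot score: {bot_score:.2f}")
```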

In addition to the classification results, Bot or Not? provides a variety of visualizations that capture some insights about the features exploited by the system. Examples are displayed in Fig. 4. We invite the reader to explore these interactive visualizations directly at truthy.indiana.edu/botornot.

Master of puppets

If social bots are the puppets, additional efforts will have to be directed at finding the "masters." Governments10 and other entities with sufficient resources11 have been alleged to use social bots to their advantage. Even assuming the availability of effective detection technologies, it will be crucial to reverse-engineer the observed social bot strategies: who they target, how they generate content, when they take action, and what topics they talk about. The ability to extrapolate such information will allow us to systematically infer the entities behind them.

9 You are here because of a robot — sunelehmann.com/2013/12/04/youre-here-because-of-a-robot/
10 Russian Twitter political protests 'swamped by spam' — www.bbc.com/news/technology-16108876
11 Fake Twitter accounts used to promote tar sands pipeline — http://www.theguardian.com/environment/2011/aug/05/fake-twitter-tar-sands-pipeline

Fig. 3. Web interface of Bot or Not? (truthy.indiana.edu/botornot). The panels show the likelihood that the inspected accounts are social bots, along with individual scores according to six feature classes. Left: the Twitter account of one of the authors is identified as likely operated by a human. Right: a popular social bot is correctly assigned a high bot score.

Tools like Bot or Not? help shed light on the intricate world of social bots. Yet many research questions remain open. For example, nobody knows exactly how many social bots populate social media, or what share of content can be attributed to bots —estimates vary wildly, and we might have observed only the tip of the iceberg. Bot behaviors are already quite sophisticated: they can build realistic social networks and produce credible content with human-like temporal patterns. As we build better detection systems, we expect an arms race similar to that observed for spam in the past. Static training instances are an intrinsic limitation of supervised learning in such a scenario; machine learning techniques such as active learning might help respond to newer threats, as sketched below. The race will be over only when the increased cost of deception is no longer justified, thanks to the effectiveness of early detection.
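The loop below is a generic uncertainty-sampling sketch of how active learning could keep a detector current: at each round, the accounts the current model is least certain about are sent to a human annotator and added to the training set. It assumes synthetic data and a stand-in annotator function; it is not part of Bot or Not?.

```python
# Generic uncertainty-sampling active-learning loop (illustrative only).
# Accounts whose predicted bot score is closest to 0.5 are labeled by a human
# annotator and added to the training set, then the model is retrained.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_labeled = rng.normal(size=(200, 40))
y_labeled = rng.integers(0, 2, size=200)
X_pool = rng.normal(size=(5000, 40))     # unlabeled, newly observed accounts


def ask_annotator(rows):
    """Stand-in for human labeling of the selected accounts."""
    return rng.integers(0, 2, size=len(rows))


for _ in range(5):
    model = RandomForestClassifier(n_estimators=50, random_state=1)
    model.fit(X_labeled, y_labeled)
    bot_scores = model.predict_proba(X_pool)[:, 1]
    query = np.argsort(np.abs(bot_scores - 0.5))[:20]   # most uncertain accounts
    X_labeled = np.vstack([X_labeled, X_pool[query]])
    y_labeled = np.concatenate([y_labeled, ask_annotator(query)])
    X_pool = np.delete(X_pool, query, axis=0)
```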

The future of social media ecosystems might already point in the direction of environments where machine-machine interaction is the norm, and humans will navigate a world populated mostly by bots. We believe there is a need for bots and humans to be able to recognize each other, to avoid bizarre, or even dangerous, situations based on false assumptions of human interlocutors.12

12 That Time 2 Bots Were Talking, and Bank of America Butted In — www.theatlantic.com/technology/archive/2014/07/that-time-2-bots-were-talking-and-bank-of-america-butted-in/374023/



Fig. 4. Visualizations provided by Bot or Not?. (A) Part-of-speech tag proportions. (B) Language distribution of contacts. (C) Network of co-occurring hashtags. (D) Emotion, happiness, and arousal-dominance-valence sentiment scores. (E) Temporal patterns of content consumption and production.

ACKNOWLEDGMENTS

The authors are grateful to Mohsen JafariAsbagh, Prashant Shiralkar, Qiaozhu Mei, and Zhe Zhao.


REFERENCES

Luca Maria Aiello, Martina Deplano, Rossano Schifanella, and Giancarlo Ruffo. 2012. People are Strange when you're a Stranger: Impact and Influence of Bots on Social Networks. In ICWSM: 6th AAAI International Conference on Weblogs and Social Media. AAAI, 10–17.

Johan Bollen, Huina Mao, and Xiaojun Zeng. 2011. Twitter mood predicts the stock market. Journal of Computational Science 2, 1 (2011), 1–8.

Yazan Boshmaf, Ildar Muslukhov, Konstantin Beznosov, and Matei Ripeanu. 2011. The socialbot network: when bots socialize for fame and money. In ACSAC: 27th Annual Computer Security Applications Conference. ACM, 93–102.

Erica J Briscoe, D Scott Appling, and Heather Hayes. 2014. Cues to Deception in Social Media Communications. In HICSS: 47th Hawaii International Conference on System Sciences. IEEE, 1435–1443.

Christopher A Cassa, Rumi Chunara, Kenneth Mandl, and John S Brownstein. 2013. Twitter as a sentinel in emergency situations: lessons from the Boston marathon explosions. PLoS Currents 5 (2013).

Michael Conover, Jacob Ratkiewicz, Matthew Francisco, Bruno Gonçalves, Filippo Menczer, and Alessandro Flammini. 2011. Political polarization on Twitter. In ICWSM: 5th International AAAI Conference on Weblogs and Social Media.

Chad Edwards, Autumn Edwards, Patric R Spence, and Ashleigh K Shelton. 2014. Is that a bot running the social media feed? Testing the differences in perceptions of communication quality for a human agent and a bot agent on Twitter. Computers in Human Behavior 33 (2014), 372–376.

Carlos A Freitas, Fabrício Benevenuto, Saptarshi Ghosh, and Adriano Veloso. 2014. Reverse Engineering Socialbot Infiltration Strategies in Twitter. arXiv preprint arXiv:1405.4927 (2014).

Rumi Ghosh, Tawan Surachawala, and Kristina Lerman. 2011. Entropy-based Classification of "Retweeting" Activity on Twitter. In SNA-KDD: Proceedings of the KDD Workshop on Social Network Analysis.

Scott A Golder and Michael W Macy. 2011. Diurnal and seasonal mood vary with work, sleep, and daylength across diverse cultures. Science 333, 6051 (2011), 1878–1881.

Tim Hwang, Ian Pearce, and Max Nanis. 2012. Socialbots: Voices from the fronts. Interactions 19, 2 (2012), 38–45.

Adam DI Kramer, Jamie E Guillory, and Jeffrey T Hancock. 2014. Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences (2014), 201320040.

Kyumin Lee, Brian David Eoff, and James Caverlee. 2011. Seven Months with the Devils: A Long-Term Study of Content Polluters on Twitter. In ICWSM: 5th International AAAI Conference on Weblogs and Social Media.

Johnnatan Messias, Lucas Schmidt, Ricardo Oliveira, and Fabrício Benevenuto. 2013. You followed my bot! Transforming robots into influential users in Twitter. First Monday 18, 7 (2013).

Panagiotis T Metaxas and Eni Mustafaraj. 2012. Social media and the elections. Science 338, 6106 (2012), 472–473.

Jacob Ratkiewicz, Michael Conover, Mark Meiss, Bruno Gonçalves, Alessandro Flammini, and Filippo Menczer. 2011a. Detecting and tracking political abuse in social media. In ICWSM: 5th International AAAI Conference on Weblogs and Social Media. 297–304.

Jacob Ratkiewicz, Michael Conover, Mark Meiss, Bruno Gonçalves, Snehal Patil, Alessandro Flammini, and Filippo Menczer. 2011b. Truthy: mapping the spread of astroturf in microblog streams. In WWW: 20th International Conference on World Wide Web. 249–252.

Alan M Turing. 1950. Computing machinery and intelligence. Mind (1950), 433–460.

Claudia Wagner, Silvia Mitter, Christian Körner, and Markus Strohmaier. 2012. When social bots attack: Modeling susceptibility of users in online social networks. In WWW: 21st International Conference on World Wide Web. 41–48.

Joseph Weizenbaum. 1966. ELIZA – a computer program for the study of natural language communication between man and machine. Commun. ACM 9, 1 (1966), 36–45.

Xian Wu, Ziming Feng, Wei Fan, Jing Gao, and Yong Yu. 2013. Detecting Marionette Microblog Users for Improved Information Credibility. In Machine Learning and Knowledge Discovery in Databases. Springer, 483–498.

Eva Zangerle and Günther Specht. 2014. "Sorry, I was hacked": A Classification of Compromised Twitter Accounts. In SAC: 29th Symposium On Applied Computing.

