Rustock Botnet and ASNs
TPRC 24 September 2011
John S. Quarterman, Quarterman Creations; Serpil Sayin, Koç University
Andrew B. Whinston, U. Texas at Austin
Supported by NSF grant no. 0831338; the usual disclaimers apply.
Spam, Botnets, Security, and Policy
- Starting with some published ASN rankings
- Drill down to Rustock and other botnets
- Show some effects of a takedown
- Specific enough to be actionable by affected orgs
- They could use it to detect and fix vulnerabilities
- How to get the orgs to pay attention?
- Reputational rankings to produce peer pressure
- A few simple policy suggestions
After the Rustock Takedown
Rustock Takedown and Slowdown
- December 2010 Rustock slowdown
- 16 March 2011 Rustock takedown
- Which ASNs were affected?
- Effects on overall spam?
- Using data from the CBL blocklist
- Mapped to ASNs and orgs using Team Cymru data
- Rankings and graphs by SpamRankings.net
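The pipeline just described, blocklist hits per IP, mapped to ASNs, then ranked, can be sketched in a few lines. The pipe-delimited "AS | IP | AS Name" layout mirrors Team Cymru's bulk whois output, but the exact column format, and all the sample ASNs and counts below, should be treated as illustrative assumptions rather than real data.

```python
from collections import Counter

def parse_cymru_bulk(lines):
    """Parse pipe-delimited 'AS | IP | AS Name' rows (Team Cymru
    bulk-whois style; column layout assumed for illustration).
    Yields (asn, ip, name) tuples, skipping the header row."""
    for line in lines:
        if line.startswith("AS") or "|" not in line:
            continue  # header or malformed row
        asn, ip, name = (f.strip() for f in line.split("|", 2))
        yield asn, ip, name

def spam_volume_by_asn(cymru_lines, spam_counts):
    """Aggregate per-IP spam counts (e.g. from a CBL-style feed)
    into per-ASN totals: the raw material of an ASN ranking."""
    totals = Counter()
    for asn, ip, _name in parse_cymru_bulk(cymru_lines):
        totals[asn] += spam_counts.get(ip, 0)
    return totals.most_common()  # ranked, worst first

# Hypothetical sample data, not measurements from the talk:
sample = [
    "AS      | IP              | AS Name",
    "4766    | 1.2.3.4         | KIXS-AS-KR Korea Telecom, KR",
    "9829    | 5.6.7.8         | BSNL-NIB, IN",
]
counts = {"1.2.3.4": 120, "5.6.7.8": 340}
print(spam_volume_by_asn(counts and sample, counts))
# -> [('9829', 340), ('4766', 120)]
```

Real deployments would also need the org names and a time dimension, but the core aggregation step is just this join-and-count.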
Rustock Takedown Rank Effects
16 March 2011 Takedown Daily Graph
[graph: daily spam volume, with the takedown and a hardware outage annotated]
December 2010 Rustock Slowdown
[graph: the slowdown annotated]
Dec 2010 – July 2011 Top Botnets Recovery
[graph: top botnets over time, with the slowdown and takedown annotated]
Slowdown vs. Takedown
- Slowdown: gradual and temporary
- During the slowdown, Maazben and Bobax took up the slack
- As Rustock returned in January, Bobax went back down
- After the slowdown, Maazben also retreated to its old levels; Rustock #1, Lethic #2
- Takedown: rapid and much longer-lasting
- But other botnets still took up the slack
Dec 2010 Top Spamming ASNs
[graph: increases during the slowdown annotated]
March 2011 Top Spamming ASNs
[graph: AS 4766 annotated at #1 and at #9]
March 2011 AS 4766's Botnets
[graph: Lethic annotated]
Dec 2010 AS 9829's Botnets
[graph: Bobax and Lethic annotated]
March 2011 Top Botnets
[graph: Lethic and Maazben annotated]
Opportunistic Botnets & Spamming
- Knock one down, two more pop up
- Spammers can just rent from a different botnet
- Other botnets can use the same vulnerabilities
Dec 2010 Top Botnets
[graph: Rustock and Lethic annotated]
Congratulations Rustock Takedown!
- The takedown had a more lasting effect than the slowdown: congratulations!
- But in both cases other botnets started to take up the slack
- Whack-a-mole is fun, but not a solution
- We need many more takedowns
- Or many more organizations playing
- How do we get orgs to do that?
Cyberwar meets IT Security
Generations of warfare:
- 1st: massed troops
- 2nd: tanks and heavy artillery
- 3rd: maneuver
- 4th: IEDs and suicide bombs
- 5th: open source gangs in it for the money

Cyberwarfare responses:
- 1st: key escrow
- 2nd: Internet off switch at CONUS (Maginot Line)
- 3rd: CERT, FIRST, etc.
- 4th: botnet takedowns
- 5th: economic and reputational incentives for distributed, diverse commons governance
Spam as a Proxy for Infosec
- Most orgs keep security problems secret because they think disclosure will harm their reputation
- Aha! Publish reputation and they'll care
- We need an available proxy for security
- Anti-spam blocklists have spam data
- Spam comes from botnets, which exploit vulnerabilities
- Just as a sneeze means disease, outbound spam means poor infosec
- (Other "diseases" may not sneeze; for those, other data; we come back to that later.)
Peer pressure and Medical orgs
- Peer pressure is key: rank similar orgs (Festinger, Luttmer, Apesteguia; see paper for refs)
- Spam data exists for every org on the Internet, not just ISPs: any ESP (Email Service Provider)
- We ranked medical orgs (worldwide, U.S.)
- Within 2 months they all dropped to zero spam
- Confirmation from a [confidential] medical org: "The listing on your site added additional impetus to make sure we 'stay clean,' so in that regard, you are successful."
How Rankings Work
Rankings must be frequent, comprehensive, and detailed, and must compare peers.
To be usable:
- Marketing: brag about good rankings; bad rankings are an incentive to get better so you can brag
- Sales: good reputation aids customer retention
- Diagnostics: drilldowns give clues to what to fix
The result: a more comprehensive application of existing Internet security methods.
Many rankings examples
- FT business school rankings
- Vehicle Blue Book
- Credit ratings: Moody's, S&P, Fitch
- And by far the most numerous: sports scores, in leagues, for teams, for players; detailed (earned run average, etc.) and composite overall
Further rankings from spam data
- Botnet rankings: botnets use known vulnerabilities; orgs infested by botnets probably have those vulnerabilities, which is not good for their reputation
- Vulnerability rankings: an org infested by several botnets that exploit common vulnerabilities very likely has those vulnerabilities
- Infosec experiments: an org can change its infosec and watch the rankings to see which measures work
- Single IP address drilldowns: which addresses are spamming, and which botnets infest them
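The vulnerability-ranking idea above amounts to simple set arithmetic: if each botnet is known to exploit certain vulnerabilities, then an org infested by several botnets most likely has the vulnerabilities those botnets share. A minimal sketch follows; the botnet-to-vulnerability table is invented for illustration, not real threat intelligence.

```python
# Hypothetical botnet -> exploited-vulnerability table (illustrative only).
BOTNET_VULNS = {
    "rustock": {"unpatched-windows", "weak-mail-auth"},
    "lethic": {"unpatched-windows", "open-proxy"},
    "maazben": {"open-proxy"},
}

def likely_vulns(observed_botnets):
    """Return (common, union): vulnerabilities exploited by every
    observed botnet (strongest signal) and by at least one (weaker)."""
    sets = [BOTNET_VULNS[b] for b in observed_botnets if b in BOTNET_VULNS]
    if not sets:
        return set(), set()
    common = set.intersection(*sets)  # shared by all observed botnets
    union = set.union(*sets)          # exploited by at least one
    return common, union

common, union = likely_vulns(["rustock", "lethic"])
print(sorted(common))  # -> ['unpatched-windows']
```

An org seeing both Rustock and Lethic infections would start remediation with the shared vulnerability, then work through the rest of the union.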
Derivative rankings
- Normalized (by addresses, customers, employees)
- Susceptibility (speed of infection by botnets)
- Recidivism (frequency of re-infestation)
- Improvement (change over time)
- Composite (weighted average of all the above)
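The composite ranking above is just a weighted average of the component metrics. A minimal sketch, in which the weights, org names, and metric values are all illustrative assumptions:

```python
# Illustrative weights for the component metrics (not from the paper).
WEIGHTS = {"normalized": 0.4, "susceptibility": 0.2,
           "recidivism": 0.2, "improvement": 0.2}

def composite_score(metrics):
    """Weighted average of component scores, each assumed scaled to
    0-1 with higher meaning worse; orgs are ranked by this score."""
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

# Hypothetical orgs with made-up component scores:
orgs = {
    "org-a": {"normalized": 0.9, "susceptibility": 0.5,
              "recidivism": 0.8, "improvement": 0.2},
    "org-b": {"normalized": 0.3, "susceptibility": 0.4,
              "recidivism": 0.1, "improvement": 0.6},
}
ranking = sorted(orgs, key=lambda o: composite_score(orgs[o]), reverse=True)
print(ranking)  # worst first -> ['org-a', 'org-b']
```

Choosing the weights is itself a policy decision; publishing them alongside the rankings keeps the composite transparent and contestable.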
Internet field experiments
- We are releasing rankings for one country, then later for a similar country
- Does the second country change similarly?
- We can experiment with many rankings: per country, per org category, per data source
- Does peer pressure through disclosure change behavior?
- The rankings themselves provide ways to determine how well they work
Policy: other data, other rankings
SpamRankings.net pioneers reputational peer rankings related to Internet security, available now because spam data is available. Similar rankings could be made with other data:
- Phishing sources and servers
- Breaches, vulnerabilities, etc.: you can think of more
A simple policy suggestion:
- Require making other specific data available
- Enable multiple rankings by multiple agencies
- Transparency for diverse cooperation (Elinor Ostrom)
Needed and Not Needed
Needed:
- More data sources, publicly available, frequent, comprehensive
- More research (data correlation, ranking effects, law, policy, etc.)
- Independent ranking and certification agency(ies)
- Many diverse, cooperating entities (rankers, ranked, academia, industry, govt)
Not needed:
- New Internet protocols
- Punitive laws
- Reports only to govt
- Sporadic reports selected by the reporting orgs
- A Dept. of Homeland Internet Security
Acknowledgments
This material is based upon work supported by the National Science Foundation under Grant No. 0831338. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
We also gratefully acknowledge custom data from CBL, PSBL, Fletcher Mattox and the U. Texas Computer Science Department, Quarterman Creations, Gretchen Phillips and GP Enterprise, and especially Team Cymru. None of them are responsible for anything we do, either.