Towards a Taxonomy of Vulnerability Scanning Techniques
Adam Shostack
Bindview Development
Overview
• Audience
• Goals
• Taxonomies
• Exploit Testing
• Inference Methods
Audience
• This talk is for users of security scanners
  – Better understand the tools
  – Be more effective in using them
• Also for designers of scanning tools
  – Open a dialog between tool creators
  – Be able to discuss things at an abstract level
Goals
• To understand how security scanners find vulnerabilities
• Understand the constraints on the tools
• Create an engineering discussion
• Greg Hoglund will explain why scanners suck (Track B, 4:00), so I won’t bother
Taxonomies
• Means of organizing information
• Good taxonomies allow you to say interesting things about the groups
• Bad taxonomies have poor “borders”
  – The classification decisions are not clear or reproducible by different classifiers
• Even good taxonomies may have anomalies
  – Duck-billed Platypus
Starting Points
• Exploit testing
• Banner checking
Finding /cgi-bin/phf
• This is the classic, easy scan
GET /cgi-bin/phf?q=;cat%20/etc/passwd
• Reliable
• Somewhat non-intrusive
• Intuitively correct
SATAN and Exploits
• Reliance on banners
• 250 SMTP cs.berkeley.edu Sendmail 4.1 Ready for exploit at 1/1/70
• Look up sendmail 4.1 in the database
• Less reliable
• Less intrusive
• Intuitively worrisome
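A hedged sketch of this banner-and-database approach: grab the greeting line, match the software name, and compare versions. The database contents, regex, and version threshold below are invented for illustration, not real vulnerability data.

```python
import re
import socket

# Tiny illustrative database: software name -> list of
# (problem, first fixed version). The entries are assumptions.
VULN_DB = {
    "sendmail": [("debug problem", (5, 59))],
}

BANNER_RE = re.compile(r"(sendmail)\s+([\d.]+)", re.IGNORECASE)


def grab_banner(host: str, port: int = 25, timeout: float = 5.0) -> str:
    """Read the server's greeting line without exploiting anything."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        return sock.recv(512).decode("ascii", "replace").strip()


def infer_problems(banner: str) -> list[str]:
    """Look the banner up in the database, SATAN-style."""
    match = BANNER_RE.search(banner)
    if not match:
        return []
    software = match.group(1).lower()
    version = tuple(int(part) for part in match.group(2).split("."))
    # "if (banner matches && version < N)" sorts of logic
    return [problem for problem, fixed in VULN_DB[software] if version < fixed]
```

Note how the reliability hinges entirely on the banner telling the truth, which is why this is intuitively worrisome.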
Terminology
• Vulnerability: a design flaw, defect, or misconfiguration which can be exploited by an attacker
  – Vulnerability scanners don’t use the term in the academic senses of the word
• Problem: a synonym for vulnerability, less loaded with semantic baggage
  – Why are we confident the system has the $DATA vulnerability?
Terminology
• Test: an algorithm for finding a problem by exploiting it
  – We test for PHF
• Inference: an algorithm for finding a problem without exploiting it
  – We infer this sendmail has the debug problem
Exploits (1)
• GET /cgi-bin/phf?q=;cat%20/etc/passwd
• This exploits the problem
• Disproves the Safety Hypothesis
• We see the results in the main TCP stream
  – This makes the check much more reliable
  – So why not always do this?
Exploits (2)
• Sometimes we cannot see the results in-stream
  – Need an alternate means of observation
  – Inherently less reliable
  – Majordomo Reply-To: bug
  – Exploit goes via the mail queue, may take hours
Risk Models
• Safety Assumed
  – Many exploit tests work this way
  – Reduces false positives
• Risk Assumed
  – Can work well with indirect inference
  – Disprove the Majordomo Reply-To bug by proving that the host does not accept mail
• Both are effective tools
Banners In Exploit Tests
• Can reduce false positive rates from misinterpreting results
• Can reduce impact of testing by only testing “expected vulnerable” systems
• Correctness of the technique depends on the definition of the vulnerability
  – “A web server gives out source when %20 is appended” or “An IIS web server gives out source…”?
Impact of Testing
• Trying to violate the security of the system
  – Doing things the software author didn’t expect
  – This has a substantial effect on the system:
    • Leave core files
    • Fill logs
    • Add root accounts
    • Make copies of /etc/shadow off the host
  – Stack smashing attacks crash the service
Impact of Testing
• We haven’t even started talking about trying to test for Denial of Service problems
teardrop
land
bonk
killcisco
DOS Testing (daemons)
• Connect, attack, reconnect
  – Indirect observation technique
  – Fails under inetd
  – May fail because of other factors
• Look carefully at the connection
  – Learn a lot from RST vs. FIN vs. RST+PUSH
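The connect/attack/reconnect technique might be sketched like this. The `attack` callable and the function name are assumptions of the sketch, not a real scanner API; the code only illustrates the indirect observation step.

```python
import socket


def service_survived(host: str, port: int, attack, timeout: float = 5.0) -> bool:
    """Connect, attack, reconnect: an indirect observation technique.

    `attack` is a caller-supplied callable that takes the open socket
    and sends the (possibly crashing) probe.
    """
    with socket.create_connection((host, port), timeout=timeout) as sock:
        attack(sock)
    try:
        # If we can reconnect, the daemon probably survived. Per the
        # slide's caveats: inetd respawns can mask a crash, and the
        # service can fail for reasons unrelated to our probe.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

The fragility is visible in the code: a failed reconnect is only weak evidence, since the daemon may be slow, respawning, or down for unrelated reasons.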
DOS Testing (daemons)
• Hard to test again if you’re not sure what you saw
• Some daemons die on connect/close
  – Strobe found a plethora of these
  – So did nmap
• Systems can fail for reasons unrelated to the check being performed
“Was that a Production Server?”
• Most tools try very hard to avoid this problem
• It’s a huge drag on sales and support to crash targets (hosts or services) without warning you a dozen times
Less Intrusive Methods
• Inference
  – Versioning
  – Port Status
  – Protocol Compliance
  – Behavior
  – Presence of /cgi-bin/file
• Credentials
Inference
• If the exploit will crash the target
• If output cannot reliably be parsed
• If the exploit is still secret
  – Discovered by a company, and not disclosed
  – No full-disclosure debate, please; companies do this
• If the exploit violates the rules
  – More applicable to consultants, custom tools
• Distinctions are more clear-cut
Versioning
• Very effective when banner information is hard to change
  – named
  – ssh
• Sendmail’s banner is not hard to change
• Usually uses if (banner matches && version < N) sorts of logic
Port Status
• Declare risk if you can connect
• Can be a policy violation in itself
• Can be used when additional probing will not reveal more information, e.g. overflows in rpc.mountd
• Gets interesting when done through a firewall, or with UDP
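A minimal port-status check, sketched in Python: risk is declared purely on whether a TCP connection succeeds, with no further probing.

```python
import socket


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Declare risk if a TCP connection succeeds (port-status inference)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Through a firewall or over UDP the picture blurs, because a filtered port and a dead host can look identical from the scanner's side.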
Protocol Compliance
• Exercises the server more than a port-status check, so it is both more reliable and more intrusive
• Declare vulnerability based on results
• Useful and correct when policy is “no web servers”
Behavior
• Examine edges of protocol for implementation details
• Infer software information from results
• Demonstrate that software under examination behaves differently from the software which has the vulnerability
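An illustrative sketch of behavior-based inference: send a probe at the edge of the protocol and classify the implementation by how it answers. The probe replies and the signature table below are invented for illustration, not real fingerprints.

```python
# Hypothetical signature table: reply prefix to an unusual command
# -> implementation family. The entries are assumptions of this sketch.
SIGNATURES = {
    "500 Command unrecognized": "sendmail-like",
    "502 Error: command not implemented": "postfix-like",
}


def classify(edge_case_reply: str) -> str:
    """Infer the implementation family from a protocol edge-case reply."""
    for prefix, family in SIGNATURES.items():
        if edge_case_reply.startswith(prefix):
            return family
    return "unknown"
```

If the observed behavior differs from that of the vulnerable implementation, we can rule the vulnerability out without ever exploiting it.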
Credentials
• Things needed to log in
  – UNIX login/password
  – NT account name/password
• “Login” on NT network means sending credentials with API calls
• Does not include public, anonymous, guest
Credentials (2)
• Very non-intrusive (except at install time)
• Very reliable
• Once logged in:
  – Ask for software version information
  – MD5 files
  – Call APIs to gather data
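A credentialed MD5 check might be sketched like this. Here the local filesystem stands in for an authenticated session on the target, and the known-good hash is supplied by the caller; both are assumptions of the sketch.

```python
import hashlib


def md5_file(path: str) -> str:
    """Hash a file on the target host (reached via credentialed access)."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def matches_patched(path: str, known_md5: str) -> bool:
    """Compare against the hash of a known patched version."""
    return md5_file(path) == known_md5
```

This is why credentialed checks are so reliable: they inspect the software itself rather than inferring from its network behavior.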
Conclusions
• Overview of techniques based on:
  – Exploit
  – Inference
  – Credentials
• Pros and Cons of various techniques
• This is a work in progress
  – Lots of interesting work