Testing IDS
Transcript
Page 1:

Testing IDS

Page 2:

Testing IDS

• Despite the enormous investment in IDS technology, no comprehensive and scientifically rigorous methodology is available to test IDS.

• Quantitative IDS performance measurement results are essential in order to compare different systems.

Page 3:

Testing IDS

• Quantitative results are needed by:
– Acquisition managers – to improve the process of system selection.
– Security analysts – to know the likelihood that the alerts produced by an IDS are caused by real attacks that are in progress.
– Researchers and developers – to understand the strengths and weaknesses of IDS in order to focus research efforts on improving systems and measuring their progress.

Page 4:

Testing IDS

• Quantitatively measurable IDS characteristics:
– Coverage
– Probability of false alarms
– Probability of detection
– Resistance to attacks directed at the IDS
– Ability to handle high-bandwidth traffic
– Ability to correlate events
– Ability to detect new attacks

Page 5:

Testing IDS

• Quantitatively measurable IDS characteristics (cont.):
– Ability to identify an attack
– Ability to determine attack success
– Capacity verification (NIDS).

Page 6:

Testing IDS

• Coverage
– Determines which attacks an IDS can detect under ideal conditions.
– For misuse (signature-based) systems:
• Counting the number of signatures and mapping them to a standard naming scheme.
– For anomaly detection systems:
• Determining which attacks, out of the set of all known attacks, could be detected by a particular methodology.

Page 7:

Testing IDS

• Coverage (cont.)
– The problem with determining the coverage of an IDS lies in the fact that various researchers characterize attacks by different numbers of parameters.

Page 8:

Testing IDS

• Coverage (cont.)

– These characterizations may take into account the particular goal of the attack (DoS, penetration, scanning, etc.), the software, protocol and/or OS against which it is targeted, the victim type, the data to be collected in order to obtain the evidence of the attack, the use or not of IDS evasion techniques, etc.

– Combinations of these parameters are also possible.

Page 9:

Testing IDS

• Coverage (cont.)
– The consequence of these differences is that attack definitions exist at both coarse and finer granularity.

– Because of the disparity in granularity, it is difficult to determine attack coverage of an IDS precisely.

Page 10:

Testing IDS

• Coverage (cont.)
– CVE is an attempt to alleviate this problem.
– But the CVE approach does not work either if multiple attacks exploit the same vulnerability using different approaches (for example, to evade IDS).

Page 11:

Testing IDS

• Coverage (cont.)
– Determining the importance of different attack types is also a problem when determining coverage.
– Different environments may assign different costs and importance to detecting different types of attacks.
– Example:
• An e-commerce site may not be interested in surveillance attacks, but may be very interested in detecting DDoS attacks.

Page 12:

Testing IDS

• Coverage (cont.)
– Example (cont.):
• A military site may be especially interested in detecting surveillance attacks in order to prevent more serious attacks by acting in their early phases.
– Another problem with coverage is determining which attacks to cover with regard to system updates.

Page 13:

Testing IDS

• Coverage (cont.)
– Example:
• It is worthless to test IDS coverage of attacks against a defended system to which countermeasures against those attacks have already been applied (patching, hardening, etc.)

Page 14:

Testing IDS

• Probability of false alarms
– Suppose that we have N IDS decisions, of which:
• In TP cases: intrusion – alarm.
• In TN cases: no intrusion – no alarm.
• In FP cases: no intrusion – alarm.
• In FN cases: intrusion – no alarm.
– Total intrusions: TP+FN
– Total no-intrusions: FP+TN
– N = TP+FN+FP+TN
– Base rate – the probability of an attack: P(I) = (TP+FN)/N

Page 15:

Testing IDS

• Probability of false alarms (cont.)
– Events: alarm A, intrusion I.
• The following rates are defined:
– True positive rate: TPR = TP/(TP+FN) = P(A|I)
– True negative rate: TNR = TN/(FP+TN) = P(¬A|¬I)

Page 16:

Testing IDS

• Probability of false alarms (cont.)
– False positive rate: FPR = FP/(FP+TN) = P(A|¬I)
– False negative rate: FNR = FN/(TP+FN) = P(¬A|I)
– (A short computational sketch of these rates follows below.)
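The four rates above follow directly from the TP, TN, FP, and FN counts defined on page 14. The following minimal Python sketch (not part of the original slides; the counts are hypothetical) computes all of them, together with the base rate:

```python
# Minimal sketch: confusion-matrix rates as defined on the preceding slides.

def rates(tp: int, tn: int, fp: int, fn: int) -> dict:
    n = tp + tn + fp + fn
    return {
        "base_rate": (tp + fn) / n,  # P(I) - the probability of an attack
        "TPR": tp / (tp + fn),       # P(A|I)
        "TNR": tn / (fp + tn),       # P(not-A|not-I)
        "FPR": fp / (fp + tn),       # P(A|not-I)
        "FNR": fn / (tp + fn),       # P(not-A|I)
    }

# Hypothetical counts, for illustration only:
print(rates(tp=80, tn=900, fp=15, fn=5))
```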

Page 17:

Testing IDS

• Probability of false alarms (cont.)

– This measure determines the rate of false positives produced by an IDS in a given environment during a particular time frame.

Page 18:

Testing IDS

• Probability of false alarms (cont.)
– Typical causes of false positives:
• Weak signatures (alerting on all traffic to a specific port, searching for the occurrence of a common word such as "help" in the first 100 bytes of SMTP or other TCP connections, alerting on common violations of the TCP protocol, etc.)
• Normal network monitoring and maintenance traffic.

Page 19:

Testing IDS

• Probability of false alarms (cont.)
– Difficulties in measuring the false alarm rate:
• An IDS may have a different false positive rate in different network environments, and a "standard network" does not exist.
• It is difficult to determine which aspects of network traffic or host activity will cause false alarms.
• Consequence: it is difficult to guarantee that a test network will produce the same number and type of false alarms as a real network.

Page 20:

Testing IDS

• Probability of false alarms (cont.)
– Difficulties in measuring the false alarm rate (cont.):
• An IDS can be configured in many ways, and it is difficult to determine which configuration should be used for a particular false positive test.

Page 21:

Testing IDS

• Probability of detection
– This measurement determines the rate of attacks detected correctly by an IDS in a given environment during a particular time frame.

Page 22:

Testing IDS

• Probability of detection
– Difficulties in measuring the probability of detection:
• The success of an IDS is largely dependent upon the set of attacks used during the test.
• The probability of detection varies with the false positive rate – the same IDS configuration must be used when testing for false positives and for hit rates.

Page 23:

Testing IDS

• Probability of detection (cont.)
– Difficulties in measuring the probability of detection (cont.):
• A NIDS can be evaded by using stealthy versions of attacks (fragmenting packets, using data encoding, using unusual TCP flags, encrypting attack packets, spreading attacks over multiple network sessions, launching attacks from multiple sources, etc.) – one such transformation is sketched below.
• This reduces the probability of detection, even though the same attack would be detected if it were not delivered in its stealthy form.
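To make the first evasion technique above (packet fragmentation) concrete, here is a minimal sketch using Scapy. The choice of tool, the victim address, and the payload are illustrative assumptions; the slides do not prescribe any particular tooling.

```python
# Illustrative sketch: fragmenting an attack payload to test NIDS reassembly.
# The destination address (a TEST-NET address) and payload are placeholders.
from scapy.all import IP, TCP, Raw, fragment, send

attack = (
    IP(dst="192.0.2.10")                                   # placeholder victim
    / TCP(sport=40000, dport=80, flags="PA")
    / Raw(load=b"GET /vulnerable-cgi?exploit=test HTTP/1.0\r\n\r\n")
)

# Split the datagram into 8-byte IP fragments. A NIDS that does not reassemble
# fragments may miss a signature that spans fragment boundaries, lowering its
# measured probability of detection for the same underlying attack.
for frag in fragment(attack, fragsize=8):
    send(frag, verbose=False)
```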

Page 24:

Testing IDS

• Resistance to attacks directed at the IDS
– This measurement demonstrates how resistant an IDS is to an attacker's attempt to disrupt the correct operation of the IDS.

Page 25:

Testing IDS

• Resistance to attacks directed at the IDS
– Some typical attacks against an IDS:
• Sending a large amount of non-attack traffic whose volume exceeds the processing capability of the IDS – this causes the IDS to drop packets.
• Sending non-attack packets that are specially crafted to trigger many signatures within the IDS – the human operator is overwhelmed with false positives, or an automated analysis tool crashes.

Page 26:

Testing IDS

• Resistance to attacks directed at the IDS (cont.)
– Some typical attacks against an IDS (cont.):
• Sending a large number of attack packets intended to distract the human operator, while the attacker launches a real attack hidden among these "false attacks".
• Sending packets containing data that exploit a vulnerability within the IDS processing algorithms themselves. Such vulnerabilities may be a consequence of coding errors.

Page 27:

Testing IDS

• Ability to handle high-bandwidth traffic
– This measurement demonstrates how well an IDS functions when presented with a large volume of traffic.
– Most NIDS start to drop packets as the traffic volume increases, which produces false negatives.
– Beyond a certain threshold, most IDS will stop detecting any attacks.

Page 28:

Testing IDS

• Ability to correlate events
– This measurement demonstrates how well an IDS correlates attack events.

– These events may be gathered from IDS, routers, firewalls, application logs, etc.

– One of the primary goals of event correlation is to identify penetration attacks.

– Currently, IDS have limited capabilities in this area.

Page 29:

Testing IDS

• Ability to detect new attacks
– This measurement demonstrates how well an IDS can detect attacks that have not occurred before.
– Purely signature-based systems will score 0 here.
– Anomaly-based systems may be suitable for this type of measurement; however, they generally produce more false alarms than signature-based systems.

Page 30:

Testing IDS

• Ability to identify an attack
– This measurement demonstrates how well an IDS can identify the attack that it has detected.

– Each attack should be labelled with a common name or vulnerability name, or by assigning the attack to a category.

Page 31:

Testing IDS

• Ability to determine attack success
– This measurement demonstrates whether the IDS can determine the success of attacks from remote sites that give the attacker higher-level privileges on the attacked system.

– Many remote privilege-gaining attacks (probes) fail and do not damage the attacked system.

– Many IDS do not distinguish between unsuccessful and successful attacks.

Page 32:

Testing IDS

• Ability to determine attack success (cont.)
– For the same attack, some IDS can detect the evidence of damage, while others detect only the signature of the attack actions.

– The ability to determine the attack success is essential for the analysis of attack correlation and the attack scenario.

– Measuring this capability requires information about both successful and unsuccessful attacks.

Page 33:

Testing IDS

• Capacity verification for NIDS
– NIDS demand higher-level protocol awareness than other network devices (switches, routers, etc.)
– NIDS inspect network packets more deeply than other devices do.
– Therefore, it is important to measure the ability of a NIDS to capture and process traffic under a given network load at the same level of accuracy as on a quiescent network.

Page 34:

Testing IDS

• Capacity verification for NIDS (cont.)
– Standardized capacity benchmarking methodologies for NIDS exist (e.g. Cisco has its own methodology).
– NIDS customers can use the standardized capacity test results for each metric, together with a profile of their networks, to determine whether the NIDS is capable of inspecting their traffic.

Page 35:

Challenges of IDS testing

• The following problems (at least) make IDS testing a challenging task:
– Collecting attack scripts and victim software is difficult.
– Requirements for testing signature-based and anomaly-based IDS are different.
– Requirements for testing host-based and network-based IDS are different.
– Using background traffic in IDS testing is not standardized.

Page 36:

Challenges of IDS testing

• Collecting attack scripts and victim software
– It is difficult and expensive to collect a large number of attack scripts.
– Attack scripts are available in various repositories, but it takes time to find scripts relevant to a particular testing environment.

Page 37:

Challenges of IDS testing

• Collecting attack scripts and victim software (cont.)
– Once an adequate script is identified, it takes approximately one person-week to review the code, test the exploit, determine where the attack leaves evidence, automate the attack, and integrate it into a testing environment.

Page 38:

Challenges of IDS testing

• Different requirements for testing signature-based and anomaly-based IDS
– Most commercial systems are signature-based.
– Many research systems are anomaly-based.

Page 39:

Challenges of IDS testing

• Different requirements for testing signature-based and anomaly-based IDS (cont.)
– An ideal IDS testing methodology would be applicable to both signature-based and anomaly-based systems.
– This is important because research anomaly-based systems should be compared to commercial signature-based systems.

Page 40:

Challenges of IDS testing

• Different requirements for testing signature-based and anomaly-based IDS (cont.)
– The problems with creating a single test to cover both types of systems:
• Anomaly-based systems with learning require normal traffic for training that does not include attacks.
• Anomaly-based systems with learning may learn the behaviour of the testing methodology and perform well without detecting real attacks at all.

Page 41:

Challenges of IDS testing

• Different requirements for testing signature-based and anomaly-based IDS (cont.)
– The problems with creating a single test to cover both types of systems (cont.):
• This may happen when all the attacks in a test are launched from a particular user, IP address, subnet, or MAC address.

Page 42:

Challenges of IDS testing

• Different requirements for testing signature-based and anomaly-based IDS (cont.)
– The problems with creating a single test to cover both types of systems (cont.):
• Anomaly-based systems with learning can also learn subtle characteristics that are difficult to predetermine (packet window size, ports, typing speed, command set used, TCP flags, connection duration, etc.) and thus perform artificially well in the test environment.

Page 43:

Challenges of IDS testing

• Different requirements for testing signature-based and anomaly-based IDS (cont.)
– The problems with creating a single test to cover both types of systems (cont.):
• The performance of a signature-based system in a test will, to a large degree, depend on the set of attacks used in the test.
• The decision about which attacks to include in a test may then favour a particular IDS – it is not objective.

Page 44:

Challenges of IDS testing

• Different requirements for testing host-based and network-based IDS
– Testing host-based IDS presents some difficulties not present when testing network-based IDS:
• Network-based IDS can be tested off-line by creating a log file containing TCP traffic and replaying that traffic to the IDS – this is convenient, because there is no need to test all the IDS at the same time.
• Repeatability of the test is easy to achieve.

Page 45:

Challenges of IDS testing

• Different requirements for testing host-based and network-based IDS (cont.)
– Testing host-based IDS presents some difficulties not present when testing network-based IDS (cont.):
• Host-based IDS use a variety of system inputs in order to determine whether or not a system is under attack.
• This set of inputs is not the same for all IDS.
• Host-based IDS monitor a host, not a single data feed.
• It is therefore difficult to replay activity from log files.

Page 46:

Challenges of IDS testing

• Different requirements for testing host-based and network-based IDS (cont.)
– Testing host-based IDS presents some difficulties not present when testing network-based IDS (cont.):
• Since it is difficult to test a host-based IDS off-line, an on-line test should be performed.
• Consequence: problems with repeatability.

Page 47:

Challenges of IDS testing

• Using background traffic in IDS testing
– Four approaches:
• Testing using no background traffic/logs.
• Testing using real traffic/logs.
• Testing using sanitized traffic/logs.
• Testing using simulated traffic/logs.
– It is not clear which approach is the most effective for testing IDS.
– Each of the four approaches has unique advantages and disadvantages.

Page 48:

Challenges of IDS testing

• Using background traffic in IDS testing (cont.)
– Testing using no background traffic/logs
• This testing may be used as a reference condition.
• An IDS is set up on a host/network on which there is no activity.
• Then, computer attacks are launched on this host/network to determine whether or not the IDS can detect them.
• This technique can determine the probability of detection (hit rate) under no load, but it cannot determine the false positive rate.

Page 49:

Challenges of IDS testing

• Using background traffic in IDS testing (cont.)
– Testing using no background traffic/logs (cont.)
• Useful for verifying that an IDS has signatures for a set of attacks and that the IDS can properly label each attack.
• Often much less costly than other approaches.
• Drawback: tests using this technique are based on the assumption that the ability of an IDS to detect an attack is the same regardless of the background activity.

Page 50:

Challenges of IDS testing

• Using background traffic in IDS testing (cont.)
– Testing using no background traffic/logs (cont.)
• At low levels of background activity, that assumption is probably true.
• At high levels of background activity, the assumption is often false, since IDS performance degrades at high traffic intensities.

Page 51:

Challenges of IDS testing

• Using background traffic in IDS testing (cont.)
– Testing using real traffic/logs
• The attacks are injected into a stream of real background activity.
• Very effective for determining the hit rate of an IDS at a particular level of background activity.
• The background activity is real – it contains all the anomalies and subtleties – so realistic hit rates are obtained.
• Enables comparison of IDS hit rates at different levels of activity.

Page 52:

Challenges of IDS testing

• Using background traffic in IDS testing (cont.)
– Testing using real traffic/logs (cont.)
• Drawbacks:
– A repeatable test using real traffic is problematic – it is (currently) difficult to store and replay large amounts of real traffic at rates higher than 100 Mb/s. A possible solution is parallelization, which introduces packet sequencing problems.
– Experiments of this kind usually use a small number of victim machines, set up only to be attacked during the test. Some anomaly detection IDS can then artificially elevate their performance during the test.

Page 53:

Challenges of IDS testing

• Using background traffic in IDS testing (cont.)
– Testing using real traffic/logs (cont.)
• Drawbacks (cont.):
– The real background activity used may contain anomalies unique to the network, which favour one IDS over another. Example: a test network may heavily use a particular protocol that is processed more deeply by a particular IDS.
– The major problem with testing using real background traffic/logs: it is very difficult to determine false positive rates correctly, because it is virtually impossible to guarantee the identification of all the attacks that naturally occur in the background activity.

Page 54:

Challenges of IDS testing

• Using background traffic in IDS testing (cont.)
– Testing using real traffic/logs (cont.)
• Drawbacks (cont.):
– It is difficult to distribute the test publicly, since there are privacy concerns related to the use of real background activity.
– Replay may damage the timing – timestamps should also be kept.

Page 55:

Challenges of IDS testing

• Using background traffic in IDS testing (cont.)
– Testing using sanitized traffic/logs
• Sanitizing – removing sensitive information from real data.
• The goal is to overcome the privacy problems of using, analyzing, and distributing real background activity.
• Example: TCP packet headers may be cleansed, and packet payloads may be hashed.
• Real background activity is prerecorded, and then all the sensitive data are removed.

Page 56:

Challenges of IDS testing

• Using background traffic in IDS testing (cont.)
– Testing using sanitized traffic/logs (cont.)
• Then, attack data are injected into the sanitized data stream:
– By replaying the sanitized data and running attacks concurrently, or
– By separately creating attack data and then inserting these into the sanitized data.
• Advantages:
– Test data are freely distributable.
– The test is repeatable.

Page 57:

Challenges of IDS testing

• Using background traffic in IDS testing (cont.)
– Testing using sanitized traffic/logs (cont.)
• Disadvantages:
– Sanitization attempts may end up removing much of the content of the background activity – a very unrealistic environment.
– The major problem: sanitization attempts may fail, resulting in accidental release of sensitive data. It is infeasible for a human to verify the sanitization of a large volume of data.
– The injected attacks do not interact realistically with the sanitized background activity. Example: an injected buffer overflow attack may cause a web server to crash, yet the background activity continues to send requests to the web server.

Page 58:

Challenges of IDS testing

• Using background traffic in IDS testing (cont.)
– Testing using sanitized traffic/logs (cont.)
• Disadvantages (cont.):
– When sanitizing real traffic, it may be difficult to remove the attacks that existed in the data stream – this causes problems with false positive rate testing.
– Sanitizing data may remove information needed to detect attacks.

Page 59:

Challenges of IDS testing

• Using background traffic in IDS testing (cont.)
– Testing using simulated traffic/logs
• The most common approach to testing IDS.
• A testbed network with hosts and network infrastructure is created.
• Background traffic is generated on this network, as well as the attacks.
• The testbed network includes victims of interest, with background traffic generated by means of complex traffic generators that model actual network traffic statistics.

Page 60:

Challenges of IDS testing

• Using background traffic in IDS testing (cont.)
– Testing using simulated traffic/logs (cont.)
• It is also possible to employ simpler traffic generators to create a small number of packet types at a high rate.
• Network traffic and host audit logs can be recorded in such a testbed network for later playback.
• It is also possible to perform evaluations in real time.

Page 61:

Challenges of IDS testing

• Using background traffic in IDS testing (cont.)
– Testing using simulated traffic/logs (cont.)
• Advantages:
– Data can be distributed freely – they do not contain any private or sensitive information.
– There is a guarantee that the background activity does not contain any unknown attacks.
– IDS testing using simulated traffic is easily repeatable.

Page 62:

Challenges of IDS testing

• Using background traffic in IDS testing (cont.)
– Testing using simulated traffic/logs (cont.)
• Disadvantages:
– It is very costly and difficult to create a simulation.
– It is difficult to simulate a high-bandwidth environment – resource constraints.
– Different traffic is needed for different types of networks – academic, e-commerce, military, etc.

Page 63:

Measuring IDS performance

• In order to compare different IDS, a measure of their performance is needed.

• Of all the measurable characteristics mentioned before, the true positive rate and the false positive rate are the most important for comparing IDS.

• The true positive rate and the false positive rate are included in various composite metrics for comparing IDS.

Page 64:

Measuring IDS performance

• It is important to determine the probability of intrusion, given that an alert has been generated.

• This gives rise to a Bayesian probabilistic measure for characterising IDS performance.

• We need the total probability of an alert in order to determine the probability of intrusion given the alert.

Page 65:

Measuring IDS performance

• Total probability of an alert:
– In the contingency table, the TP and FN cases fall under intrusion I, and the TN and FP cases fall under no intrusion ¬I; A denotes the alarm event.
– I and ¬I are mutually exclusive, and A = (I∩A) ∪ (¬I∩A).
– Therefore: P(A) = P(I)·P(A|I) + P(¬I)·P(A|¬I)

Page 66:

Measuring IDS performance

• The quantities in this formula are estimated as:
– P(A|I) = TPR = TP/(TP+FN)
– P(A|¬I) = FPR = FP/(FP+TN)
– P(I) = (TP+FN)/N
– P(¬I) = 1 − P(I)

Page 67:

Measuring IDS performance

• A performance measure – the Bayesian detection rate:

P(I|A) = P(I)·P(A|I) / (P(I)·P(A|I) + P(¬I)·P(A|¬I))

• The greater the detection rate, the better the IDS, but…

Page 68:

Measuring IDS performance

• Base-rate fallacy
– Even if the false alarm rate P(A|¬I) is very low, the Bayesian detection rate P(I|A) is still low if the base rate P(I) is low.
– Example 1: if P(A|I) = 1, P(A|¬I) = 10^-5 and P(I) = 2×10^-5, then P(I|A) ≈ 66%.
– Example 2: if P(A|I) = 1, P(A|¬I) = 10^-5 and P(I) = 10^-1, then P(I|A) ≈ 99.99%.
– Example 3: if P(A|I) = 1, P(A|¬I) = 10^-9 and P(I) = 2×10^-5, then P(I|A) ≈ 99.99%.
– (These examples are reproduced by the short sketch below.)
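The three examples can be verified with a few lines of Python; this sketch is illustrative and not part of the original slides:

```python
# Bayesian detection rate P(I|A), computed via the total probability of an
# alert introduced on the preceding slides.

def bayesian_detection_rate(p_i: float, tpr: float, fpr: float) -> float:
    """P(I|A) = P(I)P(A|I) / (P(I)P(A|I) + P(not-I)P(A|not-I))."""
    p_alert = p_i * tpr + (1.0 - p_i) * fpr  # total probability of an alert
    return p_i * tpr / p_alert

print(bayesian_detection_rate(p_i=2e-5, tpr=1.0, fpr=1e-5))  # Example 1: ~0.66
print(bayesian_detection_rate(p_i=1e-1, tpr=1.0, fpr=1e-5))  # Example 2: ~0.9999
print(bayesian_detection_rate(p_i=2e-5, tpr=1.0, fpr=1e-9))  # Example 3: ~0.99995
```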

Page 69:

Measuring IDS performance

• Conclusion:
– If the base rate is low, the false alarm rate must be extremely low.

• Example:
– The KDD Cup data set without filtering has a very high base rate – no base-rate fallacy.
– What is good for the military is sometimes very bad for a non-military environment.

Page 70:

Measuring IDS performance

• Another performance measure: ROC
– Receiver Operating Characteristic.
– Used widely in systems for detecting signals in noise (radar, etc.)
– A TPR vs. FPR curve (a sketch of how such a curve is produced follows below).
– An ideal system has TPR = 1 and FPR = 0.
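A ROC curve is traced by sweeping a tunable parameter of the IDS – here, an alarm threshold on an anomaly score – and recording the (FPR, TPR) pair at each setting. The scores below are made up for illustration:

```python
# Illustrative sketch: producing ROC points for a toy anomaly detector by
# sweeping its alarm threshold. The scores are hypothetical anomaly scores.

def roc_points(intrusion_scores, normal_scores, thresholds):
    points = []
    for t in thresholds:
        tp = sum(s >= t for s in intrusion_scores)  # intrusions that alarm
        fn = len(intrusion_scores) - tp
        fp = sum(s >= t for s in normal_scores)     # normal events that alarm
        tn = len(normal_scores) - fp
        points.append((fp / (fp + tn), tp / (tp + fn)))
    return points

intrusion_scores = [0.9, 0.8, 0.75, 0.6, 0.4]   # intrusions tend to score high
normal_scores = [0.7, 0.5, 0.45, 0.3, 0.2, 0.1]
for fpr, tpr in roc_points(intrusion_scores, normal_scores,
                           thresholds=[0.1, 0.3, 0.5, 0.7, 0.9]):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```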

Page 71:

Measuring IDS performance

• Example of a ROC curve:

[Figure: ROC curves (%TPR vs. %FPR) for two systems, IDS1 and IDS2.]

Page 72:

Measuring IDS performance

• The use of ROC curves for assessing IDS has suffered harsh criticism:
– Normally, an IDS would be characterised by a single point in the FPR–TPR coordinates. (However, if a parameter of the IDS is varied, a ROC curve is obtained instead of a single point.)

Page 73:

Measuring IDS performance

• Example – the ROC of an IDS with the relabelling algorithm, in which the DB index and centroid diameters are implemented.

• The parameter DeltaDB is varied between 0.2 and 0.45.

Page 74:

Measuring IDS performance

[Figure: the resulting ROC curve for DeltaDB varied between 0.2 and 0.45.]

Page 75:

Test data sets

• For testing using simulated traffic/logs, a source of simulated traffic into which attacks are injected is needed.

• A widely used simulated traffic data set is the KDD Cup '99 data set.

Page 76:

Test data sets

• It corresponds to the IDS testing carried out by MIT Lincoln Laboratory in 1998 and 1999.

• In 1999, KDD organized a data mining contest, and the database used was the one generated by Lincoln Laboratory.

Page 77:

Test data sets

• KDD (SIGKDD) – the ACM special interest group on knowledge discovery and data mining.

• The purpose of the KDD Cup '99 contest was to classify the given data in order to differentiate attack records from normal traffic records.

Page 78:

Test data sets

• The KDD Cup 1999 Data
– Various intrusions simulated in a military air-base network environment – 9 weeks of raw tcpdump data for a LAN simulating a typical U.S. Air Force LAN.
– 4,900,000 data instances – vectors of feature values extracted from connection records.

Page 79:

Test data sets

• The KDD Cup 1999 Data (cont.)
– The data were split into 2 parts:
• The raw training data (4 GB of compressed binary tcpdump data – 7 weeks of network traffic – approx. 5 million connection records).
• The test data – 2 weeks – approx. 2 million connection records.
– A connection:
• A sequence of TCP packets starting and ending at well-defined time instants, between which data flow to and from a source IP address to a target IP address under a well-defined protocol.

Page 80:

Test data sets

• The KDD Cup 1999 Data (cont.)
– Each connection is labelled either as normal or as an attack, with exactly one specified attack type.
– Each connection record consists of about 100 bytes.

Page 81:

Test data sets

• Four categories of simulated attacks:
– DoS – denial of service (e.g. SYN flood).
– R2L – unauthorized access from a remote machine (e.g. password guessing).
– U2R – unauthorized access to superuser or root functions (e.g. various "buffer overflow" attacks).
– Probing – surveillance and other probing for vulnerabilities (e.g. port scanning).

Page 82:

Test data sets

• The test data do not have the same probability distribution as the training data.

• They include specific attack types not in the training data.

• This made the data mining task more realistic – the distribution of real data and types of possible attacks are normally not known during the training of the learning system.

Page 83:

Test data sets

• The training data set contains 22 attack types:
– back (DoS)
– buffer_overflow (U2R)
– ftp_write (R2L)
– guess_passwd (R2L)
– imap (R2L)
– ipsweep (Probe)

Page 84:

Test data sets

• The training data set attack types (cont.):
– land (DoS)
– loadmodule (U2R)
– multihop (R2L)
– neptune (DoS)
– nmap (Probe)
– perl (U2R)

Page 85:

Test data sets

• The training data set attack types (cont.):
– phf (R2L)
– pod (DoS)
– portsweep (Probe)
– rootkit (U2R)
– satan (Probe)
– smurf (DoS)

Page 86:

Test data sets

• The training data set attack types (cont.):
– spy (R2L)
– teardrop (DoS)
– warezclient (R2L)
– warezmaster (R2L)

• The test data set contains 14 additional attack types.

Page 87:

Test data sets

• 41 higher-level traffic features were defined in order to help distinguish normal connections from attacks.

• These features are divided into 3 categories:
– Basic features of individual TCP connections.
– Content features within a connection, suggested by domain knowledge.
– Traffic features computed using a 2-second time window.

Page 88:

Test data sets

• Basic features of individual TCP connections (host-based traffic features):
– Connection records were sorted by destination host.
– Features were constructed using a window of 100 connections to the same host instead of a time window.
– This is useful, since some probing attacks scan hosts (or ports) over a long time interval.

Page 89:

Test data sets

• Content features within a connection, suggested by domain knowledge:
– These features look for suspicious behaviour in the data portions, such as the number of failed login attempts.

Page 90:

Test data sets

• Traffic features computed using a two-second time window (time-based traffic features):
– The "same host" features examine only the connections in the past two seconds that have the same destination host as the current connection, and calculate statistics related to protocol behaviour, service, etc.
– The "same service" features examine only the connections in the past two seconds that have the same service as the current connection.
– (A sketch of computing one such windowed feature follows below.)
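To make the time-window features concrete, the sketch below computes the "count" feature (the number of connections to the same destination host as the current connection within the past 2 seconds) over a simplified, hypothetical record layout; it is illustrative, not the original feature-extraction code:

```python
# Illustrative sketch: computing one time-based KDD feature ("count").
from dataclasses import dataclass

@dataclass
class Connection:
    timestamp: float  # seconds
    dst_host: str
    service: str

def count_same_host(history: list[Connection], current: Connection,
                    window: float = 2.0) -> int:
    """Connections to current.dst_host in the past `window` seconds."""
    return sum(
        1 for c in history
        if c.dst_host == current.dst_host
        and current.timestamp - window <= c.timestamp < current.timestamp
    )

history = [
    Connection(10.0, "10.0.0.5", "http"),
    Connection(10.8, "10.0.0.5", "telnet"),
    Connection(11.5, "10.0.0.9", "http"),
]
current = Connection(12.0, "10.0.0.5", "http")
print(count_same_host(history, current))  # -> 2
```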

Page 91:

Test data sets

• Basic features of individual TCP connections

Feature name | Description | Type
duration | Length (in seconds) of the connection | Continuous
protocol_type | Type of the protocol, e.g. tcp, udp, etc. | Discrete
service | Network service on the destination, e.g. http, telnet, etc. | Discrete
src_bytes | Number of data bytes from source to destination | Continuous
dst_bytes | Number of data bytes from destination to source | Continuous

Page 92:

Test data sets

• Basic features of individual TCP connections (cont.)

Feature name | Description | Type
flag | Normal or error status of the connection | Discrete
land | 1 if connection is from/to the same host/port; 0 otherwise | Discrete
wrong_fragment | Number of "wrong" fragments | Continuous
urgent | Number of urgent packets | Continuous

Page 93:

Test data sets

• Content features

Feature name | Description | Type
hot | Number of "hot" indicators | Continuous
num_failed_logins | Number of failed login attempts | Continuous
logged_in | 1 if successfully logged in; 0 otherwise | Discrete
num_compromised | Number of "compromised" conditions | Continuous
root_shell | 1 if root shell is obtained; 0 otherwise | Discrete

Page 94:

Test data sets

• Content features (cont.)

Feature name | Description | Type
su_attempted | 1 if "su root" command attempted; 0 otherwise | Discrete
num_root | Number of "root" accesses | Continuous
num_file_creations | Number of file creation operations | Continuous
num_shells | Number of shell prompts | Continuous
num_access_files | Number of operations on access control files | Continuous

Page 95:

Test data sets

• Content features (cont.)

Feature name | Description | Type
num_outbound_cmds | Number of outbound commands in an ftp session | Continuous
is_hot_login | 1 if the login belongs to the "hot" list; 0 otherwise | Discrete
is_guest_login | 1 if the login is a "guest" login; 0 otherwise | Discrete

Page 96:

Test data sets

• Time-based traffic features

Feature name | Description | Type
count | Number of connections to the same host as the current connection in the past 2 seconds | Continuous

The following features refer to so-called "same host" connections:

serror_rate | % of connections that have "SYN" errors | Continuous
rerror_rate | % of connections that have "REJ" errors | Continuous

Page 97:

Test data sets

• Time-based traffic features (cont.)

"Same host" connections (cont.):

Feature name | Description | Type
same_srv_rate | % of connections to the same service | Continuous
diff_srv_rate | % of connections to different services | Continuous
srv_count | Number of connections to the same service as the current connection in the past 2 seconds | Continuous

Page 98:

Test data sets

• Time-based traffic features (cont.)

The following features refer to so-called "same service" connections:

Feature name | Description | Type
srv_serror_rate | % of connections that have "SYN" errors | Continuous
srv_rerror_rate | % of connections that have "REJ" errors | Continuous
srv_diff_host_rate | % of connections to different hosts | Continuous

Page 99:

Test data sets

• Selecting the right set of system features is a critical step when formulating the classification task (in this case, the intrusion detection algorithm).

Page 100:

Test data sets

• The 41 features were obtained by means of the following process:
– Frequent sequential patterns (frequent episodes!) in the network audit data were identified.
– These patterns were used as guidelines to select and construct temporal statistical features.

Page 101:

Test data sets

• Weaknesses of the KDD Cup data set:
– Simulated data must be similar to real data – there is no proof that the KDD Cup data are similar to real data.
– There are none of the anomalous packets that appear in real data.
– There are no failure modes.
– Synthetic attacks are not distributed realistically in the background normal data.
– Simulated TCP traffic is not diverse enough (only 9 types of TCP traffic in the KDD Cup data set).

Page 102:

Test data sets

• Stide (Sequence Time-Delay Embedding) data set – collections of system calls
– Instead of the high-level features used in the KDD Cup '99 database, low-level features – sequences of system calls – are used in order to identify potential intrusions.
– In the training phase, stide builds a database of all unique, contiguous system call sequences of a predetermined, fixed length occurring in the traces.

Page 103:

Test data sets

• Stide (Sequence Time-Delay Embedding) data set – collections of system calls (cont.)
– During testing, stide compares sequences in the new traces to those in the database, and reports an anomaly measure indicating how much the new traces differ from the normal training data (see the sketch below).
– 13,726 traces of normal data were collected at the Computer Science Department, University of New Mexico.
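The stide idea described above fits in a few lines. The window length and the mismatch-ratio anomaly measure in this sketch are illustrative assumptions; the real tool has additional machinery beyond this:

```python
# Illustrative sketch of the stide approach: a database of unique, contiguous
# system-call sequences from normal traces, and a mismatch-based anomaly score.

def sequences(trace, length):
    """All contiguous system-call sequences of the given length."""
    return [tuple(trace[i:i + length]) for i in range(len(trace) - length + 1)]

def train(normal_traces, length=6):
    db = set()
    for trace in normal_traces:
        db.update(sequences(trace, length))
    return db

def anomaly_measure(db, trace, length=6):
    """Fraction of the trace's sequences never seen in the training data."""
    seqs = sequences(trace, length)
    misses = sum(1 for s in seqs if s not in db)
    return misses / len(seqs) if seqs else 0.0

# Toy usage with made-up system-call names and a short window:
normal = [["open", "read", "read", "write", "close", "open", "read", "close"]]
db = train(normal, length=3)
print(anomaly_measure(db, ["open", "read", "exec", "write", "close", "open"],
                      length=3))  # -> 0.75
```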

Page 104:

Test data sets

• PESIM 2005 dataset – Fraunhofer Institute, Berlin, Germany
– Goal: to overcome the problems with the KDD Cup 1999 dataset.
– A combination of 5 servers in a virtual machine environment (2 Windows, 2 Linux, and 1 Solaris OS).
– HTTP, FTP, and SMTP services.
– To achieve realistic traffic characteristics, news sites were mirrored on the HTTP servers.
– File sharing facilities were offered on the FTP servers.

Page 105:

Test data sets

• PESIM 2005 dataset (cont.)
– SMTP traffic was injected artificially:
• 70% mails from personal communication and mailing lists.
• 30% spam mails received by 5 individuals.
– Normal data were preprocessed in the following way:
• Random selection of 1000 TCP connections from each protocol.
• Attachments were removed from the TCP traffic.
– Attacks against the simulated services were generated by penetration testing tools.

Page 106:

Test data sets

• PESIM 2005 dataset (cont.)
– Multiple instances of 27 different attacks were launched against the HTTP, FTP, and SMTP services.
– The majority of the attacks originate from the Metasploit framework.
– Some of the attacks were taken from the Bugtraq and Packet Storm Security lists.

• The problem: this dataset is not publicly available.

