2013 IEEE Region 10 Conference (TENCON 2013), Xi'an, China, 22-25 October 2013

Evaluation Studies of Three Intrusion Detection Systems under Various Attacks and Rule Sets

Kittikhun Thongkanchorn, Sudsanguan Ngamsuriyaroj, Vasaka Visoottiviseth

Faculty of Information and Communication Technology

Mahidol University

Bangkok, Thailand

{kittikhun.tho, sudsanguan.nga, [email protected]}

Abstract—This paper investigates the performance and detection accuracy of three popular open-source intrusion detection systems: Snort, Suricata, and Bro. We evaluate all three under various attack types, including DoS, DNS, FTP, port scan, and SNMP attacks. The experiments were run at different traffic rates and with different sets of active rules. The performance metrics are CPU utilization, the number of packets lost, and the number of alerts. The results show that each attack type has a significant effect on IDS performance, and that Bro outperformed the other two systems across attack types when a specific rule set was used. The results also indicate a drop in accuracy when the three IDS tools activate the full rule set.

Keywords—Intrusion Detection System; Suricata; Snort; Bro; Performance Evaluation.

I. INTRODUCTION

Intrusion detection systems (IDS) are widely used in many organizations to detect malicious traffic. They are designed and built using different models and technologies, so each may perform best in a different environment.

The open-source IDS most commonly used are Snort [7], Suricata [6], and Bro [8]. They share many common rules, also called signatures. Snort, which originated in 1998, has been the most widely deployed and studied. Suricata was developed by the Open Information Security Foundation (OISF); it uses a multi-threaded architecture to speed up network traffic analysis. Bro combines signature-based and anomaly-based detection: it defines specific attacks in terms of events, which helps detect unusual network behavior.

The main processing stages of an IDS consist of a module to capture network packets, a module to decode each packet and classify its parts, and a module to decide, based on the applied rules, whether the packet is benign or malicious [6]. An IDS thus examines potentially malicious traffic against a set of rules; if a packet payload matches one of the rules, an alert is generated.
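
To make this three-stage flow concrete, the following minimal Python sketch mirrors the capture-decode-detect structure. It is an editorial illustration only, not code from Snort, Suricata, or Bro; the toy packet format and the single rule are invented for the example.

    # Minimal sketch of the generic IDS pipeline: capture -> decode -> detect.
    RULES = [
        {"dst_port": 21, "pattern": b"PASS ddd@\n", "msg": "FTP adm scan"},
    ]

    def capture(source):
        # Stage 1: obtain raw packets (here drawn from a list;
        # a real IDS reads from a network interface).
        for raw in source:
            yield raw

    def decode(raw):
        # Stage 2: split a toy "packet" into header fields and payload
        # (a real decoder parses the Ethernet/IP/TCP layers).
        dst_port, payload = raw
        return {"dst_port": dst_port, "payload": payload}

    def detect(pkt):
        # Stage 3: return an alert message for every rule the packet matches.
        return [r["msg"] for r in RULES
                if pkt["dst_port"] == r["dst_port"]
                and r["pattern"] in pkt["payload"]]

    traffic = [(21, b"USER adm\nPASS ddd@\n"), (80, b"GET / HTTP/1.0\n")]
    for raw in capture(traffic):
        for alert in detect(decode(raw)):
            print("ALERT:", alert)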

Since network traffic volumes are huge and still growing, and examining each packet takes significant time, IDS performance is critical to the security of an organization. Hence, the main goals of an IDS are good performance (no packet loss during analysis) and high accuracy (minimal false positives and false negatives).

Several studies have evaluated the performance of these three IDS in different environments, reporting CPU and memory utilization and packet loss [2, 3, 4, 5]. However, none of them has studied the effect of the number of active rules. Snort currently ships with more than 20,000 rules, and activating all of them adds substantial per-packet processing time. In addition, the number of alerts generated indicates how well each IDS reacts to malicious traffic and to different attacks.

In this paper, we investigate the performance of the three IDS under different traffic rates and attacks, using the number of packets lost, the number of alerts, and CPU usage as the metrics.

The remainder of this paper is organized as follows. Section 2 discusses related work. Section 3 presents the evaluation framework. Section 4 describes the experiments and results, and Section 5 concludes our work.

II. RELATED WORK

K. Salah and A. Kahtani [2] studied the performance of Snort on Linux and Windows Server 2003. They evaluated both normal and malicious traffic, using throughput and packet loss as the metrics. Their results showed that Snort on Linux outperformed Snort on Windows.

N. Paulauskas and J. Skudutis [3] investigated the performance of Snort with respect to three parameters: hardware, logging technique, and pattern-matching algorithm. They found that hardware type and alert-logging technique were the main factors affecting Snort performance.

S. Ngamsuriyaroj et al. [4] measured Snort performance under various attacks using different sets of rules. Their experimental results showed that the number of active Snort rules affects each attack type differently.

A. Alhomoud et al. [5] investigated the performance of Snort and Suricata on three platforms: ESXi (virtual machine), Linux 2.6 (Ubuntu 10.10), and FreeBSD 8.1. They tested each IDS with different packet sizes and speeds, and measured the percentage of packets dropped as well as the percentage of alerts. They concluded that Suricata performed better on FreeBSD, especially under high-speed network traffic.

Our work differs from prior efforts in that we measure the performance of all three IDS under various attacks and with different sets of rules. In addition, the rule sets used are common to the three IDS.

III. EVALUATION FRAMEWORK

The evaluation framework shown in Figure 1 consists of three components: the generated network traffic, the IDS with its active rules, and the reported number of alerts as the output. The network traffic is generated using four tools: Ostinato [10], Network Mapper (NMAP) [13], High Orbit Ion Cannon (HOIC) [12], and Low Orbit Ion Cannon (LOIC) [11]. Both background and attack traffic are generated and stored offline in tcpdump format before being replayed to each IDS at different traffic rates.

Fig. 1. Overview of the Evaluation Framework (input: generated network traffic and rules; method: the IDS under test; output: alerts)
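
The paper does not name its replay tool, but replaying a stored capture at a controlled rate can be scripted. The Python sketch below uses Scapy as one possible approach (a dedicated tool such as tcpreplay with its --pps option would serve equally well); the file name and interface are placeholders, and root privileges are required.

    # Sketch: replay a stored pcap at an approximate packet rate.
    # "capture.pcap" and "eth0" are placeholder values.
    from scapy.all import rdpcap, sendp

    RATE_PPS = 400                      # target replay rate (packets/second)
    packets = rdpcap("capture.pcap")    # traffic recorded in tcpdump format

    # inter = delay between packets; 1/RATE_PPS approximates the target rate
    sendp(packets, iface="eth0", inter=1.0 / RATE_PPS, verbose=False)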

In this work, we use the rule sets from Emerging Threats [9], an open-source community project that provides standard rule sets for Snort and Suricata as well as official rule sets for Bro. We found that Snort and Suricata share all rule sets, whereas Bro lacks rules for many attacks. We selected eight attack types and counted the common rules corresponding to each; the counts are given in Table I.

TABLE I. NUMBER OF COMMON RULES

Example rules for Snort/Suricata and for Bro are shown in Figures 2 and 3, respectively.

alert tcp $EXTERNAL_NET any -> $HOME_NET 21 (msg:"FTP adm scan"; flow:to_server,established; content:"PASS ddd@|0A|"; reference:arachnids,332; classtype:suspicious-login; sid:353; rev:6;)

Fig. 2. Sample Snort and Suricata rule

signature sid-353 {
  ip-proto == tcp
  src-ip != local_nets
  dst-ip == local_nets
  dst-port == 21
  event "FTP adm scan"
  tcp-state established,originator
  payload /.*PASS ddd@\x0a/
}

Fig. 3. Sample Bro Signature
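
Both rules flag the same payload: the byte sequence PASS ddd@ followed by a line feed on an FTP control connection. As an approximate illustration, the signature's payload pattern can be translated into a Python regular expression (re.DOTALL is our choice here so that "." also matches line feeds in the raw byte stream):

    import re

    # Python translation of the payload pattern shared by both rules above.
    pattern = re.compile(rb".*PASS ddd@\x0a", re.DOTALL)

    print(bool(pattern.match(b"USER adm\x0aPASS ddd@\x0a")))  # True  -> would alert
    print(bool(pattern.match(b"PASS secret\x0a")))            # False -> no alert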

Finally, under the different attacks, each IDS generates its own alerts whenever an input packet matches one of its rules. The number of alerts thus indicates how well an IDS reacts to malicious traffic.
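
Tallying those alerts amounts to counting entries in each IDS's alert log. As an illustration only, the sketch below counts alerts per signature message from a log in Snort's one-line "fast" alert format; the log path is a placeholder, and Suricata and Bro use different log formats.

    import re
    from collections import Counter

    # Each "fast" alert line carries the message between [**] markers, e.g.:
    #   ... [**] [1:353:6] FTP adm scan [**] [Classification: ...] ...
    counts = Counter()
    with open("alert.fast") as log:          # placeholder path
        for line in log:
            m = re.search(r"\[\*\*\]\s*(?:\[[\d:]+\]\s*)?(.*?)\s*\[\*\*\]", line)
            if m:
                counts[m.group(1)] += 1

    for msg, n in counts.most_common():
        print(f"{n:8d}  {msg}")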

IV. EXPERIMENTS AND RESULTS

Figure 4 depicts the network topology of our testbed, consisting of (1) three machines running Suricata v1.1, Snort v2.9.1, and Bro v1.5, (2) the target server, (3) the network switch, and (4) the network traffic generator for both background and malicious traffic. The three IDS machines run CentOS v5.0. In our experiments, we use the following metrics: (1) the number of packets lost, (2) the number of alerts generated, and (3) the IDS process's CPU and memory usage measured via the "top" command. Four experiments were conducted; the results are described in detail below.
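
As a side note, the per-process sampling done here with top can also be scripted. The sketch below uses the psutil package (our substitution; the experiments themselves used top) to sample an IDS process's CPU and memory at one-second intervals; the PID is a placeholder.

    import time
    import psutil

    proc = psutil.Process(1234)     # placeholder PID of the IDS process
    proc.cpu_percent(None)          # prime the counter; first call returns 0.0
    for _ in range(10):
        time.sleep(1.0)
        cpu = proc.cpu_percent(None)    # % CPU since the previous call
        mem = proc.memory_percent()     # % of physical memory
        print(f"cpu={cpu:5.1f}%  mem={mem:5.2f}%")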

Fig. 4. System Configuration (1: IDS machines, 2: target server, 3: network switch, 4: network traffic generator)

The first experiment uses normal TCP and UDP traffic (1 million packets generated with Ostinato v0.3) at traffic rates from 50 to 2000 pps; the packet loss and CPU utilization are measured. The results are shown in Tables II and III. For TCP traffic, the CPU usage of Snort and Suricata increases rapidly with the traffic rate, whereas Bro's CPU usage flattens at higher rates. UDP traffic shows a similar trend, except that Suricata has the highest CPU usage at higher rates. For the packet loss shown in Table III, Suricata generally loses fewer TCP packets than Snort and Bro, whereas Bro performs consistently for both TCP and UDP traffic.

In the second experiment, we evaluate the performance of each IDS for malicious traffic generated by eight types of attacks: DNS, DoS/DDoS, FTP, ICMP, POP3/IMAP, SCAN, SNMP, and Telnet/SSH. LOIC v1.0.4.0 [11] and HOIC v2.1.003 [12] generate the DoS and DDoS packets, while Nmap v5.51 [13] generates the other seven types of malicious traffic.


TABLE II. CPU USAGE (%) AT DIFFERENT PACKET RATES

Traffic Rate            TCP                           UDP
   (pps)       Suricata  Snort   Bro       Suricata  Snort   Bro
     50           5.0     11.0   10.0         9.0     12.0   17.0
    100           9.0     12.0    9.0        11.0     16.0   19.0
    200          16.0     15.0   10.0        27.0     18.0   19.0
    300          23.0     16.0   30.0        34.0     25.0   20.0
    400          22.0     26.0   31.0        44.0     29.0   20.0
    500          25.0     32.0   39.0        43.0     29.0   27.0
    600          38.0     44.0   40.0        54.0     45.0   33.0
    700          45.0     48.0   40.0        67.0     52.0   37.0
    800          47.0     50.0   42.0        68.0     56.0   36.0
    900          48.0     52.0   43.0        69.0     59.0   37.0
   1000          54.0     60.0   43.0        73.0     68.0   43.0
   2000          58.0     62.0   46.0        81.0     70.0   55.0

TABLE III. PACKET LOSS (%) AT DIFFERENT PACKET RATES

Traffic Rate            TCP                           UDP
   (pps)       Suricata  Snort   Bro       Suricata  Snort   Bro
     50           0.0      0.0    0.0         1.0      0.0    0.0
    100           0.0      0.0    0.0        15.8      0.0    0.1
    200           0.0      0.0    0.0        17.7      0.0    0.1
    300           0.0      0.7    9.0        17.8      0.0    5.1
    400           0.0      0.9    9.6        18.0      5.4    8.0
    500           0.4     10.1   12.1        18.4      8.3   10.5
    600           0.4     10.1   12.5        18.5     15.2   13.6
    700           0.7     17.1   18.3        18.7     19.7   15.9
    800           8.5     15.6   22.7        22.7     25.4   15.4
    900           9.4     30.5   22.9        21.8     27.8   17.6
   1000          33.6     35.4   24.2        20.1     28.4   21.5
   2000          43.3     45.2   31.4        24.1     30.9   33.8

The traffic rates are again varied from 50 to 2000 pps. We measured the packet loss and CPU utilization for each attack at each rate; only the results at 400 pps are depicted, in Figures 5 and 6 respectively. Figure 5 compares the packet loss of the three IDS across attacks. Packet loss is high for the DNS, DoS, FTP, and SNMP attacks on all three IDS, while Suricata has the smallest packet loss for the ICMP, POP3/IMAP, SCAN, and Telnet/SSH attacks.

Fig. 5. Packet Loss of All Attacks at the Traffic Rate of 400 pps

Figure 6 compares the CPU usage of the three IDS under the different attacks; the results show similar trends for all attacks. Table IV shows that Suricata gives the highest number of alerts for all attacks except Telnet/SSH and SCAN, while Snort gives the lowest number of alerts.

Fig. 6. CPU Usage of All Attacks at the Traffic Rate of 400 pps

TABLE IV. NUMBER OF ALERTS OF DIFFERENT ATTACKS

Figures 7 to 9 compare the number of alerts for all attacks at different traffic rates. Each IDS clearly reacts quite differently to the different attacks. Moreover, three attacks, ICMP, SCAN, and POP3/IMAP, produce alert counts distinct from the others.

Fig. 7. Number of Alerts by Suricata of All Attacks at Different Rates

Fig. 8. Number of Alerts by Snort of All Attacks at Different Rates


Fig. 9. Number of Alerts by Bro of All Attacks at Different Rates

The third experiment combines normal and malicious traffic. The normal TCP traffic serves as background at a fixed rate of 300 pps, while the malicious traffic varies from 100 to 700 pps. The DoS/DDoS, SCAN, and SNMP attacks are run as the malicious traffic, using the shared rule set. Figures 10(a) and 10(b) compare the packet loss of each attack for combined traffic at rates of 400 and 1000 pps for each IDS. The results show that, at 400 pps, Suricata gives the least packet loss, Snort gives the highest packet loss for the SCAN attack, and Bro's packet loss is consistent across all three attacks. At 1000 pps, all three IDS give similar packet loss for all three attacks.


Fig. 10. Packet Loss of Combined Traffic at (a) 400 pps and (b) 1000 pps

The fourth experiment measures the number of alerts when the three IDS are run under all eight attacks, using both the full rule set and the specific rule set of each attack. The results shown in Table V indicate a higher number of alerts under the full rule set, for all attacks and all three IDS. One reasonable explanation is that a single packet payload may match more than one rule, and some attacks have similar packet payloads.
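
This effect is easy to reproduce: one payload can satisfy the content tests of several overlapping rules at once, so activating the full rule set inflates the alert count. A toy Python illustration, with both rules invented for the example:

    # One payload matching several overlapping rules -> several alerts.
    rules = {
        "FTP adm scan":      b"PASS ddd@\n",   # specific credential probe
        "FTP PASS observed": b"PASS ",         # broader, overlapping rule
    }
    payload = b"USER adm\nPASS ddd@\n"
    alerts = [msg for msg, pat in rules.items() if pat in payload]
    print(alerts)   # two alerts for a single packet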

TABLE V. NUMBER OF ALERTS USING DIFFERENT RULE SETS

V. CONCLUSIONS

This paper presents an evaluation of three IDS, Suricata, Snort, and Bro, which are designed with different architectures and use different signatures to detect intrusions. We evaluated each IDS using both normal and malicious traffic, different traffic rates, eight types of attacks, and different rule sets. The metrics measured are CPU usage, the number of packets lost, and the number of alerts. The experimental results indicate that each IDS gives low packet loss and low CPU usage for TCP traffic. A high traffic rate has a significant effect on CPU usage, packet loss, and the number of alerts. Each IDS also behaves quite differently for each attack, showing different packet loss and different numbers of alerts. Moreover, mixing background traffic with attack traffic has little effect on the performance of any IDS. Finally, the choice of rule set substantially affects the number of alerts generated by each IDS. This in turn affects the accuracy of the IDS, and the evaluation of false positives and false negatives should be investigated further.

Evaluating different IDS is a challenging task, since techniques for improving the IDS tools are continuously being developed while new types of attacks keep appearing. Thus, experiments with suitable parameters should be conducted regularly to evaluate both the performance and the accuracy of IDS, so that the problems found can be researched and new solutions introduced to combat malicious attacks efficiently and effectively.

REFERENCES

[1] Common Intrusion Detection Framework (CIDF), http://gost.isi.edu/cidf/.

[2] K. Salah and A. Kahtani, "Performance Evaluation Comparison of Snort NIDS under Linux and Windows Server," Journal of Network and Computer Applications, vol. 33, 2010, pp. 6-15.

[3] N. Paulauskas and J. Skudutis, "Investigation of the Intrusion Detection System 'Snort' Performance," Electronics and Electrical Engineering, ISSN 1392-1215, no. 7 (87), 2008, pp. 15-18.

[4] S. Ngamsuriyaroj, B. Sa-nguankwamdee, E. Maharattanaviroj, and P. Ua-sopon, "Measurement of Snort Performance under Various Attacks," National Conference on Computer Science and Engineering, Bangkok, Thailand, 2007.

[5] A. Alhomoud, R. Munir, J. P. Disso, I. Awan, and A. Al-Dhelaan, "Performance Evaluation Study of Intrusion Detection Systems," Procedia Computer Science, vol. 5, 2011, pp. 173-180.

[6] Suricata, http://www.openinfosecfoundation.org/.

[7] Snort, http://www.snort.org/.

[8] Bro-IDS, http://www.bro-ids.org/.

[9] Emerging Threats, http://www.emergingthreats.net/.

[10] Ostinato, https://code.google.com/p/ostinato/.

[11] Low Orbit Ion Cannon (LOIC), http://sourceforge.net/projects/loic/.

[12] High Orbit Ion Cannon (HOIC), http://www.symantec.com/.

[13] Network Mapper (NMAP), http://nmap.org/.

