MASARYK UNIVERSITY
FACULTY OF INFORMATICS

Comparative Analysis of Personal Firewalls
MASTER THESIS
Bc. Andrej Simko
Brno, January 2015
Declaration
Hereby I declare that this paper is my original authorial work, which I have worked out on my own. All sources, references and literature used or excerpted during the elaboration of this work are properly cited and listed in complete reference to the due source.
Supervisor: Mgr. Vít Bukač
Acknowledgement
I want to express my sincere gratitude to Vít Bukač. Without his guidance, this work would never have been brought to the level you see now. I could not have imagined a better advisor.

I also want to thank my family for their constant support in everything - my decisions so far and my studies alike.
Abstract
This thesis describes an analysis of 18 personal firewalls. It uncovers the differences in their behaviour while they are subjected to various port scanning techniques and Denial of Service (DoS) attacks. For port scanning, the detection ability, time consumption, leaked port states and obfuscation techniques are analysed. For the different DoS attacks, performance measurements of the CPU and network adapter are taken. The potential for firewall fingerprinting based on the differing behaviour across multiple products is also addressed.
Keywords
firewall, comparative analysis, port scan, denial of service, flooding attacks, finger-printing, port TCP/0
Contents
1 Introduction . . . 1
2 Personal firewalls . . . 2
  2.1 Packet filters . . . 2
  2.2 Stateful inspection firewall . . . 2
  2.3 Unified Threat Management (UTM) and Next Generation firewalls . . . 3
  2.4 Testing - Proactive Security Challenge 64 . . . 3
3 Attacks by types . . . 5
  3.1 Port scanning . . . 5
    3.1.1 Ports . . . 5
    3.1.2 Flags . . . 5
    3.1.3 Port scanning attack . . . 6
    3.1.4 Port scan attack techniques in Nmap . . . 7
      TCP SYN scan . . . 8
      TCP connect() scan . . . 8
      TCP FIN scan, TCP Xmas scan, Null scan . . . 10
      TCP ACK scan . . . 10
      TCP Window scan . . . 10
      TCP Maimon scan . . . 11
      UDP scan . . . 11
      SCTP INIT scan . . . 11
      SCTP COOKIE ECHO scan . . . 11
      IP protocol scan . . . 12
      Service/Version detection scan . . . 12
    3.1.5 Other Nmap options . . . 12
      -6 . . . 12
      -O . . . 12
      -p <port range> . . . 13
      --top-ports <count> . . . 13
      --mtu <mtu number> . . . 13
      --scan-delay <time> . . . 13
    3.1.6 Ping scan (ICMP echo request) . . . 14
  3.2 Denial of Service . . . 14
    3.2.1 Hping3 tool . . . 15
    3.2.2 Low Orbit ION Cannon (LOIC) . . . 16
    3.2.3 IPv6 Router Advertisement (ICMPv6 type 134, code 0) . . . 16
    3.2.4 IPv6 Neighbor Advertisement (ICMPv6 type 136, code 0) . . . 17
4 Experiment description . . . 18
      Virtual environment . . . 18
      Physical environment . . . 18
  4.1 Choosing particular firewalls . . . 19
      Firewall settings . . . 19
  4.2 Port scanning . . . 20
      Logging port scanning attacks . . . 21
      Port TCP/0 . . . 24
      Detection thresholds . . . 24
  4.3 DoS attacks . . . 25
    4.3.1 DoS results . . . 30
5 Fingerprinting . . . 33
  5.1 Using time differences . . . 33
  5.2 Using port states . . . 34
6 Ideal behaviour of firewall under certain attacks . . . 48
  6.1 Ideal port scanning behaviour of a firewall . . . 48
  6.2 Ideal behaviour under the DoS attacks . . . 51
7 Summary . . . 54
  7.1 Future improvements . . . 55
Bibliography . . . 55
Appendices . . . 59
A List of attachments . . . 59
B Abbreviations . . . 60
C Additional figures . . . 62
Chapter 1
Introduction
A personal firewall is a must-have on every machine to defend against various network attacks. In recent years, specialized firewall applications have been replaced by all-in-one security suites. All major antivirus companies have incorporated a firewall into their products to provide better protection. There are not many comparative analyses of firewalls that let users see which products score higher than others. One notable example of a test suite is the Proactive Security Challenge 64 [21], but it does not perform the tests I incorporated into this thesis. See Chapter 2.4 for more details.
Zero-day exploits are still on the rise. For an attacker, precise knowledge of which endpoint-protection system is installed on the victim's device is invaluable information, especially if he wants to avoid detection or use exploits targeting particular security suites. The ability to fingerprint firewalls therefore comes in handy. I did not find any prior research on firewall fingerprinting using port scanning attacks, so I conducted one.
Although Denial of Service (DoS) attacks usually target servers, they can also be launched against entire networks. If that happens, it is interesting to observe how personal firewalls behave under different kinds of DoS attacks. The second chapter describes the general history and the principles of how firewalls work. The third chapter focuses on the theory behind the attacks used in this thesis; different port scanning techniques and DoS attacks are described in detail. The fourth chapter elaborates on the preparation of the testing environment, mentions important differences observed between various firewall brands, gives examples of the detection thresholds that trigger attack alarms, and shows the effect of the DoS attacks on the performance of the victim's computer. Fingerprinting is illustrated in the fifth chapter, along with a few examples of determining which firewall is installed on the victim's computer without any previous knowledge. The results of all port scanning attacks are shown in tables with all the port states, as well as the timing of each port scanning attack; interesting statistical observations are also pointed out. Chapter six outlines how an ideal firewall should behave; this is important to note because such behaviour would successfully counter possible fingerprinting via port scan attacks. The final chapter describes possibilities for future improvements and summarizes the most important findings and observations of this work.
Chapter 2
Personal firewalls
Firewalls filter the traffic into or out of a network, based on certain rules. Personal firewalls are host based - they provide protection for a single operating system. Based on rules set by the administrator, they can either allow or deny certain connections. There are two possible approaches - implicit allow and implicit deny default policies. For security reasons, the whitelisting approach is highly recommended - what is not explicitly allowed is denied by default. When a new packet/session/connection is received by the firewall, it usually goes through the filtering rules in their order in the list. Suppose we have a rule "accept all packets from 192.168.20.0/24" at the top of the list, followed by a "drop all" rule. When a packet arrives, its header is examined by the firewall and checked against this ruleset. For example, if the packet comes from the IP address 192.168.20.2, it is allowed. If it comes from 192.168.16.32, it is denied.
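The first-match rule evaluation described above can be sketched in a few lines of code. This is a minimal illustration under my own assumptions (the rule format and the function name are invented for this sketch, not taken from any particular firewall product):

```python
import ipaddress

# An ordered ruleset: the first matching rule decides the verdict.
RULES = [
    ("accept", ipaddress.ip_network("192.168.20.0/24")),
    ("drop",   ipaddress.ip_network("0.0.0.0/0")),  # "drop all" catch-all
]

def filter_packet(src_ip: str) -> str:
    """Walk the ruleset in order and return the first matching verdict."""
    addr = ipaddress.ip_address(src_ip)
    for verdict, network in RULES:
        if addr in network:
            return verdict
    return "drop"  # whitelisting default: what is not allowed is denied

print(filter_packet("192.168.20.2"))   # accept - matches the first rule
print(filter_packet("192.168.16.32"))  # drop - falls through to "drop all"
```

The order of the rules matters: swapping the two entries would make the catch-all shadow the accept rule, which is exactly why real firewalls evaluate rules top to bottom and stop at the first match.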
2.1 Packet filters
A packet filter examines the header of each packet individually and can filter traffic based on simple rules. IP addresses, subnet addresses, ports, and protocols are used to create filtering rules in an ACL (Access Control List). By combining these rules, one can, for example, allow only TCP (Transmission Control Protocol) traffic that originates from an IP address within the Local Area Network (LAN) and is destined for port 25. A packet filter is handy on routers, but almost useless on a personal computer. For one, it needs to be configured by a more advanced user; for another, it is not sufficient for today's end stations, as it usually requires long ACLs that take a great amount of time and a certain skill to configure. However, it can still come in handy even for end stations - for example during a DoS attack aimed at a certain port or coming from a certain IP address, which it can effectively mitigate if configured properly. A second use case is blocking simple port scanning attacks from a certain address. Blocking all unused ports is a highly recommended way of using a packet filter. This type of firewall is also called "stateless", as it cannot determine whether a packet is part of an ongoing session. It understands only the link, internet and transport layers of the TCP/IP protocol stack.
2.2 Stateful inspection firewall
Instead of inspecting each packet individually, the stateful firewall monitors the overall session. With TCP, the session must begin with a 3-way handshake - see Figure 2.1.
2. PERSONAL FIREWALLS
Figure 2.1: 3-way TCP handshake [2]
After the session is established, the firewall can monitor its status and traffic. Stateful Packet Inspection (SPI) is aware of who created the connection and can determine whether a packet is the start of a new connection, part of an existing connection, or an invalid packet. It can allow only inbound TCP packets that are generated in response to a connection initiated from inside the internal network. Dynamic ports are opened based on the needs of the connections, and when a session is complete, the firewall can close these no longer needed ports. As with packet filters, the stateful inspection firewall does not understand the application layer of the TCP/IP protocol suite.
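The core idea - admit inbound traffic only when it belongs to a session initiated from inside - can be sketched with a simple connection table. This is a deliberately simplified model (the class, the 4-tuple key, and the method names are my own; real SPI implementations also track TCP state, sequence numbers and timeouts):

```python
# Minimal sketch of stateful inspection: a table of sessions initiated
# from the internal network, keyed by (src, sport, dst, dport).

class StatefulFirewall:
    def __init__(self):
        self.sessions = set()

    def outbound(self, src, sport, dst, dport):
        # Record an outgoing connection so matching replies can be admitted.
        self.sessions.add((src, sport, dst, dport))

    def inbound_allowed(self, src, sport, dst, dport):
        # An inbound packet is a valid reply if it matches a tracked
        # session with the endpoints reversed.
        return (dst, dport, src, sport) in self.sessions

fw = StatefulFirewall()
fw.outbound("192.168.1.5", 50000, "93.184.216.34", 80)
print(fw.inbound_allowed("93.184.216.34", 80, "192.168.1.5", 50000))  # True
print(fw.inbound_allowed("203.0.113.9", 80, "192.168.1.5", 50000))    # False
```

An unsolicited inbound packet - such as a port scan probe - matches no tracked session and is therefore rejected, which is why stateful firewalls typically report scanned ports as filtered.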
2.3 Unified Threat Management (UTM) and Next Generationfirewalls
The firewall is no longer a specialized piece of software. It has become a part of UTM - an integration of application control, user awareness, an Intrusion Detection System (IDS), an Intrusion Prevention System (IPS), antivirus, antispam, content filters, deep packet analysis, an anomaly detection system and more. It understands data on the application layer of TCP/IP and can block inappropriate content (e.g. vulnerabilities, websites, viruses). The UTM was followed by the next step in firewall evolution, defined by Gartner in [1]. Today, these two stages are merging into one another. Both focus on easy and effective management of the entire security system, because management is perceived to be the most common security problem. Gartner stated in the research paper [29]: "Through 2018, more than 95% of firewall breaches will be caused by firewall misconfigurations, not firewall flaws".
As the firewall understands the application layer, the rules can be set for particularapplications: “Allow Skype for Alice” or “Deny Facebook for Bob”.
2.4 Testing - Proactive Security Challenge 64
Probably the best-known test suite for application-based security on Windows can be found on www.matousec.com. It examines products such as Internet security suites, personal firewalls and Host Intrusion Prevention Systems (HIPS). It comprises 11 testing levels, each consisting of 10 tests. 38 firewalls are ranked by their score across all 110 tests, rated from 0% to 100%. All the source codes are available on their website. The individual tests are divided into 4 main categories:
Product       Score
Agnitum       90%
Avast!        8%
AVG           7%
Avira         9%
Bitdefender   19%
COMODO        97%
Emsisoft      -
ESET          67%
F-Secure      6%
Gdata         -
Kaspersky     89%
McAfee        3%
Microsoft     -
Norton        9%
Panda         1%
Quick Heal    -
TrustPort     8%
ZoneAlarm     34%
Table 2.1: Firewall score in Proactive Security Challenge 64
• Leak tests attempt to send data to an Internet server.

• Spying tests use keyloggers or packet sniffers to spy on the user's inputs or data.

• Autorun tests try to install themselves persistently so that they remain active after a reboot.

• Self-defense tests attempt to terminate security product processes or threads, and to remove, destroy or corrupt objects critical to that security product.
As the reader can observe, these categories do not directly test the firewalls themselves but rather the overall endpoint security protection systems; no network attacks are included in these tests. In my thesis, I have only done tests which focus on firewalls. For the completeness of this research, Table 2.1 shows the scores of the firewalls I used in my tests in the Proactive Security Challenge 64 [21]. Some firewalls I tested were not tested by the Proactive Security Challenge 64, hence the "-" sign in the table.
Chapter 3
Attacks by types
3.1 Port scanning
3.1.1 Ports
Once the network layer of the TCP/IP stack provides host-to-host delivery, we need to address process-to-process connectivity in the transport layer, so that individual applications are also addressable. This is done by assigning different port numbers to the host. A particular process/application is uniquely determined by the IP address together with the port number (this tuple is called the socket) and the transport layer protocol (TCP, UDP, SCTP or DCCP). A port is uniquely identified by a 16-bit port number (0-65535), which is stored in the segment's header (see Figure 3.1). Ports are divided into 3 categories (set by RFC 6335 [8]):
• System (well-known) ports (0-1023)
• User (registered) ports (1024-49151)
• Dynamic (private) ports (49152-65535)
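The three RFC 6335 ranges can be expressed as a small classification function (a sketch; the function name and the short labels are my own):

```python
def port_category(port: int) -> str:
    """Classify a port number into the three RFC 6335 ranges."""
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit: 0-65535")
    if port <= 1023:
        return "system"    # well-known ports
    if port <= 49151:
        return "user"      # registered ports
    return "dynamic"       # private/ephemeral ports

print(port_category(445))    # system - microsoft-ds
print(port_category(3389))   # user - ms-wbt-server
print(port_category(50000))  # dynamic
```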
Since November 1977 [5], the Internet Assigned Numbers Authority (IANA) - the organization for assigning IP addresses, AS (Autonomous System) numbers, port numbers and more - has periodically published tables of port numbers in the form of RFC documents. In January 2002, the last such document, RFC 1700 [5], was rendered obsolete by RFC 3232 [6], which officially stated that all future changes would be made available in the online database on www.iana.org. Today, the official list can be found at [4]. IANA only assigns system and user ports, never dynamic ports. Some of the ports mentioned in this thesis are: 135 (TCP - msrpc), 445 (TCP - microsoft-ds) and 3389 (TCP - ms-wbt-server).
3.1.2 Flags
In the TCP header (see Figure 3.1), apart from other fields, there are six 1-bit fields indicating flags:
• URG - Urgent - packet must be processed urgently
• ACK - Acknowledgment - for segment that has been successfully received
• PSH - Push - the receiver should immediately push data to the upper level
• RST - Reset - hard termination of the connection
3. ATTACKS BY TYPES
Figure 3.1: TCP Header [15]
Figure 3.2: 4-way TCP handshake [3]
• SYN - Synchronize - for setting up the connection and synchronizing sequencenumbers
• FIN - Finish - no more data from sender, connection tear-down
TCP has to establish the connection first and send data later - for this purpose, the 3-way handshake must occur prior to sending data (see Figure 2.1). For the connection tear-down, a 4-way exchange is needed (see Figure 3.2).
3.1.3 Port scanning attack
A port scanning attack is the process of probing a host for open ports. Using various techniques, the attacker/administrator/penetration tester can differentiate between port states such as open, closed, filtered, and open|filtered (where "|" means logical "or" and is used when the particular technique cannot differentiate between the two states).
Port scans can be divided into horizontal and vertical scans. A horizontal scan scans a single port on many victims, whereas a vertical scan scans many ports on a single victim. In my research I scanned only one victim at a time, thus I will only describe vertical scans.
Since the early days, this particular attack has been used to discover vulnerable systems that can potentially be attacked or exploited. One could object that over time, many countermeasures and security devices/features have been implemented to stop this threat - firewalls, Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS), Network Address Translation (NAT), or proxy servers. Yet the port scanning attack still poses a great threat nowadays - consider for example the HACIENDA program of the NSA/GCHQ [12]. The document describes a common platform for scanning entire countries and sharing the results between the agencies of the United States, Canada, the United Kingdom, Australia and New Zealand.
Port scans are not only used by government agencies; they are often used by criminals or hackers. Especially when combined with knowledge of some 0-day vulnerability, the results of port scans let them attack their victims.
One study which demonstrates the scale of today's port scanning attacks is the paper "An Internet-Wide View of Internet-Wide Scanning" [22]. The authors have shown that scans of the entire IPv4 address space are quite common nowadays - by legitimate researchers, security companies, and attackers alike. In their words: "Internet-scale horizontal scans have become common". High-speed scanning of the entire IPv4 address space, and thus all public Internet addresses, was made possible by the introduction of two open-source utilities in 2013: Zmap [23] and Masscan [24]. The time required to launch such an Internet-wide scan is around 44 minutes for Zmap [22] and under 3 minutes for Masscan [24]. Observations on real networks show that within hours or days after a new vulnerability is discovered (e.g. the Linksys backdoors, Heartbleed or the NTP DDoS attacks), there is an obvious spike in scanning of the vulnerable ports.
3.1.4 Port scan attack techniques in Nmap
Nmap (network mapper) is an open-source cross-platform tool for network discovery and security auditing. It is fully capable of both horizontal and vertical scans. I chose Nmap for testing purposes because it has many options which can be used for scanning a particular host and evading detection by firewalls on the victim. See Table 3.1 for the list of Nmap's scanning techniques with the commands used and the possible port states. See Figure 3.3 for a graphical interpretation of the Nmap scanning techniques described later.
I stored every command I ran through Nmap into a separate txt file (see the files on the CD for more information), so that I could later analyse the results. All the important information, such as how many ports are in the open state, which port states differ from all
Scanning technique              Nmap command   Possible states of ports
TCP SYN                         -sS            open, closed, filtered
TCP connect()                   -sT            open, closed, filtered
TCP FIN, TCP Xmas, TCP Null     -sF, -sX, -sN  open|filtered, closed, filtered
TCP ACK                         -sA            unfiltered, filtered
TCP Window                      -sW            open, closed, filtered
TCP Maimon                      -sM            open|filtered, closed
UDP scan                        -sU            open, open|filtered, closed, filtered
SCTP INIT                       -sY            open, closed, filtered
SCTP COOKIE ECHO                -sZ            open|filtered, filtered
IP protocol scan                -sO            open, open|filtered, closed, filtered
Service and Version detection   -sV            open, closed, filtered
Table 3.1: List of Nmap techniques
others, as well as the time consumption of each scan with particular parameters, can be extracted from the log files for further analysis. Every directory (one per scanned firewall) contains at least 33 log files from different scans.
The importance of Nmap can be seen simply from its presence at many security conferences, with talks such as "Let's Screw With Nmap" at Defcon 21 [11] and "Network Anti-Reconnaissance: Messing with Nmap Through Smoke and Mirrors" at Defcon 20 [10], among many others.
TCP SYN scan
This is probably the most common and most used scanning technique (it is also Nmap's default). It never completes the TCP handshake (which makes it stealthy), because Nmap resets the connection before it can be completed: the attacker sends only TCP SYN segments and never responds to SYN+ACK segments. After sending the TCP SYN segment, the response from the victim can either be SYN+ACK if the port is open; RST if the port is closed; or an ICMP unreachable error or no response at all if the port is filtered. The only drawback of the TCP SYN scan is that it requires root privileges. As this is now a widely used technique, many network protection systems detect these types of scans.
TCP connect() scan
This scan tries to establish the entire TCP connection and finish the 3-way handshake (after which an RST segment is immediately sent back) by using the connect() function of the operating system. It thus does not need privileged access to run. As with TCP SYN, a closed port is indicated by receiving only an RST segment; an open port is indicated by a SYN+ACK segment (to which Nmap sends ACK and RST segments); and a port is marked filtered if nothing is received. This technique is the most obvious and the easiest to detect.
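Because the connect() scan relies only on the operating system's ordinary connect call, it can be reproduced without raw sockets. The sketch below uses Python's socket module and demonstrates the scan against a listener we control on localhost; it cannot distinguish closed from filtered without examining the specific error, so both are reported together:

```python
import socket

def connect_scan(host, port, timeout=0.5):
    """A single-port TCP connect() scan: a full handshake attempt via the
    OS connect() call, as described above. No privileges required."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success, an errno value otherwise
        return "open" if s.connect_ex((host, port)) == 0 else "closed|filtered"

# Demonstration against a listener we control:
listener = socket.socket()
listener.bind(("127.0.0.1", 0))        # bind to any free port
listener.listen(1)
port = listener.getsockname()[1]

print(connect_scan("127.0.0.1", port))  # open
listener.close()
print(connect_scan("127.0.0.1", port))  # closed|filtered
```

Nmap's real implementation additionally inspects the error returned (connection refused versus timeout) to separate closed from filtered ports.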
TCP FIN scan, TCP Xmas scan, Null scan
These methods are stealthier than the TCP SYN scan, because they do not even attempt to create a handshake. They just send one segment that would probably never occur in the real world (except for the FIN scan). According to RFC 793 [9], "If the [destination port] state is CLOSED then all data in the incoming segment is discarded. An incoming segment containing a RST is discarded. An incoming segment not containing a RST causes a RST to be sent in response". On the next page it is stated that if segments are sent to open ports but do not contain RST, SYN or ACK, then "you are unlikely to get here, but if you do, drop the segment, and return". In other words, when a host receives a segment on a closed port (and it does not contain RST), it should respond with an RST segment; and if the port is open (and none of RST, SYN or ACK is present), it should not send anything. If an ICMP unreachable error is generated, the port is marked as filtered. Nmap thus uses the "closed", "open|filtered", and "filtered" states. Although the exact response should be operating system specific, I discovered that there are slight differences when different firewalls are used. The differences between these techniques are as follows:
• FIN: only FIN flag is present (6 bits: 000001)
• Xmas: FIN, PSH and URG flags are present (6 bits: 101001)
• Null: no flag is present (6 bits: 000000)
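The three probe types differ only in which bits of the 6-bit flags field are set. A small sketch makes the bit patterns above explicit (the constant names mirror the flag abbreviations; the ordering from most significant bit is URG ACK PSH RST SYN FIN):

```python
# TCP flag bits as they appear in the 6-bit flags field
# (from most significant: URG ACK PSH RST SYN FIN).
URG, ACK, PSH, RST, SYN, FIN = (
    0b100000, 0b010000, 0b001000, 0b000100, 0b000010, 0b000001,
)

probes = {
    "FIN":  FIN,              # 000001
    "Xmas": URG | PSH | FIN,  # 101001 - flags "lit up" like a Christmas tree
    "Null": 0,                # 000000 - no flag at all
}

for name, bits in probes.items():
    print(f"{name:>4}: {bits:06b}")
```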
Unfortunately, not all operating systems follow RFC 793. Some of them (Microsoft Windows, many Cisco devices, and a few others) "send RST responses to the probes regardless of whether the port is open or not" [7]. This results in all ports being marked as closed.
Such behaviour was observed on IPv4 mainly with the Emsisoft and Kaspersky firewalls - most of their ports were marked as "closed". With other firewalls on IPv4, the majority of ports were marked as "open|filtered". On IPv6, this changed rapidly - Panda, TrustPort and ZoneAlarm marked all 1000 scanned ports as "closed". Based on these observations, the operating system cannot be deduced just by observing the port states, as the firewall changes the default behaviour of Windows (and most likely other operating systems as well).
TCP ACK scan
Although the TCP ACK scan does not determine whether a port is open (or even open|filtered), it can be useful for determining whether a firewall is stateful and which ports are filtered. If the scanned system is unfiltered, both open and closed ports should return an RST segment (in which case Nmap marks them as unfiltered). If there is no response, or an ICMP error message occurs, the ports are labeled as filtered.
TCP Window scan
This scan is exactly the same as the TCP ACK scan, but exploits an implementation detail to distinguish open from closed ports: when an RST segment is returned, its TCP Window field can be either zero or a positive number. If it is zero, the port is usually closed; if it is a positive number, the port might be open. The results from my research, however, indicate that this is not purely operating system specific, and a few firewalls behave differently than they should.
TCP Maimon scan
Named after its discoverer, Uriel Maimon, this scanning method should be specific to BSD-like systems. It is the same as TCP FIN, TCP Xmas and TCP Null with one difference - FIN+ACK is used in the probes. According to RFC 793 [9], an RST segment should be generated in response to a FIN+ACK probe; this is not always true, and some operating systems simply drop the segment if the port is open.
UDP scan
This is the only option in Nmap able to scan UDP ports. Since UDP is a connection-less service, there are no flags in the UDP header. In fact, the UDP header is designed to be as small as possible and contains only the source and destination ports, length and checksum. For some common ports (e.g. 53 - DNS and 161 - SNMP) Nmap sends a protocol-specific payload; for all other ports the data part is empty. If the response is an ICMP port unreachable error (type 3, code 3), the port is closed. If a different ICMP unreachable error (type 3, codes 1, 2, 9, 10 or 13) [13] is generated, the port is marked as filtered. If a UDP packet is generated as a response, the port is marked as open; and if no response at all is generated, the port is classified as open|filtered.
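The UDP decision rules can again be captured in a small function (a sketch of the interpretation logic only; the reply representation is my own, and real probing needs raw ICMP access):

```python
# Map what came back for a UDP probe to a port state, per the rules above.
def udp_scan_state(reply_kind, icmp_type=None, icmp_code=None):
    if reply_kind == "udp":
        return "open"                # a service answered with a UDP packet
    if reply_kind == "icmp" and icmp_type == 3:
        if icmp_code == 3:
            return "closed"          # ICMP port unreachable
        if icmp_code in (1, 2, 9, 10, 13):
            return "filtered"        # other unreachable variants
    if reply_kind is None:
        return "open|filtered"       # silence: open service or dropped probe
    return "filtered"

print(udp_scan_state("icmp", 3, 3))  # closed
print(udp_scan_state(None))          # open|filtered
```

The ambiguous open|filtered state is exactly why UDP scans are slow: silence forces Nmap to retransmit before giving up on a port.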
SCTP INIT scan
The Stream Control Transmission Protocol (SCTP) is a rather new protocol defined in RFC 4960 [7]. The INIT scan is the equivalent of the TCP SYN scan, because it never creates a full connection. Nmap is able to scan 52 SCTP ports, which are stored in the "nmap-services" file in the Nmap installation directory. Note that according to IANA, there are 65 ports allocated to the SCTP protocol [4].
Although it is unlikely that a normal end-user station would use any of these ports, I tested it anyway. This was the only port scan technique in which all firewalls shared exactly the same port state results - all 52 ports were reported as filtered in both IPv4 and IPv6 scans. The only difference on IPv4 was negligible - with Panda and Kaspersky the scan took 2.13 s, whereas on all other firewalls it took exactly 2.34 s. The same timing result was observed with the SCTP COOKIE ECHO scan.
SCTP COOKIE ECHO scan
This more advanced variant of SCTP scanning exploits the fact that an implementation should drop packets containing COOKIE ECHO chunks on open ports, but send an ABORT chunk if the port is closed. It cannot differentiate between open and filtered ports, but can identify closed ports. Nmap is again able to scan 52 ports, and unlike the SCTP INIT scan, there are visible differences in port states across firewalls on IPv4 - Kaspersky and Panda have 2 and 3 ports marked as filtered, respectively. All other firewalls have all 52 ports in the open|filtered state. Apart from the same time difference with the same firewalls, McAfee's scan took 2.61 s to finish. With IPv6 scanning, the results varied even more - only 9 firewalls reported all 52 ports as open|filtered.
IP protocol scan
Up to this point, every scan probed particular ports of the TCP, UDP or SCTP protocols. The IP protocol scan, in contrast, determines which IP protocols (ICMP, IGMP, TCP, UDP, SCTP, ...) are supported by the target machine. The scan iterates through the 8-bit protocol field in the IP header (thus scanning 256 different protocol numbers). If Nmap receives any response in a protocol from the scanned host, it marks that protocol as open. If an ICMP protocol unreachable message (type 3, code 2) is generated, the protocol is marked as closed. If another ICMP error message is generated (type 3, codes 1, 3, 9, 10, or 13) [13], the protocol is marked as filtered. If no response is generated, the protocol is marked as open|filtered.
Service/Version detection scan
Normally, it is almost certain that if port 80/TCP is open, HTTP runs on it, or if 25/TCP is open, an SMTP service runs on it. This default behaviour is, however, not always the case. This is where Nmap's service and version detection option comes in handy. Thanks to its vast database, it can also identify the version numbers of particular services. Nmap tries to determine the service protocol (e.g. HTTP, SSH, FTP), the application name (e.g. Solaris telnetd, Apache httpd), the version number, the hostname, the device type (e.g. printer, router), and the OS family (e.g. Windows, Linux) [14].
3.1.5 Other Nmap options
-6
This option turns on IPv6 scanning (an IPv6 address has to be used instead of an IPv4 one). Enabling IPv6 port scanning can be very useful nowadays, provided we have the IPv6 address of the victim. Because of the huge address space, it is no longer feasible to go through all IP addresses on a particular subnet as it was with IPv4. On the other hand, hostnames can also be used instead of the entire IPv6 address, which makes things easier. The output looks the same as when scanning IPv4 ports. There are notable differences in both the default port state behaviour and the scan times.
-O
When used, this option enables OS detection via TCP/IP stack fingerprinting. Nmap sends carefully crafted TCP/UDP packets to the victim and analyses every piece of the response. After many such packets, it compares the result to the "nmap-os-db" file (in version 6.47, the file was last edited 2014-08-13 15:39:44). Users are encouraged to submit new fingerprints to the Nmap website.
-p <port range>
This option scans specific ports. For my purposes I used it when testing the detection threshold for the number of scanned ports, when scanning the special port TCP/0, and when scanning all ports (with "-p 1-65535") on the victim host. Unless specified otherwise by the user, the port range is scanned in randomized (non-consecutive) order.
--top-ports <count>
Scans a selected number of the ports most used on the Internet. The installation directory of Nmap contains a file "Nmap-services" with a database of the 19 908 most used ports (in Nmap 6.47 it was edited on 2014-08-13 20:52:08). Every row is defined by service name, portnum/protocol, open-frequency, and optional comments. Based on this file, Nmap can scan a given number of the most used ports on the Internet.
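As an illustration, the following sketch picks the N most frequently open ports from rows in the Nmap-services format (service name, portnum/protocol, open-frequency); the sample rows only mimic that format:

```python
# Sketch: pick the N most frequently open ports from nmap-services-style
# rows ("name port/proto frequency"). The sample data is illustrative.

SAMPLE = """\
http 80/tcp 0.484143
telnet 23/tcp 0.221265
https 443/tcp 0.208669
ftp 21/tcp 0.197667
ssh 22/tcp 0.182286
"""

def top_ports(text, n, proto="tcp"):
    entries = []
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        name, portproto, freq = line.split()[:3]
        port, p = portproto.split("/")
        if p == proto:
            entries.append((float(freq), int(port)))
    entries.sort(reverse=True)          # highest open-frequency first
    return [port for _, port in entries[:n]]

print(top_ports(SAMPLE, 3))   # the three most used TCP ports in the sample
```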
Those firewalls which were successful in detecting the port scan attack with default settings have some threshold number of scanned ports which triggers detection when crossed. This threshold differs between scanning a range of ports (e.g. from port 1 to port 100) and scanning a number of top ports. I assumed that an attacker would be able to scan fewer top ports than a range (because the top ports are the most used ones, they should be the most protected). As I will describe later, I observed the exact opposite: more ports can be scanned with the --top-ports command than with a range of ports. See Table 4.6 for the exact numbers.
--mtu <mtu number>
Setting the Maximum Transmission Unit (MTU) is a very good trick for evading detection on some firewalls by forcing the packets to fragment. The value must be a multiple of 8. For my tests, I tried 4 alternatives: 8, 16, 32, and 64. See Table 4.8 for the results.
--scan-delay <time>
Setting the scan delay sets the time between two probes. Nmap accepts various time formats in the time parameter: milliseconds (ms), seconds (either "s" or nothing, since seconds are the default), minutes (m), or hours (h). Most of the firewalls which detect port scanning can do so only above a certain rate threshold. Using this parameter, I tried to find the lowest delay that the firewall did not detect, aiming for a precision of 10 milliseconds. If you want to be absolutely certain your port scan won't be detected, raise the value a little. I used this parameter only when port scanning with the default configuration was detected, so in the attached tables some firewalls have no value assigned to them. See Table 4.7 for more information.
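The threshold search can be sketched as a binary search over the delay, down to 10 ms precision (scan_is_detected is a stub standing in for a real scan observed on the firewall under test; the 690 ms stub threshold is only an example):

```python
# Sketch of the threshold search used with --scan-delay: binary-search
# the probe delay (in ms) down to 10 ms precision. `scan_is_detected`
# is a stub standing in for an actual scan against the firewall.

def scan_is_detected(delay_ms, threshold_ms=690):
    # Stub: the firewall flags the scan when probes arrive faster than
    # its (unknown) threshold; 690 ms here is just an example value.
    return delay_ms < threshold_ms

def lowest_undetected_delay(lo=0, hi=60_000, step=10):
    """Smallest multiple of `step` ms at which the scan goes unnoticed."""
    while lo < hi:
        mid = (lo + hi) // 2 // step * step   # round down to the step grid
        if scan_is_detected(mid):
            lo = mid + step
        else:
            hi = mid
    return lo

print(lowest_undetected_delay())  # 690 with the stub threshold above
```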
Product       Success            Detection
              ping    ping -6    ping       ping -6
Agnitum       no      no         partially  partially
Avast!        yes     yes        no         no
AVG           no      no         partially  partially
Avira         no      no         no         no
Bitdefender   no      no         no         no
COMODO        no      no         no         no
Emsisoft      no      yes        no         no
ESET          yes     yes        no         no
F-Secure      no      yes        no         no
Gdata         yes     yes        no         no
Kaspersky     no      no         no         no
McAfee        no      no         yes        yes
Microsoft     no      no         no         no
Norton        no      yes        no         no
Panda         yes     yes        no         no
Quick Heal    no      yes        no         no
TrustPort     no      yes        partially  no
ZoneAlarm     no      no         partially  no

Table 3.2: Ping scan behaviour
3.1.6 Ping scan (ICMP echo request)
The ICMP scan is the easiest one to test, because operating systems support it natively with the ping command. Note that the response to ping can usually be configured in the settings of the particular firewall, or of the underlying operating system. Interestingly enough, researchers discovered two daily ICMP scans of the entire IPv4 address space coming from Guangzhou, China [22]. The purpose of these scans, as well as their usefulness or maliciousness, remains a mystery. For the completeness of this work, I observed the firewalls' responses to ping probes and their detection capabilities, which can be seen in Table 3.2. In this table, "partially" for AVG means that the logs contained "System block in ICMP on local port 8" for the IPv4 ping and "System block in ICMPv6 on local port 128" for the IPv6 ping. ZoneAlarm stored only Type: "Route", Action: "Blocked", date, time, source and destination IP. There was no mention of ping, but something was stored in the logs. Only McAfee detects the ping scan.
3.2 Denial of Service
Denial of Service (DoS) attacks are attacks which can make a host, network or infrastructure unavailable to its legitimate users. These attacks are mainly used against web servers, DNS servers, email servers or network infrastructure. DoS attacks can be used to disrupt single computers or entire networks. When two or more attackers take part in the same attack, it is called a Distributed Denial of Service (DDoS). DDoS usually makes use of extensive botnets (networks of compromised hosts) to launch this type
of attack [25].
So-called "asymmetric" DoS attacks can be mounted from a single slow device on a slow network, yet they can still inflict major damage (e.g. the amplification attack on NTP, the Network Time Protocol, with an amplification factor of 4670 [18]). I will focus on DoS against end stations (computers of ordinary users), in particular on the CPU and network adapter of the victim.
There are many tools available online which can be used to attack targets (e.g. XOIC [16] or LOIC [17]). These tools can easily be used against particular hosts, e.g. IP addresses in a LAN. So-called "flooding" occurs when the attacker tries to overwhelm the victim with a huge volume of packets. This mainly consumes the victim's network adapter resources. The CPU may also be heavily affected, depending on the type of packet and the length of its data part. A firewall has to inspect each packet header, and if there is a need for deep packet inspection (e.g. to be able to detect some attack types), the firewall also looks into the application data part of the packet and performs an analysis. This behaviour depletes CPU resources. If these flooding attacks are not detected, the user can be completely unaware that his inability to work with his device is due to an attack.
Similar to the TCP port scanning attack, the attacker can set any TCP flags in a TCP flooding attack. For simplicity, I did not set any flags in my attacks. I did, however, test 3 different variants of every flooding attack: with 0 data size (only the header was sent), with 1000 bytes of data, and with 65495 bytes of data (the maximum value possible with the Hping3 tool).
I discovered that different firewalls cope differently with DoS attacks. As we will see, some of them keep average CPU usage as low as 19%, in contrast with 100% for other firewalls under the same type of active attack. The bandwidth consumed on the network interface also differs, ranging from 1% to 98%.
If these DoS attacks were targeted at a single open port discovered by Hping3, Nmap, or any other tool, I think they would inflict even more damage by consuming more resources, because the firewall could try to perform deep packet analysis. On the other hand, while non-open ports are being flooded, the firewall should not care about these packets and simply drop them. However, since every firewall I tested has different ports open, and some have no open ports at all, I decided not to test this approach and to focus on a scenario which can be used all the time: an attack using default settings.
3.2.1 Hping3 tool
Hping3 [19] is a packet generator tool available by default in Linux distributions such as BackTrack or Kali. It can create TCP, UDP or ICMP packets with various options, fragmentation or sizes, and send them from a randomly generated IP address (for example, to cover the tracks of the sender's IP address). It can be used for port scanning,
testing firewall rules, or mounting DoS attacks. It gives potentially unlimited control over flooding DoS attacks.
3.2.2 Low Orbit ION Cannon (LOIC)
LOIC is a network stress-testing and DoS tool for Windows which is simple enough for anyone to use. The user can select the target's URL or IP address, the attacked port, the TCP/UDP/HTTP method, the payload (with TCP/UDP) and the speed of packet generation. The speed is controlled by a slider, and no numbers show how many packets per second are generated. I chose LOIC because potential script kiddies on Windows could choose this tool as well, thanks to its friendly user interface.
3.2.3 IPv6 Router Advertisement (ICMPv6 type 134, code 0)
A DoS attack can be created by flooding a Local Area Network (LAN) with Router Advertisement messages, which consumes 100% of the CPU. Many operating systems are affected: Windows (8, 2008, 7, 2003, 2000, XP), all FreeBSD versions, all NetBSD versions and Cisco devices with firmware released before November 2010. All vulnerable operating systems spend a great amount of system resources on many SLAAC (StateLess Address AutoConfiguration) processes. Multiple CVE (Common Vulnerabilities and Exposures) entries have been created, with a severity score of 7.8 [20]. The official description of the vulnerability follows: "When flooding the local network with random router advertisements, hosts and routers update the network information, consuming all available CPU resources, making the systems unusable and unresponsive" [20]. According to the same source, "a personal firewall or similar security product does not protect against this attack, as the default filter rules allow these packets through". Although the report is from April 2011, no tested firewall detected this attack. Only a few firewalls kept the host's average CPU consumption below 90%.
For assigning IPv4 addresses to hosts, the Dynamic Host Configuration Protocol (DHCP) is used. IPv6 addresses can be assigned either statefully (DHCPv6) or statelessly (via the Neighbor Discovery Protocol [26]). The disadvantage of stateful autoconfiguration is the need for a DHCPv6 server, which may be unavailable in a normal household for a TV, refrigerator or other devices that can have an IPv6 address. The stateless Neighbor Discovery Protocol uses ICMPv6 messages and is responsible for the autoconfiguration of IPv6 addresses, determining network prefixes, determining layer 2 addresses of nodes on the same link, and more. This protocol consists of 5 ICMP message types:
• Router Solicitation (RS)
• Router Advertisement (RA)
• Neighbor Solicitation (NS)
• Neighbor Advertisement (NA)
• ICMP Redirect
When a new node connects to the network, it sends a Router Solicitation message. The router responds with a Router Advertisement message, which is also sent out periodically by the router. The process of stateless autoconfiguration of a node is complex and requires the following steps:
• Link-Local Address Generation (the prefix "FE80" followed by 54 zero bits, followed by a 64-bit interface identifier derived from the MAC address or generated randomly),

• Link-Local Address Uniqueness Test (to determine that the generated address is not already used in the local network),

• Link-Local Address Assignment (if the uniqueness test passed, the device assigns the link-local address to its IP interface),

• Router Contact (to get more information),

• Router Direction (the router directs the host how to proceed), and

• Global Address Configuration (the host configures itself with a globally unique Internet address).
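The first step, deriving a link-local address from a MAC address, can be sketched with the modified EUI-64 transform (an illustrative sketch; the MAC address is arbitrary):

```python
# Sketch of link-local address generation: derive an IPv6 link-local
# address from a MAC address via modified EUI-64 (fe80::/64 prefix plus
# a 64-bit interface identifier). The MAC address is an arbitrary example.
import ipaddress

def link_local_from_mac(mac):
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                        # flip universal/local bit
    eui64 = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])   # insert ff:fe in the middle
    iface_id = int.from_bytes(eui64, "big")
    return ipaddress.IPv6Address((0xfe80 << 112) | iface_id)

print(link_local_from_mac("52:54:00:12:34:56"))  # fe80::5054:ff:fe12:3456
```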
When BackTrack's native built-in command flood_router6 eth0 is used, it floods the entire Local Area Network (LAN) with Router Advertisement messages, thus making all IPv6-enabled devices on the LAN unresponsive, including game consoles like the PlayStation 3 or Xbox.
3.2.4 IPv6 Neighbor Advertisement (ICMPv6 type 136, code 0)
For layer 2 (L2) address resolution in IPv4, ARP (the Address Resolution Protocol) is used. In the IPv6 world, Neighbor Solicitation and Neighbor Advertisement messages are used instead. In the usual scenario with no attacker present, a node looking for a layer 2 address takes the last 24 bits of the IP address whose L2 address it is looking for and concatenates them with the common multicast prefix (FF02:0:0:0:0:1:FF00::/104). A Neighbor Solicitation message is sent to this multicast address. When a node which belongs to that particular multicast group receives a Neighbor Solicitation message, it answers with a Neighbor Advertisement message. This message contains the node's IPv6 and L2 addresses and 3 flags:
• Router flag: indicates whether the sender is a router.

• Solicited flag: indicates that the advertisement was sent in response to a Neighbor Solicitation from the destination address.

• Override flag: indicates that the advertisement should override an existing cache entry and update the cached link-layer address.
Although flooding all devices on the LAN with BackTrack's command flood_advertise6 eth0 is not as "deadly" as using Router Advertisement messages, it still consumes over 95% of the CPU. A more detailed analysis follows in Chapters 4, 5 and 6.
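The last-24-bits rule above can be sketched as follows (an illustrative sketch; the example uses the victim's link-local address from Chapter 4):

```python
# Sketch of the address-resolution rule: take the last 24 bits of the
# target IPv6 address and combine them with ff02::1:ff00:0/104 to get
# the solicited-node multicast address that a Neighbor Solicitation
# message is sent to.
import ipaddress

def solicited_node(addr):
    last24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return ipaddress.IPv6Address(base | last24)

print(solicited_node("fe80::f5dd:cd1d:175a:2d6d"))  # ff02::1:ff5a:2d6d
```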
Chapter 4
Experiment description
I prepared two different environments: a virtual one for testing port scanning, and a physical one for testing DoS attacks. Each environment had a victim and an attacker, on which special settings had to be applied.
Virtual environment
I was assigned access to Masaryk University's Windows Server 2008 R2 with Hyper-V. On this virtualization server, two Windows 8.1 64-bit virtual machines (the attacker and the victim) were installed and activated. All updates were installed, along with the utilities required for testing (Nmap, TODO ADD). Both computers were connected via a private LAN so that no other computers would generate traffic.
The victim was connected to the internet only while downloading, installing and updating the endpoint security software to be tested. It was assigned the IPv4 address 192.168.20.1 and the IPv6 address fe80::f5dd:cd1d:175a:2d6d.
The attacker was never connected to the internet; it was assigned the IPv4 address 192.168.20.2 and the IPv6 address fe80::b0e1:ffb9:719e:686.
First, a checkpoint of the victim virtual machine was created. Afterwards, the different endpoint security solutions were installed one at a time. After the installation succeeded and all tests were run, the state of the victim was checkpointed again, so that I could return to a particular firewall if I needed to run more tests or check logs.
Physical environment
Because I wanted to observe the resource consumption of each DoS attack, testing this behaviour in an "ideal" virtual environment would be of no practical value. I was assigned two identical physical computers (Intel Core 2 Duo E8500 3.16 GHz with 4096 MB DDR3 and Windows 8.1 Professional 32-bit) and connected them with a 1 Gb UTP (Unshielded Twisted Pair) cable, without any interconnecting network device like a router or switch. Again, Windows 8.1 was fully updated on both computers. On the attacker's computer, I used a bootable BackTrack R3 to perform attacks using the commands hping3, flood_advertise6 and flood_router6.
Company      Product                          Tested version
Agnitum      Outpost Pro Security Suite       9.1 (4652.701.1951)
Avast!       Internet Security                2014.9.0.2021
AVG          Internet Security 2015           2015.0.5315
Avira        Antivirus Pro                    14.0.7.306
Bitdefender  Internet Security 2015           18.17.0.1227
COMODO       Internet Security Premium        7.0.317799.4142
Emsisoft     Internet Security                9.0.0.4570
ESET         Smart Security                   8.0.301.0
F-Secure     SAFE Internet Security           14.115 build 100
Gdata        Internet Security                24.4727
Kaspersky    Internet Security 2015           15.0.0.463 (a)
McAfee       Total Protection                 12.8.988
Microsoft    Windows 8.1 Firewall             -
Norton       Security                         22.0.1.14
Panda        AntiVirus Pro 2015               15.0.4
Quick Heal   AntiVirus Pro                    15.00 (8.0.8.0)
TrustPort    Internet Security                14.0.5.5273
ZoneAlarm    Free Antivirus + Firewall 2015   13.3.209.000

Table 4.1: Antivirus security suites
4.1 Choosing particular firewalls
I aimed to test firewalls that are well known in the Czech Republic, along with firewalls widely used worldwide. Since firewalls are incorporated into endpoint "security suites", I chose from each company only one suite which had the firewall features but lacked functionalities unnecessary for this research (e.g. driver updates, file encryption, system speedup, parental control, online backup, . . . ). Therefore, some products are named only "Antivirus" but contain a full-featured firewall. I downloaded and tested fully updated trial versions of the 18 selected security suites (see Table 4.1).
Firewall settings
Every firewall has different filtering modes, network protections, or detection levels. I therefore decided to leave these settings mostly at their defaults, but when asked I selected the "work" network profile (out of public/work/home) and the automatic mode (mostly out of automatic/interactive/learning). See Table 4.2 for the list of all changed settings. Where the character "-" occurs, no interaction with the user was needed and default settings were kept.
It is very important to note that I was observing and testing only the default behaviour of the selected firewalls. As this research was a quantitative and not a qualitative
Company      Settings
Avast!       private, unfriendly
AVG          automatic
Bitdefender  no autopilot
COMODO       work, safe mode
ESET         home/work, automatic
Panda        work

Table 4.2: Firewall settings
analysis, I did not dive deep when testing particular firewalls. There might be specialized settings for detection levels, or options to deny certain packets (e.g. for blacklisting the ICMPv6 Router Advertisement or Neighbor Advertisement messages), which I did not test.
4.2 Port scanning
Port scanning was performed only in the virtual environment using the Nmap tool; therefore, the duration of the actual scanning can differ from a real-world scenario. All outputs were saved into TXT files and later checked for open/open|filtered/closed/unfiltered/filtered ports and for the duration of the actual attacks.
As the first step, all techniques in Nmap were used with their default settings (1000 ports): -sS, -sT, -sF, -sX, -sN, -sA, -sW, -sM, -sU, -sY, -sZ, -sO, -sV. Then two other commands were used: -sS -O and -sS -p1-65535 (a scan of all ports). Finally, one particular port was scanned with -sS -p0. The same was done for both IPv4 and IPv6 addresses. The attached CD contains at least 32 log files: at least 16 for IPv4 and at least 16 for IPv6.
If the firewall did not detect any of these techniques, no other tests were run (hence no values will be presented in the tables). If the firewall detected some technique (but only -sS, -sT or -sU), more tests followed to determine:
• The highest number of scanned ports without detection (with command -p 101-X).
• The highest number of top ports scanned without detection (with command --top-ports X).
• The shortest scan delay without detection (with command --scan-delay X).
• If they are detected with using fragmentation (with command --mtu X where Xis one of 8, 16, 32, 64).
Some firewalls have blocking timers incorporated in them. When they detect an attack, they will try to block it for some time. It is not very efficient to wait 5 or more minutes for these timers to reset. Such a waiting approach also carries the possible drawback
of the firewall's internal mechanism learning that being attacked regularly is "normal", and then either changing its behaviour to reflect this fact or simply not detecting further attacks. The time needed for the timers to reset is also an issue in itself. To cope with these three drawbacks during large-scale testing, the virtual machine was restored from a checkpoint every time the firewall detected the port scanning attack. Checkpoints were created for each firewall when it was freshly installed on the system and updated; hence no attacks had occurred at that point which could somehow change the firewall's behaviour. This approach was far more efficient than sometimes waiting dozens of minutes before the firewall would detect the attack again. The checkpoint approach is particularly useful when finding detection threshold values with millisecond precision.
The exact command used is as follows: "nmap [technique/s] -n -v [IP of the victim] > file.txt". The options -v (higher verbosity level) and -n (never do DNS resolution) were used in every command. The entire log of each scan was saved into a TXT file and stored on the attached CD.
Logging port scanning attacks
During every port scan, the victim's PC was observed to see whether the selected technique was detectable (and thus stored in logs) and whether the user was made aware of the ongoing attack by a popup window with details. There were significant differences; for a detailed view of whether a port scanning attack was detected and stored in logs, see Table 4.3 for IPv4 and Table 4.4 for IPv6.
Many firewalls did not log a single attack of type "port scan" but instead stored hundreds of events in the packet log. When this behaviour was observed, I wrote "no" in the detection table, because there was no higher-level information about the particular attack. Where I stated "partially" in the same table, something was stored in the logs, but not "port scan". A detailed explanation of "partial" detection follows:
• Agnitum: “Attack type: “KOX””
• McAfee: “ping”
• Norton: in "Firewall - Activities" (the packet log), there are thousands of logged packets which look like: "Rule "Default Block All Inbound Windows Services (Public Networks)" rejected TCP (6) traffic with (192.168.20.2 Port (54560))"
• Panda: “TCP flag check”
• ZoneAlarm: every packet is shown along with the flags used (e.g. "AF" with the TCP Maimon scan)
Based on these results, it is noticeable that some firewalls either lack higher-level logs, or the logs are present but the port scanning attack is not detected and thus not recorded there.
[Table 4.3: IPv4 scans shown in logs. A rotated table recording, for each of the 18 firewalls, whether each of the Nmap scan techniques used (from -sS and -sS -O through -sV) was logged as yes, partially, or no; the individual cell values were not recoverable from this extraction.]
[Table 4.4: IPv6 scans shown in logs. A rotated table recording, for each of the 18 firewalls, whether each scan technique (-sS, -sS -O, -sT, -sA, -sW, -sM, -sU, -sN, -sF, -sX, -sY, -sZ, -sO, -sV) was logged as yes, partially, or no; the individual cell values were not recoverable from this extraction.]
Company                                            IPv4      IPv6
Agnitum, AVG, Avira, COMODO, ESET, F-Secure,
Gdata, McAfee, Microsoft, Norton, Quick Heal       filtered  filtered
Avast!, Emsisoft, Kaspersky, Panda                 closed    closed
Bitdefender, ZoneAlarm                             filtered  -
TrustPort                                          filtered  closed

Table 4.5: TCP/0 port states across firewalls
Port TCP/0
The port TCP/0 is reserved by IANA and should not be used by any application. However, it can be used for malicious purposes. Scanning of this port "is frequently used for fingerprinting network stacks and because it is not possible to block the port on some firewalls" [22]. Cisco's technical lead Craig Williams wrote in his blog that a massive spike they detected on 2013/11/02 is extremely likely to be reconnaissance before an attack [27], or may be connected with a new kind of malware. In my analysis, I identify an opportunity for firewall fingerprinting, because the response to scanning this port differs across firewalls; see Table 4.5.
Scanning only a single port with a TCP SYN scan was not detected by any firewall; therefore, this is a rather stealthy fingerprinting technique. Most firewalls did not reply to the scan probe, so the state was reported as filtered by Nmap. However, with Avast!, Emsisoft, Kaspersky and Panda, both the IPv4 and IPv6 scans of TCP/0 were reported as closed. This is a very interesting result, mainly for Avast!: during a normal TCP SYN scan, 999 ports were reported as filtered and 1 as open, which makes the marking of port TCP/0 inconsistent with its default behaviour for other ports. The same applies to Panda, although there 988 ports were reported as filtered, 11 as closed, and 1 as open. The most significant result was observed with TrustPort, because it responded differently to the IPv4 and IPv6 scans of port TCP/0: the state was filtered on IPv4 but closed on IPv6. The Bitdefender and ZoneAlarm firewalls could not be scanned with an IPv6 TCP SYN scan at all; their IPv4 port states were both filtered. As these two single-port scan techniques yield interesting results, they were used in fingerprinting and combined with other scans for higher precision.
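Table 4.5 can be turned into a simple fingerprinting lookup that narrows down the installed firewall from the observed pair of TCP/0 states (a sketch of the idea; the state strings follow Nmap's naming, and "-" marks products unreachable via the IPv6 TCP SYN scan):

```python
# Sketch: narrow down the firewall from the (IPv4, IPv6) TCP/0 port
# states of Table 4.5. "-" marks firewalls that could not be scanned
# with an IPv6 TCP SYN scan at all.

TCP0_STATES = {
    ("filtered", "filtered"): ["Agnitum", "AVG", "Avira", "COMODO", "ESET",
                               "F-Secure", "Gdata", "McAfee", "Microsoft",
                               "Norton", "Quick Heal"],
    ("closed", "closed"): ["Avast!", "Emsisoft", "Kaspersky", "Panda"],
    ("filtered", "-"): ["Bitdefender", "ZoneAlarm"],
    ("filtered", "closed"): ["TrustPort"],
}

def candidates(ipv4_state, ipv6_state):
    """Return the firewalls consistent with the observed TCP/0 states."""
    return TCP0_STATES.get((ipv4_state, ipv6_state), [])

print(candidates("filtered", "closed"))   # uniquely identifies TrustPort
```

Combined with the other scans mentioned above, such a lookup shrinks the candidate set further at each step.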
Detection thresholds
Every firewall which detected a certain port scanning attack has some detection threshold. If this threshold is not exceeded, the attack remains invisible to the firewall's detection mechanism. Nmap offers multiple evasion techniques, mainly adjusting the delay between probes, using fragmentation, or limiting the number of scanned ports. Table 4.6 shows 2 different approaches: the first 3 columns state the highest number of ports scanned in a "consecutive" way without being detected by the firewall; the second 3 columns show the upper limit when top ports are scanned. It is interesting that these 2 values differ, sometimes rather significantly.
In Table 4.7, the individual scan-delay thresholds of the TCP SYN scan without detection
Company       -p101-XXX ports       --top-ports
              -sS    -sT    -sU     -sS    -sT    -sU
Agnitum       10     10     12      10     10     20
Avast!        6      6      6       7      7      17
Bitdefender   5      5      1       3      3      4
ESET          8      8      7       8      10     7
Kaspersky     81     77     -       100    85     -
McAfee        171    43     124     143    76     122
Panda         11     10     10      11     10     11

Table 4.6: Highest number of scanned ports without detection
Company       Time in seconds
Agnitum       30.00
Avast!        13.28
Bitdefender   0.69
ESET          1.00
Kaspersky     0.16
McAfee        2.45
Panda         28.00

Table 4.7: Lowest scan delay without detection
are shown. Note that only the firewalls which detected the TCP SYN scan appear in this table. All values are in seconds, rounded to 2 decimal places. If you want to scan the computer as fast as possible without triggering an alarm or detection, you should use these numbers. To be on the safe side, you might want to increase these values slightly.
Fragmentation can also be used to avoid detection. I tried 4 different fragmentations by setting the MTU to 8, 16, 32, and 64 bytes. The results can be seen in Table 4.8. Note that "partial" with Agnitum means it detected an "OPENTEAR" attack but no port scanning. As Agnitum does not have severity levels for detected events, both attacks had the same weight. With ESET, "Incorrect TCP packet length" was shown in the packet logs hundreds of times, but no port scanning attack was reported. The severity was lowered from "warning" to "informative" and no pop-up window was shown.
Using a particular fragmentation option, I was able to avoid triggering detection in some firewalls. Only Agnitum, Avast! and ESET can be fooled this way; the other firewalls detect all port scans even when fragmentation is used.
4.3 DoS attacks
On the victim's computer, multiple measurements of consumed resources were needed, for which I used the command "typeperf -cf counters.txt -sc 20 -o output.csv" in the Windows CMD, run as Administrator to have a higher priority than other
[Table 4.8: Port scan detectability using fragmentation. A rotated table recording, for Agnitum, Avast!, Bitdefender, ESET, Kaspersky, McAfee and Panda, whether the -sS, -sV, -sU and -sT scans remain detectable at MTU values of 8, 16, 32 and 64; the individual cell values were not recoverable from this extraction.]
processes. I decided to make 20 measurements (1 measurement per second) while under an active attack. The following counters were used in the input counter file (counters.txt):
• \processor(0)\% Processor Time

• \processor(1)\% Processor Time

• \processor(_Total)\% Processor Time

• \Network Interface(Realtek PCIe GBE Family Controller)\Bytes Total/sec

• \Network Interface(Realtek PCIe GBE Family Controller)\Current Bandwidth

• \Network Interface(Realtek PCIe GBE Family Controller)\Packets/sec
Along with these 6 values, the date and time were added to every recorded measurement row. All files were then transformed into the tables which can be found on the attached CD, along with all original CSV files generated by the typeperf command. To observe each firewall's behaviour under DoS attack, I ran the LOIC UDP flood attack with the "wait" and "no wait" options on Windows. On BackTrack, the following commands were launched:
• flood_advertise6 eth0

• flood_router6 eth0
• hping3 --icmp --flood 192.168.1.1
• hping3 --icmp --flood -d 1000 192.168.1.1
• hping3 --icmp --flood -d 65495 192.168.1.1
• hping3 --udp --flood 192.168.1.1
• hping3 --udp --flood -d 1000 192.168.1.1
• hping3 --udp --flood -d 65495 192.168.1.1
• hping3 --rawip --flood 192.168.1.1
• hping3 --rawip --flood -d 1000 192.168.1.1
• hping3 --rawip --flood -d 65495 192.168.1.1
There are 2 tables for LOIC and 11 for BackTrack. As an example, see Table 4.9, which shows the TCP flood attack with 1000 bytes of data in each packet. I created an additional table which summarizes the maximum, minimum, average, median and deviation values observed for selected DoS attacks; see Table 4.10 for more information.
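As a sketch of how such summary values can be derived from the typeperf output, the following computes the maximum, minimum, average, median and standard deviation for one counter column of a typeperf-style CSV (the sample rows are made up for illustration, not real measurements):

```python
# Sketch: compute summary statistics (max, min, average, median, standard
# deviation) for one counter column of a typeperf-style CSV. The sample
# rows are illustrative only, not real measurements.
import csv
import io
import statistics

SAMPLE_CSV = """\
"(PDH-CSV 4.0)","\\\\victim\\processor(_Total)\\% Processor Time"
"01/20/2015 10:00:01","52.08"
"01/20/2015 10:00:02","49.90"
"01/20/2015 10:00:03","55.12"
"""

def summarize(csv_text, column=1):
    rows = list(csv.reader(io.StringIO(csv_text)))[1:]   # skip the header row
    values = [float(r[column]) for r in rows]
    return {
        "max": max(values),
        "min": min(values),
        "average": statistics.mean(values),
        "median": statistics.median(values),
        "deviation": statistics.stdev(values),
    }

print(summarize(SAMPLE_CSV))
```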
[Table 4.9: Resources consumption during TCP flood attack with 1000 bytes (table header: hping3 --rawip --flood -d 1000 192.168.1.1). For each of the 18 firewalls, the table lists total time, average highest core usage, average CPU usage, average Bytes Total/sec, average Packets/sec and bandwidth; the individual cell values were not recoverable from this extraction.]
measurement | max | min | average | median | deviation

flood_advertise6 eth0
Total time | 00:36 (Panda) | 0:19 | 0:21 | 0:20 | 0:00
Average highest CPU usage | 100.00 (Panda) | 59.61 (Emsisoft) | 94.26 | 97.72 | 10.90
Average CPU usage | 100.00 (Panda) | 36.48 (Bitdefender) | 64.10 | 54.54 | 18.96
Average total Mbytes/sec | 14.69 (Agnitum) | 3.03 (Norton) | 7.88 | 7.26 | 3.79
Average packets/sec | 179099 (Agnitum) | 36981 (Norton) | 96161 | 88569 | 46134
Bandwidth | 12.32% (Agnitum) | 2.54% (Norton) | 6.61% | 6.10% | 3.18%

flood_router6 eth0
Total time | 00:42 (Panda) | 00:19 (Bitdefender) | 0:25 | 0:23 | 0:06
Average highest CPU usage | 100 (Avast!, Avira, COMODO, ESET, Gdata, Kaspersky, McAfee, Norton, Panda, TrustPort, ZoneAlarm) | 54.58 (Bitdefender) | 95.43 | 100 | 13.19
Average CPU usage | 100 (Avira, Panda) | 28.02 (Bitdefender) | 71.78 | 65.24 | 19.89
Average total Mbytes/sec | 15.67 (Agnitum) | 1.01 (Avira) | 2.79 | 1.29 | 4.35
Average packets/sec | 139244 (Agnitum) | 10997 (F-Secure) | 35118 | 25171 | 35531
Bandwidth | 13.14% (Agnitum) | 0.85% (Avira) | 2.89% | 1.12% | 4.05%

hping3 --icmp --flood -d 65495 192.168.1.1
Total time | 0:20 | 0:19 | 0:19 | 0:19 | 0:00
Average highest CPU usage | 100 (COMODO) | 26.53 (ZoneAlarm) | 51.88 | 50.41 | 22.21
Average CPU usage | 68.65 (Agnitum) | 13.71 (ZoneAlarm) | 32.56 | 24.71 | 8.32
Average total Mbytes/sec | 215.39 (Gdata) | 92.2 (Emsisoft) | 119.41 | 117.31 | 25.19
Average packets/sec | 151617 (Gdata) | 64916 (Emsisoft) | 84059 | 82578 | 17729
Bandwidth | 180.68% (Gdata) | 77.34% (Emsisoft) | 100.17% | 98.41% | 21.13%

hping3 --udp --flood 192.168.1.1
Total time | 0:03:59 (Avast!) | 0:00:17 (Emsisoft) | 00:42 | 00:20 | 00:55
Average highest CPU usage | 100 (Gdata, ZoneAlarm) | 60.35 (Bitdefender) | 89.50 | 96.99 | 13.79
Average CPU usage | 99.93 (Gdata) | 30.51 (Bitdefender) | 64.50 | 63.66 | 21.24
Average total Mbytes/sec | 20.31 (TrustPort) | 0.00 (Emsisoft) | 8.28 | 11.13 | 6.5
Average packets/sec | 355004 (TrustPort) | 0 (Emsisoft) | 145939 | 194979 | 104155
Bandwidth | 17.04% (TrustPort) | 0.00% (Emsisoft) | 6.94% | 9.34% | 5.07%

hping3 --rawip --flood -d 1000 192.168.1.1
Total time | 00:33 (Norton) | 00:19 | 00:20 | 00:20 | 00:03
Average highest CPU usage | 100 (COMODO, Norton) | 30.26 (Bitdefender) | 76.72 | 90.06 | 25.26
Average CPU usage | 99.89 (Norton) | 15.49 (Bitdefender) | 53.53 | 52.17 | 25.67
Average total Mbytes/sec | 102.70 (F-Secure) | 0.00 (Emsisoft) | 89.36 | 101.53 | 25.91
Average packets/sec | 104153 (F-Secure) | 0 (Emsisoft) | 89874 | 102943 | 33599
Bandwidth | 86.15% (F-Secure) | 0.00% (Emsisoft) | 74.34% | 85.15% | 27.79%

Table 4.10: Consumption differences in DoS
4.3.1 DoS results
As with the port scan attacks, each firewall behaves significantly differently from the others. However, the differences in port scanning attacks could be observed by the attacker, whereas with DoS attacks the attacker receives no information from the victim. On the other hand, the overall impact on the system resources can be measured with DoS attacks. If the attacker wants to deplete specific resources on the victim's computer but is unaware which endpoint security solution is in place, he can use Table 4.11, where all average numbers are shown. If, for example, the overall system resources are to be attacked, one can search for the highest average total time and discover that the UDP flood with no data is the best DoS attack. When the CPU resources of a multi-core system are to be depleted, flood_router6 eth0 should be used. For bandwidth consumption, the ICMP flood with 65495 bytes of data is the best course of action.
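The table lookup described above is easy to automate. A small sketch with a few of the averaged values transcribed from Table 4.11 (total time of the 20 measurements in seconds, average CPU usage in %, and bandwidth fraction):

```python
# Per-attack averages over all 18 firewalls (subset of Table 4.11):
# (total time in seconds, avg. CPU usage in %, bandwidth fraction)
ATTACK_AVERAGES = {
    "flood_router6 eth0":             (25, 71.78, 0.03),
    "hping3 --udp --flood":           (42, 64.58, 0.07),
    "hping3 --icmp --flood -d 65495": (19, 32.56, 1.00),
}

def best_attack(metric_index):
    """Attack maximizing one metric: 0 = slowdown, 1 = CPU, 2 = bandwidth."""
    return max(ATTACK_AVERAGES, key=lambda a: ATTACK_AVERAGES[a][metric_index])
```

Here best_attack(0) returns the plain UDP flood (it stretched the 20 measurements the most), best_attack(1) the flood_router6 attack, and best_attack(2) the large ICMP flood, matching the conclusions above.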
The only firewall which had major and unrecoverable difficulties with certain DoS attacks was Gdata. Using the LOIC UDP flood, between 2:15 and 2:50 after the beginning of the attack a black screen appeared and the computer became unusable - a hard reset had to be performed. I was able to recreate this behaviour multiple times. When the attack was stopped after the black screen appeared, the victim's PC remained unusable until the next restart. I took a picture of this behaviour, which is shown in Figure [?]. As can be seen, the resolution was changed from 1600x1200 pixels, there are plenty of graphical elements that should not be there, and the measurement script had not even started to write into the file. I could not get any data written into the CSV file, because the script only started writing after I stopped the attack, or did not even create the file in the first place. I was able to obtain the CSV file using the hping3 UDP flood without a data payload, but the overall result was the same - a black screen and the inability to use the computer without a hard reset. The same problems were not observed with ICMP or TCP floods.
Another interesting observation was made with Panda under the flood_router6 eth0 attack - every CPU core was used to 100% in every measurement, and no measurements of network utilization were stored in the CSV file. This attack was the most devastating one for a single CPU core - 11 firewalls had their CPU at 100%, and 5 others were over 99.5%. Only Bitdefender (54.58%) and Emsisoft (64.33%) differed significantly from the rest. Emsisoft showed orders of magnitude better results against TCP, UDP and ICMP floods - with a bandwidth utilization of 0.00%. The worst bandwidth result was observed with Gdata - 181% of the bandwidth on the ICMP flood with a 65495-byte data part. Avast! leads in the time needed to perform the 20 measurements - they should have taken 20 seconds, but took 3 minutes and 59 seconds instead while under the UDP flood with no data.
To achieve the highest CPU consumption, full bandwidth was rarely needed. For example, receiving only 428.55 bytes per second on average was responsible for 60% CPU utilization for Emsisoft while under the TCP flood attack with a 1000-byte data part. No more than 1.5 MB/s was needed for flood_router6 eth0 to successfully deplete 16 firewalls to more than 99.5%. This means that even an attacker with low bandwidth or computational capabilities is able to perform quite serious DoS attacks.
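The Bandwidth columns in Tables 4.9-4.11 are consistent with the received byte rate expressed as a fraction of a nominal 1 Gbit/s link - an assumption of mine, not stated by the tooling. Under that assumption, the low-bandwidth point is simple arithmetic:

```python
def bandwidth_fraction(bytes_per_sec, link_bits_per_sec=1_000_000_000):
    """Received byte rate as a fraction of nominal link capacity."""
    return bytes_per_sec * 8 / link_bits_per_sec

# ~1.5 MB/s - enough for flood_router6 to push 16 firewalls over 99.5% CPU -
# is barely more than 1% of a gigabit link:
print(round(bandwidth_fraction(1.5e6), 3))  # -> 0.012
```

As a cross-check, the gigabit ceiling of 125 MB/s maps to a fraction of exactly 1.0, which matches the ICMP flood with 65495 bytes saturating the link.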
After the flood_router6 eth0 attack, there were thousands of records shown in ipconfig /all. To delete this bogus information, the following commands had to be used:
• ipconfig /release6
• netsh int ipv6 reset
The first command releases the leased IPv6 addresses and the second resets the IPv6 configuration state.
Attack | Total time | Avg. highest core usage | Avg. CPU usage | Avg. Bytes Total/sec | Avg. Packets/sec | Bandwidth
flood_advertise6 eth0 | 00:21 | 94.26 | 64.18 | 262835.14 | 96160.63 | 0.07
flood_router6 eth0 | 00:25 | 95.43 | 71.78 | 2922010.08 | 35117.84 | 0.03
hping3 --icmp --flood | 00:25 | 74.38 | 51.48 | 12825864.87 | 213765.4 | 0.1
hping3 --icmp --flood -d 1000 | 00:20 | 54.16 | 36.53 | 97475151.24 | 93546.47 | 0.78
hping3 --icmp --flood -d 65495 | 00:19 | 51.88 | 32.56 | 125214329.2 | 84058.91 | 1.00
hping3 --udp --flood | 00:42 | 89.5 | 64.58 | 8680233.57 | 145938.91 | 0.07
hping3 --udp --flood -d 1000 | 00:36 | 75.23 | 54.84 | 71536400.25 | 70482.31 | 0.57
hping3 --udp --flood -d 65495 | 00:20 | 66.24 | 40.83 | 104183824.6 | 70179.97 | 0.83
hping3 --rawip --flood | 00:26 | 85.83 | 62.38 | 11709629.44 | 195160.05 | 0.09
hping3 --rawip --flood -d 1000 | 00:20 | 76.72 | 53.53 | 93703510.61 | 90622.51 | 0.75
hping3 --rawip --flood -d 65495 | 00:20 | 57.2 | 34.85 | 117867220.9 | 79135.3 | 0.94
LOIC UDP wait | 00:25 | 77.33 | 55.74 | 12166674.91 | 164408.32 | 0.1
LOIC UDP no wait | 00:20 | 75.7 | 54.13 | 11112084.46 | 150157.82 | 0.1

Table 4.11: Average resource consumption on particular DoS attacks
Chapter 5
Fingerprinting
Exploiting zero-day vulnerabilities has become common. Although there is significant research and there are bounties for bug hunting in commonly used systems, the exploitation still continues. In 2014, Kevin Mitnick launched a webstore called "Mitnick's Absolute Zero Day Exploit Exchange" for zero-day exploits with a CVSS (Common Vulnerability Scoring System) score of at least 8 [28]. This suggests that there is a big demand for vulnerabilities that could be exploited by almost anyone. If the attacker breaks into a network, he first needs to know which security systems are used. Endpoint protection systems are one such security countermeasure - most likely the very last line of defence before the computer is compromised. As I discovered, it is not that hard to fingerprint these endpoint security systems from the attacker's point of view and find out which system is used. Once the attacker has this knowledge, he can either find a zero-day vulnerability to bypass the endpoint protection system, or use attacks and stealth techniques which are not detected by it.
5.1 Using time differences
There are significant discrepancies in the time consumption of certain port scanning techniques: see Table 5.2 for the full IPv4 times and Table 5.3 for the full IPv6 times. The uniqueness of the time differences can be observed from several angles - several techniques on both IPv4 and IPv6, as these two give different results. Table 5.1 was created to represent the most important differences between the various techniques - the firewalls which took the least and the most time to scan. The other columns were computed to show further interesting numbers - average, median and deviation.
Scanning the default 1000 ports with the TCP SYN scan shows significant anomalies, ranging from 1.45 s (Panda on IPv6), through 22.91 s (most common on IPv4), to 1243.67 s (Avast! on IPv4). Scanning 1000 UDP ports also gives significant differences: 7.83 s (Gdata on IPv4) in contrast to 3769.63 s (Panda on IPv4). The smallest difference can be observed when scanning SCTP ports, because Nmap scans only 52 of them in total. In almost every case, port scanning over IPv6 was much faster than over IPv4. Anomalies with the IP protocol scan were also significant, ranging from 335.77 seconds (AVG on IPv6) to 2.25 seconds (Quick Heal on IPv6). To demonstrate the magnitude of the differences in time consumption among firewalls, I tested all 65535 ports. The best result from the victim's point of view was observed with Panda on IPv4, which took 132307.52 seconds (36.75 hours) - the best because it takes the attacker the longest time to perform a full TCP SYN scan. The fastest result was also observed with Panda, on IPv6, which took 34.3 seconds. This is a really interesting result, as Panda is
both the fastest (on IPv6) and the slowest (on IPv4) when under the TCP SYN scan of all 65535 ports. The usual time for most firewalls was around 1434 seconds on both IPv4 and IPv6.
There are plenty of approaches to fingerprinting based on Nmap scans and the total amount of time needed to complete the scan. I created two examples - one focuses on reliability (see Figure 5.1) and the other on avoiding detection (see Figure 5.2). These two diagrams can serve as a guide to fingerprinting a firewall by observing the total scan time under ideal conditions. Note that delays have to be taken into account, as these times were measured in an ideal environment. Also, since I measured every port scan only once, there can be slight variations in milliseconds. In Figure 5.1, "nmap -6 -sS" is used first and, based on the time needed to perform this default scan, we can end in one of 7 possible states. Sometimes it is necessary to perform a scan which will be detected and logged by the firewall, hence the final states can be either undetected or detected. The first variant is detectable by 4 different firewalls (Agnitum, Avast!, Kaspersky and McAfee), while the stealthier variant is detectable only by Avast!. Note that many more fingerprinting approaches are possible, and the best course of action would be to perform every Nmap scan and compare the results with the overall tables.
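One branch of such a decision procedure can be sketched directly in code: measure the default "nmap -6 -sS" time and match it against the per-product times from Table 5.3. Only products with distinctive times are listed (most cluster around 22.91 s), and the tolerance is illustrative - real measurements would also have to absorb network delay:

```python
# Default "nmap -6 -sS" scan times in seconds, transcribed from Table 5.3.
# Products close to the common 22.91 s value are omitted as indistinguishable.
KNOWN_SS6_TIMES = {
    "Panda": 1.45, "TrustPort": 1.47, "Emsisoft": 3.86, "Agnitum": 4.33,
    "Gdata": 4.77, "McAfee": 4.98, "AVG": 8.92, "ESET": 11.42,
    "Kaspersky": 312.86,
}

def candidates(measured_seconds, tolerance=0.5):
    """Products whose known scan time lies within `tolerance` seconds."""
    return sorted(name for name, t in KNOWN_SS6_TIMES.items()
                  if abs(t - measured_seconds) <= tolerance)
```

Note that timing alone cannot separate products with near-identical values (e.g. Panda and TrustPort at 1.45 s and 1.47 s); those require a second feature such as the port states discussed next.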
5.2 Using port states
The default behaviour in responding to a port scanning probe differs across firewalls; it is not only operating system specific, as some firewalls suppress the default behaviour of the operating system. If a firewall sends a response to a port scanning probe indicating that the port is closed, such scanning takes a much shorter time as opposed to sending no response at all. This difference could range from 5 seconds for Gdata to 1243.67 seconds for Avast!; both numbers are scanning times of the TCP SYN scan of 1000 ports. All firewalls that responded to TCP SYN scan probes as "closed" are shown in Table 5.4 for IPv4 and Table 5.5 for IPv6.
We can differentiate between 2 fingerprinting methods - within IPv4 or IPv6, or between the two of them. For example, the behaviour of TrustPort under a TCP SYN port scanning attack varies significantly between IPv4 and IPv6 - there are 5 open and 0 closed ports on IPv4, but 10 open and 990 closed ports on IPv6. These differences are quite common even with more exotic techniques: for example, Panda on the TCP FIN scan has 1000 closed ports on IPv4, but only 1 closed port on IPv6. The most significant differences between the IPv4 and IPv6 scans are with the IP protocol scans: only Kaspersky (239 closed ports) and Panda (240 closed ports) have the majority of closed ports on IPv4, whereas there are many others on IPv6 - AVG (244), Emsisoft (145), Kaspersky (244), Panda (237), TrustPort (228) and ZoneAlarm (242).
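The cross-family method can be sketched with only the closed-port counts of the default TCP SYN scan (values taken from Tables 5.4 and 5.5; None stands for the products that returned no IPv6 result):

```python
# (closed ports after the default TCP SYN scan on IPv4, on IPv6)
SYN_CLOSED_PORTS = {
    "Agnitum": (0, 1),
    "Emsisoft": (976, 976),
    "Kaspersky": (986, 987),
    "Panda": (11, 11),
    "TrustPort": (0, 990),
    "Bitdefender": (0, None),  # None: no IPv6 result
    "ZoneAlarm": (0, None),
}

def matching_products(v4_closed, v6_closed):
    """Products consistent with the observed pair of closed-port counts."""
    return sorted(name for name, pair in SYN_CLOSED_PORTS.items()
                  if pair == (v4_closed, v6_closed))
```

A single pair of observations already isolates, for example, TrustPort (0 closed on IPv4, 990 on IPv6); the many products showing 0 closed ports on both families need a second feature such as the IP protocol scan.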
I created Figure 5.2 to show an example of possible fingerprinting based on the differences in port states under various scanning techniques. Some firewalls can be fingerprinted by scanning only 2 ports - Bitdefender and ZoneAlarm. I chose to exploit the TCP/0 port on both IPv4 and IPv6, as well as the IP protocol scan. The IP protocol
technique | maximum | minimum | average | median | deviation
-sS | 1243.67 (Avast!) | 3.77 (Emsisoft) | 153.81 | 22.91 | 377.84
-6 -sS | 312.86 (Kaspersky) | 1.45 (Panda) | 32.19 | 17.17 | 71.60
-sT | 2669.77 (Panda) | 45.56 (Bitdefender) | 323.94 | 45.85 | 788.87
-6 -sT | 1257.2 (Avast!) | 45.34 (TrustPort) | 123.31 | 45.81 | 286.87
-sA | 23.36 (Norton) | 1.58 (Panda) | 19.53 | 22.91 | 7.95
-6 -sA | 23.39 (ESET) | 1.38 (Panda) | 15.68 | 22.91 | 10.56
-sW | 23.03 (F-Secure) | 1.47 (Panda) | 18.50 | 22.91 | 8.52
-6 -sW | 23.41 (Avira) | 1.39 (Panda) | 16.63 | 22.91 | 9.91
-sM | 23.45 (Norton) | 1.44 (Kaspersky) | 18.49 | 22.91 | 8.66
-6 -sM | 23.98 (ESET) | 1.47 (Panda) | 15.78 | 22.91 | 10.54
-sU | 3769.63 (Panda) | 7.83 (Gdata) | 290.29 | 22.91 | 905.41
-6 -sU | 1160.00 (Emsisoft) | 22.91 (most of them) | 278.31 | 22.91 | 462.51
-sN | 23.22 (F-Secure) | 2.81 (Kaspersky) | 20.74 | 22.91 | 6.42
-6 -sN | 23.22 (F-Secure) | 1.38 (Panda) | 16.77 | 22.91 | 10.38
-sF | 1236.8 (Agnitum) | 1.56 (Kaspersky) | 88.91 | 22.91 | 286.57
-6 -sF | 23.19 (F-Secure) | 1.39 (Panda) | 15.79 | 22.91 | 10.42
-sX | 1243.58 (Agnitum) | 1.44 (Kaspersky) | 88.76 | 22.91 | 288.29
-6 -sX | 23.02 (F-Secure) | 1.39 (Panda) | 15.63 | 22.91 | 10.57
-sY | 2.34 (most of them) | 2.13 (Kaspersky, Panda) | 2.32 | 2.34 | 0.07
-6 -sY | 2.36 (Gdata) | 0.00 (ZoneAlarm) | 2.14 | 2.34 | 0.74
-sZ | 2.61 (McAfee) | 2.13 (Kaspersky, Panda) | 2.33 | 2.34 | 0.10
-6 -sZ | 2.36 (Avira, Microsoft) | 0.00 (ZoneAlarm) | 2.14 | 2.34 | 0.74
-sO | 312.08 (Panda) | 2.58 (Emsisoft) | 51.37 | 6.72 | 106.26
-6 -sO | 335.77 (AVG) | 2.25 (Quick Heal) | 84.50 | 6.72 | 128.22

Table 5.1: Time differences in port scanning
time in seconds of "nmap -s* -v -n 192.168.20.1"

Company | -sS | -sS -p1-65535 | -sT | -sA | -sW | -sM | -sU | -sN | -sF | -sX | -sY | -sZ | -sO | -sV
Agnitum | 22.91 | 6807.53 | 93.69 | 22.91 | 4.34 | 4.76 | 22.91 | 22.91 | 1236.8 | 1243.58 | 2.34 | 2.34 | 3.95 | 22.91
Avast! | 1243.67 | 1434.75 | 2299.29 | 23.22 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 2.34 | 2.34 | 3.2 | 23.2
AVG | 25.19 | 1360.64 | 45.68 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 23.64 | 2.34 | 2.34 | 6.72 | 6.50+48.58
Avira | 22.91 | 1436.58 | 45.57 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 27.14 | 2.34 | 2.34 | 6.72 | 22.91
Bitdefender | 22.91 | 1476.32 | 45.56 | 22.91 | 22.91 | 22.91 | 22.92 | 22.91 | 22.91 | 22.91 | 2.34 | 2.34 | 6.72 | 22.91
COMODO | 22.91 | 1434.75 | 46.56 | 22.97 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 2.34 | 2.34 | 226.84 | 22.91
Emsisoft | 3.77 | 74.03 | 46.79 | 2.38 | 3.59 | 3.52 | 22.91 | 3.41 | 5.2 | 3.36 | 2.34 | 2.34 | 2.58 | 6.03
ESET | 22.91 | 1435.53 | 45.71 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 2.34 | 2.34 | 5.97 | 22.91
F-Secure | 22.91 | 1434.73 | 46.36 | 23.19 | 23.03 | 23.25 | 23.19 | 23.22 | 23.13 | 23.25 | 2.34 | 2.34 | 6.72 | 24.19
Gdata | 5 | 111.13 | 45.57 | 22.91 | 22.91 | 22.92 | 7.83 | 22.91 | 22.91 | 22.91 | 2.34 | 2.34 | 2.91 | 5.19+53.60
Kaspersky | 108.39 | 7239.23 | 124.91 | 2.8 | 2.78 | 1.44 | 1111.39 | 2.81 | 1.56 | 1.44 | 2.13 | 2.13 | 300.11 | 109.94+56.58
McAfee | 5.21 | 66.58 | 45.76 | 22.91 | 22.91 | 22.91 | 22.91 | 22.92 | 22.91 | 22.91 | 2.34 | 2.61 | 6.72 | 4.75+6.01
Microsoft | 22.92 | 1435.58 | 45.82 | 22.91 | 22.91 | 22.92 | 22.91 | 22.91 | 22.91 | 22.91 | 2.34 | 2.34 | 6.72 | 23.13
Norton | 22.92 | 1439.63 | 46.51 | 23.36 | 22.91 | 23.45 | 22.91 | 23.17 | 23.2 | 23.16 | 2.34 | 2.34 | 6.72 | 22.91
Panda | 1135.03 | 132307.52 | 2669.77 | 1.58 | 1.47 | 1.48 | 3769.63 | 22.92 | 35.98 | 22.91 | 2.13 | 2.13 | 312.08 | 1179.48+5.01
Quick Heal | 23.19 | 1455.94 | 45.88 | 22.91 | 22.91 | 22.91 | 22.91 | 22.92 | 22.92 | 22.92 | 2.34 | 2.34 | 6.72 | 22.89
TrustPort | 13 | 111.75 | 45.65 | 22.91 | 22.91 | 22.91 | 15.28 | 22.91 | 22.91 | 22.91 | 2.34 | 2.34 | 6.72 | 4.78+53.57
ZoneAlarm | 22.91 | 1435.34 | 45.8 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 22.92 | 2.34 | 2.34 | 6.72 | 22.91

Table 5.2: IPv4 port scanning times
time in seconds of "nmap -6 -s* -v -n fe80::f5dd:cd1d:175a:2d6d"

Company | -sS | -sS -p1-65535 | -sT | -sA | -sW | -sM | -sU | -sN | -sF | -sX | -sY | -sZ | -sO
Agnitum | 4.33 | 172.56 | 45.5 | 4.55 | 5.2 | 5.2 | 22.91 | 22.91 | 4.67 | 5.2 | 2.34 | 2.34 | 6.72
Avast! | 23.19 | 1435.28 | 1257.2 | 22.91 | 22.91 | 22.94 | 22.91 | 22.91 | 22.91 | 22.91 | 2.34 | 2.34 | 3.88
AVG | 8.92 | 226.39 | 45.81 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 2.13 | 2.13 | 335.77
Avira | 22.91 | 1435.42 | 45.8 | 22.91 | 23.41 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 2.34 | 2.36 | 6.72
Bitdefender | - | - | - | - | - | - | - | - | - | - | - | - | -
COMODO | 22.91 | 1443.08 | 46.84 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 2.34 | 2.34 | 6.72
Emsisoft | 3.86 | 450.9 | ? | 3.45 | 3.67 | 3.61 | 1160 | 3.83 | 5.42 | 2.39 | 2.13 | 2.13 | 171.55
ESET | 11.42 | 111.56 | 45.7 | 23.39 | 23.17 | 23.98 | 22.91 | 23.2 | 22.91 | 22.91 | 2.34 | 2.34 | 3.45
F-Secure | 22.91 | 1434.73 | 45.68 | 23.09 | 23.36 | 23.16 | 23.28 | 23.22 | 23.19 | 23.2 | 2.34 | 2.34 | 5.44
Gdata | 4.77 | 121.88 | 45.66 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 2.36 | 2.34 | 3.89
Kaspersky | 312.86 | 21495.22 | 69.89 | 1.42 | 2.52 | 1.47 | 1117.58 | 1.45 | 1.47 | 1.58 | 2.13 | 2.13 | 300.8
McAfee | 4.98 | 116.34 | 45.75 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 2.34 | 2.34 | 6.73
Microsoft | 23.34 | 1456.09 | 45.86 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 2.34 | 2.36 | 6.72
Norton | 22.91 | 1436.16 | 45.82 | 22.91 | 22.92 | 22.91 | 22.91 | 22.91 | 22.91 | 22.91 | 2.34 | 2.34 | 3.45
Panda | 1.45 | 34.3 | 45.37 | 1.38 | 1.39 | 1.47 | 1071.52 | 1.38 | 1.39 | 1.39 | 2.13 | 2.13 | 279.95
Quick Heal | 22.91 | 1435.41 | 45.9 | 22.91 | 22.92 | 22.91 | 22.91 | 22.92 | 22.91 | 22.91 | 2.34 | 2.34 | 2.25
TrustPort | 1.47 | 1024.05 | 45.34 | 1.67 | 1.56 | 1.56 | 1062.59 | 1.47 | 1.69 | 1.45 | 2.13 | 2.13 | 287.66
ZoneAlarm | - | 39.41 | - | 1.47 | 15.19 | 1.59 | 44.22 | 1.48 | 1.48 | 1.5 | 0 | 0 | 4.72

(?: value not recoverable from the source)

Table 5.3: IPv6 port scanning times
Company | Number of closed ports after the default TCP SYN scan on IPv4
Agnitum, Avast!, AVG, Avira, Bitdefender, COMODO, ESET, F-Secure, Gdata, McAfee, Microsoft, Norton, Quick Heal, TrustPort, ZoneAlarm | 0
Panda | 11
Emsisoft | 976
Kaspersky | 986

Table 5.4: Number of closed ports after the TCP SYN scan on IPv4
Company | Number of closed ports after the default TCP SYN scan on IPv6
Bitdefender, ZoneAlarm | -
Avast!, AVG, Avira, COMODO, ESET, F-Secure, Gdata, McAfee, Microsoft, Norton, Quick Heal | 0
Agnitum | 1
Panda | 11
Emsisoft | 976
Kaspersky | 987
TrustPort | 990

Table 5.5: Number of closed ports after the TCP SYN scan on IPv6
scan on IPv6 was not detected by any firewall, it covers only 256 protocol numbers, and it shows many different states.
When observing only the differences within IPv4 scans, only two firewalls designated most ports as "closed": Emsisoft (976 ports) and Kaspersky (986 ports) on the TCP SYN scan. It is interesting to add that only Panda designated all 1000 ports as closed during the TCP Window and TCP Maimon scans - both took only 1.47 s. Other irregularities can be found in the results of the UDP scan - 16 firewalls had all 1000 ports marked as "open|filtered", but for Kaspersky it was only 10, for Panda 987 and for TrustPort 999. TrustPort was also the only firewall which had an "open" port in the UDP scan. Only the SCTP INIT scan gave the same port states across all 18 firewalls; however, differences can be found in the scan times - 16 firewalls took exactly 2.34 seconds to finish the scan, except for Kaspersky and Panda, which both took 2.13 seconds, a 9% decrease in time.
For more information, see the whole tables: 5.6, 5.7, 5.8, 5.9, 5.10, 5.11.
[Table 5.6: IPv4 port states (iii). States of ports with the default "nmap -s* -n -v 192.168.20.1": per-firewall counts of open, open|filtered, closed and filtered ports for the -sO and -sV scans. The body of this table was not recoverable from the source.]
[Table 5.7: IPv4 port states (ii). States of ports with the default "nmap -s* -n -v 192.168.20.1": per-firewall counts of open|filtered, closed and filtered ports for the -sN, -sF, -sX, -sY and -sZ scans. The body of this table was not recoverable from the source.]
[Table 5.8: IPv4 port states (i). States of ports with the default "nmap -s* -n -v 192.168.20.1": per-firewall counts of open, open|filtered, closed, unfiltered and filtered ports for the -sS, -sT, -sA, -sW, -sM and -sU scans, with one row per product from Agnitum through ZoneAlarm. The body of this table was not recoverable from the source.]
[Table 5.9: IPv6 port states (iii). States of ports with the default "nmap -6 -s* -n -v fe80::f5dd:cd1d:175a:2d6d": per-firewall counts of open, open|filtered, closed and filtered ports for the -sY, -sZ, -sO and -sV scans. The body of this table was not recoverable from the source.]
[Table 5.10: IPv6 port states (ii). States of ports with the default "nmap -6 -s* -n -v fe80::f5dd:cd1d:175a:2d6d": per-firewall counts of open, open|filtered, closed and filtered ports for the -sM, -sU, -sN, -sF and -sX scans. The body of this table was not recoverable from the source.]
[Table 5.11: IPv6 port states (i). States of ports with the default "nmap -6 -s* -n -v fe80::f5dd:cd1d:175a:2d6d": per-firewall counts of open, closed, filtered and unfiltered ports for -sS (default, over all 65535 ports, and combined with -O), -sT, -sA and -sW, with one row per product from Agnitum through ZoneAlarm. The body of this table was not recoverable from the source.]
Chapter 6
Ideal behaviour of firewall under certain attacks
Based on my results from testing 18 different firewalls, I can extrapolate how an ideal firewall should behave under port scanning attacks. Interpreting the results of a port scanning attack is much easier than interpreting those of DoS attacks: I can observe the running time of every scan and differentiate between the various states of ports when a certain technique or obfuscation is used. During the DoS attacks I was only able to measure the consumption of system resources on the victim, which I can interpret only from the numbers in the results.
6.1 Ideal port scanning behaviour of a firewall
Based on my observations, I propose 5 points that are important to establish a common ground for testing a firewall's behaviour while under a port scanning attack:
• No leakage of port states. Every firewall should achieve the ideal port states on all ports, on IPv4 and IPv6 alike - as stated in Table 6.1.
• Resilience against obfuscation techniques (with respect to the previous point). Obfuscation techniques (e.g. fragmentation or a higher delay between probes) should make no difference in port scanning results.
• Unified time of the scanning. As shown in Table 6.2, there is an "ideal" time each port scan should consume.
• Logging higher-level information on detection. Each attack should be logged with higher-level information: along with the source IP address, the log should state clear information about the attack type - "port scanning attack". See Table 6.4 for current capabilities.
• Taking steps against the attack. After the attack is detected, an action should be taken against that particular attack - e.g. dropping all incoming packets from the IP address of the attacker.
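The first three points lend themselves to automated verification. A minimal sketch that compares parsed Nmap results against the ideal states of Table 6.1 (the dictionary keys and the input format are illustrative):

```python
# Ideal state per Nmap technique, following Table 6.1 (subset shown).
IDEAL_STATE = {
    "-sS": "filtered",
    "-sT": "filtered",
    "-sA": "filtered",
    "-sF": "open|filtered",
    "-sU": "open|filtered",
    "-sO": "open|filtered",
}

def leaked_ports(observed):
    """List (technique, port, state) triples that deviate from the ideal.

    observed: {technique: {port: state}}, e.g. parsed from Nmap output.
    """
    return [(tech, port, state)
            for tech, ports in observed.items()
            for port, state in ports.items()
            if state != IDEAL_STATE[tech]]
```

Running the same check with and without obfuscation options, and comparing total scan times against Table 6.2, covers the first three points of the list above.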
The most important function of a firewall under an active port scanning attack is not whether it is able to detect the actual attack and warn the user. It is of far greater importance not to leak any information about the state of the ports, no matter which scanning or obfuscation techniques are used. If the firewall warns the user of every port scan but fails to hide open ports, the user does not know what to do and the attacker gets valuable information. To obtain a unified scanning time, as well as no leakage of port states, no firewall should respond to port scanning probes. For firewalls which sent a
Scanning technique | Ideal port state
TCP SYN | filtered
TCP connect() | filtered
TCP FIN, TCP Xmas, TCP Null | open|filtered
TCP ACK | filtered
TCP Window | filtered
TCP Maimon | open|filtered
UDP scan | open|filtered
SCTP INIT | filtered
SCTP COOKIE ECHO | open|filtered
IP protocol scan | open|filtered
Service and Version detection | filtered

Table 6.1: Ideal port states
Port scanning technique | Ideal time (s)
TCP SYN, TCP FIN, TCP Xmas, TCP Null, TCP ACK, TCP Window, TCP Maimon, UDP, Service and Version detection | 22.91
TCP connect() | 45.56
SCTP INIT, SCTP COOKIE ECHO | 2.34
IP protocol scan | 6.72

Table 6.2: Ideal port scanning times
response to port scan probes indicating their state to be "closed", the time of scanning was radically different from those which marked them as "filtered", where no response was sent. Only Avira, Bitdefender, ESET, F-Secure, Microsoft, Norton, Quick Heal and ZoneAlarm achieved all "ideal" port states.
After observing the behaviour of the various scanning techniques on 18 firewalls, I propose Table 6.1 as the ideal port states for every technique used in Nmap. I also propose Table 6.2 as the ideal times for scanning the default amount of ports in Nmap. Note that these are the values which most firewalls achieved; if all firewalls could achieve them, fingerprinting based on time differences would be effectively countered. On the other hand, firewalls could also try to raise the time needed for port scanning by orders of magnitude, which would probably alleviate the risk of being a random target.
Logging is also very important for more skilled users and system administrators. If the firewall contains only a packet log with hundreds of messages, searching for the actual attack becomes harder; if there are no filters that can be used in the logs, it is even more difficult. Screenshots of the particular (relevant) logs after the port
Product | Ideal port states on all ports on all 13 scanning techniques (default IPv4) | (default IPv6)
Agnitum, Avast!, AVG, Emsisoft, Gdata, Kaspersky, McAfee, Panda, TrustPort | no | no
Avira, Microsoft | yes | yes
Bitdefender | yes | -
COMODO | no | yes
ESET, F-Secure, Norton, Quick Heal, ZoneAlarm | yes | no

Table 6.3: Firewalls under different attacks
Product | Shown in higher-level logs | Shown in packet logs | Popup window | Filters
Agnitum | yes | yes | yes | no
Avast!, Bitdefender | yes | no | no | yes
AVG, Avira, Emsisoft, F-Secure, Gdata, Quick Heal | no | no | no | no
COMODO, Microsoft | no | no | no | yes
ESET | yes | no | yes | yes
Kaspersky | yes | no | yes | only search
McAfee | yes | yes | no | no
Norton | no | yes | no | only search
Panda | yes | no | no | only search
TrustPort, ZoneAlarm | no | yes | no | no

Table 6.4: Logs with port scanning
scanning attacks can be found on the attached CD. See Table 6.4 for information about the particular firewalls' logs. The second column states whether the firewall logged information about a "port scanning attack". The third column states whether the port scanning packets were stored in packet logs; this usually resulted in a few hundred logged events within a few seconds. The next column shows whether the user was made aware of the port scanning attack by a pop-up window. The last column shows whether there are any filters in the particular firewall's logs.
Table 6.5 shows the final results of the port scanning attacks on the firewalls. All 13 Nmap techniques were used (-sS, -sT, -sA, -sW, -sM, -sU, -sN, -sF, -sX, -sY, -sZ, -sO, -sV). The first joined column shows the number of techniques with which all ports were in their ideal states. For example, Kaspersky has both numbers (IPv4 and IPv6) equal to 1 because only with the -sY technique were all ports in their ideal state. Avira and Microsoft scored the highest numbers (13 on IPv4 and 13 on IPv6), because they were the only ones not to leak any port information. COMODO, F-Secure, Norton and Quick Heal were closely behind them with only 1 technique leaking port information
              # of techniques with       # of logged           Fragmentation   # of TCP/0 states
              all ports in ideal states  portscan techniques   successful      that are “filtered”
              IPv4       IPv6            IPv4       IPv6       IPv4            IPv4 + IPv6
Agnitum        7          5               7          3          yes             2
Avast!        10         11              10         10          yes             0
AVG           10          8               0          0          -               2
Avira         13         13               0          0          -               2
Bitdefender   13          0               4          0          no              1
COMODO        12         13               0          0          -               2
Emsisoft       3          1               0          0          -               0
ESET          13          9               4          0          yes             2
F-Secure      13         12               0          0          -               2
Gdata          9          9               1          0          -               2
Kaspersky      1          1               3          3          no              0
McAfee        10          9              10         10          no              2
Microsoft     13         13               0          0          -               2
Norton        13         12               0          0          -               2
Panda          4          1               4          0          no              0
Quick Heal    13         12               0          0          -               2
TrustPort     10          1               0          0          -               1
ZoneAlarm     13          1               0          0          -               1

Table 6.5: Overall results of port scanning attacks
in each. The second joined column shows the number of detected and logged portscanning techniques. We can see that only 4 firewalls are able to detect any port scans on IPv6, in contrast to 8 on IPv4. The best performers were Avast! and McAfee, which detected 10 out of 13 scanning techniques on both IPv4 and IPv6. Agnitum, Bitdefender, ESET, Gdata and Panda show a degradation in detection from IPv4 to IPv6. The next column shows whether any fragmentation technique succeeded as an obfuscation of the attack - in other words, whether the particular firewall could be fooled by fragmentation into not detecting the portscan at all. Only Agnitum, Avast! and ESET could be misled this way. As stated earlier, I did not test fragmentation on firewalls which did not detect the port scanning attack in the first place, hence the “-” characters. The last column states how many (0, 1 or 2) TCP/0 ports were in their ideal state - “filtered”. Only 11 firewalls had both TCP/0 ports (on IPv4 and IPv6) filtered.
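The thesis does not reproduce the exact invocations at this point; assuming placeholder target addresses, the 13 techniques (and the fragmentation and TCP/0 probes discussed above) can be run with Nmap roughly as follows:

```shell
# Placeholder targets; substitute the addresses of the machine under test.
TARGET=192.0.2.10
TARGET6=2001:db8::10

# The 13 Nmap techniques from Table 6.5, each writing its own log file.
# Most of them require root privileges (raw sockets).
for technique in sS sT sA sW sM sU sN sF sX sY sZ sO sV; do
    nmap -${technique} -oN "scan-${technique}.txt" "$TARGET"
done

# The same SYN scan repeated over IPv6:
nmap -6 -sS "$TARGET6"

# Fragmentation as an obfuscation attempt: -f splits probes into tiny fragments.
nmap -f -sS "$TARGET"

# Probing the reserved port TCP/0 explicitly:
nmap -p0 -sS "$TARGET"
```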
6.2 Ideal behaviour under DoS attacks
With DoS attacks, it is hard to estimate why firewalls behave the way they do. It is obvious that some firewalls could cope with flooding better than they currently do. Based on my results, there are huge differences in the total running time of the script measuring 20 values, in CPU usage and in bandwidth consumption. Note that with some DoS attacks there is a clear pattern of a trade-off between consuming network and CPU resources.
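The measurement commands themselves are not listed here; Appendix A mentions CSV logs produced by the Windows typeperf utility, so a comparable 20-sample measurement of CPU and network counters might look like this (the counter paths and file name are illustrative):

```shell
:: Windows command prompt. Sample the CPU and the network adapter 20 times
:: at 1-second intervals and write the results to a CSV log.
typeperf "\Processor(_Total)\% Processor Time" "\Network Interface(*)\Bytes Total/sec" -si 1 -sc 20 -f CSV -o dos-measurement.csv
```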
The ideal firewall should be able to detect the DoS attacks and try to counteract
them. In the worst-case scenario, even disabling the particular network adapter for a few seconds or minutes would be a viable solution; at least the user’s device would not suffer from the attack. Other, less drastic measures could also be taken, for example ignoring all packets with certain characteristics (protocol used, sender’s IP address, source/destination ports, . . . ).
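On Windows, both mitigations sketched above can be expressed with built-in tools; the rule name, attacker address and interface name below are placeholders:

```shell
:: Ignore all further packets from the attacking address:
netsh advfirewall firewall add rule name="Block flood source" dir=in action=block remoteip=198.51.100.7

:: Drastic fallback - take the attacked adapter offline for a while:
netsh interface set interface name="Ethernet" admin=disabled
```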
The overall impact should be as low as possible - some firewalls are more successful in this than others. I have summarized the overall comparison of the average highest core usage in Table 6.6. There are 11 DoS flooding attacks. In every attack column, the threshold value in % was created using the following guidelines:
• the maximum value is 90%
• the number of “yes” values should be equal to or lower than (but as close as possible to) the number of “no” values
• go from the initial value of 30% up in 10% steps until the first two requirements are met
• if several thresholds give the same result, use the lowest threshold value
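One reading of these guidelines, sketched as a small Python function (the function name and the sample data are illustrative, not taken from the thesis):

```python
def pick_threshold(usages, start=30, step=10, maximum=90):
    """Pick a per-attack CPU-usage threshold following the guidelines above.

    A firewall scores "yes" when its average highest core usage is below the
    threshold; we want as many "yes" as possible without exceeding the number
    of "no" values, and the lowest threshold on ties.
    """
    best_t, best_yes = None, -1
    t = start
    while t <= maximum:
        yes = sum(1 for u in usages if u < t)
        no = len(usages) - yes
        # Accept only strictly better "yes" counts, so ties keep the lowest t.
        if yes <= no and yes > best_yes:
            best_t, best_yes = t, yes
        t += step
    return best_t

# Example: six hypothetical usage values (percent of the busiest core).
print(pick_threshold([10, 20, 35, 45, 85, 95]))  # -> 40
```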
By following these guidelines, it is possible to reach values at which exactly half of the firewalls, or fewer, behaved better than the rest. The total number of firewalls which met the requirements can be observed in the last two rows. The last column is the final score of each firewall - the higher, the better. We can observe that the firewall coping best with DoS attacks is Bitdefender, with an overall score of 10 out of 11 tests. The worst firewalls are Agnitum and Quick Heal, which scored 0 points. Be aware that this is only 1 particular result out of the 7 performance parameters I stored in every measurement. Only by cross-referencing results from all 7 measurements could we come close to determining a rank-list of these 18 firewalls with respect to their performance while under DoS flooding attacks.
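The flooding tools are not spelled out at this point in the text, but hping3 [19] and the ICMPv6 router advertisement flooder [20] are cited; a plausible reproduction of the attack mix from Table 6.6 could look like this (the target address and interface are placeholders):

```shell
TARGET=192.0.2.10   # placeholder victim address
IFACE=eth0          # placeholder interface on the victim's LAN segment

# ICMP, UDP and TCP SYN floods; -d sets the payload size in bytes
# (0, 1000 and 65495 are the sizes used in Table 6.6).
hping3 --flood --icmp -d 0       "$TARGET"
hping3 --flood --udp  -d 1000    "$TARGET"
hping3 --flood -S -p 80 -d 65495 "$TARGET"

# IPv6 router-advertisement and neighbor-advertisement floods
# (flood_router6 and flood_advertise6 from the THC-IPv6 toolkit):
flood_router6 "$IFACE"
flood_advertise6 "$IFACE"
```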
              adv6  rtr6  ICMP  ICMP   ICMP    UDP   UDP    UDP     TCP   TCP    TCP     Score
                          0B    1000B  65495B  0B    1000B  65495B  0B    1000B  65495B
Threshold     <80%  <70%  <70%  <50%   <50%    <90%  <90%   <60%    <90%  <90%   <40%
Agnitum       no    no    no    no     no      no    no     no      no    no     no       0
Avast!        no    no    no    no     no      no    no     no      no    yes    no       1
AVG           no    no    no    no     no      no    no     no      yes   yes    yes      3
Avira         no    no    no    no     no      no    no     yes     no    no     no       1
Bitdefender   yes   yes   no    yes    yes     yes   yes    yes     yes   yes    yes     10
COMODO        no    no    no    no     no      yes   no     no      no    no     no       1
Emsisoft      yes   yes   yes   no     no      yes   yes    no      yes   yes    no       7
ESET          no    no    yes   yes    yes     no    yes    yes     yes   yes    no       7
F-Secure      no    no    yes   yes    yes     no    yes    yes     no    yes    yes      7
Gdata         no    no    yes   yes    yes     no    no     yes     yes   yes    yes      7
Kaspersky     no    no    yes   yes    yes     no    no     no      no    no     no       3
McAfee        no    no    no    no     yes     no    no     yes     no    no     no       2
Microsoft     no    no    yes   yes    yes     no    no     yes     no    no     yes      5
Norton        no    no    yes   yes    no      yes   yes    no      no    no     no       4
Panda         no    no    no    no     no      yes   yes    no      no    no     no       2
Quick Heal    no    no    no    no     no      no    no     no      no    no     no       0
TrustPort     no    no    yes   yes    yes     yes   yes    yes     yes   yes    no       8
ZoneAlarm     no    no    no    yes    yes     no    no     yes     yes   yes    yes      6
# of “yes”     2     2     8     9      9       6     7      9       7     9      6
# of “no”     16    16    10     9      9      12    11      9      11     9     12

Table 6.6: Results from DoS flood attacks - Average highest core usage
(adv6 = advertise6 flood, rtr6 = router6 flood; ICMP/UDP/TCP floods with 0 B, 1000 B and 65495 B payloads. The score is the number of “yes” values in the row.)
Chapter 7
Summary
The Internet is a dangerous place. The number of attacks and exploited vulnerabilities is still growing. Although there are plenty of security systems inside the network infrastructure (servers, intermediate firewalls, IDS, IPS, . . . ), user computers are still vulnerable to attacks. It is highly recommended to have an endpoint security solution installed on every host system. Such a solution combines an antivirus, a firewall, antispam and many other features, and it is the last protection standing between the attacker and the user. If it is breached, there is no more security for the user.
I have tested 18 firewalls which hold the majority of the market share in the Czech Republic and worldwide. Each brand is trying to devise the “most secure solution yet” by implementing many new features and improving detection rates of malicious behaviour. Yet under not-so-sophisticated attacks, which have been known for over a decade, most of them show unsatisfactory results.
Only 2 firewalls were able not to leak any port states - Avira and Microsoft. The worst result was achieved by Kaspersky, which leaked port states with 12 out of the 13 Nmap portscanning techniques used. Although IPv6 has been in use for years, 12 firewalls show a degradation in concealing port states when compared to IPv4. Detection and logging of portscan attacks have also shown to be a problem - 10 firewalls did not detect any port scan. Again, by switching from IPv4 to IPv6 we see a degradation in detection ability as well - in 5 out of 10 firewalls. The best results were achieved by Avast! and McAfee, each detecting 10 techniques on both IPv4 and IPv6. At least 1 IPv6 scanning technique is detectable by 4 firewalls: Agnitum, Avast!, Kaspersky and McAfee. Fragmentation as an obfuscation technique was successful against 3 products: Agnitum, Avast! and ESET. The TCP/0 port state was leaked by 7 firewalls.
There are many differences between the responses to the various attacks, which made it possible to use my findings for firewall fingerprinting. From the attacker’s point of view, knowing precisely which firewall is installed on the host computer is a big advantage. He can use only those exploits or attacks which are not detectable by that particular security solution. The speed of scanning can differ greatly. For example, an IPv6 TCP SYN scan of 1000 ports on Panda took 1.45 seconds, while the same scan on IPv4 took 1243.67 seconds on Avast!. A UDP port scan on IPv4 shows an even greater difference - scanning 1000 ports on Panda took 3769.63 seconds, whereas it took only 7.83 s on Gdata. Two approaches can be used - observing the overall time needed to scan a certain amount of ports, or observing port states. I used both techniques separately to demonstrate this possibility in figures 5.1, 5.2 and 5.3.
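As a toy illustration of the timing approach, the durations quoted above can serve as reference profiles, and an observed scan time can be matched to the nearest one (the profile names and nearest-match rule are illustrative; a real fingerprint would combine many measurements):

```python
# Reference scan durations in seconds, taken from the measurements above.
PROFILES = {
    "Panda, IPv6 TCP SYN": 1.45,
    "Gdata, IPv4 UDP": 7.83,
    "Avast!, IPv4 TCP SYN": 1243.67,
    "Panda, IPv4 UDP": 3769.63,
}

def guess_firewall(observed_seconds):
    """Return the profile whose known scan duration is closest to the observation."""
    return min(PROFILES, key=lambda name: abs(PROFILES[name] - observed_seconds))

print(guess_firewall(1300.0))  # -> Avast!, IPv4 TCP SYN
```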
To counter possible fingerprinting, I proposed the ideal port states each firewall should exhibit under certain port scanning attacks - see Table 6.1. To remove the time differences, I proposed Table 6.2 based on the most common values in my overall testing scenarios. Note that these values could be much higher, which would help the overall security of the system if all firewalls shared these times. In such a case, a random attacker would not scan all ports if it took hours to days.
Each DoS flooding attack depletes valuable system resources. Some attacks had persistent consequences - the UDP flood on Gdata made the computer unusable and a forced restart was needed; the IPv6 router advertisement flood left hundreds of stored records of IPv6 addresses. . . The comparative table with the score assigned to each firewall can be seen in Table 6.6. The higher the score, the better the overall performance of the particular firewall with respect to all other firewalls. The average consumption of resources under the DoS attacks across all firewalls is shown in Table 4.11. We can observe that the most successful attack on CPU consumption was flood router6.
I am leaving the overall ranking of the firewalls to the reader, because there are many different approaches to assigning the relevant weights for creating such a list. Readers can also perform their own analysis of the log files attached on the CD and, for example, determine the particular open ports on certain firewalls.
7.1 Future improvements
There are plenty of possible improvements to this research:
• Broadened scope. Test more firewalls, apply various overall firewall settings (e.g. public/work/private, automatic/manual/learning mode. . . ), study each firewall closely with every possible setting (e.g. the possibility of blacklisting the attacker, turning on stricter detection, . . . ), and include more attacks (e.g. flooding attacks on IPv6, an ARP spoofing attack. . . ).
• Achieve more real-world results. Observe portscans and their delay on real networks, perform many portscan measurements to improve the precision of their time consumption, perform DoS attacks and measurements on different hardware (different amounts of RAM, different numbers of logical CPUs, . . . ) and different operating systems, and incorporate probability into fingerprinting.
• Deeper analysis. Store pcap files from Wireshark with challenges and responsesof attacks (to observe whether different firewalls use different ICMP messagesas a response), study the list of individual port states on certain firewalls (e.g.list of all open ports for Agnitum), study fragmentation on all firewalls and alltechniques (observe if there are interesting time/port state differences).
• Combined attacks for obfuscation/fingerprinting. Combine DoS attacks withportscan, use corrupted checksums with DoS and portscans.
Bibliography
[1] John Pescatore, Greg Young, Gartner RAS Core Research Note G00171540: Defining the Next-Generation Firewall, http://bradreese.com/blog/palo-alto-gartner.pdf (reviewed on 2015/01/02), 2009/10/12.
[2] James Messer, Secrets of Network Cartography: A Comprehensive Guide to Nmap,a NetworkUptime.com publication, 2008.
[3] Four Way Handshake process: To teardown established TCP connection,http://c-kurity.blogspot.cz/2010/06/four-way-handshake-process-to-teardown.html (reviewed on 2015/01/02), 2010/06/18.
[4] Joe Touch, Eliot Lear, Allison Mankin, Service Name and Transport Protocol Port Number Registry, http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml (reviewed on 2015/01/02), 2014/11/07.
[5] J. Reynolds, J. Postel, RFC 1700: Assigned Numbers, http://tools.ietf.org/html/rfc1700 (reviewed on 2015/01/02), 1994/10.
[6] J. Reynolds, RFC 3232: Assigned Numbers: RFC 1700 is Replaced by an On-line Database, http://tools.ietf.org/html/rfc3232 (reviewed on 2015/01/02), 2001/01.
[7] R. Stewart, RFC 4960: Stream Control Transmission Protocol, http://tools.ietf.org/html/rfc4960 (reviewed on 2015/01/02), 2007/09.
[8] M. Cotton, L. Eggert, J. Touch, M. Westerlund, S. Cheshire, Internet AssignedNumbers Authority (IANA) Procedures for the Management of the Service Nameand Transport Protocol Port Number Registry, http://tools.ietf.org/html/rfc6335 (reviewed on 2015/01/02), 2011/08.
[9] Information Sciences Institute, University of Southern California, RFC 793: Transmission Control Protocol, http://tools.ietf.org/html/rfc793 (reviewed on 2015/01/02), 1981/09.
[10] D. Petro, DEFCON 20, Network Anti-Reconnaissance: Messing with Nmap Through Smoke and Mirrors, https://www.defcon.org/images/defcon-20/dc-20-presentations/Petro/DEFCON-20-Petro-Messing-with-Nmap.pdf (reviewed on 2015/01/02), 2012/07.
[11] G. Pickett, DEFCON 21, Let’s Screw With Nmap, https://www.defcon.org/images/defcon-21/dc-21-presentations/Pickett/DEFCON-21-Pickett-Lets-Screw-With-Nmap-Updated.pdf (reviewed on 2015/01/02), 2013/08.
[12] Julian Kirsch, Christian Grothoff, Monika Ermert, Jacob Appelbaum, Laura Poitras, Henrik Moltke, NSA/GCHQ: The HACIENDA Program for Internet Colonization, http://heise.de/-2292681 (reviewed on 2015/01/02), 2014/08/15.
[13] Gordon Lyon, Chapter 15: Nmap Reference Guide: Port Scanning Techniques, http://Nmap.org/book/man-port-scanning-techniques.html (reviewed on 2015/01/02).
[14] Gordon Lyon, Chapter 15: Nmap Reference Guide: Service and Version Detection, http://Nmap.org/book/man-version-detection.html (reviewed on 2015/01/02).
[15] Elguber, TCP header image, https://elguber.wordpress.com/2010/05/26/osi-model/ (reviewed on 2015/01/02), 2010/05/26.
[16] david9904, XOIC: a tool to make DoS attacks, http://sourceforge.net/projects/xoic/ (reviewed on 2015/01/02), 2013/12/08.
[17] abatishchev, LOIC: A network stress testing application, http://sourceforge.net/projects/loic/ (reviewed on 2015/01/02), 2014/12/13.
[18] Marc Kührer, Thomas Hupperich, Christian Rossow, and Thorsten Holz. 2014. Exit from hell? Reducing the impact of amplification DDoS attacks. In Proceedings of the 23rd USENIX conference on Security Symposium (SEC’14). USENIX Association, Berkeley, CA, USA, 111-125.
[19] Salvatore Sanfilippo, hping3 - Linux man page, http://linux.die.net/man/8/hping3 (reviewed on 2015/01/02).
[20] Marc Heuse, ICMPv6 Router Announcement flooding denial of service affectingmultiple systems, http://www.mh-sec.de/downloads/mh-RA_flooding_CVE-2010-multiple.txt (reviewed on 2015/01/02), 2011/04/15.
[21] matousec.com, Proactive Security Challenge 64, http://www.matousec.com/projects/proactive-security-challenge-64/results.php, (reviewedon 2015/01/02), 2014/12/08.
[22] Zakir Durumeric, Michael Bailey, and J. Alex Halderman. 2014. An internet-wideview of internet-wide scanning. In Proceedings of the 23rd USENIX conference onSecurity Symposium (SEC’14). USENIX Association, Berkeley, CA, USA, 65-78.
[23] Zakir Durumeric, Eric Wustrow, and J. Alex Halderman. 2013. ZMap: fastinternet-wide scanning and its security applications. In Proceedings of the 22ndUSENIX conference on Security (SEC’13). USENIX Association, Berkeley, CA, USA,605-620.
[24] Robert Graham, MASSCAN: Mass IP port scanner, https://github.com/robertdavidgraham/masscan (reviewed on 2015/01/02), 2014/11/04.
[25] Jelena Mirkovic, Max Robinson, Peter Reiher, George Oikonomou, Distributed Defense Against DDoS Attacks, http://www.eecis.udel.edu/~sunshine/publications/udel_tech_report_2005-02.pdf (reviewed on 2015/01/02), 2005/02.
[26] T. Narten, E. Nordmark, W. Simpson, RFC 2461: Neighbor Discovery forIP Version 6 (IPv6), http://tools.ietf.org/html/rfc2461 (reviewed on2015/01/02), 1998/12.
[27] Craig Williams, Massive Increase in Reconnaissance Activity - Precursor to Attack?, http://blogs.cisco.com/security/massive-increase-in-reconnaissance-activity-precursor-to-attack/ (reviewed on 2015/01/02), 2013/11/12.
[28] Kevin Mitnick, Mitnick’s Absolute Zero-Day™ Exploit Exchange, https://www.mitnicksecurity.com/shopping/absolute-zero-day-exploit-exchange (reviewed on 2015/01/02).
[29] Greg Young, One Brand of Firewall Is a Best Practice for Most Enterprises, https://www.gartner.com/doc/2254717, (reviewed on 2015/01/02), 2012/11/28.
Appendix A
List of attachments
• \cmd-commands\ TXT files used to measure performance, and an example of the ipconfig /release6 command with hundreds of IPv6 addresses.
• \DoS-logs\ Log files generated by typeperf in CSV format.
• \fingerprinting\ GraphViz files with PNG images used for fingerprinting.
• \firewall-logs\ PNG screenshots of firewall logs, options, pop-ups and exported log files after port scan/DoS attacks.
• \Nmap-logs\ TXT log files from Nmap scans.
• \tables\ XLSX tables.
• \thesis\ PDF and TeX files of this thesis along with used figures.
Appendix B
Abbreviations
ACK    Acknowledgement
ACL    Access Control List
ARP    Address Resolution Protocol
AS     Autonomous System
BSD    Berkeley Software Distribution
CPU    Central Processing Unit
CSV    Comma-Separated Values
CVE    Common Vulnerabilities and Exposures
CVSS   Common Vulnerability Scoring System
DCCP   Datagram Congestion Control Protocol
DDoS   Distributed Denial of Service
DHCP   Dynamic Host Configuration Protocol
DNS    Domain Name System
DoS    Denial of Service
FIN    Finish
FTP    File Transfer Protocol
GCHQ   Government Communications Headquarters
HIPS   Host Intrusion Prevention System
HTTP   Hypertext Transfer Protocol
IANA   Internet Assigned Numbers Authority
ICMP   Internet Control Message Protocol
IDS    Intrusion Detection System
IGMP   Internet Group Management Protocol
IP     Internet Protocol
IPS    Intrusion Prevention System
LAN    Local Area Network
LOIC   Low Orbit Ion Cannon
MTU    Maximum Transmission Unit
NA     Neighbor Advertisement
NAT    Network Address Translation
NS     Neighbor Solicitation
NSA    National Security Agency
NTP    Network Time Protocol
PSH    Push
RA     Router Advertisement
RFC    Request for Comments
RS     Router Solicitation
RST    Reset
SCTP   Stream Control Transmission Protocol
SLAAC  Stateless Address Autoconfiguration
SNMP   Simple Network Management Protocol
SSH    Secure Shell
SYN    Synchronize
TCP    Transmission Control Protocol
TXT    Text
UDP    User Datagram Protocol
URG    Urgent
UTM    Unified Threat Management
List of Figures
2.1 3-way TCP handshake [2] 3
3.1 TCP Header [15] 6
3.2 4-way TCP handshake [3] 6
3.3 Graphical representation of Nmap scanning techniques [2] 9
5.1 Fingerprinting using time difference - reliability 35
5.2 Fingerprinting using time difference - avoiding detection 36
5.3 Fingerprinting using port states 41
C.1 Gdata DoS issue 62
List of Tables
2.1 Firewall score in Proactive Security Challenge 64 4
3.1 List of Nmap techniques 8
3.2 Ping scan behaviour 14
4.1 Antivirus security suites 19
4.2 Firewall settings 20
4.3 IPv4 scans shown in logs 22
4.4 IPv6 scans shown in logs 23
4.5 TCP/0 port states across firewalls 24
4.6 Highest number of scanned ports without detection 25
4.7 Lowest scan delay without detection 25
4.8 Port scan detectability using fragmentation 26
4.9 Resources consumption during TCP flood attack with 1000 bytes 28
4.10 Consumption differences in DoS 29
4.11 Average resource consumption on particular DoS attacks 32
5.1 Time differences in port scanning 37
5.2 IPv4 port scanning times 38
5.3 IPv6 port scanning times 39
5.4 Number of closed ports after the TCP SYN scan on IPv4 40
5.5 Number of closed ports after the TCP SYN scan on IPv6 40
5.6 IPv4 port states (iii) 42
5.7 IPv4 port states (ii) 43
5.8 IPv4 port states (i) 44
5.9 IPv6 port states (iii) 45
5.10 IPv6 port states (ii) 46
5.11 IPv6 port states (i) 47
6.1 Ideal Port states 49
6.2 Ideal port scanning times 49
6.3 Firewalls under different attacks 50
6.4 Logs with port scanning 50
6.5 Overall results of port scanning attacks 51
6.6 Results from DoS flood attacks - Average highest core usage 53