Noname manuscript No. (will be inserted by the editor)
Integrating attacker behavior in IT security analysis: a discrete-event simulation approach
Andreas Ekelhart · Elmar Kiesling · Bernhard Grill · Christine Strauss · Christian Stummer
Abstract When designing secure information systems,
a profound understanding of the threats that they are
exposed to is indispensable. Today’s most severe risks
come from malicious threat agents exploiting a vari-
ety of attack vectors to achieve their goals, rather than
from random opportunistic threats such as malware.
Most security analyses, however, focus on fixing tech-
nical weaknesses, but do not account for sophisticated
combinations of attack mechanisms and heterogeneity
in adversaries’ motivations, resources, capabilities, or
points of access. In order to address these shortcomings
and, thus, to provide security analysts with a tool that
makes it possible to also identify emergent weaknesses
that may arise from dynamic interactions of attacks,
we have combined rich conceptual modeling of security
knowledge with attack graph generation and discrete-
event simulation techniques. This paper describes the
prototypical implementation of the resulting security
analysis tool and demonstrates how it can be used for an
experimental evaluation of a system’s resilience against
various adversaries.
Keywords IT security · modeling and simulation · secure systems analysis and design · attacker behavior
Andreas Ekelhart, Bernhard Grill
Secure Business Austria, Vienna, Austria
E-mail: {aekelhart,bgrill}@sba-research.org

Elmar Kiesling
Vienna University of Technology, Vienna, Austria
E-mail: elmar.kiesling@tuwien.ac.at

Christine Strauss
University of Vienna, Vienna, Austria
E-mail: christine.strauss@univie.ac.at

Christian Stummer
Bielefeld University, Bielefeld, Germany
E-mail: christian.stummer@uni-bielefeld.de
1 Introduction
Information systems today face numerous threats, in-
cluding opportunistic attacks such as malware and in-
creasingly sophisticated multi-stage attacks from mo-
tivated adversaries. From a risk analysis perspective,
the latter threats are particularly challenging. They in-
volve human attackers who behave adaptively, combine
various attack vectors in an ad-hoc manner, and make
deliberate decisions that are difficult to predict.
Given the increasing frequency and relevance of such
malicious attacks, the need for sound security analysis
methods that help systems planners and security
personnel to better analyze and understand impending
risks is growing. The scope of most available
methods, however, is confined to identifying particular
technical vulnerabilities that need to be fixed, rather
than analyzing a system’s overall capacity to withstand
attacks from motivated adversaries.
These adversaries are typically not homogeneous,
but differ considerably in their motivations, resources,
capabilities, and points of access. In order to properly
analyze risks and design information systems accord-
ingly, it is therefore necessary to account for such char-
acteristics that determine attack strategies, attacker be-
havior and, ultimately, the threats posed by attackers.
Simulation constitutes a promising approach in this
context because assessing the security of a live sys-
tem by performing controlled attacks (i.e., penetration
tests) is expensive, time-consuming, and associated with
serious risks. Moreover, live tests are difficult to repro-
duce and, although testing can be systematized and
partly automated, the breadth and depth of analyses
remains limited by human testers’ capacity and com-
petence. Also, penetration tests can only be performed
once a system (physical or virtualized) has actually
been implemented.
In this paper, we develop a simulation-driven ap-
proach for secure information systems design that com-
plements other methods such as penetration testing and
can be used by security analysts to assess (i) the abil-
ity of a modeled system to cope with malicious attacks,
and (ii) the impact of alterations of the system (e.g.,
adding physical, technical, operational, and organiza-
tional security controls) on its overall security.
Our approach is based on the design science research
paradigm [12]. It facilitates the iterative development of
theories and artifacts by applying a body of formalized
security knowledge to particular environments (system
models). This results in an iterative process that con-
sists in (i) modeling a system design alternative, (ii) ex-
perimentally evaluating the design through simulated
attacks, (iii) reflecting upon weaknesses, (iv) making
engineering decisions, (v) incorporating them in the
model to create new design alternatives, and (vi) eval-
uating the new design alternatives. This process
continues until satisfactory results are achieved.
Enumerating all possible system configurations to
identify an “optimal” design is typically impractical due
to the vast combinatorial search space. However, the de-
sign process outlined above allows analysts to evaluate
and compare design artifacts (system models) and de-
velop blueprints for suitable system designs as well as
security knowledge specific to the system being developed.
Our simulation tool can support security analysts
over the whole life cycle of an information system, in-
cluding in an early planning stage. For systems in op-
eration, it provides analyses of the effects of a changing
threat environment. The latter is accomplished by
updating the security knowledge base and re-running
attack simulations.
The remainder of this paper is organized as follows.
The simulation model is described in Section 2. Sec-
tion 3 then outlines implementation issues, and Sec-
tion 4 illustrates the applicability of the approach by
means of sample simulation scenarios. Next, Section 5
provides an overview of related work. Section 6, finally,
concludes with an outlook on further research.
2 Simulation model
A high-level overview of our approach is provided in
Fig. 1. Its constituent components will be described in
the following.
2.1 Knowledge Base
We use ontologies to capture the knowledge required
for simulating attacks in a well-structured and reusable
format. A general discussion on ontologies as knowl-
edge representation formalisms is provided in [11] and
[32]. Particularly important characteristics in the con-
text of our research are interoperability and machine-
processability. Both are achieved by formally codifying
a shared understanding of the security domain. For the
purpose of our simulation model, the relevant knowl-
edge is captured in (i) a security model and (ii) a sys-
tem model.
The security model formally specifies how informa-
tion systems can be attacked and how they can be pro-
tected from attacks by means of security controls. The
system model is the design artifact being evaluated and
improved through simulated attacks. It includes tangi-
ble (e.g., hardware components, buildings, employees)
and intangible assets (e.g., data, software, policies) and
specifies their relationships and configuration. For its
definition, we partly reuse concepts from an existing
infrastructure sub-ontology introduced in [9].
Potential attack vectors are formalized in the se-
curity model as abstract actions, irrespective of how
they will be applied by an attacker in a particular con-
text. This part of the knowledge base can be stored
in a centralized repository and shared among multiple
organizations, each of which can automatically relate
this knowledge to their own modeled systems. Domain
experts are hence only required for the definition and
maintenance of the security model.
2.1.1 Security Model
The security model defines “atomic” attack patterns
that are linked automatically based on shared pre- and
postconditions, thus spanning an abstract attack graph.1
In order to apply this generic security knowledge in
a particular context, the abstract patterns are com-
bined with concrete system element instances during
the simulation through target asset types specified for
each action (e.g., port scan applies to asset type port,
sql injection applies to DBServer, etc.). Hence, the sim-
ulation discovers concrete attack paths at execution
time rather than enumerating all possible attacks in
advance.
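To illustrate the linking principle, the following minimal Python sketch (our illustration, not the actual Java implementation; the pattern representation and the example names `port_scan` and `sql_injection` are assumptions) connects two patterns whenever one's postconditions overlap another's preconditions:

```python
from dataclasses import dataclass

# Hypothetical, simplified representation of "atomic" attack patterns:
# pre- and postconditions are sets of (assetType, property, value) triples.
@dataclass(frozen=True)
class Pattern:
    name: str
    preconditions: frozenset
    postconditions: frozenset

def link_patterns(patterns):
    """Connect pattern A to pattern B whenever one of A's postconditions
    satisfies one of B's preconditions (shared pre-/postcondition linking)."""
    edges = set()
    for a in patterns:
        for b in patterns:
            if a is not b and a.postconditions & b.preconditions:
                edges.add((a.name, b.name))
    return edges

port_scan = Pattern("port_scan",
                    frozenset({("Computer", "reachable", True)}),
                    frozenset({("Port", "known", True)}))
sql_injection = Pattern("sql_injection",
                        frozenset({("Port", "known", True),
                                   ("DBServer", "vulnerable", True)}),
                        frozenset({("DBServer", "access", "attacker")}))

print(link_patterns([port_scan, sql_injection]))
# {('port_scan', 'sql_injection')}
```

The resulting edge set spans the abstract attack graph; matching against concrete asset instances is deferred to simulation time, as described above.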
Existing catalogs can serve as a reference in order
to define a consistent and reasonably complete set of
such patterns. For the network and software systems do-
main, the publicly available and community-developed
1 A detail of an example graph is included in the appendix.
Fig. 1 Simulation Framework (Overview) [diagram: a knowledge base (security model, system model) feeds attack pattern linking into an abstract attack graph; together with the attack scenario (attacker model, attacker objectives), this drives the attack simulation engine, which produces the results]
Common Attack Pattern Enumeration and Classifica-
tion (CAPEC) repository [19] is a particularly valu-
able resource. It contains semi-structured descriptions
of common attack techniques that can be integrated in
our ontology. Other potential sources from which formal
attack patterns can be derived include OWASP [25],
CVE [20], security standards [3,13,23], and various pen-
etration testing catalogs.
Each pattern (an example is provided in Fig. 2) is
structured around an action characterized by an execu-
tion time, a maximum number of permissible attempts
maxTries, and a flag that indicates whether the action
can be carried out concurrently with other actions. Fi-
nally, each action is assigned a baseProbability of suc-
cessful execution. In some cases, this probability can
be derived deterministically (e.g., brute force password
cracking with given length, alphabet size, resources, and
time). In most cases, however, the probability can only
be estimated due to limited empirical data on attack
incidence rates and success rates of particular actions.
Therefore, it is necessary to resort to domain expert
judgment codified in existing knowledge bases which
provide a good starting point for deriving baseProbabil-
ity estimates. CAPEC, for example, specifies the typical
likelihood of exploit on a qualitative scale. The Common
Vulnerability Scoring System (CVSS) [18] provides rat-
ings on the exploitability of specific technical vulnera-
bilities. In the context of web security, OWASP specifies
the ease of exploit of various vulnerabilities.

Fig. 2 Attack Pattern [schema: an Action (name, description, targetAssetType, executionTime, simultaneous, baseProbability, maxTries, impactAttributes) with 1..n Preconditions and 1..n Postconditions, each referring to an asset type and 0..n Properties with quantified restrictions (exactly | min | max | not); associated Controls carry controlType (preventive | detective), controlAggregationType, responseType (immediate | delayed), outcome, and implementation attributes]

In the sample simulation scenarios in Section 4, we use probability
estimates derived from practical experience by domain
experts in penetration testing.
Preconditions define requirements for an action to be
viable in a particular context. Each precondition refers
to a specific asset type and specifies 0..n required prop-
erty states. A property consists of a property name, a
restriction, and the property value. The quantifiers min
and max are used to impose restrictions.
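A property restriction of this form can be evaluated as in the following sketch (an assumption-laden simplification; the quantifier names mirror the pattern schema, everything else is illustrative):

```python
def satisfies(restriction, actual):
    """Evaluate one property restriction against an asset's actual value.
    restriction = (quantifier, value) with quantifier in
    {"exactly", "not", "min", "max"}, mirroring the pattern schema."""
    quantifier, value = restriction
    if quantifier == "exactly":
        return actual == value
    if quantifier == "not":
        return actual != value
    if quantifier == "min":
        return actual >= value
    if quantifier == "max":
        return actual <= value
    raise ValueError(f"unknown quantifier: {quantifier}")

# An FTP demon precondition: softwareVersion at most 1.3
print(satisfies(("max", 1.3), 1.2))   # True
print(satisfies(("max", 1.3), 2.0))   # False
# A patch must NOT be installed for the asset to be vulnerable
print(satisfies(("not", "ftp_patchX"), "ftp_patchX"))  # False
```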
Fig. 3 exemplifies this for a simplified attack pattern
for the FTP vulnerability CVE-2006-5815². The target
is a computer that must satisfy the following property
conditions in order to be susceptible: Linux must be in-
stalled as an operating system, the vulnerable FTP de-
mon must be installed, and a particular patch must not
have been applied. Finally, the attacker needs an attack
client of type Computer. In order to launch the attack,
the attacker must have access to the attack computer
and the system must be connected to the target com-
puter via the connectedTo relationship that represents
links in a network topology. Semi-structured knowledge
bases such as CAPEC, which verbally describe attack
prerequisites, are a valuable source for such terms and
connections.
Postconditions define possible outcomes of an action.
They determine the state transitions that occur at sim-
ulation execution time via onSuccess, onFail, and generic
attack action consequence paths. Successful execution
may, for example, allow the attacker to obtain root ac-
cess privileges on a target computer whereas a failed
attempt may disconnect it from the network. Generic
postcondition paths are used to model additional out-
comes, such as consequences upon detection.
Apart from specifying the outcome of an action from
an attack logic perspective, it is also necessary to keep
track of the impact generated by an attack. This im-
pact is a function of both the executed action and the
asset being attacked. The former determines which se-
curity attributes are affected whereas the latter deter-
mines the “criticality” of the impact. The example in
Fig. 3 defines an impact on the security attribute ‘con-
fidentiality’. The impact criticality is determined from
asset valuations specified in the system model. Upon
successful execution of the example pattern, the confi-
dentiality rating of the target computer determines the
actual impact caused.
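This two-factor impact calculation can be sketched as follows (a simplified illustration under our own naming assumptions, not the tool's actual code):

```python
def impact(action_impact_attributes, asset_ratings):
    """Impact of an executed action on an attacked asset: the action
    determines WHICH security attributes are affected, while the asset's
    criticality ratings determine HOW SEVERE the impact is."""
    return {attr: asset_ratings.get(attr, 0)
            for attr in action_impact_attributes}

# cve_2006_5815 affects confidentiality; the target computer is rated
# on a 1..3 scale (asset valuations come from the system model).
target_ratings = {"confidentiality": 3, "availability": 2}
print(impact(["confidentiality"], target_ratings))  # {'confidentiality': 3}
```

Because the attribute list and the rating scale are both open-ended, the same mechanism accommodates, e.g., a denial-of-service action that affects only availability.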
Our model supports multiple security attributes and
arbitrary rating scales. It is hence possible, for example,
to specify that the action accessData affects the confi-
dentiality of an asset whereas a denial of service action
affects its availability. The CAPEC section CIA Impact
can be used as a source for affected security attributes.
Security Controls can be subdivided into preventive and
detective controls. Each control may be applied to spe-
cific system assets as a means to raise the difficulty of
or to detect attacks, respectively. Controls are charac-
terized by attributes that define how they can be de-
ployed in a system and an effectiveness rating.

2 http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2006-5815

Fig. 3 Attack Pattern Example [diagram: the action cve_2006_5815 targets asset type Computer and impacts confidentiality; preconditions require os_linux and the ftp_demon (softwareVersion max 1.3, runsWith Permission_group) installed on the target, ftp_patchX not installed, and an attack_client of type Computer connected to the target and accessible to the attacker; the success postcondition grants the attacker access to the target]

An antivirus control, for example, is specified as 'Computer
{installed} (AntiVirusSoftware) [effectiveness] ’, which
expresses the following rule: if a modeled computer in-
stance in the system model (e.g., Computer 001 ) is con-
nected via the installed relation to an instance of the
AntiVirusSoftware concept (e.g., AntiVirusSoftware 003 ),
then an implementation of the antivirus control is present
on this computer.
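The rule stated above amounts to a simple relation-traversal check, which can be sketched as follows (illustrative only; we represent the system model as a flat triple list, which is our assumption rather than the tool's OWL-backed representation):

```python
def control_present(asset, relation, control_concept, triples, types):
    """Check whether a control rule such as
    'Computer {installed} (AntiVirusSoftware) [effectiveness]'
    is implemented on a concrete asset instance: the instance must be
    connected via the given relation to an instance of the control concept."""
    return any(s == asset and r == relation and types.get(o) == control_concept
               for (s, r, o) in triples)

# System model fragment: Computer_001 has antivirus software installed.
triples = [("Computer_001", "installed", "AntiVirusSoftware_003")]
types = {"Computer_001": "Computer",
         "AntiVirusSoftware_003": "AntiVirusSoftware"}
print(control_present("Computer_001", "installed",
                      "AntiVirusSoftware", triples, types))  # True
```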
2.1.2 System Model
Anything of value for an organization – tangible or in-
tangible – is represented under the Asset top-level con-
cept. For instance, the concept Computer can be used
to model computer systems in a network. Additional
subconcepts, such as DBServer or WebServer add fur-
ther levels of detail. Relationships are used to make
statements about the system configuration. The rela-
tionship installed between a computer and a software
asset, for instance, specifies that the particular software
is installed on the machine whereas connected repre-
sents network links between computers. These relation-
ships provide a flexible and expressive mechanism that
does not rely on strict containment hierarchies (which
are considered problematic in system security models,
cf. [27]). Furthermore, details about assets can be added
through datatype properties (e.g., softwareVersion to
assign a specific version number).
In order to determine the impact of attacks in the
simulation, assets must be rated with respect to the rel-
evant security attributes. If available, these ratings can
be derived from existing impact analyses or asset criti-
cality reports. They can be specified as monetary values
or categories of a qualitative scale. NIST SP 800-39 [23],
for instance, describes the magnitude of impact by rat-
ings on a three-point Likert scale (1=low, 2=medium,
3=high). As an example, the confidentiality criticality
value for asset Data 002 is defined as follows:
<DataPropertyAssertion>
  <DataProperty IRI="#importanceConfidentiality"/>
  <NamedIndividual IRI="#Data_002"/>
  <Literal datatypeIRI="&xsd;integer">3</Literal>
</DataPropertyAssertion>
Control instances are also modeled as Asset sub-
classes. To deploy an antivirus control on Computer 001,
as in the example before, we link the Computer and
AntiVirusSoftware assets with the installed relation as
follows:
<ObjectPropertyAssertion>
  <ObjectProperty IRI="#installed"/>
  <NamedIndividual IRI="#Computer_001"/>
  <NamedIndividual IRI="#AntiVirusSoftware_003"/>
</ObjectPropertyAssertion>
The effectiveness of AntiVirusSoftware 003 is ex-
pressed by the effectiveness relation.
<DataPropertyAssertion>
  <DataProperty IRI="#effectiveness"/>
  <NamedIndividual IRI="#AntiVirusSoftware_003"/>
  <Literal datatypeIRI="&xsd;integer">3</Literal>
</DataPropertyAssertion>
2.2 Attack Pattern Linking
The knowledge stored in the knowledge base is har-
nessed by means of an attack pattern linking compo-
nent that queries the knowledge base and connects se-
quences of potential actions. For example, the successful
execution of an SQL injection may provide an attacker
with access to a database server, which in turn may al-
low him or her to access the hosted database directly.
Given a particular attack scenario, patterns are linked
automatically to construct full abstract attack graphs for the respective attack objective (cf. Section 2.4).
2.3 Attack Scenario
Understanding potential attackers’ motivation and goals
is a prerequisite for sound risk assessments and a major
determinant of an appropriate system design. Attack
scenarios explicate the designers’ assumptions regard-
ing attackers’ characteristics and objectives. Whereas
attackers are typically classified based on natural lan-
guage descriptions in the literature (e.g., external, in-
ternal, government, secret services [26]), we take advan-
tage of our formal approach to allow for more specific
attacker profiles.
In our model, attackers do not necessarily have com-
plete knowledge of the assets that they plan to attack.
Insiders may be modeled with a comprehensive under-
standing of the environment whereas an external at-
tacker may initially only see assets that can be reached
externally (e.g., a webserver). Relationships that spec-
ify knowledge of particular assets (knows), available
equipment (access), or access privileges (hasCredentials)
are available for modeling attackers’ characteristics with
respect to the attacked system.
Furthermore, attackers are modeled with attributes
to capture characteristics such as knowledge and skills
required for certain actions. To control attacker behav-
ior in the simulation, attributes such as monetary and
time budget, motivation, and risk preferences are speci-
fied. Finally, the attacker model references a behavioral
model implementation that determines attack strategy
and mechanisms for choosing routes of attacks.
An attacker model is complemented with an attack
objective, which is defined as a set of property states
that the attacker aims to achieve. An attacker thus may,
for instance, try to obtain access to a particular Data
asset.
2.4 Abstract Attack Graph
Given a particular attack scenario and the abstract at-
tack chains provided by the pattern linker, the attack
graph constructor generates a map of possible abstract
paths that lead to the attacker’s given objective. This
abstract graph is still not tied to a particular context
and its construction is therefore computationally feasi-
ble irrespective of the size of the modeled system. The
system being attacked is discovered at simulation exe-
cution time by iteratively matching abstract attack pat-
terns to concrete asset instances. The abstract graph
hence serves as a map that outlines the relevant depen-
dencies and helps to discover paths to reach the desired
goal at execution time.
2.5 Attack Simulation Engine
The simulation engine is the core component in our
framework. It performs probabilistic attack analyses on
a modeled system by performing a number of attack
simulations with varying sets of random seeds and record-
ing selected outcome variables for each replication.
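The replication mechanism can be sketched as follows (a toy illustration; `toy_run` and all parameter values are our placeholders, not part of the actual engine):

```python
import random

def run_replications(single_run, n_replications, base_seed=0):
    """Run n independent attack simulations, one RNG seed per replication,
    recording the selected outcome variables returned by each run."""
    results = []
    for i in range(n_replications):
        rng = random.Random(base_seed + i)  # reproducible per-replication seed
        results.append(single_run(rng))
    return results

# Toy stand-in for one simulation replication: did the attacker succeed,
# and how long did the attack take?
def toy_run(rng):
    return {"target_reached": rng.random() < 0.25,
            "duration": rng.uniform(10, 100)}

outcomes = run_replications(toy_run, 500)
success_rate = sum(o["target_reached"] for o in outcomes) / len(outcomes)
```

Fixing the seed set makes scenario comparisons reproducible: re-running a scenario with the same seeds yields identical outcome records.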
The simulation follows a discrete-event approach,
i.e., each simulation event (e.g., start and end of an
attack action) occurs at a specific point in continuous
time. Hence, the simulation execution does not follow
any fixed sequence of steps, but is rather controlled by
attacker behavior and the scheduling of events on a
timeline. This approach is highly flexible and can cap-
ture complex temporal interactions between action out-
comes and control responses.
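The core of such a discrete-event timeline can be sketched with a priority queue (a minimal stand-in for the scheduling the engine delegates to MASON, cf. Section 3; the event handlers and the duration value are illustrative assumptions):

```python
import heapq
import itertools

class EventQueue:
    """Minimal discrete-event timeline: events occur at points in
    continuous time and are executed in chronological order."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker for equal times
        self.now = 0.0

    def schedule(self, time, handler):
        heapq.heappush(self._heap, (time, next(self._counter), handler))

    def run(self):
        while self._heap:
            self.now, _, handler = heapq.heappop(self._heap)
            handler(self)

log = []

def action_start(q):
    log.append(("ActionStart", q.now))
    q.schedule(q.now + 4.2, action_end)  # illustrative effective duration

def action_end(q):
    log.append(("ActionEnd", q.now))

q = EventQueue()
q.schedule(0.0, action_start)
q.run()
print(log)  # [('ActionStart', 0.0), ('ActionEnd', 4.2)]
```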
Successful attack steps may yield new actions or in-
validate preconditions for previously available actions.
A failed exploit action may, for instance, cause a sys-
tem to crash and render it unavailable for certain attack
actions.
Control response timing is based on the effectiveness
ratings and reaction times of detective controls. The
simulation determines at execution time whether and
when a control is triggered and whether it will affect
the attacker’s efforts. In some simulation replications,
the attacker may complete the attack before a detective
control is activated, while in others the control response
could interfere with the attack path and force the attacker
to attempt a different approach. An intrusion preven-
tion system (IPS) may, for instance, block the attacker’s
source address. If this invalidates the preconditions of a
certain attack, then the attacker loses potential attack
actions and has to continue with the remaining actions.
Likewise, a control response might enable new attack
actions (e.g., an IPS reaction may lead to a service dis-
ruption). The following section lists the types of events
used in the simulation; Section 2.5.2 then describes the
mechanisms triggered by these events.
2.5.1 Events
The beginning of each individual attack action is rep-
resented by an Action Start Event. Upon execution, in-
stances of this event type determine the effective dura-
tion of the attack action and accordingly schedule an
Action End Event. The effective duration depends on
factors such as difficulty, attacker skills, preventive con-
trols applied on the asset being attacked, and random-
ness due to inherent variability. When detective controls
are associated with the asset being attacked, Detection
Events may be scheduled immediately or with a ran-
dom delay, as specified in the control model. If the at-
tack target is reached after the current action has been
executed, a Target Reached Event is scheduled immedi-
ately. Detection Events may also completely terminate
an attack by scheduling an Attacker Stopped Event.
2.5.2 Simulation Mechanisms
The target selection algorithm implements the attacker’s
behavioral model. The particular algorithm used and
the behavioral parameters may vary among attackers.
Each target selection algorithm has access to the ab-
stract attack graph that can be interpreted as an at-
tacker’s mental map of possible attack strategies. In a
particular execution context, the engine generates spe-
cific attack actions by discovering valid asset assign-
ments for all preconditions of an abstract action. A
valid asset assignment is an action-asset combination
that satisfies all preconditions of the action (i.e., all
property restrictions of all preconditions).
Moreover, the behavioral model uses an Attack Trace
Tree to keep track of results of executed concrete attack
actions (i.e., actual action-asset combinations). In this
tree, each node represents an action that has been (in
case of an inner node) or can be (in case of a leaf node)
executed. An inner node's child nodes represent the ac-
tions that have become available due to the outcome
of this particular action. The forthcoming Section 4 in-
cludes an example tree chosen from a sample scenario.
The Action Selection mechanism returns the next
action performed by the attacker. For the sample sce-
nario, we used a random depth-first target selection
model specified in Algorithm 1. The algorithm uses
the following variables: success indicates whether the
previous action was successful; new is a set of actions
that have become available due to the outcome of the
previous action; siblings is the set of actions sharing
the same predecessor action in the attack history tree;
all is the set of currently available attack actions. The
random choice function rc(p) returns true with proba-
bility p (and false otherwise) and function draw(A) re-
turns an action drawn randomly from the supplied set A
of actions. The parameters pContinueWithNewActions,
pTryNeighborOnNoNewActions, pRetryFailedAction and
pTryNeighborOnFailure may be adjusted to appropri-
ate values for particular attackers.
Algorithm 1 Random depth-first target selection
 1: if success then
 2:   if |new| > 0 then
 3:     if rc(pContinueWithNewActions) then
 4:       return draw(new)
 5:     else
 6:       return draw(all)
 7:     end if
 8:   else if rc(pTryNeighborOnNoNewActions) then
 9:     return draw(siblings)
10:   else
11:     return draw(all)
12:   end if
13: else if rc(pRetryFailedAction) then
14:   return previous action
15: else if rc(pTryNeighborOnFailure) then
16:   return draw(siblings)
17: else
18:   return draw(all)
19: end if
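The selection logic of Algorithm 1 can be rendered directly in Python (our illustrative translation, not the actual Java implementation; function and parameter names mirror the variables defined above):

```python
import random

def select_next_action(success, new, siblings, all_actions, previous,
                       params, rng):
    """Random depth-first target selection (Algorithm 1). `params` holds
    the four behavioral probabilities; rc(p) is a biased coin flip."""
    rc = lambda p: rng.random() < p
    draw = lambda actions: rng.choice(sorted(actions))  # sorted for determinism
    if success:
        if len(new) > 0:
            if rc(params["pContinueWithNewActions"]):
                return draw(new)
            return draw(all_actions)
        if rc(params["pTryNeighborOnNoNewActions"]):
            return draw(siblings)
        return draw(all_actions)
    if rc(params["pRetryFailedAction"]):
        return previous
    if rc(params["pTryNeighborOnFailure"]):
        return draw(siblings)
    return draw(all_actions)

params = {"pContinueWithNewActions": 1.0, "pTryNeighborOnNoNewActions": 0.5,
          "pRetryFailedAction": 1.0, "pTryNeighborOnFailure": 0.5}
rng = random.Random(42)
# After a successful action that opened exactly one new action, a fully
# depth-first attacker (p = 1.0) always continues with the new action:
print(select_next_action(True, {"sql_injection"}, set(),
                         {"sql_injection", "port_scan"}, None, params, rng))
# sql_injection
```

Setting the four probabilities to different values yields, e.g., cautious attackers that rarely retry failed actions or opportunists that jump between branches of the attack trace tree.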
As noted above, each action execution is triggered
by an Action End Event. Upon execution of such an
event, an Action Executor instance is created which
determines the Execution Result.

Fig. 4 Event Types and Scheduling of Events [timeline diagram: Action Selection (t=0) schedules an Action Start Event, which triggers Action Execution and an Action End Event; Detection Events trigger Responses; runs end with a Target Reached Event or an Attacker Stopped Event]

The execution result contains information about (i) success of the executed
action, as determined based on an action-specific prob-
ability specified in the attack model, (ii) actions that
are no longer available, (iii) new actions that have be-
come available, and (iv) whether the attack target has
been reached. The result in turn drives the selection of
subsequent actions via the Action Selection mechanism.
Detective controls define two possible outcomes when
attacks are detected: (i) Stop, i.e., the simulation run
terminates, or (ii) a defined path to a postcondition
(modeled in the abstract attack graph) is executed.
These postconditions may change property values in
the system model. When an intrusion detection system
blocks the attacker’s source address, for instance, it in-
hibits certain attack actions but may potentially also
enable new attack actions.
3 Implementation
For our prototypical implementation, we used ontolo-
gies for the knowledge base and Java for the imple-
mentation of the attack simulation components. Some
details are provided in the following.
Knowledge Base The system model and the attack and
control model are both implemented in OWL 2. Us-
ing editors such as Protege3, the modeled knowledge
can easily be shared with a community of users. To
retrieve specific actions in the context of a particular
system, our implementation transforms attack patterns
into SPARQL queries on the system model. Reading
data and writing states back to the knowledge base
is accomplished via the Java OWL API4. The Java
3 Protege: http://protege.stanford.edu
4 OWL API: http://owlapi.sourceforge.net
Universal Network/Graph framework JUNG25 is used
to visualize the resulting attack patterns and attack
graphs.
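Such a pattern-to-query transformation can be sketched as follows (purely illustrative: the `sec:` vocabulary, the helper name, and the generated query shape are our assumptions, not the tool's actual SPARQL output):

```python
def pattern_to_sparql(target_type, property_restrictions):
    """Translate an attack pattern's target precondition into a SPARQL
    query over the system model (vocabulary names are assumptions)."""
    lines = ["SELECT ?target WHERE {",
             f"  ?target rdf:type sec:{target_type} ."]
    for prop, quantifier, value in property_restrictions:
        if quantifier == "exactly":
            lines.append(f"  ?target sec:{prop} sec:{value} .")
        elif quantifier == "not":
            # absence of a relation, e.g. a patch that must NOT be installed
            lines.append(
                f"  FILTER NOT EXISTS {{ ?target sec:{prop} sec:{value} }}")
    lines.append("}")
    return "\n".join(lines)

# Target precondition of the CVE-2006-5815 example pattern (Section 2.1.1):
query = pattern_to_sparql("Computer", [
    ("installed", "exactly", "os_linux"),
    ("installed", "exactly", "ftp_demon"),
    ("installed", "not", "ftp_patchX"),
])
print(query)
```

Executing such a query against the system model returns exactly the asset instances that constitute valid assignments for the pattern's target precondition.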
Attack Simulation The attack simulation requires effi-
cient means for maintaining and processing a timeline of
events. To this end, we use the scheduling mechanisms
provided by MASON6, a fast discrete-event simulation
core written in Java.
4 Example Scenarios
The example presented in this section is from the net-
work domain and illustrates the applicability of our
modeling and simulation approach.
4.1 Domain Description
First, we define concepts and relations relevant for a
system model. The model instance for the example sce-
narios includes Computers and installed Software. Soft-
ware contains subconcepts such as Operating Systems
or Control Software (e.g., AntiVirusSoftware and Intru-
sionDetectionSoftware). Data is represented as an orga-
nizational asset stored on a particular Server and man-
aged by database software. We further consider User
Credentials and Permission Groups. In total, the high-
level system model ontology contains 36 classes and 43
properties.
Next, we develop a security model that is based on
an existing penetration testing catalog. It comprises
attack patterns (e.g., brute force, port scan, sniffing
5 JUNG2: http://jung.sourceforge.net
6 MASON: http://cs.gmu.edu/~eclab/projects/mason
credentials with and without encryption, spider web
server, SQL injection, and SSL testing) and legitimate
actions that can be used maliciously (e.g., connecting
to a computer with valid credentials or accessing data).
Finally, the model is enriched with security con-
trol definitions. Controls include, for example, antivirus
software, patches for specific software, intrusion detec-
tion systems, password quality (charset and length),
and the definition for the maximum number of login
tries.
Based on concepts and relations in the system model,
an organization creates instances that reflect its par-
ticular infrastructure. For our sample scenario, an au-
tomatically generated synthetic system model (includ-
ing implemented controls) was used. It contains 270
instances, including 50 computers, 4 web servers, 2
database servers, and 20 employees; 1,404 axioms pro-
vide information about classes, instances, and relations.
Fig. 5 depicts the generated system; note that in or-
der to improve clarity, details such as connections be-
tween systems, installed software etc. are not shown. Fi-
nally, criticality ratings are assigned to asset instances
as described in Section 2.1.2. In this scenario, we rate
confidentiality criticality on a 3-level qualitative scale
as follows: all data instances as well as the hosting
database servers are highly critical; web server, com-
puter instances, and user credentials are rated either
low or medium.
Fig. 5 Scenario System Model Overview
4.2 Simulation Setup
For the first simulation scenario, we assume an external
attacker with an attack client connected to the Inter-
net. The web server WS 1 serves as the attacker’s entry
point to the company’s internal network and is the only
asset that this attacker has knowledge of. As this sce-
nario represents the baseline for our risk analysis, no
security controls are engaged. The second scenario re-
sembles the first, but has security controls in place. In
the final scenario, we simulate an internal attacker who
has access to company equipment (Client 1 ), and ex-
tensive knowledge about internals such as user account
names. This assumption is reflected in the ontology by
knows, access, and hasCredentials relations. In all sim-
ulations, the attackers’ objective is access to Data 1.
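The three scenario configurations described above can be summarized in a small data structure. The relation names knows, access, and hasCredentials follow the ontology; all other keys and values are hypothetical illustrations:

```python
# Each scenario fixes the attacker's entry point, initial knowledge,
# and whether security controls are engaged; the goal is always Data 1.
scenarios = {
    "external_no_controls": {
        "entry": "Attack Client (Internet)",
        "knows": ["WS 1"],          # only the public web server
        "access": [],
        "hasCredentials": [],
        "controls_engaged": False,
        "goal": "Data 1",
    },
    "external_with_controls": {
        "entry": "Attack Client (Internet)",
        "knows": ["WS 1"],
        "access": [],
        "hasCredentials": [],
        "controls_engaged": True,
        "goal": "Data 1",
    },
    "internal_with_controls": {
        "entry": "Client 1",
        "knows": ["WS 1", "WS 2", "user account names"],
        "access": ["Client 1"],              # company equipment
        "hasCredentials": ["employee account"],
        "controls_engaged": True,
        "goal": "Data 1",
    },
}

assert all(s["goal"] == "Data 1" for s in scenarios.values())
```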
4.3 Results
For each scenario, 500 simulation replications were exe-
cuted with an average runtime of 8.6 seconds for a single
replication on an Intel i5 2.60 GHz processor; hence, the
average runtime for a complete scenario was approxi-
mately 70 minutes. Table 1 summarizes results for the
three scenarios.
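The replication setup can be sketched as a simple Monte Carlo loop. The run_attack_simulation function below is a hypothetical stand-in for the discrete-event engine; for illustration it merely mimics the engine's result record using the empirical success rate of the second scenario:

```python
import random

def run_attack_simulation(scenario: str, rng: random.Random) -> dict:
    # Stand-in for one discrete-event replication; a real run would
    # simulate attack actions step by step. We only reproduce the
    # shape of its result record (success rate of scenario 2: ~7%).
    reached = rng.random() < 0.07
    return {"target_reached": reached}

rng = random.Random(42)  # fixed seed for reproducible replications
replications = 500
results = [run_attack_simulation("external_with_controls", rng)
           for _ in range(replications)]
success_rate = sum(r["target_reached"] for r in results) / replications
print(f"target reached in {success_rate:.1%} of runs")
```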
As can be expected, the external attacker always
reaches the goal in the first scenario in which no con-
trols are in place and no time and resource restrictions
exist. Compared to the other scenarios, the number of
executed attack actions is considerably higher, because
the attacker is never stopped by controls and hence con-
tinues to explore new paths until the target is reached.
The attack duration is also lower than in the two other
scenarios due to the lack of controls which typically
slow the attacker down (as described in Sect. 2.5). The external attacker causes the highest confidentiality impact among all attackers. This can be partly explained by the fact that the attacker reaches the highly rated target in every simulation run, but also by the confidentiality impact caused in intermediary steps while attempting to reach the target.
In the second scenario, in which controls have been
engaged, the attacker reaches the target without being
detected in 35 out of 500 simulation runs (i.e., 7%). The
attack is most often detected by the intrusion detection
system on WS 1. The implemented control set slows the
attacker down considerably. Even though the number of
successful actions is high, the confidentiality impact is
the lowest in all simulated scenarios. This is due to the
control set that frequently prevents the attacker from
reaching highly critical assets.
Finally, the internal attacker in the third scenario
is more successful than the external attacker, reaching
the target in 127 out of 500 simulation runs (25.4%).
Moreover, compared to the external attacker, the target
condition is typically reached much faster. This can be
attributed to the fact that the internal attack already
starts closer to the target and the attacker already pos-
sesses knowledge about the system. While the external
attacker has to attack WS 1, the internal attacker can
seek an alternative, less secured path to the target data
(e.g., via WS 2). The internal attacker causes fewer low and medium confidentiality impacts than the external attacker, but more high impacts.

Table 1 Simulation Results (500 replications; per scenario: sum / avg / max)

Metric                          External w/o controls    External w/ controls    Internal w/ controls
Target reached                  500 / 1.00 / 1           35 / 0.07 / 1           127 / 0.25 / 1
Detection-stop                  0 / 0.00 / 0             465 / 0.93 / 1          373 / 0.75 / 1
Attack duration (h)             7117 / 14.23 / 66        40605 / 81.21 / 7086    15946 / 31.89 / 187
Succ. attack dur. (h)           7117 / 14.23 / 66        40018 / 81.50 / 7086    15887 / 31.97 / 187
Total actions                   26189 / 52.38 / 235      9588 / 19.18 / 457      5813 / 11.63 / 59
Successful actions              10382 / 20.76 / 86       4404 / 8.81 / 251       2233 / 4.47 / 23
Failed actions                  15807 / 31.61 / 193      5184 / 10.37 / 206      3580 / 7.16 / 40
Activated controls              0 / 0.00 / 0             1388 / 2.78 / 32        496 / 0.99 / 8
Confidentiality impact high     1146 / 2.29 / 11         113 / 0.23 / 8          310 / 0.62 / 7
Confidentiality impact medium   1134 / 2.27 / 24         423 / 0.85 / 19         367 / 0.73 / 4
Confidentiality impact low      2598 / 5.20 / 22         896 / 1.79 / 36         808 / 1.62 / 9

Fig. 6 Attack History Tree (internal attacker, pruned)
Fig. 6 shows the history of a single simulation repli-
cation for Scenario 3 (i.e., the scenario with the internal
attacker) in an attack trace tree in order to illustrate
the attack paths followed by the simulated attacker.
For the sake of clarity, only executed actions are shown
(the complete graph of all actions available to an at-
tacker can contain thousands of nodes). At first, the
simulated attacker tries to execute an attack against
the FTP software on WS 1. Upon failure of this at-
tempt, the attacker chooses an alternate path and con-
nects to another internal client Computer 4 to attempt
a brute force attack, which also fails. By means of SSL
testing, the attacker learns about the SSL configura-
tion of WS 2. Exploiting a weak SSL configuration, the
attacker starts to sniff information on the internal Net-
work 1 and thereby gains access to the credentials of
Employee 19, who is connected to the internal Word-
press Software on WS 2. As a consequence, the attacker
may log in to the Wordpress Reader group, and access
the target Data 1.
Our tool allows security analysts to trace the rea-
sons for successful (simulated) attacks by exploring the
generated attack graphs. Furthermore, analysts may
test various control configurations and compare their
combined effects for different attackers. Alterations of
the system model itself – such as, for instance, switch-
ing operating systems or restructuring the network –
can also be investigated.
5 Related Work
Various approaches for the assessment and management
of information and communication security have been
developed so far. In particular, industry standards (e.g.,
[3,13,23]) that provide guiding procedures for risk as-
sessment and the implementation of security manage-
ment practices have seen widespread adoption among
practitioners in recent years. These standards tend to
focus on assessing vulnerabilities, implementing proper
procedures, and assuring compliance. The design- and
testing-oriented modeling and simulation tool introduced
in this paper can be seen as a complement to the typi-
cally defender-centric tool set used in such standards. It
supports dynamic risk analyses and provides a basis for
trade-off decisions in IT security management. Based on
an adversary-centric design perspective, it conceives se-
curity as a dynamic, socio-technical phenomenon rather
than a binary property of a technical system that can
be achieved by eliminating all weaknesses. In the con-
text of our research, we consider the literature on attack
trees, attack graphs, and attack simulations as the main
strands of related work.
Attack Trees Attack trees provide a methodical approach
to hierarchically model different ways in which an at-
tacker can achieve a certain outcome [17,31]. Similar
to our approach, attack trees are based on the idea of
analyzing the security of a system from an attacker’s
perspective. The tree-based structure specifies an at-
tack scenario in which the root node represents the
attacker’s goal and paths from leaf nodes to the root
represent different ways of achieving this goal. The leaf nodes are grouped by logical and- and or-relations.
Several extensions and methods for constructing such
trees efficiently and calculating security metrics have
been proposed (e.g., [2,4,14,34]). In practice, however,
the modeling of attack trees is still typically a labor-
intensive manual ad-hoc process. Our approach codi-
fies reusable knowledge in an abstract knowledge base
only once and applies that security domain knowledge
dynamically to system model instances.
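The and/or semantics of attack trees described above can be sketched as a small recursive evaluator. This is a generic illustration of the classical formalism, not the authors' implementation; node and step names are invented:

```python
def attack_succeeds(node, achieved_leaves):
    """Evaluate an attack tree bottom-up.

    A node is either a leaf (string naming a basic attack step) or a
    tuple ("and" | "or", [children]).
    """
    if isinstance(node, str):
        return node in achieved_leaves
    op, children = node
    results = (attack_succeeds(c, achieved_leaves) for c in children)
    return all(results) if op == "and" else any(results)

# Root goal: obtain the data, either by direct server compromise or
# by stealing credentials AND logging in with them.
tree = ("or", [
    "exploit server",
    ("and", ["steal credentials", "log in"]),
])

print(attack_succeeds(tree, {"steal credentials", "log in"}))  # True
print(attack_succeeds(tree, {"steal credentials"}))            # False
```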
Moreover, approaches that aim at automatically con-
structing attack trees are confronted with an exponen-
tial search space which considerably limits their appli-
cability. We avoid this issue by specifying only abstract
patterns that are linked to form an abstract graph that
is computationally manageable. During the simulation
execution, we expand this graph iteratively with respect
to the specific context.
Attack Graphs Attack graphs are a related, but distinct
concept. Whereas an attack tree describes a particu-
lar attack scenario, attack graphs model a system as
a finite state machine [1,24,30]. Nodes represent states
with respect to security properties and edges represent
attack actions that cause the transition of states [16,
28]. As in our model, attack actions may be specified
as abstract patterns that provide a generic representa-
tion of a deliberate, malicious attack that commonly
occurs in specific contexts [21]. Our approach extends
the concept of attack patterns to capture a higher level
of detail with automated analysis in mind. To this end,
a meta-model is used that describes the elements and
connections of attack patterns, and new aspects rele-
vant for attack simulation are introduced.
Model-checking-based approaches have to cope with an exponential search space and, despite efforts to reduce computational complexity [24], attack graph approaches are still impractical for analyzing large networks by means of complete enumeration of all possible attack paths [16]. Our solution to this issue is based on
the notion of constructing an attack graph as a dynamic
process that is driven by attackers’ iterative step-by-
step decisions. An advantage of this approach is that we are not bound to monotonicity assumptions, which require, for instance, that privileges gained by an attacker cannot be lost. We can specify alternate result paths for attack actions, including outcomes of control detection and failure, and we explicitly account for situations in which an attacker reverses states or alters the system in a way that renders certain attacks impossible.
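The non-monotonic state transitions discussed above can be illustrated with a minimal precondition/postcondition sketch (all action names and state facts are hypothetical):

```python
# Each action has preconditions, facts it adds, and facts it removes;
# removals make the model non-monotonic: gained privileges can be lost.
actions = {
    "phish credentials": {
        "pre": {"network access"},
        "add": {"user credentials"},
        "remove": set(),
    },
    "log in": {
        "pre": {"user credentials"},
        "add": {"shell on host"},
        "remove": set(),
    },
    "password reset by admin": {
        "pre": set(),
        "add": set(),
        "remove": {"user credentials", "shell on host"},
    },
}

def apply(state, name):
    a = actions[name]
    if not a["pre"] <= state:
        return state  # preconditions unmet: action fails, state unchanged
    return (state | a["add"]) - a["remove"]

s = {"network access"}
s = apply(s, "phish credentials")
s = apply(s, "log in")
s = apply(s, "password reset by admin")  # attacker loses the foothold
print("shell on host" in s)  # False
```

A purely monotonic attack graph could not express the final transition, in which the defender's action invalidates previously gained privileges.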
Other related work includes a graph-based system
model for vulnerability analysis that treats humans and
non-human “actors” fully symmetrically [27]. This work,
however, does not rely on simulation or the dynamic
construction of attack graphs through ontology-based
reasoning and iterative simulation of attack steps.
Attack Simulations Early work on attack simulation [6]
described a security model that largely neglected causal
and temporal aspects and used no representational for-
malisms. Later work [5] embedded a state-space model
in a discrete-event framework to capture simple causal
mechanisms. This modeling approach is similar to ours, but its ability to handle problem instances of non-trivial size is limited. Furthermore, it does not model adversary behavior and focuses exclusively on network security.
A framework for the modeling and simulation of
network attacks was subsequently developed [10]. This
contribution is unique in that it views the network as
an “ambient” that contains other “ambients” and sim-
ulates an attacker dynamically finding an attack path
not through preconditions and postconditions, but us-
ing an “access-to-effect” paradigm. Limitations of the
concept of containment that this approach rests upon
were discussed later on [27]. Further contributions propose new formalisms for dynamic security simulations,
such as stochastic and interval-timed colored Petri nets
[7] and Generalized Stochastic Petri Nets [8]. In the lat-
ter paper, Dalton et al. analyze attack trees and aim to
automate the analysis using simulation tools, but do
not provide a complete framework for dynamic analy-
sis.
6 Conclusions
The simulation-based tool for security analysis and sys-
tem design introduced in this paper accounts for adversary heterogeneity and distinguishes between different types of external (e.g., vandals, well-motivated professionals) and internal (e.g., disgruntled employees with insider knowledge and access) attackers. It integrates
a knowledge base, an attack graph generation compo-
nent, and a discrete-event engine with attackers’ behav-
ioral patterns to simulate and analyze complex multi-
step attacks.
From a theoretical perspective, an innovative fea-
ture of our approach lies in the dynamic generation
of attack paths through step-by-step simulation of at-
tacker behavior (rather than building complete attack
graphs in advance). This is particularly relevant be-
cause attack graph construction is associated with scal-
ability limitations in large networked environments. Fur-
thermore, the approach is not bound to monotonicity
assumptions and allows for alternate result paths con-
sidering outcomes of control detection and failures. We
account for situations in which an attacker loses points
of attack as a consequence of actions that alter the en-
vironment in a way that renders certain attacks impos-
sible.
From a practical point of view, a key strength of
our approach is the strict separation of attack knowl-
edge and system model, which should make it easy to
share security information and apply it independently
in various contexts. Thus, a security domain expert
may define generic attack patterns while the IT man-
ager models the system and leverages the expert’s for-
malized domain knowledge. While the proposed approach is not intended to replace live penetration tests, our prototypical implementation and simulation experiments have shown that it provides a valuable complementary means to study potential attacks on information systems and critical infrastructures.
The model could be extended further in several di-
rections. Refinement in the modeling of attackers’ be-
havior constitutes a promising area for future work.
This includes drawing upon the existing body of lit-
erature (e.g., [15,29]) in order to integrate additional
behavioral models into our framework. Furthermore, we
plan to integrate alternative approaches to model zero-
day attack actions. Such actions (e.g., an attack action
that requires particular software to be installed and al-
lows the attacker to obtain root access on the affected
machines) could be modeled as rare events and added
randomly to the generic attack graph with a probability
based on existing prevalence data. This would, at least
to some extent, make it possible to better assess the
“robustness” of a modeled IT system against such un-
known vulnerabilities. Moreover, it will be necessary to
extend the attack knowledge base and include various
additional types of attack patterns (e.g., from network-
ing, software, social engineering). Finally, adding an op-
timization layer (similar to the approach used in [22,
33]) for the automatic discovery of secure system con-
figurations is another worthwhile research challenge.
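The rare-event treatment of zero-day actions proposed above could be sketched as follows; the function, action names, and prevalence value are hypothetical illustrations of the idea, not an existing implementation:

```python
import random

def inject_zero_days(attack_actions, candidate_zero_days, prevalence, rng):
    """Randomly extend the attack action set with zero-day actions.

    `prevalence` maps each hypothetical zero-day action to the
    probability (from external prevalence data) that it is available.
    """
    extended = list(attack_actions)
    for action in candidate_zero_days:
        if rng.random() < prevalence[action]:
            extended.append(action)
    return extended

rng = random.Random(7)
base = ["brute force login", "exploit known CVE"]
zero_days = ["zero-day root exploit"]
prevalence = {"zero-day root exploit": 0.02}  # assumed rare-event rate

# Over many simulated graphs, the zero-day appears at roughly its
# prevalence rate, letting robustness be estimated statistically.
runs_with_zero_day = sum(
    "zero-day root exploit" in inject_zero_days(base, zero_days, prevalence, rng)
    for _ in range(10_000)
)
rate = runs_with_zero_day / 10_000
print(rate)
```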
Acknowledgments
The work presented in this paper was performed in the course of the research project "MOSES3", which is funded by the Austrian Science Fund (FWF) under grant no. P23122-N23. The research was carried out at Secure Business Austria, a COMET K1 competence center supported by FFG, the Austrian Research Promotion Agency.
References
1. Ammann, P., Wijesekera, D., Kaushik, S.: Scalable, graph-based network vulnerability analysis. In: Proceedings of the 9th ACM Conference on Computer and Communications Security, pp. 217–224. ACM (2002)
2. Bistarelli, S., Dall'Aglio, M., Peretti, P.: Strategic games on defense trees. In: Formal Aspects in Security and Trust (LNCS 4691), pp. 1–15. Springer (2007)
3. BSI: BSI-Standards. Tech. rep., German Federal Office for Information Security (2013). URL https://www.bsi.bund.de/EN/Publications/BSIStandards
4. Buldas, A., Laud, P., Priisalu, J., Saarepera, M., Willemson, J.: Rational choice of security measures via multi-parameter attack trees. In: First International Workshop on Critical Information Infrastructures Security (LNCS 4347), pp. 235–248. Springer (2006)
5. Chi, S.D., Park, J.S., Jung, K.C., Lee, J.S.: Network security modeling and cyber attack simulation methodology. In: Proceedings of the 6th Australasian Conference (LNCS 2119), pp. 320–333. Springer (2001)
6. Cohen, F.: Simulating cyber attacks, defences, and consequences. Computers & Security 18(6), 479–518 (1999)
7. Dahl, O.M., Wolthusen, S.D.: Modeling and execution of complex attack scenarios using interval timed colored Petri nets. In: Proceedings of the Fourth IEEE International Workshop on Information Assurance, pp. 157–168. IEEE (2006)
8. Dalton, G.C., Mills, R.F., Colombi, J.M., Raines, R.A.: Analyzing attack trees using generalized stochastic Petri nets. In: IEEE Information Assurance Workshop, pp. 116–123. IEEE (2006)
9. Fenz, S., Ekelhart, A.: Formalizing information security knowledge. In: Proceedings of the 4th ACM Symposium on Information, Computer, and Communications Security, pp. 183–194. ACM (2009)
10. Franqueira, V.N.L., Lopes, R.H.C., van Eck, P.: Multi-step attack modelling and simulation (MsAMS) framework based on mobile ambients. In: Proceedings of the 2009 ACM Symposium on Applied Computing, SAC '09, pp. 66–73. ACM (2009)
11. Gomez-Perez, A., Fernandez-Lopez, M., Corcho, O.: Ontological Engineering. Springer (2004)
12. Hevner, A.R., March, S.T., Ram, S.: Design science in information systems research. MIS Quarterly 28(1), 75–105 (2004)
13. ISO: ISO/IEC 27001:2013 Information technology – Security techniques – Information security management systems – Requirements. Tech. rep., International Organization for Standardization/International Electrotechnical Commission (2013). URL http://www.iso.org/
14. Jurgenson, A., Willemson, J.: Computing exact outcomes of multi-parameter attack trees. In: On the Move to Meaningful Internet Systems: OTM 2008 (LNCS 5332), pp. 1036–1051. Springer (2008)
15. Liu, P., Zang, W., Yu, M.: Incentive-based modeling and inference of attacker intent, objectives, and strategies. ACM Transactions on Information and System Security 8(1), 78–118 (2005)
16. Ma, Z., Smith, P.: Determining risks from advanced multi-step attacks to critical information infrastructures. In: E. Luiijf, P. Hartel (eds.) Critical Information Infrastructures Security (LNCS 8328), pp. 142–154. Springer (2013)
17. Mauw, S., Oostdijk, M.: Foundations of attack trees. In: Revised Selected Papers of the 8th Information Security and Cryptology 2005 (LNCS 3935), pp. 186–198. Springer (2006)
18. Mell, P., Scarfone, K., Romanosky, S.: A Complete Guide to the Common Vulnerability Scoring System Version 2.0. NIST and Carnegie Mellon University (2007)
19. MITRE: Common attack pattern enumeration and classification (CAPEC) (2014). URL http://capec.mitre.org/
20. MITRE: Common vulnerabilities and exposures (2014). URL https://cve.mitre.org/
21. Moore, A.: Attack modeling for information security and survivability. Tech. rep., DTIC Document (2001)
22. Neubauer, T., Stummer, C., Weippl, E.: Workshop-based multiobjective security safeguard selection. In: Proceedings of the First International Conference on Availability, Reliability and Security (ARES 2006), pp. 1–8. IEEE (2006)
23. NIST: Special publication 800-39: Managing information security risk – organization, mission, and information system view. Tech. rep., NIST Computer Security Division (2010)
24. Ou, X., Boyer, W.F., McQueen, M.A.: A scalable approach to attack graph generation. In: Proceedings of the 13th ACM Conference on Computer and Communications Security, CCS '06, pp. 336–345. ACM (2006)
25. OWASP Foundation: Open web application security project (2014). URL https://www.owasp.org/
26. Panchenko, A., Pimenidis, L.: Towards practical attacker classification for risk analysis in anonymous communication. In: Proceedings of the 10th IFIP TC-6 TC-11 International Conference on Communications and Multimedia Security (LNCS 4237), pp. 240–251. Springer (2006)
27. Pieters, W.: Representing humans in system security models: An actor-network approach. Journal of Wireless Mobile Networks, Ubiquitous Computing, and Dependable Applications 2(1), 75–92 (2011)
28. Ritchey, R.W., Ammann, P.: Using model checking to analyze network vulnerabilities. In: Proceedings of the IEEE Symposium on Security and Privacy (S&P 2000), pp. 156–165. IEEE (2000)
29. Sallhammar, K., Helvik, B.E., Knapskog, S.J.: Incorporating attacker behavior in stochastic models of security. In: Proceedings of the International Conference on Security and Management (2005)
30. Sawilla, R.E., Ou, X.: Identifying critical attack assets in dependency attack graphs. In: Proceedings of the 13th European Symposium on Research in Computer Security (LNCS 5283), pp. 18–34. Springer (2008)
31. Schneier, B.: Secrets & Lies: Digital Security in a Networked World. Wiley (2000)
32. Stojanovic, L., Schneider, J., Maedche, A., Libischer, S., Studer, R., Lumpp, T., Abecker, A., Breiter, G., Dinger, J.: The role of ontologies in autonomic computing systems. IBM Systems Journal 43(3), 598–616 (2004)
33. Strauss, C., Stummer, C.: Multiobjective decision support in IT-risk management. International Journal of Information Technology & Decision Making 2(1), 251–268 (2002)
34. Wang, L., Singhal, A., Jajodia, S.: Measuring the overall security of network configurations using attack graphs. In: Proceedings of the 21st Annual IFIP WG 11.3 Working Conference on Data and Applications Security (LNCS 4602), pp. 98–112. Springer (2007)
Appendix
Fig. 7 Abstract Attack Graph (detail)