AI-Enabled Cyber Weapons and Implications
for Cybersecurity
Muhammad Shoaib
Introduction
In recent years, the prospect of autonomous and self-
learning weapon systems has attracted significant public
attention. Consequently, advancements in the fields of
artificial intelligence (AI) and cyberspace have raised
both questions and concerns in the domain of national
security as states continue moving towards acquiring
these emerging technologies. Leading military powers are
increasingly engaged in researching and developing AI
applications for a number of military functions, in fields
such as command and control (C2), intelligence collection
and analysis, logistics, and semi-autonomous and
autonomous weapons platforms.1 Employing this technology
and AI-based applications could therefore make existing
military functions more effective and more powerful,
especially for carrying out an offensive.
Because they are easier and cheaper to develop, and
because of their classified nature, the first militarily
significant AI-enabled offensive autonomous weapons
systems will probably be deployed in cyberspace.2 In
2010, The Economist declared that warfare had entered
the fifth domain of cyberspace.3 A year later, in 2011,
the US Defence Department officially incorporated
cyberspace as a new domain into its planning, doctrine,
resourcing and operations;4 NATO formally
acknowledged cyberspace as an operational domain in
2016.5 With cyberspace becoming an established domain
of warfare, incorporating AI technology into this domain
will undoubtedly have a significant impact on the
various strategies and doctrines of warfare. However,
the nexus of AI and cyberspace has also raised several
new concerns regarding how it may be used. The
harnessing of this nexus to bolster both the offensive
and defensive cyber capabilities of states has evoked
concern among policymakers and academics, especially
in the context of the potential threats it carries for
national security.
When coupled with existing cyber warfare capabilities,
AI would enhance the capability and power of states'
cyber warfare apparatus. Rapid advances in AI and
increasing degrees of military autonomy would amplify
the speed, power, and scale of future attacks in
cyberspace. Adapting AI capabilities to existing cyber
warfare tools would make them more effective and
efficient, augmenting their utility for carrying out
successful cyber-attacks.6
Cyber-attacks are becoming increasingly common and
are recognised as among the most strategically
significant risks to states' security. During recent years,
there have been several cyber-attacks against
governments and states’ critical infrastructure, various
private corporations, and non-profit organisations. The
trend signifies that no sector is immune from cyber-
attacks and also that the level of sophistication of the
threats is continually increasing. Malicious actors and
hackers are constantly devising new techniques,
adapting to the latest technology innovations including
machine learning and AI to create more destructive
forms of attack in the cyberspace. Apart from devising
offensive AI-enabled cyber capabilities, the technology
is also being employed to develop defensive
mechanisms in cyberspace. To counter these emerging,
technologically sophisticated cyber threats, states and
other entities also require AI tools in order to deal
effectively with the challenges. Such intelligent agents
will likely form the basis of security solutions for many
current and future cyber-related challenges. AI therefore
holds a significant position as a tool for Active Cyber
Defence (ACD).
This study is an attempt to conceptualise the significance
of AI in cyberspace as a tool for enhancing the national
security of states. It examines the usability of AI as an
emerging technology for bolstering security in
cyberspace as well as the threats it carries for
cybersecurity. In conducting this study, the following
questions have been considered: What are AI-enabled
cyber weapons, and in what ways could this technology
be coupled with cyberspace? How does the AI-cyber
nexus bolster the offensive and defensive cyber
capabilities of states? What are the policy implications
of AI-enabled cybersecurity? The study is qualitative in
nature. Secondary sources of data are used, drawn
primarily from published journals, books and newspaper
articles, in both print and online form. The study
focuses only on AI in cyberspace and AI-enabled cyber
weapons.
Conceptualising AI-enabled Cyber Weapons
AI is generally defined as technology and a branch of
computer science that creates intelligent software and
machines. It is viewed as the study of the design of
intelligent agents, where an intelligent agent means a
system that recognises its environment and takes actions
to enhance its chances of success.7 Intelligent agents are
components of software and carry features of intelligent
behaviour such as pro-activeness, reactivity and the
ability to communicate; in other words, the ability to
formulate decisions and then to act upon them.8
Additionally, AI may be outlined as the automation of
activities such as learning, decision-making, problem
solving, and examining the computations that make it
possible to discern, reason, and then act accordingly.9 It
can assist in various areas including “planning, learning,
natural language processing, robotics, computer vision,
speech recognition, and problem solving” that require
sizeable memory and processing time.10
AI could be
regarded as a science for creating methods for solving
complex problems that generally require a certain degree
of intelligence, such as making precise decisions
based on substantial amounts of data. Additionally, AI
may also be regarded as a science that aims at
discovering the essence of intelligence and developing
generally intelligent machines.11
These intelligent
machines could be employed for a variety of purposes
including for simple day-to-day use such as smart
phones as well as real world complex analytical tasks
such as data science and machine learning.12
Moreover,
methods for improving machine intelligence are
progressing in areas including language interaction, the
expression of emotion, and also face recognition.13
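The notion of an intelligent agent that perceives its environment, decides, and acts can be made concrete with a minimal sketch. The rule-based reflex agent below is purely illustrative: the percept names, actions and scores are assumptions for demonstration, not drawn from the sources cited above.

```python
from dataclasses import dataclass, field

@dataclass
class ReflexAgent:
    """A minimal perceive-decide-act loop: the agent maps each percept
    to the action its rule table associates with the best outcome."""
    rules: dict = field(default_factory=dict)  # percept -> (action, score)

    def decide(self, percept: str) -> str:
        # Fall back to a harmless default when no rule matches.
        return self.rules.get(percept, ("wait", 0.0))[0]

agent = ReflexAgent(rules={"intrusion_alert": ("isolate_host", 0.9),
                           "normal_traffic": ("log", 0.1)})
print(agent.decide("intrusion_alert"))  # -> isolate_host
```

Real intelligent agents replace the fixed rule table with learned models, but the perceive-decide-act structure is the same.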
There are many examples where AI is currently in use,
for example: Deep Blue (IBM's chess-playing
computer), autonomous vehicles driving in urban
traffic,14 IBM's Watson (the computer
system capable of answering natural language
questions), and Sophia, the world’s first AI humanoid
robot.15
Moreover, several AI technologies such as data
mining and search methods are part of everyday use,
although these might not be obvious to those who are not
working in this field. This phenomenon is called the "AI
effect", whereby a technique or method is no longer
considered AI once it is in common use by the general
public. This concept is particularly significant for
understanding public perception of AI, and the
acceptance of AI-based tools. Some common
examples of the AI effect include Apple’s Siri mobile
application which uses a natural language user interface
while answering questions and giving recommendations,
Google’s new Hummingbird algorithm which
understands the meaning of the search query in order to
provide more relevant “intuitive” search results, and
Google’s self-driving/autonomous cars.
The AI effect occurs when an AI technology is no
longer regarded as an intelligent system in the way it
was when it was introduced, usually once the
technology becomes widely used. A machine that
displays seemingly intelligent behaviour it could not
have displayed before is labelled AI; but once the way
the machine completes that task becomes better
understood by the majority, that ability suddenly no
longer qualifies as true AI and becomes just another
computation.
States and entities are therefore pursuing continual
research and development in the AI domain, which is
leading to rapid incorporation of the technology into
other domains relevant to states' security apparatuses,
including cyberspace.
The utility of AI-based technologies and techniques in
cyberspace, especially in the areas of cybersecurity,
cyber offence and defence, and active cyber defence
(ACD), can be explained largely in terms of their ability
to assist in automation. Many contend that automation is
vital for dealing effectively with cyber-related threats
and that many cyber defence problems can only be
addressed by applying AI-based techniques. Highly
intelligent malware and new advanced cyber capabilities
are evolving quickly, and according to experts, AI can
provide the required flexibility and learning capacity to
existing or new software.16 Intelligent software is
therefore being increasingly used in cyber operations
and, according to experts, cyber defence systems could
be made more adaptive, evolving dynamically with
changes in network conditions. This can be achieved
through the implementation of dynamic behaviour,
autonomy, and adaptation, such as autonomic computing
or multi-agent systems.17 Increasingly intelligent cyber
threats emanating from advanced intelligent malware
can therefore only be addressed through the use of
similarly intelligent software and advanced autonomous
cyber defence mechanisms.
This clearly shows that AI-enabled cyber technology
could be harnessed for both offensive and defensive
mechanisms in cyberspace. With the evolution of
new technology, it can be argued that the future of
cybersecurity will be dominated by more advanced and
complicated threat actors. Future cyber-attackers will
certainly use AI to make the next major advancement in
cyber arms and will ultimately make malicious use of
this technology.18
AI’s essential ability to learn and
adapt will bring a new era in which highly-customised
and human-impersonating attacks are scalable. Offensive
AI mechanisms will be able to mutate as they learn
about their environment, and to expertly compromise
systems with minimal chance of detection.19
Consequently, future attacks will be more penetrating,
giving attackers a greater degree of assurance of
achieving their desired objectives. It is therefore important
to analyse offensive AI in the cyber domain for better
comprehension of AI-enabled cyber threats.
Offensive AI in Cyberspace
Recent progress in AI technologies has brought
significant growth in automation and innovation. For
example, AI is being applied to retail marketing,
autonomous cars and several other IT operations.
Companies around the world are investing in
teaching machines to think more like humans through
techniques like machine learning and neural networks.20
Although these AI technologies offer significant
benefits, they can be used maliciously.
Highly targeted and evasive attacks hidden in simple,
benign carrier applications have demonstrated the
intentional use of AI for harmful purposes.21 Distributed
Denial of Service (DDoS), phishing, password and
malware attacks could all be operationalised through
simple applications that victims use on their devices.
For example, the 2018 DeepLocker malware
carries a fundamentally different approach from any
other current evasive and targeted malware. DeepLocker
hides its malicious payload in benign carrier
applications, such as a video conference software, to
avoid detection by most antivirus and malware scanners.
Threat actors are constantly changing and improving
their attack strategies, with particular emphasis on
applying AI-driven techniques in the attack process.
Such AI-based cyber-attacks can be used in conjunction
with conventional attack techniques to cause greater
damage.22 Despite several studies on AI and security,
researchers have not yet summarised AI-based
cyber-attacks sufficiently to understand the adversary's
actions and to develop proper defences against such
attacks.
Notwithstanding this lack of information, AI-powered
cyber-attacks are not a hypothetical future idea. The
essential building blocks that are necessary for the use of
offensive AI already exist. These include highly
sophisticated malware, financially motivated criminals
willing to use any means possible to increase their return
on investment, and open-source AI research projects
which make highly valuable information easily available
to the public. It must also be noted that the use of AI-
enabled cyber weapons is not just limited to criminals or
lone actors, but the advancements in this area and
research and development by governments make it
almost certain that states would also employ the
technology to conduct offensive cyber operations against
adversaries.23
Therefore, the threat from AI-enabled
malware is likely to grow and evolve in the coming
years.
AI-enabled cyber weapons are most likely to be
deployed against enemy targets with strong cyber
defences rather than against systems with weaker built-
in security for which ‘normal’ cyber-attacks may serve
the purpose. A typical cyber-attack is an attempt by
adversaries or cybercriminals to access, alter, or damage
a target's computer system or network in an
unauthorised way: a systematic, intended, and calculated
exploitation of technology to affect computer networks
and systems and to disrupt the organisations and
operations reliant on them.24 An AI-enabled
cyber-attack, on the other hand, would involve highly
intelligent programmes that learn and mutate along the
way according to their environment and target, granting
the attack the manoeuvrability required to bypass
security systems and
remain undetected. An AI-enabled cyber weapon would
manipulate or destroy code in an adversary system, such
as a command-and-control system, an intelligence,
surveillance and reconnaissance (ISR) system or a
deployed kinetic-weapons system.25
Its mission could be
to penetrate, survive (through polymorphism)26
, identify
vulnerabilities that match a predetermined suite of kill
options, select the kill option and, if possible, inform its
creators of its success.27
An AI-enabled cyber weapon could use several
technologies, including deep artificial neural networks, a
set of algorithms based on deep learning, which in turn
is a subset of machine learning, a component of AI more
generally.28
These algorithms would learn from large
amounts of data how to execute polymorphic,
multi-vector attacks: attacks on multiple fronts in which
the malware is able constantly to change its identifiable
features in order to avoid detection. These changes
would be made autonomously and at machine speed; that
is, in milliseconds. The combination of AI and cyber
missions can be particularly effective because, in
comparison to traditional cyber weapons, AI-enabled
cyber weapons would not rely on human operators to
guide an attack and, if necessary, to rewrite software
code to exploit newly found vulnerabilities, as both these
functions are assumed by deep learning algorithms.29
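To see why such polymorphism defeats conventional defences, consider the hedged sketch below: a scanner that blocklists the hash of a known-bad payload, a common signature-matching approach, is evaded by a variant differing in a single byte. The byte strings here are illustrative assumptions, not real malware signatures.

```python
import hashlib

def signature(payload: bytes) -> str:
    """Signature-based scanners often match on a hash of known-bad bytes."""
    return hashlib.sha256(payload).hexdigest()

known_bad = b"\x90\x90PAYLOAD"
blocklist = {signature(known_bad)}

# A polymorphic variant that changes even one byte produces a new hash
# and slips past the blocklist, pushing defenders towards behavioural
# and machine-learning detection instead.
variant = b"\x90\x91PAYLOAD"
print(signature(known_bad) in blocklist)  # True  (caught)
print(signature(variant) in blocklist)    # False (evades)
```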
The Emotet trojan, one of the most infamous malware
strains, is a major example of a prototype-AI attack.
The main distribution mechanism of this trojan is
spam-phishing, usually via invoice scams that trick
users into clicking on malicious email attachments.30
According to analysts, the developers of
Emotet have also recently added another component to
their programme, which steals email data from infected
victims.31
Initially, the intention behind this capability
was not clear, but Emotet has recently been observed
sending out contextualised phishing emails at a large
scale. This means the trojan can automatically insert
itself into pre-existing email threads, advising the victim
to click on a malicious attachment carried by the final,
malicious email. Inserting the malware into existing
threads gives the phishing email more context, making
it appear more authentic.32
There are concerns that the criminals behind the
development of Emotet could easily use further AI to
significantly boost this attack.33
Currently, the message
on the final phishing email is usually highly generic; for
example, “Please see attached”, and this may sometimes
provoke suspicion. However, by incorporating an AI’s
ability to learn and imitate natural language by analysing
the context of the email thread, these phishing emails
could become highly personalised to individuals.34
Therefore, an AI-powered Emotet trojan could create
and insert entirely customised, more believable phishing
emails. More importantly, it would be able to send these
out at a massive scale, allowing criminals to vastly
increase the yield of their operations.
These developing attack methods could have highly
destructive, sometimes even life-threatening,
consequences.35 By undermining data integrity, these
stealthy attacks weaken trust in organisations and may
even cause systemic failures: consider, for example, an
oil rig using faulty geo-prospection data to drill for oil
in the wrong place, or a physician making a diagnosis
using compromised medical records. With the continuing
AI arms race, there is an imminent possibility of
escalation in this domain. Although the AI-based
cyber-attacks recorded so far have been perpetrated by
lone actors, the possibility of state-sponsored AI-based
cyber-attacks in the future remains high owing to increased
research and development in this domain. Whether from
a lone actor or state-sponsored attack, such sophisticated
attacks could inflict heavy damage on an adversary’s
critical infrastructure. For example, in 2017, the
WannaCry36
ransomware attack hit organisations in
more than 150 countries around the world. The attack
marked the start of a new era in cyber-attack
sophistication. It succeeded largely due to its ability to
move across an organisation in just a few seconds while
paralysing hard drives, and it inspired other malicious
actors to develop multiple copycat attacks.37
With growing advancements in AI and augmented
machine learning skills, it can be argued that the process
of innovation in AI-enabled cyber weapons will continue
in the future. The trend so far suggests that the use of
adversarial AI is likely to impact the security landscape
in three possible ways: 1) Impersonation of trusted users;
2) Blending into the background; 3) Faster attacks with
more effective consequences.
Additionally, AI-based malware could proactively
prioritise the most vulnerable targets on a network, adapt
to the target environment and self-propagate via a series
of autonomous decisions, potentially eliminating the
need for a command and control (C2) channel.38
Because such malware can make autonomous decisions
based on its operating environment and its target, a
back-end C2 structure is not necessarily required.
Another concern is the use of domain-generation
algorithms to continuously generate a large number of
domain names to be used as common engagement points
between infected devices and C2 servers, which would
make it considerably difficult to successfully shut down
botnets.39
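The defensive counterpart to domain-generation algorithms is DGA detection. A common first-pass heuristic, sketched below, flags domain labels whose character-level Shannon entropy looks machine-generated rather than human-chosen; the 3.5-bit threshold and the example domains are illustrative assumptions, and production detectors rely on far richer features and models.

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Per-character entropy in bits; algorithmically generated labels
    tend to score higher than human-chosen dictionary words."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_generated(domain: str, threshold: float = 3.5) -> bool:
    # Examine only the registrable label, not the public suffix.
    label = domain.split(".")[0]
    return shannon_entropy(label) > threshold

print(looks_generated("google.com"))            # False
print(looks_generated("xj4k9q2mzv7trwp1.com"))  # True
```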
Moreover, the automation of social engineering attacks
is another potential threat. By collating a victim’s online
information, attackers can automatically generate
malicious websites, emails and links that are custom-
made for clicks from that victim (sent, for example, from
addresses imitating their real contacts). Further
developments in this area could see chatbots gaining
human trust during longer and more creative online
dialogues.40
The increased adoption of AI across
economies will also create new vulnerabilities which
could be exploited by threat actors. Supply-chain attacks
on training data (data poisoning)41
could cause AI
systems to behave in inconsistent and unpredictable
ways, or allow attackers to install a ‘backdoor’ by which
to take control of a system, for instance by training an
algorithm to classify and identify a particular malware as
benign software.42
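The data-poisoning threat is easy to demonstrate on a toy classifier. In the hedged sketch below, an attacker who relabels part of the "malicious" training class as benign measurably degrades a logistic-regression detector; the synthetic features, the 60% flip rate and the choice of model are illustrative assumptions, not a reconstruction of any real attack.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for benign (0) vs malicious (1) feature vectors.
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression().fit(X_tr, y_tr)

# Poisoning: relabel 60% of the malicious training samples as benign,
# teaching the detector to wave similar samples through.
y_poisoned = y_tr.copy()
malicious_idx = np.where(y_tr == 1)[0]
flipped = rng.choice(malicious_idx, size=int(0.6 * len(malicious_idx)),
                     replace=False)
y_poisoned[flipped] = 0
poisoned_model = LogisticRegression().fit(X_tr, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_te, y_te))
print("poisoned accuracy:", poisoned_model.score(X_te, y_te))
# The poisoned model under-detects the malicious class, so its
# accuracy on untainted test data drops sharply.
```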
Consequently, these advancements are posing dire
challenges to organisations, states and policymakers.
Meanwhile, experts argue that without a high degree of
automation, it is difficult for humans to effectively
handle the sizeable volumes of data and the speed of
processes. Therefore, countering AI-enabled cyber
offence requires a certain degree of intelligent and
automated defence mechanisms. One of the key
challenges facing states and corporations today is the
difficulty of identifying, training and retaining skilled
individuals and experts, and there is a perception that a
noticeable increase in the numbers working in this
area is necessary. Although increasing the
number of experts in the cyber domain might alleviate
the current gap in cybersecurity skills to a certain degree,
AI and advanced automation of certain tasks could be
highly advantageous and is inevitable over the longer
term. Hence, the mounting levels of cyber threats make
it inevitable for states and governments to look for and
develop AI-powered cyber defence tools.
AI-Enabled Cyber Defence Mechanisms
Threats emanating from the development and
employment of AI-enabled cyber weapons make it
unavoidable for states and security apparatuses around
the globe to consider an additional defence mechanism
that is also based on the same technology. It has been
established that conventional cyber means alone cannot
deliver the speed and computation essential for
countering AI-powered cyber-attacks. Hence, it can
be contended that armed forces would progressively rely
on AI-enabled cyber defences that can efficiently detect
and anticipate cyber-attacks. Conventional methods of
protection against cyber-attack are based on detecting
traffic anomalies, whitelisting, and analysis of former
attacks, although the latter approach is vulnerable to
attackers changing their vectors of attack.43 By contrast,
AI that learns in real time can be used to detect new
attacks and penetrations of systems. Significant commercial
investment in AI-enabled cyber defence has been seen in
recent years. Here, it is important to consider that a
single machine learning algorithm could simultaneously
conduct both defensive and offensive operations in
cyberspace. An AI-enabled cyber programme used for
defence could quickly be weaponised to mount attacks
on adversaries, blurring the line between offensive and
defensive operations in cyberspace.44
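As a concrete illustration of learning-based detection, the sketch below fits an unsupervised anomaly detector to "normal" network-flow features and flags an out-of-profile flow. The feature choices, the synthetic data and the use of scikit-learn's IsolationForest are illustrative assumptions; real deployments train on live telemetry with far richer features.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Stand-in features per flow: bytes sent, duration (s), port entropy.
normal_flows = rng.normal([500.0, 2.0, 1.0], [100.0, 0.5, 0.2], (500, 3))
detector = IsolationForest(contamination=0.01, random_state=1)
detector.fit(normal_flows)

new_flows = np.array([[480.0, 1.9, 1.1],      # typical flow
                      [50000.0, 0.1, 4.0]])   # exfiltration-like burst
print(detector.predict(new_flows))  # e.g. [ 1 -1 ]: -1 marks an anomaly
```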
When considering defensive mechanisms in cyberspace,
although many AI-based techniques are currently
available in this area, there is still an established need
for further advanced solutions, including automated
knowledge management, intelligent decision support and
quick situation assessment for more complex
cyber-related problems.45
According to expert reports,
intelligent systems and networks, even self-repairing
networks, could increase flexibility in the longer term.46
For instance, automation designs that are fixed in
advance are not effective enough against evolving cyber
incidents, since novel vulnerabilities, outages and
exploits can occur simultaneously and at any point in
time.47 Experts therefore argue that it is not possible for
humans to effectively handle the large volumes of data
and the speed of processes involved without elevated
degrees of automation. This demands significantly
quicker reaction to situations, comprehensive situational
awareness, and a capability to handle large amounts of
information rapidly in order to analyse events and make
decisions.48
In order to meet the requirements for an effective and
fast reaction to any cyber-offence, the concept of Active
Cyber Defence (ACD) has gained significant
prominence among practitioners and policy makers. As
states do not consider current defence measures
sufficient to survive the numerous ways in which a
network can be attacked,49 many experts contend that
passive defence – which focuses more on generally
protecting cyber assets from a variety of possible threats
– alone may not be enough.50
Therefore, several options are being proposed for policy
makers and network security experts to incorporate
lessons such as “the best defence includes an offence”,
or ACD. For instance, former United States Under
Secretary of Defence, William Lynn III, argued that in
cyber, “offence is dominant and we cannot retreat behind
a Maginot Line of firewalls”.51
Therefore, there is a need
for dynamic cyber defences and responses matching
network speed as attacks happen or even before they are
operationalised. Companies and several government
bodies around the world are considering using ACD
methods more frequently, and it is therefore important to
explore those aspects of ACD where AI-based systems
could contribute as one of a number of tools within the
ACD domain.
Although a universal definition of the term does not
exist, ACD is meant to include highly proactive
mechanisms that are launched to defend against
malicious cyber activities. According to a Centre for a
New American Security (CNAS) analysis52 of the ACD
options available to the private sector, one of the few
formal definitions is outlined in the 2011 United States
Department of Defence Strategy for Operations in
Cyberspace, which defines it as:
“DoD’s synchronised real-time capability to
discover, detect, analyse, and mitigate threats and
vulnerabilities. It builds on traditional approaches to
defending DoD networks and systems, supplementing
best practices with new operating concepts. It
operates at network speed by using sensors, software,
and intelligence to detect and stop malicious activity
before it can affect DoD networks and systems. As
intrusions may not always be stopped at the network
boundary, DoD will continue to operate and improve
its advanced sensors to detect, discover, and mitigate
malicious activity on DoD networks”.
The CNAS analysis offers a framework emphasising that
employing ACD techniques becomes most significant
during the Delivery phase53
, in the Cyber Engagement
Zone.54
According to the analysis, three ACD concepts
are highlighted to respond to an attack. These include: 1)
detection and forensics, 2) deception, and 3) attack
termination. For detection, several ACD techniques may
be considered to detect attacks that outmanoeuvre
passive defences; once the information is collected, it
can inform the entity's response decisions. Detection can
be done through local information collection using ACD
techniques within the organisation’s networks, or
through what is known as remote information collection,
where an organisation may gather information about an
incident outside of its own networks (for example, by
accessing the C2 server of another entity and scanning
the computer, loading software, removing or deleting
data, or disabling the computer's ability to function). For
attack termination, ACD methods can stop an attack
during its operations by, for instance, preventing
information from leaving the network or by halting the
connection between the infected computer and the C2
server. More assertive actions could include "patching
computers outside the company's network that are used
to launch attacks, taking control of remote computers to
stop attacks, and launching denial of service attacks
against attacking machines."55
The framework given above is a helpful tool for figuring
out where AI methods might play a significant role. For
example, the time between the launch of an attack and
systems being affected often takes minutes, yet it can
take several months to identify the breach.56
Using AI
methods can therefore be of particular significance in
these preliminary phases of the Cyber Engagement
Zone. They can offer assistance with earlier detection of
compromise and also give situational awareness. This is
important since active defence requires high levels of
situational awareness in order to respond to intrusion
threats.57
They can also be helpful in information
collection and decision support. Certain deception
techniques, including proposals for experimental
frameworks for autonomous hunting and deception58
of
adversaries could also be useful.
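Deception techniques can be as simple as a low-interaction honeypot. The minimal sketch below, purely illustrative, listens on an unadvertised port where no legitimate service exists, so every contact is suspect, logs each probe for the detection and forensics pipeline, and presents a lure banner; the port number and banner string are assumptions, not taken from the frameworks cited.

```python
import datetime
import socket

HOST, PORT = "0.0.0.0", 2222  # unadvertised port: any contact is suspect

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        with conn:
            # Record the contact for later forensic analysis.
            stamp = datetime.datetime.utcnow().isoformat()
            print(f"{stamp} probe from {addr[0]}:{addr[1]}")
            conn.sendall(b"SSH-2.0-OpenSSH_8.0\r\n")  # minimal lure banner
```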
Although these ACD methods are technologically
possible, whether AI tools should be used as ACD
techniques remains unclear given their legal uncertainty.
Legal certainty should therefore be pursued before these
tools are employed for ACD actions, so that existing
laws are not breached, even where it might be contended
that the law is "grey" or that national and international
law is unclear.59 Furthermore, states' conduct in
cyberspace, whether offensive or defensive, requires
understanding and addressing certain legal and policy
ramifications.
Policy Implications
The development and possible use of AI-powered
capabilities in cyberspace raises important unanswered
questions and concerns. At this point, however,
policy-oriented and technical solutions in the public
domain are very restricted.60 Tangible efforts to better
comprehend these gaps should therefore be made as
soon as possible, focusing specifically on ethical and
ideological concerns, public perception, the interplay
between the private and public sectors, economic affairs,
and the legal implications that could potentially arise.
Further analysis, and the development and
implementation of both policy-based solutions and
technological safeguards from the beginning, is pertinent
and should not be delayed.
It can be argued that the “Internet of the Future” will not
be similar to the Internet of today and further challenges
will also include the “Internet of Things (IoT)”61
and
unexpected new practices.62
According to forecasts provided in strategic reports,
many of these technological developments could have
positive consequences, including unintended ones, but
some could also pose threats or have “catastrophic
effects”.63 In particular, these reports outline64 that
dependency on
AI could bring new vulnerabilities that could be
exploited by adversaries with significant chances of
malicious states and non-state actors acquiring such
capabilities. Therefore, further attention and focus is
required on how this threat could be prevented and what
possible technological or policy-oriented solutions could
be achieved to undermine the malicious applications of
these tools.
Advanced intelligent systems could also pose challenges
to the interaction between human and automated
components, and controlling multiple autonomous
systems while interpreting their information could
become extremely difficult. According to expert
forecasts, machines “upgraded” by technology
augmentation or intelligent machines may replace those
systems which are unable to meet these challenges.
Autonomous defences might even be devised to
dominate when human judgement is considered “too
affected by emotions or information overload”.65
So far, a variety of technical recommendations66
from
experts include ensuring in the design and development
of new intelligent cyber weapons that: “1) there is a
guarantee of appropriate control over them under any
circumstances; 2) strict constraints on their behaviour are
set; 3) they are carefully tested (although thorough
verification of their safety and possible behaviours is
apparently difficult); and 4) the environment is restricted
as much as possible by only permitting the agent to
operate on known platforms.” Although these
recommendations appear comprehensive as a means of
regulating new intelligent cyber weapons, whether they
can be applied in practice remains to be seen.
However, further clarity and certainty are required
regarding these questions. An understanding is also
needed of the possible legal implications, where some
findings suggest a certain amount of uncertainty.
According to Alessandro Guarino, a senior information
security professional and independent researcher,67 there
is no difference between autonomous agents and any
other employed cyber weapon or tool, and they therefore
fall equally under existing international law.
However, it is not clear if a creating state could always
be held responsible if an agent or system exceeds the
tasks assigned to it and takes an autonomous decision.
For example, for attribution purposes, the creators might
not have known in advance the precise technique used or
the precise system targeted. Therefore, Guarino
recommends an identification mechanism for
autonomous agents, perhaps through watermarks or
compulsory signatures implanted in their code. He also
suggests the possible revising of international law,
incorporating autonomous agents. Moreover, Guarino
also recommends that care be taken in the C2 function to
explicitly state the agent’s targets and build in
safeguards if a fully autonomous agent is used as a
weapon for self-defence.68
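Guarino's watermark suggestion can be illustrated with ordinary public-key signatures. The hedged sketch below uses the Ed25519 primitives of the Python cryptography package to sign agent code so that a recovered sample can later be attributed to the holder of the signing key; the key handling and the idea of signing the raw agent bytes are illustrative assumptions, not a protocol from the cited work.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The creating state signs the agent binary before deployment.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

agent_code = b"...compiled agent bytes..."   # placeholder payload
watermark = signing_key.sign(agent_code)     # shipped alongside the code

# A forensic analyst holding the published public key checks attribution.
try:
    verify_key.verify(watermark, agent_code)
    print("sample attributable to the key holder")
except InvalidSignature:
    print("watermark invalid or code tampered with")
```

An obvious limitation, which is why such schemes are paired with legal obligations, is that a malicious creator can simply omit or strip the watermark.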
However, although the suggested mechanisms are
important recommendations, enforcing their
employment could be difficult, especially due to
concerns over noncompliance of malicious non-state or
state actors with technical safeguards. According to
computer experts, a high risk of misfire, or of targeting
an innocent party due to misattribution, persists if
defensive actions are carried out with an automated
retaliation capability.69 Since the May 2013 session of
the Human Rights Council, many countries have
expressed concern over the challenges posed by fully
autonomous lethal weapons.70
Although devising national policies appears to be a good
starting point, and while international treaties and
national legislation are important, regulating such future
developments could turn out to be difficult. It could be
close to impossible to enforce an outright ban, and an
attempt to enact agreements through international
treaties could pose its own difficulties.71 Furthermore, in
the context of rapid technological developments, such
regulations could prove untimely, and even with strict
controls in place, policing these technological
developments could be difficult. Hence, it can be argued
that if there is a capacity to develop a tool or technique,
it is more than likely that it will be developed. Cyber
capabilities in particular are intrinsically difficult to
prevent from being developed and such regulatory
solutions might not deter malicious actors. In addition,
non-state actors will not necessarily feel morally or
legally bound in the same way and state actors may not
always play by the same “version of the rules”.72
A
combination of technical and legal safeguards is required
but further research is still needed to examine whether
more could be done, while also ensuring that innovation
and technological advancements are not undermined.
It can be argued that attempts to use AI maliciously in
the cyber domain will increase alongside the increasing
use of AI across society more generally. In the absence
of significant effort, attribution of attacks and
penalisation of attackers are likely to be difficult, which
could lead to an ongoing state of low- to medium-level
attacks and eroded trust within societies and between
governments. Consequently, nations will be under
pressure to protect their citizens and their own political
stability in the face of malicious uses of AI. On account
of such threats and uncertainties, certain
recommendations can be brought forward: policymakers
should collaborate closely with technical researchers to
investigate, prevent, and mitigate potential malicious
uses of AI in cyberspace, and AI researchers and
engineers should be more mindful of the dual-use nature
of their work, allowing for further research and norms
focused on the malicious use of such technology.
Conclusion
Artificial intelligence (AI) technologies have progressed
rapidly over the last few years and their capabilities have
extended into several domains. The incorporation of AI
into domains such as AI-based governance systems,
buildings, transportation, and grid systems allows the
technology to gather enormous amounts of data and
make it more useful. AI technologies are thus useful for
the cybersecurity field, as they collect large amounts
of data and then quickly filter it to detect malicious
patterns and anomalous behaviours. Much of the existing
literature focuses on technological advancements in the
domain of AI, but there has been less focus on AI's
potential dangers. The existing landscape of potential
threats is being reshaped by the malicious use of AI. As
AI capabilities become more powerful and widespread,
their malicious use poses threats in several areas,
including physical security, digital security and political
security. In terms of malicious use of the technology,
therefore, the world faces certain changes in the threat
landscape: the amplification of existing threats, the
emergence of new threats, and changes in the character
of existing threats. As discussed, the first domain to
undergo these changes will be cyberspace, since
operating in this domain remains easy, inexpensive and
highly secretive owing to the inherent difficulty of
attribution.
The cybersecurity community is already heavily
investing in this new future, and is using AI solutions to
rapidly detect and contain any emerging cyberthreats
that have the potential to disrupt or compromise key
data. Defensive AI is not merely a technological
advantage in fighting cyberattacks, but a vital ally on
this new battlefield. Rather than relying on security
personnel to respond to incidents manually,
organisations will instead use AI to fight back against a
developing problem in the short term, while human
teams will oversee the AI’s decision-making and
perform remedial work that improves overall resilience
in the long term. Therefore, intelligence and espionage
services need to embrace AI in order to protect national
security as cyber criminals and hostile nation states
increasingly look to use the technology to launch
attacks. AI-powered attacks will outpace human
response teams and outsmart conventional defences;
therefore, the mutually-dependent partnership of human
and AI will be the basis of defence strategies in the
future. The battleground of the future is digital, and AI
remains the undisputed weapon of choice.
In conclusion, using AI methods and intelligent solutions
for existing and future cyber-related challenges, and in
particular for ACD, raises a number of significant
technical questions and policy-related apprehensions.
Despite the need for advanced solutions, policy-related
uncertainty persists, especially regarding the future
consequences of such tools. Fully autonomous
intelligent agents, and potentially disruptive technologies
that incorporate AI into other disciplines, require
considerable attention in terms of their legality and the
policy-related confusion associated with them. Several
policy implications that may arise in the future have
been identified, such as legal uncertainty, ethical
concerns, problems of public perception, and the
interplay between the public and private domains. The
identified gaps in policy necessitate deeper examination,
and intelligent solutions should be devised to anticipate
challenges that may arise from the swift developments
in the domain of cyberspace and AI.
References
1. Benjamin Rhode, “Artificial Intelligence and Offensive Cyber Weapons,” IISS Strategic Comments 25, no. 40 (December 2019): 2, https://www.iiss.org/publications/strategic-comments/2019/artificial-intelligence-and-offensive-cyber-weapons.
2. Ibid.
3. “War in the Fifth Domain,” The Economist, Briefing, July 2010, https://www.economist.com/briefing/2010/07/01/war-in-the-fifth-domain.
4. “Department of Defence Strategy for Operating in Cyberspace,” July 2011, https://archive.defense.gov/home/features/2015/0415_cyber-strategy/.
5. “Cyber Defence Pledge,” NATO, July 8, 2016, https://www.nato.int/cps/en/natohq/official_texts_133177.htm.
6. James Johnson, “The AI-cyber Nexus: Implications for Military Escalation, Deterrence and Strategic Stability,” Journal of Cyber Policy 4, no. 3 (December 2019): 3, https://doi.org/10.1080/23738871.2019.1701693.
7. John McCarthy, “What is Artificial Intelligence,” Stanford University, November 2007, http://www-formal.stanford.edu/jmc/.
8. Enn Tyugu, “Command and Control of Cyber Weapons,” 4th International Conference on Cyber Conflict, The NATO Cooperative Cyber Defence Centre of Excellence (Tallinn, 2012): 336, https://ccdcoe.org/publications/2012proceedings/5_6_Tyugu_CommandAndControlOfCyberWeapons.pdf.
9. Richard Bellman, An Introduction to Artificial Intelligence: Can Computers Think? (San Francisco: Boyd & Fraser Publishing Company, 1978), 12.
10. Ibid.
11. Enn Tyugu, “Artificial Intelligence in Cyber Defence,” 3rd International Conference on Cyber Conflict, The NATO Cooperative Cyber Defence Centre of Excellence (Tallinn, 2011): 1, https://doi.org/10.13140/RG.2.2.25720.29441.
12. “Data Science is the study of where information comes from, what it represents and how it can be turned into a valuable resource. While, machine learning is a type of artificial intelligence (AI) that provides computers with the ability to self-learn.” It “focuses on the development of computer programmes that can teach themselves to grow and change when exposed to new data.” See Mikhail Mitra, “Artificial Intelligence: Solve Real World Complex Problems,” Mantra Labs, updated October 22, 2019, https://www.mantralabsglobal.com/blog/solve-real-world-complex-problems-using-ai/.
13. “Strategic Trends Programme: Global Strategic Trends – Out to 2040,” 4th ed., Development, Concepts and Doctrine Centre (DCDC), UK Ministry of Defence, January 2010, https://www.gov.uk/government/collections/strategic-trends-programme.
14. “DARPA Urban Challenge,” Defence Advanced Research Projects Agency (DARPA), United States, November 2007, http://archive.darpa.mil/grandchallenge/.
15. Alessandro Guarino, “Autonomous Cyber Weapons No Longer Science-fiction,” Engineering and Technology Magazine 8, no. 8 (August 2013): 12, http://eandt.theiet.org/magazine/2013/08/intelligent-weapons-are-coming.cfm.
16. Tyugu, “Artificial Intelligence in Cyber Defence.”
17. Igor Kotenko, “Agent-based Modelling and Simulation of Network Cyber-attacks and Cooperative Defence Mechanisms,” in Discrete Event Simulations, ed. Aitor Goti (St. Petersburg: Institute for Informatics and Automation, Russian Academy of Sciences, 2010), 2, http://doi.org/10.5772/intechopen.83879.
18. Marcus Comiter, “Attacking Artificial Intelligence: AI’s Security Vulnerability and What Policymakers Can Do About It,” Belfer Center for Science and International Affairs, Harvard Kennedy School, August 2019, https://www.belfercenter.org/publication/AttackingAI.
19. William Dixon and Nicole Eagan, “3 Ways AI Will Change the Nature of Cyber Attacks,” Annual Meeting of the New Champions, World Economic Forum, June 2019, https://www.weforum.org/agenda/2019/06/ai-is-powering-a-new-generation-of-cyberattack-its-also-our-best-defence/.
20. Daniel Faggela, “Everyday AI and Machine Learning,” Emerj, April 11, 2020, https://emerj.com/ai-sector-overviews/everyday-examples-of-ai/.
21. Nektaria Kaloudi and Jingyue Li, “The AI-Based Cyber Threat Landscape: A Survey,” ACM Computing Surveys 53, no. 1 (February 2020), https://doi.org/10.1145/3372823.
22. Ibid.
23. “Commodification of Cyber Capabilities: A Grand Cyber Arms Bazaar,” Public-Private Analytic Exchange Program, Department of Homeland Security and Director of National Intelligence, November 2019, https://www.dhs.gov/sites/default/files/publications/ia/ia_geopolitical-impact-cyber-threats-nation-state-actors.pdf.
24. Remesh Ramachandran, “How Artificial Intelligence Is Changing Cyber Security Landscape and Preventing Cyber Attacks,” Entrepreneur, September 14, 2019, https://www.entrepreneur.com/article/339509.
25. Johnson, “The AI-cyber Nexus.”
26. Polymorphism is the ability of an object to take on many forms.
27. Rhode, “Artificial Intelligence,” 3.
28. Ibid.
29. Ibid.
30. “AI Makes Cyber Attacks More Destructive,” Cyber Security Intelligence, July 1, 2019, https://www.cybersecurityintelligence.com/blog/ai-makes-cyber-attacks-more-destructive--4365.html.
31. Dixon and Eagan, “3 Ways AI Will Change.”
32. “AI Makes Cyber Attacks.”
33. Dixon and Eagan, “3 Ways AI Will Change.”
34. Ibid.
35. Ibid.
36. “The WannaCry ransomware attack was a May 2017 worldwide cyberattack by the WannaCry ransomware cryptoworm, which targeted computers running the Microsoft Windows operating system by encrypting data and demanding ransom payments in the Bitcoin cryptocurrency.” See https://www.upguard.com/blog/wannacry.
37. Dixon and Eagan, “3 Ways AI Will Change.”
38. Miles Brundage et al., “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” Future of Humanity Institute, February 2018, 25, https://arxiv.org/pdf/1802.07228.pdf.
39. Daniel Plohmann et al., “A Comprehensive Measurement Study of Domain Generating Malware,” in Proceedings of the 25th USENIX Security Symposium (Berkeley, CA: USENIX, 2016), 263–78, https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_plohmann.pdf.
40. Brundage et al., “The Malicious Use of Artificial Intelligence,” 24.
41. “Machine learning systems trained on user-provided data are susceptible to data poisoning attacks, whereby malicious users inject false training data with the aim of corrupting the learned model.” See Jacob Steinhardt, Pang Wei Koh and Percy Liang, “Certified Defences for Data Poisoning Attacks,” NIPS’17: Proceedings of the 31st International Conference on Neural Information Processing Systems (December 2017): 3520–3532, https://dl.acm.org/doi/10.5555/3294996.3295110.
42. Nicolas Papernot et al., “Practical Black-Box Attacks Against Machine Learning,” in Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security (New York, NY: Association for Computing Machinery, 2017), 506–19, https://dl.acm.org/citation.cfm?id=3053009.
43. Rhode, “Artificial Intelligence.”
44. Ibid.
45. Tyugu, “Artificial Intelligence in Cyber Defence.”
46. “Strategic Trends Programme.”
47. Luc Beaudoin, Nathalie Japkowicz and Stan Matwin, “Autonomic Computer Network Defence Using Risk State and Reinforcement Learning,” in The Virtual Battlefield: Perspectives on Cyber Warfare, ed. Christian Czosseck and Kenneth Geers (Canada: IOS Press, 2009), 238-248, http://doi.org/10.3233/978-1-60750-060-5-238.
48. Tyugu, “Artificial Intelligence in Cyber Defence.”
49. David T. Fahrenkrug, Office of the United States Secretary of Defence, “Countering the Offensive Advantage in Cyberspace: An Integrated Defensive Strategy,” 4th International Conference on Cyber Conflict, The NATO Cooperative Cyber Defence Centre of Excellence (Tallinn, 2012).
50. Isaac R. Porche III, Jerry M. Sollinger, and Shawn McKay, “An Enemy Without Borders,” US Naval Institute Proceedings, October 2012, https://www.usni.org/magazines/proceedings/2012/october/enemy-without-boundaries.
51. William Lynn III, former United States Under Secretary of Defence, “2010 Cyberspace Symposium: Keynote – DoD Perspective,” May 26, 2010, https://archive.defense.gov/speeches/speech.aspx?speechid=1477.
52. Irving Lachow, “Active Cyber Defence: A Framework for Policymakers,” Centre for a New American Security, February 2013, https://www.cnas.org/publications/reports/active-cyber-defense-a-framework-for-policymakers.
53. “The adversary transmits the weaponised payload to the target, often through email, websites and USB tokens.” See Lachow, “Active Cyber Defence.”
54. The Cyber Engagement Zone consists of five steps through which malicious actors conduct an attack: Deliver, Exploit, Install, Command & Control, Act. See Lachow, “Active Cyber Defence.”
55. Ibid.
56. Costin Raiu, Kaspersky Labs, “Cyber Terrorism – An Industry Outlook,” Cyber Security Forum Asia, December 3, 2012.
57. Fahrenkrug, “Countering the Offensive Advantage.”
58. Daniel Bilar and Brendan Saltaformaggio, “Using a Novel Behavioural Stimuli-Response Framework to Defend against Adversarial Cyberspace Participants,” 3rd International Conference on Cyber Conflict, The NATO Cooperative Cyber Defence Centre of Excellence (Tallinn, 2011), https://www.researchgate.net/publication/224247777_Using_a_novel_behavioral_stimuli-response_framework_to_Defend_against_Adversarial_Cyberspace_Participants.
59. Caitríona H. Heinl, “Artificial (Intelligent) Agents and Active Cyber Defence: Policy Implications,” 6th International Conference on Cyber Conflict, The NATO Cooperative Cyber Defence Centre of Excellence (Tallinn, 2014): 60, https://ccdcoe.org/uploads/2018/10/d0r0s1_heinl.pdf.
60. “Artificial Intelligence and National Security,” Congressional Research Service Report R45178, November 21, 2019, https://fas.org/sgp/crs/natsec/R45178.pdf.
61. “The Internet of Things is a system of interrelated computing devices, mechanical and digital machines provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.” See https://internetofthingsagenda.techtarget.com/definition/Internet-of-Things-IoT.
62. Mario Golling and Bjorn Stelte, “Requirements for a Future EWS – Cyber Defence in the Internet of the Future,” 3rd International Conference on Cyber Conflict, The NATO Cooperative Cyber Defence Centre of Excellence (Tallinn, 2011), https://www.researchgate.net/publication/229034186_Requirements_for_a_future_EWS-Cyber_Defence_in_the_internet_of_the_future.
63. DCDC, “Global Strategic Trends.”
64. Ibid.
65. Bilar and Saltaformaggio, “Using a Novel Behavioural Stimuli-Response.”
66. Tyugu, “Command and Control of Cyber Weapons.”
67. Alessandro Guarino, “Autonomous Intelligent Agents in Cyber Offence,” 5th International Conference on Cyber Conflict, The NATO Cooperative Cyber Defence Centre of Excellence (Tallinn, 2013), https://ccdcoe.org/uploads/2018/10/2_d1r1s9_guarino.pdf.
68. Guarino, “Autonomous Intelligent Agents.”
69. Dmitri Alperovitch, “Towards Establishment of Cyberspace Deterrence Strategy,” 3rd International Conference on Cyber Conflict, The NATO Cooperative Cyber Defence Centre of Excellence (Tallinn, 2011), https://ccdcoe.org/uploads/2018/10/TowardsEstablishmentOfCyberstapeDeterrenceStrategy-Alperovitch.pdf.
70. “Campaign to Stop Killer Robots,” http://www.stopkillerrobots.org/2013/11/ccwmandate.
71. Alperovitch, “Towards Establishment of Cyberspace.”
72. “Task Force Report: Resilient Military Systems and the Advanced Cyber Threat,” Office of the Under Secretary of Defence for Acquisition, Technology and Logistics, Defence Science Board, US Department of Defence, January 2013, https://nsarchive2.gwu.edu/NSAEBB/NSAEBB424/docs/Cyber-081.pdf.