Gymnasium Haganum Model United Nations
Empowering the Sustainability Revolution
Disarmament Commission (DC)
Developing an International Legal Framework for the Weaponization of Artificial Intelligence and Big Data
6th, 7th and 8th of March 2020
Gymnasium Haganum, The Hague
HagaMUN 2020 | 6th, 7th and 8th of March 2020 | Empowering the Sustainability Revolution
Forum: Disarmament Committee (DC)
Issue: Developing an international legal framework for the
weaponization of Artificial Intelligence (AI) & Big Data
Student Officer: Skander Lejmi
Position: President
Introduction

There has been a rising fear of the weaponization of both Artificial Intelligence and Big Data, catalysed by what many refer to as the fourth industrial revolution. Previously timeless writings on war, such as Sun Tzu's "The Art of War", are now inevitably losing their value as humanity moves towards a novel, innovative and previously inconceivable type of warfare comprised not of swords and shields or of guns and rifles, but of military robots. Qualified analysts have even made the concerning case that a global Artificial Intelligence arms race has been ongoing since the mid-2010s.
This is of course massively concerning to the international community: both the physical (militarized artificial intelligence) and, even more gravely, the intangible (militarized big data) forms of this novel technology are advancing inexorably and pose a threat to international security. Experts on the topic have deemed our comprehension of these two concepts, and of their magnitude and destructive prospects, minimal. What has become evident through a multitude of reports is that these concepts are unequivocally dangerous and as such must be tackled with great urgency and seriousness.
Definition of Key Terms
Artificial Intelligence (AI)
Whilst there is no universally recognized definition of artificial intelligence, international bodies such as the OECD and UNCTAD have come to a general consensus, defining Artificial Intelligence (AI) as "the ability of machines and systems to acquire and to apply knowledge and to carry out intelligent behaviour. This includes a variety of cognitive tasks such as, but not limited to, sensing, processing oral language, reasoning, learning and making decisions. They can also demonstrate an ability to move and manipulate objects accordingly. Intelligent systems use a combination of big data analytics, cloud computing, machine communication and the Internet of Things (IoT) to operate and learn." This very definition attests to international bodies' incomplete comprehension of the capricious and unprecedented concept of artificial intelligence, which will be further expounded upon in the sections below.
Big Data
There are multiple definitions given to Big Data, but we shall follow the UN definition, coined by the UNECE (United Nations Economic Commission for Europe), which describes Big Data as data sources characterized by high volume, velocity, variety and veracity of data, demanding cost-effective, innovative forms of processing for enhanced insight and decision-making. With regard to the purpose Big Data serves in warfare, and therefore how it fits into the context of the issue we shall be solving, it is utilized as an indispensable input to Artificial Intelligence (AI) in military operations. Of course, the prospective danger that Big Data analysis poses to international security outside of this context is also our (as in the UN's) duty to tackle, but it is outside the scope of the issue on our agenda and as such would detract from the ultimate resolution we produce and pass.
Military Robots
Military Robots have not been given a definition by any international body, but are generally recognized as autonomous robots or remote-controlled mobile robots that serve some sort of military purpose. In our particular context, we will be looking more at autonomous robots, which are essentially robots that perform tasks entirely independently, predicated on an algorithm, in order to reach a goal. A military robot of this kind is essentially the application of Artificial Intelligence manifested in physical form, which is in fact greatly prominent and arguably at the core of the issue.
Artificial Moral Agents (AMAs)
Artificial Moral Agents are essentially autonomous robots that have been endowed with moral reasoning capabilities, making them less dangerous: their advanced capacity to comprehend ethical considerations allows them to minimize unethical consequences while attempting to achieve their objective (the pursuit of an objective being a framework with which all autonomous robots are endowed).
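The core idea behind an AMA, that moral constraints act as a filter the agent cannot override in pursuit of its goal, can be illustrated with a minimal sketch. All names and the toy action model below are hypothetical, purely for illustration; real AMA proposals are far more involved:

```python
# Illustrative sketch (hypothetical, not a real weapons system): an agent
# scores candidate actions against its objective, but discards any action
# that violates a moral constraint, no matter how well that action scores.

def choose_action(candidate_actions, objective_score, violates_constraint):
    """Return the best-scoring action that passes every moral constraint,
    or None if no permissible action exists (refuse to act)."""
    permissible = [a for a in candidate_actions if not violates_constraint(a)]
    if not permissible:
        return None  # refusing to act is preferred over acting unethically
    return max(permissible, key=objective_score)

# Toy example: actions are (name, mission_value, harms_civilians) tuples.
actions = [
    ("strike_target_a", 9, True),   # highest mission value, but unethical
    ("strike_target_b", 6, False),
    ("hold_position", 1, False),
]
best = choose_action(
    actions,
    objective_score=lambda a: a[1],       # pursue the mission objective
    violates_constraint=lambda a: a[2],   # non-overridable moral filter
)
# best is ("strike_target_b", 6, False): the constraint cannot be traded
# away for a higher objective score.
```

The essential design point is that the constraint check happens before, and independently of, the objective maximization, so the agent cannot "reason its way around" the moral filter.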
Background Information
The issue of the weaponization of Artificial Intelligence and Big Data is of course multidimensional; we will therefore separate the background information into two sections: the danger posed by Artificial Intelligence and Big Data, and the legal frameworks that have already been established pertaining to both of these concepts.
Danger Posed by Artificial Intelligence & Big Data
In late 2017, the UN started talks on the prospective dangers of Artificial Intelligence (namely autonomous weapons systems), following multitudinous calls for an international ban on these "killer robots" that have changed the nature of warfare and will continue to do so in manners that remain ambiguous to us. Officials convened in Geneva for an entire week under the disarmament group known as the Convention on Certain Conventional Weapons (CCW). The talks came after warnings issued in August by more than 100 leaders in the artificial intelligence industry, including Tesla's Elon Musk and Alphabet's Mustafa Suleyman, attesting to these weapons' ability to effectively lead to a "third revolution in warfare".

The aforementioned lethal autonomous weapons systems are perfectly capable of taking human lives predicated on the algorithm that they have been allocated. Often, this algorithm is reliant on big data analytics, and as such the weapon system and the data it consumes are in fact interdependent for successful weaponized operation.
To exemplify the scale at which these weaponized autonomous systems are being developed: the UK's Taranis drone, an unmanned combat aerial vehicle, is expected to be fully operational by 2030 and thereby capable of replacing the manned Tornado GR4 fighter planes that are part of the Royal Air Force. This is particularly concerning because, if said drone is not endowed with proper moral and ethical reasoning (essentially the aforementioned AMA concept), it is perfectly capable of committing international crimes whenever doing so is conducive to the accomplishment of its goal.
Another grave concern that arises and must be dealt with urgently is the prospect of these particular weapons falling into the hands of terrorists. Alvin Wilby, vice president of research at Thales (an organization that supplies espionage and reconnaissance drones to the British Army), informed the House of Lords Artificial Intelligence Committee that it was merely a matter of time before terrorists got their hands on lethal artificial intelligence. Others also make the case that, given the exponential development of technology in recent times, the concern is not merely that terrorist organizations may obtain lethal artificial intelligence but, much more worryingly, that they may actually produce military robots of lethal capability themselves, robots that obviously would not follow any international protocol on endowing them with moral capabilities. The development of these weapons in the hands of terrorist organizations certainly poses a threat to international security: quite simply put, it would catalyse the terrorist acts of said organizations and launch an entirely novel form of combat against terrorism.
Pertinent International Legal Frameworks
The pertinent international legal frameworks that touch upon the fields of AI and Big Data are listed below:
Treaty on Open Skies
Entering into force in 2002, the Treaty on Open Skies essentially provides State Parties the right to conduct short-notice, unarmed observation flights over the territories of other State Parties. This effectively enhances mutual understanding and transparency with regard to the signatories' military activities, and as such minimizes the risk of any unexpected surge in the weaponization of artificial intelligence. The treaty has notably been ratified by the following 35 states: Belarus, Belgium, Bosnia-Herzegovina, Bulgaria, Canada, Croatia, the Czech Republic, Denmark, Estonia, Finland, France, Georgia, Germany, Greece, Hungary, Iceland, Italy, Latvia, Lithuania, Kyrgyzstan, Luxembourg, the Netherlands, Norway, Poland, Portugal, Romania, Russia, the Slovak Republic, Slovenia, Spain, Sweden, Turkey, Ukraine, the United Kingdom, and the United States.
Convention on Certain Conventional Weapons (CCW)
This convention, also commonly referred to as the Inhumane Weapons Convention, was adopted in 1980 and essentially serves the purpose of prohibiting, or at least restricting, the use of weapons capable of inflicting avoidable or inexcusable suffering on soldiers (inexcusable in that it defies the international laws of war set out by the Rome Statute, amongst many other legally binding documents), as well as weapons that harm civilians indiscriminately. Of course, the convention is legally binding only in situations of international armed conflict.
Major Countries and Organizations Involved
CCW
The CCW persistently convenes in order to discuss this issue and has effectively been addressing it since 1980, when it established the first legal framework pertaining to the issue as a whole. As such, the pertinence of the CCW with respect to this issue is considerably high.
Russia
Russia is actively participating in what has been dubbed the "Artificial Intelligence Arms Race". The commander-in-chief of the Russian Air Force even stated, as early as February 2017, that Russia had effectively been working on AI-guided missiles with the capacity to decide to switch targets mid-flight, entirely autonomously, predicated on the algorithms that they had been given.
United States of America
In 2014, then-Secretary of Defense Chuck Hagel put forward the "Third Offset Strategy", which recognized that rapid advances in AI technology would pave the way to the next generation of warfare, and acted accordingly. The United States' involvement in the weaponization of AI and Big Data is in fact explicitly demonstrated by the U.S. Department of Defense's increased investment in Artificial Intelligence, Big Data and cloud computing, from a relatively mere 5.6 billion US dollars in 2011 to 7.4 billion US dollars in 2016, attesting not merely to its recognition of the military prospects of AI and Big Data but also to its active involvement in their weaponization.
China
According to a report published in February 2019 by Gregory C. Allen, "China's leadership - including President Xi Jinping - believe that being at the forefront in AI technology is critical to the future of global military and economic power competition".
Timeline of Events

(A timeline graphic originally appeared here. Source: https://concordacademy.org/wp-content/uploads/2019/03/CAMUN2019SC-ArtificialIntelAndNationalSecurity.pdf)
Relevant UN Treaties and Events
As previously mentioned, there are multiple treaties of particular pertinence to this issue as a whole, each of which has been ratified by multiple nations:
● Treaty on Open Skies, 2002
● Inhumane Weapons Convention, 1980
● S/RES/2286, 3 May 2016
Previous Attempts to Solve the Issue

All of the efforts made by the international community to "solve" the issue are manifested in the documents and treaties above. Of course, the issue is multidimensional, and solving it requires a multilateral approach; for the moment, the international community is attempting to establish an internationally ratified and recognized treaty requiring that autonomous military robots be endowed with moral capabilities that prevent, or at least minimize, undesirable and unethical casualties.
Possible Solutions
In order to solve this issue as a whole, it is imperative to address many of the sub-issues that it is made up of. As such, I have provided a multidimensional approach, including ideas on how to solve the issue and, most importantly, what needs to be addressed:
● Danger of Artificial Intelligence/Big Data
o Moral Reasoning of AI
▪ With regard to moral reasoning, it is imperative to establish an international legal framework that effectively prohibits the deployment of artificially intelligent weapons in the absence of moral capabilities: preventative measures that cannot be overridden by the AI when they obstruct the achievement of its goal.
● One way to do this would be to call for an international summit comprised of the members of the CCW and experts on the issue of AI, in order to discuss and ultimately establish a legal framework to be abided by all of the attendee nations.
o Terrorist Organizations
▪ Unfortunately, this repercussion of AI/Big Data is simply inexorable. All we can do as an international community to address this issue is minimize it by all means possible, by doing the following:
● Conducting research and investigations in order to determine whether said terrorist organizations even possess these aforementioned weapons and, if so, the level of technology they are using
● Developing our technology and consistently ensuring, in accordance with the previous point, that our AI is of superior capability, so as to successfully combat the AI utilized by terrorist organizations
● Holding accountable any and all violators of the previously established legal framework (as they would be required by international law to abide by it if it is ratified by the UNSC)
Bibliography
Allen, C., et al. “Critiquing the Reasons for Making Artificial Moral Agents.” Science and Engineering Ethics, Springer Netherlands, link.springer.com/article/10.1007/s11948-018-0030-8. Accessed 12 Feb. 2020.
“Big Data.” Wikipedia, Wikimedia Foundation, 14 Feb. 2020, en.wikipedia.org/wiki/Big_data#Government. Accessed 12 Feb. 2020.
“Ethics of Artificial Intelligence.” Wikipedia, Wikimedia Foundation, 11 Feb. 2020, en.wikipedia.org/wiki/Ethics_of_artificial_intelligence. Accessed 12 Feb. 2020.
Marr, Bernard. “Weaponizing Artificial Intelligence: The Scary Prospect Of AI-Enabled Terrorism.” Forbes, Forbes Magazine, 23 Apr. 2018, www.forbes.com/sites/bernardmarr/2018/04/23/weaponizing-artificial-intelligence-the-scary-prospect-of-ai-enabled-terrorism/#3e1d479e77b6. Accessed 12 Feb. 2020.