Artificial Intelligence in Weapons Systems: Opportunities, Risks and Skepticism
The technological acceleration of the past few decades in processors, big data and machine learning has led to revolutionary progress in artificial intelligence. These developments have renewed attention to, and increased pessimism about, the potential risks arising from the use of this technology, particularly when applied to the military domain. For this reason, scholars and experts have begun calling for new and stricter regulation, greater control and even a ban on this technology. However, it is worth investigating whether these widespread fears are legitimate and whether artificial intelligence poses more risks than opportunities. This Spotlight will try to answer this question by focusing on three main aspects. First, we will summarize AI’s origins and developments. Second, we will analyze the implications of this new technology when applied to the military domain and weapons systems. Third and finally, in light of the previous analysis, we will assess scholars’ and experts’ main criticisms concerning the dangers of AI’s use.
Artificial Intelligence: development and skepticism
The development and acceleration of artificial intelligence since the early 2000s have been accompanied by an equal wave of criticism and pessimism regarding its use. In particular, the main concerns relate to the use of AI in the military domain. With the advent of new autonomous weapon systems, some experts and scholars argue that we are facing a new military revolution, one capable of redistributing military power, causing new, more frequent and more lethal conflicts, and upending the world balance of power. Before verifying these predictions, however, it is appropriate to understand the origins, development methods, functions and future growth prospects of artificial intelligence. The widespread pessimism towards AI rests, in fact, on three generic assumptions: AI’s continuous and growing progress, the predominance of the commercial sector driving its wider diffusion and, finally, AI’s great pervasiveness. The purpose of this article is hence to understand the limits of both the technological development and the skepticism by analyzing AI’s evolution, its technical and technological articulation, and its military application.
Technological Revolution: AI’s origins and developments
In this section, we will analyze the technological developments that led to the enormous acceleration of artificial intelligence, define what artificial intelligence is, what functions it can perform and, finally, what kind of technological structure it needs to continue its expansion. We will then describe the two approaches to AI, from Good Old-Fashioned AI (GOFAI) to Deep Learning (DL), a transition made possible by exponential progress in processors, big data and machine learning.
The continuous advances and developments in technology have produced three industrial revolutions in the history of humankind, engendering social and political changes. The current era is characterized by an exponential increase in the development and diffusion of technology, deeper and faster than the previous revolutionary waves and capable of leading to significant transformations with socio-economic and international political implications. The Covid-19 pandemic itself has, paradoxically, led to a further acceleration towards the digitization of our lives, forcing remote work and the use of digital technologies. Today, indeed, we are facing what some, like Klaus Schwab, call the fourth industrial revolution, and what others call the Second Machine Age, as evidenced by the increase in computational power and levels of precision, and by the advent and development of new electronic and computer technologies, from artificial intelligence (AI) to machine learning (ML) – the computational key of artificial intelligence – and big data (BD).
Artificial intelligence, the heart of this new revolutionary technological wave, is defined by some as a general-purpose technology (GPT); more generally, it can be described as a type of technology aimed at simulating the intelligence of human beings, whose impact, it is already possible to predict, will be felt in multiple fields, from the economy to the military. Artificial intelligence can be applied to different domains for different functions: it can be used to direct physical objects, such as robots, without human control; to process and interpret information; or, through the combination of multiple specific functions, to enable new forms of command and control (C2). There is no consensus among scholars on what its main field of application is or should be. However, it is a common opinion that its impact, precisely because of its pervasive nature, will be such as to generate changes, transformations and renewed competition in the paradigm of international politics. As for its scope, there are two types of artificial intelligence: one is defined as general, capable of performing multiple functions in parallel and representing the type of AI that, in a future scenario, could replace human beings; the other model, instead, takes a more limited and specialized approach to specific fields of action and is therefore defined as narrow.
Moreover, there are two meta-approaches to AI: one top-down and the other bottom-up. The first, also known as Good Old-Fashioned AI (GOFAI), is based on a deductive approach, in which all information must be encoded and entered ex ante. This was the predominant approach until the 2010s, and precisely because of this structure, which requires the theoretical codification of every possible scenario, its limits and learning gaps are easy to understand. Since the 2010s, computational power, which is the basis of the development of AI, has undergone an exponential increase thanks to the improvement of three strategic components: processors, algorithms and big data.
The development of processors is evident in their progressive specialization. There are, indeed, two different types of microprocessors: specialized ones used for specific functions and sequences, and generic ones created for multiple and parallel applications. Specialized processors have been developed in increasing numbers, particularly since 2010, because of growing demand driven by their fundamental role in the functioning of machine learning (ML) algorithms and their applicability to several fields. The improvements in algorithms, on the other hand, are the consequence of new ML techniques and of the expansion in the use and functions of software, including its application to artificial intelligence systems. The progress in algorithms was also made possible by the increase in the production and availability of data. With the digitization of different types of information and the spread of portable devices, such as telephones, laptops and tablets, the amount of data available has become exorbitant. To quantify these transformations, consider the increase in data production, from five exabytes in 2003 to 59 zettabytes (59 trillion gigabytes) in 2020, or the rapid decrease in the cost of 3D Lidar (light detection and ranging) sensors, from $30,000 in 2009 to just $80 in 2019. Calculation sequences, moreover, that would have taken 89 years in 1982 are now solved in a matter of seconds.
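As a quick sanity check of these figures (assuming standard decimal SI prefixes), the zettabyte-to-gigabyte conversion and the implied growth factor work out as follows:

```latex
% Assuming decimal SI prefixes: 1 ZB = 10^{21} B and 1 GB = 10^{9} B.
\[
59\ \mathrm{ZB} = 59 \times 10^{21}\ \mathrm{B}
                = 59 \times 10^{12}\ \mathrm{GB}
                \approx 59\ \text{trillion GB}
\]
% Implied growth in annual data production, 2003--2020:
\[
\frac{59 \times 10^{21}\ \mathrm{B}}{5 \times 10^{18}\ \mathrm{B}} \approx 11{,}800\times
\]
```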
These three enhancements, in microprocessors, ML techniques and available data, have led to renewed attention and new investments in the field of artificial intelligence, allowing the transition from GOFAI to the inductive, or bottom-up, approach based on deep learning: that is, allowing the AI to learn and improve itself thanks to the patterns, trends and predictive capabilities that can be extracted from the enormous amounts of data it is fed and from its interactions with the world. The concept as such has existed since 1956, when it was developed at Dartmouth College in Hanover, New Hampshire, but its development and application have accelerated since 2010 thanks to the exploitation of deep learning.
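To make the contrast between the two meta-approaches concrete, the following minimal sketch in Python (all names and the toy classification task are hypothetical, not drawn from any real system) sets a hand-coded, GOFAI-style rule set against a small model that induces its behaviour from labelled examples:

```python
import numpy as np

# Top-down (GOFAI): every rule is encoded ex ante by the programmer.
# Any scenario not anticipated here falls through to "unknown".
def gofai_classify(speed_kmh: float, altitude_m: float) -> str:
    if speed_kmh > 900 and altitude_m < 100:
        return "threat"
    if speed_kmh < 50:
        return "benign"
    return "unknown"

# Bottom-up (learning-based): behaviour is induced from labelled data.
# A one-layer logistic model stands in for the far deeper networks
# actually used in practice.
def train_logistic(X: np.ndarray, y: np.ndarray,
                   epochs: int = 500, lr: float = 0.5) -> np.ndarray:
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))    # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)    # gradient descent step
    return w

# Toy training set: normalized [speed, altitude] pairs with labels
# (1 = threat, 0 = benign) learned from examples rather than rules.
X = np.array([[0.95, 0.05], [0.90, 0.10], [0.10, 0.80], [0.20, 0.90]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w = train_logistic(X, y)
preds = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
print(preds)  # expected: [1 1 0 0]
```

The point of the sketch is structural: the first function fails silently outside its hand-coded cases, while the second generalizes only as well as the data it was trained on, which is precisely the trade-off underlying the limitations discussed later in this article.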
Military implications
The defence and security sector has been integrating AI into its force structures for several years. The consequences of these transformations for the military, defence and security domains are still at an early stage, and the implications of their use are not entirely clear. Certainly, however, these new technologies offer new opportunities and benefits but also raise questions, challenges, risks and concerns. The debate, though, often focuses solely on the consequences, arousing concern and criticism about an arms race, the risks arising from the greater diffusion and pervasiveness of autonomous weapon systems and a potential change in the balance of power. In this section, we will focus on these points and analyze whether or not these fears and concerns are justified.
The great powers are indeed competing in the research and development of military and commercial technologies related to artificial intelligence, but this does not correspond to the traditional definition of an "arms race." Yet the acceleration in the development and potential uses of artificial intelligence in the military field is evident, and is emphasized by political leaders, CEOs and academics as a real military revolution. Examples of this are the Chinese government's 2017 goal of achieving world hegemony in the field of artificial intelligence, the introduction of military AI strategies by the most important European powers and NATO, and the Pentagon's Project Maven.
Artificial intelligence aims at imitating human behaviour and reasoning through a chain of information that goes from perception to cognition and finally to action. In practice, this means that autonomous AI systems determine their own actions through probabilistic reasoning and calculations over sensor inputs: they must perceive the surrounding world and deconstruct it, but without genuine logical connections. A task that would require a computer to process enormous amounts of data and undergo training is carried out in a few seconds by the human brain, demonstrating the paradox of this so-called fourth industrial revolution: the more new technologies become articulated, developed and complicated, the more human beings are and will be needed to control, direct and interpret them.
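The perception-cognition-action chain can be illustrated as a minimal sense-think-act loop. The sketch below is purely illustrative: all names, thresholds and the toy probability model are assumptions for exposition, not any fielded system's logic:

```python
import random

def perceive() -> dict:
    """Perception stage: sample noisy readings from simulated sensors."""
    return {"distance_m": random.gauss(100.0, 5.0),
            "bearing_deg": random.gauss(45.0, 2.0)}

def cognize(obs: dict) -> float:
    """Cognition stage: a toy probabilistic estimate that the detected
    object is approaching, standing in for a learned model."""
    return max(0.0, min(1.0, (150.0 - obs["distance_m"]) / 150.0))

def act(p_approach: float) -> str:
    """Action stage: choose a behaviour from the estimated probability.
    Fielded systems would keep a human in or on the loop at this step."""
    return "track" if p_approach > 0.5 else "hold"

for _ in range(3):  # three iterations of the sense-think-act cycle
    observation = perceive()
    print(act(cognize(observation)), observation)
```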
Opportunities. The military implications of AI are manifold and bring various advantages to the defence and security domain. First, AI allows the extraction, collection, transmission and analysis of greater volumes of data, thanks to improvements in radars and sensors, enhancing intelligence, surveillance and reconnaissance (ISR) activities. Second, it allows for logistical optimization and improvement through unmanned autonomous vehicles on land, in the air or at sea. Third, it increases the accuracy of enemy targeting through precision-guided weapons, which can reduce collateral damage to civilians. Furthermore, it accelerates the tempo of warfare, consequently increasing the need for predictive analysis and decision-making, which AI can perform faster than a human brain. By using AI, in short, one may gain speed and precision thanks to algorithms that accelerate assault times and discriminate enemy targets more accurately, at speeds beyond human capability.
Risks. In this industrial revolution, machines are attempting to replace the cognitive abilities of individuals. At this stage, however, algorithms and robotics are still mainly used for 4D missions (dangerous, dull, dirty, dumb), as the limits of deep learning do not yet allow uses that entirely replace human beings. The use of AI in the military field also introduces a command and control (C2) problem related to human-machine interaction, as AI raises a question of trust and reliability between the (military) commander and robots or automated systems. This concerns how to train and instruct humans to work better with these systems, for example by understanding human psychology in interaction with autonomous and automated systems. Finally, the industrial base must be considered: in the post-industrial era, characterized by software and big data, it will be necessary to adapt the industrial base to this new phase in order to ensure future security, while keeping a proper balance between conventional capabilities and emerging and disruptive technologies (EDTs), such as AI.
Despite the incredible developments in the field of autonomous systems, whether air, land or sea, the transition to effective implementation in military operations is still a long way off. This is partly a consequence of the necessary organizational, financial and structural adaptations and their enormous costs; partly due to the priority given to the development of traditional vehicles and weapon systems; and finally, according to some, because at this stage the leading sector in the development and use of unmanned vehicles (UVs), such as drones or driverless cars, is precisely the commercial and private sector.
Criticisms and fears. The development and integration of autonomous weapon systems have raised widespread debate and growing concerns about the beginning of a robotic era and diminishing human control in the military. The most common concerns of scholars and experts range from the risk of greater instability and more conflicts to the destabilization of the international order and balance. The criticisms generally raised rest on three main assumptions: a greater diffusion of autonomous systems in the years to come, a greater pervasiveness of these systems and the continuous progress of AI. In what follows, starting from these criticisms, we analyze the risks that AI may or may not entail.
The first concern about the development of AI, driven mainly by the commercial sector, is its more rapid and pervasive spread, for two reasons: the decrease in unit production costs and the easier dissemination of these technologies by private actors incentivized by profit and economies of scale. Greater propagation would limit military strategic advantages on the one hand and expand the pervasiveness of these technologies on the other. The associated risk is that more actors would have easy access to both commercial AI systems and lethal weapon systems, resulting in more conflicts and greater international political instability. This type of criticism, however, is based on the assumption that AI is a cheaper technology, easier to implement, replicate and disseminate than traditional weapon systems. While it must be recognized that commercial technology is relatively cheap, it is also necessary to specify that once transferred to the military field, that technology must meet increasingly specific and expensive requirements for a smaller number of units, which prevents the exploitation of economies of scale.
The second type of criticism concerns the pervasiveness and acceleration of the use of AI. Another reason why lethal autonomous weapon systems, or LAWS, create apprehension and aversion, both in public opinion and among the military themselves, is that a broader spread could change the character of warfare, making it faster, more lethal and more unstable. According to some scholars, drones, for example, are easier to produce and imitate and less expensive, and could change the dynamics of international politics by redistributing military power, resulting in greater instability and more lethal conflicts. Indeed, the increasing use of robots incorporating artificial intelligence, or autonomous weapons such as unmanned autonomous vehicles (UAVs), is on the one hand hailed as the first stage of a new technological-robotic era. On the other hand, it is fuelling concerns, ultimately leading to the fifteen-year-long debate on the possibility of banning what are defined as "killer robots" that would lead to crises, violence and violations of human rights, for example due to a malfunction or inaccuracy of the algorithm. This criticism, however, can be seen as exaggerated. In fact, there have been no cases of wars conducted solely by killer robots, and the number of losses inflicted by drones in recent conflicts, for example in Libya, Syria or Nagorno-Karabakh, is considerably lower than the casualties caused by traditional clashes.
These fears and apprehensions, more generally, overstate the factual reality of the development of UAVs and of artificial intelligence itself. Both meta-approaches used in artificial intelligence for autonomous weapons have limitations. The deductive, or top-down, approach, based on ex-ante programming that must cover every eventuality, is unattainable with current technology, especially if the killer robot has to act in an ever-changing environment. The inductive, or bottom-up, approach is instead based on collecting vast amounts of data from which to extrapolate trends and models that the software learns through machine learning systems. This approach exposes military technologies to more significant vulnerabilities and operational risks on the battlefield, such as cyber-attacks. It also has hardware limitations: with the recent explosion of deep learning, current computer architectures cannot handle the enormous amounts of data produced and lack the computational power to process them.
After describing and analyzing the technological development of artificial intelligence, its potential application to the military domain and the criticisms levelled against it, we can conclude that the widespread pessimism regarding this new type of technology has clear limits: it is neither supported nor legitimated by the current stage of AI's development, which makes AI a promising and revolutionary tool for military and civilian purposes, but one still far from having immediate disruptive effects on the character of warfare.