By Dr. Ali Masoudi Lamraski
Physically removing weapon operators and controllers from the battlefield has long been considered a primary driver of advances in military weapons technology. This tendency has led to the so-called “third revolution in warfare,” following gunpowder and nuclear weapons: Lethal Autonomous Weapon Systems (LAWS).
Over the past decades, as robots have found their way out of the heart of science fiction and into reality, certain States have given them the power to decide over the life and death of human beings. The development of LAWS has continued unabated during these years, and they have been the subject of considerable research and investment by various States. As a result, LAWS are no longer conjured up by Hollywood for their entertainment value; instead, they are increasingly used in today’s armed conflicts.
While there is no standard, universally accepted definition of some of the key terms related to LAWS, attention must be given to the fact that “autonomy” as used in the field of robotics should not be confused with its use in philosophy, politics, individual freedom, or common parlance; rather, “autonomy in robotics is more related to the term ‘automatic’ than it is to individual freedom.” In their broadest sense, LAWS are automated systems that interact with their environment, can operate outside direct human control, and can independently dispense lethal force in the battlespace based on internal programming. Taking a functional approach, the ICRC has authoritatively defined LAWS as weapon systems “that can select (i.e., search for or detect, identify, track, select) and attack (i.e., use force against, neutralize, damage or destroy) targets without human intervention.”
Current technology is headed in a direction in which human involvement may be excluded entirely from the critical functions of LAWS. This is where grave concerns and imperative questions arise. The question of how to respond to these concerns has steadily climbed the international agenda since the issue was first discussed at the UN Human Rights Council in 2013 and taken up in intergovernmental discussions under the framework of the CCW in 2014. The ICRC, in its collaborative work with the Stockholm International Peace Research Institute, categorized the challenges posed by the use of LAWS in armed conflicts as challenges of (a) “numbers”, (b) “context”, and (c) “predictability”.
From an IHL perspective, it is evident that every weapon can be used in an unlawful manner. Nevertheless, the inherent attributes of certain weapons render their use, in some or all circumstances, unlawful per se. According to the ICRC, certain LAWS would be inherently unlawful under IHL because “their effects, in their normal or expected circumstances of use, could not be sufficiently understood, predicted and explained”. Conversely, for LAWS not to be considered inherently unlawful, it must be ensured that their operation will not produce unlawful outcomes under the fundamental principles of IHL – namely military necessity and the principles of distinction, proportionality and precaution.
Adherence to these principles requires analysis and application of the mind to each situation as and when it arises. Rather than quantitative and technical indicators, they require cognitive, qualitative, evaluative, and context-specific judgments on the one hand, and conduct-, intent- and causality-related legal assessments on the other. These are qualities that cannot be encoded into a weapon control system. A machine is inherently incapable of tackling such situations, no matter how high its degree of automation: it remains subject to its programming, which lacks the ability to make any of these judgments in a given case and carries with it a degree of unpredictability and unreliability.
With LAWS posing serious challenges to compliance with the fundamental rules of IHL protecting civilians, action must be taken to prohibit the use of fully autonomous weapons, and regulations must be adopted so that humans can exert “meaningful control” and judgment over the use of LAWS. Increased autonomy, whether through the development and use of more advanced and complex weapon systems or the deployment of swarms of remote-controlled robots, exacerbates concerns about LAWS’ compliance with IHL. Accordingly, the number of stakeholders seeking a ban on their use is growing. Since 2013, ninety-seven States have publicly elaborated their views on LAWS, the vast majority of which consider human control and decision-making critical to the acceptability and legality of LAWS. Among these, thirty States have called explicitly for a ban on LAWS, including China and Pakistan in the Asia-Pacific region.
At the same time, the Campaign to Stop Killer Robots, a coalition of 165 non-governmental organisations in 65 countries, including Human Rights Watch, has been working to ban fully autonomous weapons and thereby retain a regime requiring meaningful human control over weapons. In addition, since 2018 the UN Secretary-General has repeatedly called upon States to ban LAWS, considering them “politically unacceptable and morally repugnant”. As of May 2021, the ICRC has endorsed a ban on unpredictable autonomous weapons, as well as on the use of autonomous weapons to target humans.
In sum, to achieve compliance with IHL, it seems necessary that the use of LAWS to target human beings be prohibited. Further, LAWS that by their nature select and engage targets without meaningful human control should also be prohibited. Finally, meaningful human control must be preserved at all times, particularly during the operation of LAWS on the battlefield, with the option of deactivation and fail-safe mechanisms. This can be achieved through measures regulating the design and use of those LAWS that are not prohibited and, as the ICRC has indicated, through a combination of legally binding limits on the types of targets, the duration, geographical scope and scale of use, and the situations in which LAWS may be used.
Dr. Ali Masoudi Lamraski has a PhD in International Law from Shahid Beheshti University, Tehran, Iran. He is an international law, human rights and humanitarian law researcher. He can be reached via email at firstname.lastname@example.org.
The author has thoroughly discussed these issues in the article titled “Preliminary Remarks on Lethal Autonomous Weapon Systems from an IHL Perspective.”