By Dr. Ali Masoudi Lamraski
Over the past decades, the development of lethal autonomous weapons systems has continued unabated, and various States have dedicated significant resources to research and development on these systems. While there is no standard, universally accepted definition of the key terms involved, in their broadest sense lethal autonomous weapons systems can operate outside direct human control and independently dispense lethal force in the battlespace on the basis of their internal programming. Although these weapons, like all others, must comply with the law of armed conflict, there is something troubling about this prospect. On the one hand, there is growing advocacy asserting that these weapons can adhere to the law and even deliver more humanitarian outcomes in their dispensation of violence; such advocacy assumes much about the normativity of the law. On the other hand, grave concerns persist about these weapons' level of autonomy. The fundamentals of IHL, best exemplified by the principles of distinction, proportionality, precaution and humanity, require qualitative, context-dependent cognitive reasoning and judgment, qualities that cannot be encoded into a weapon control system and of which machines are inherently incapable. For this reason, humanitarians, roboticists and States have struggled to define the extent to which weapons may be developed to conduct military operations without human control. This paper seeks to canvass the legal challenges posed by the use of autonomous weapons systems from an IHL perspective and to interrogate how the legal framework should develop on this matter.
Keywords: Lethal Autonomous Robots, Autonomous Weapons Systems, Human Control, New Technologies, Contemporary Challenges to IHL