Written by Amber Yuan (Keystone ’25)
In an age of rapid technological advancement and innovation, it is imperative to keep the growing influence of artificial intelligence under control. Artificial intelligence in lethal weapons systems is one of the riskiest and most complex examples. Lethal Autonomous Weapons (LAWs), more commonly known as “killer robots”, are machine-learning systems that select and destroy targets with little to no human intervention. While LAWs have the potential to lower soldier death rates and carry out tasks with higher accuracy, their extensive development by countries in recent years poses far greater threats to human safety and leaves a series of questions unanswered.
An Appalling Crisis
Fatal attacks by autonomous drones have taken between 8,500 and 12,000 innocent lives in Pakistan, Yemen, Somalia, and Afghanistan. It is therefore crucial to examine the features that make Lethal Autonomous Weapons this lethal. One distinct characteristic of killer robots is how little battlefield presence and military personnel they require. Because LAWs can be deployed from a distance using remote-control technologies and GPS-tracking devices, traditional soldiers and much of their operational workload become largely unnecessary. As a result, the cost of going to war drops significantly, making conflict escalation far more probable.
In fact, the Russo-Ukrainian war has prompted the United States to continue developing autonomous drone swarms and to test-fire the prototype of a super cannon as soon as 2024, which could be used in the swarms to strike targets as far as 1,000 miles away. Because such swarms are effective enough to undermine an enemy’s nuclear capabilities in destruction operations, Russia claims to fear the neutralizing power of LAWs, especially those of the United States. This intensifying tension between opposing superpowers suggests that when countries can sacrifice fewer human soldiers by replacing them with more advanced LAWs, the incentive for conflict between nations grows, pushing humanity toward an unprecedentedly dangerous brink of war.

Moreover, the artificial-intelligence-powered nature of LAWs is extremely problematic. The algorithms within these deadly weapons are susceptible to data breaches, which leaves LAWs highly vulnerable to hacking. In a cyberattack, hackers could turn the systems directly against their developers, causing unimaginable damage or accidental strikes on the wrong targets. Consider that a cyberattack takes place approximately every 39 seconds, and that 95 percent of all data breaches target governmental organizations and technology companies, the main developers of LAWs.
In other words, when algorithmic systems in weaponry are used to make life-and-death decisions about individuals, such a high likelihood of mishaps can lead to irreversible consequences, from the deaths of innocent people to the collapse of a country’s entire killer-robot infrastructure. The problems and negative impacts of endlessly developing autonomous weapons could hardly be more dire.
Countries Refuse to Ban Killer Robots
For more than five years, the United Nations has been advocating a ban on the development and use of LAWs, including all types of killer robots. However, the proposal has received nowhere near sufficient support to be effective. The major powers leading the development of LAWs have been particularly reluctant to sign any binding agreement regulating killer robots. For instance, when experts attempted to gather signatories for a ban on all LAW systems, Russia made clear that it was unwilling to accept any form of international restriction on the use of killer robots and lethal autonomous weaponry, largely because it hopes to benefit from the upward trajectory of LAW usage for its national security. In addition, the Biden Administration just one year ago rejected a request to sign an international binding agreement on reducing the use of LAWs, and to this day has not agreed to join any strict treaty controlling the development of killer robots.

These positions make it clear that nations firmly believe in the need to proceed with the development of killer robots, despite the existential threats that LAWs’ unchecked expansion poses to individual safety and international relations. Viewed more broadly, countries including the United Kingdom, India, and Israel have likewise opposed limits on LAWs, even after the Geneva conference, convened by a United Nations disarmament committee, ended following weeks of discussion.
What is worse, the development of killer robots may proceed even more uncontrollably if a ban is imposed. Leading countries that reject the ban would likely develop autonomous weapons systems in secret, making LAWs more lethal, unpredictable, and terrifying. A country could claim that all of its deadly drones are flown by humans while simultaneously developing a robot or artificial intelligence system that also flies them, and no one would ever have the opportunity or ability to check whether a person or a machine is in charge of the flight.
This loss of transparency and rise in unpredictability mean there is no guarantee of the quality of the programs behind these weapons. In the end, multiple influential countries are most likely to continue the development and use of killer robots under all circumstances, neglecting the risk of a catastrophic loss of control. And no matter how the risks are weighed, even one LAW accident could lead to thousands of civilian casualties.
To Be Resolved: Possibilities for the Future
The uncontrolled development of LAWs needs to be addressed before its consequences result in even more massive losses of human life. Nevertheless, it is not too late to set boundaries for killer robots and put a timely “stop” to the problem before it worsens. Where restrictions can be enforced, slowing the replacement of humans on the battlefield helps maintain the current deterrence and mutually assured destruction between nations, lessening the harm to people.
In cases where the continued development of LAWs is unavoidable, reducing accuracy issues, safety hazards, and cyber risks to the minimum would produce far fewer casualties than the status quo. In the end, we can be “caging” the killers, not “making” them.