Autonomous Weapons: The Ethical Crossroads of AI and Modern Warfare

Reading Time: 3 minutes
An autonomous tank is demonstrated in France on July 18, 2018. (AP)

Imagine a nuclear missile with an explosive yield equivalent to thousands of kilotons of TNT, capable of producing thermal damage and shock waves that extend for miles from the initial strike point. Next, imagine an advanced AI system capable of data analysis and targeting with such meticulous precision that it is borderline terrifying. Combine the two and you have what are called autonomous weapons – systems that can identify and engage targets without any human input, ostensibly eliminating the margin of human error.

The widespread adoption of AI, coupled with growing innovation in weapons development, has brought these two spheres of technology together. The result? The field of autonomous weaponry, which is revolutionizing modern warfare.

However, as with any technological breakthrough, this development does not come without risks. Combining AI with weapon systems raises grave concerns about safety and ethics, as well as questions of adoption, deployment, and regulation.

The first documented use of an autonomous weapon to kill is thought to have occurred in Libya in March 2020. The weapon was a Turkish-made Kargu-2 drone that reportedly attacked members of the Libyan National Army. However, the development of autonomous weapon technologies can be traced back to the early twentieth century. The first US weapon system with autonomous functionality was deployed in World War II: a passive acoustic homing torpedo that used hydrophones to locate and track enemy submarines.

However, development did not end with World War II; countries today employ these technologies more than ever. For example, the Israeli-made Harpy is an autonomous anti-radiation loitering munition designed to attack enemy radar systems. In South Korea, numerous types of autonomous sentry guns are in use, with integrated systems that carry out surveillance, tracking, firing, and voice recognition. And in Australia, there is the Jaeger-C – an unmanned ground vehicle that charges at targets and detonates an explosive once it gets close, all without any human maneuvering or input.

Several issues and concerns arise in discussions about autonomous weapons. For instance, there is the question of who is responsible for the decisions these systems carry out. If the artificial intelligence makes a choice that does not reflect the intention of its deployer, who is to blame?

There is also the issue of compliance with international law. Historically, states have not always stayed true to the treaties and agreements they sign on to – as shown by the nuclear arms race between the Soviet Union and the United States after the Nuclear Non-Proliferation Treaty (1968), the Soviet Biopreparat program despite the USSR's signing of the Biological Weapons Convention (1972), and Syria's continued use of chemical weapons even after acceding to the Chemical Weapons Convention in 2013. With these examples serving as cause for concern, the effectiveness of regulation remains uncertain.

Next, there are questions of safety that come with the technology itself. Today's large language models and chatbots exhibit algorithmic biases and are not immune to producing faulty output. Imagine that being the case in high-stakes scenarios such as large-scale weapons deployment; the consequences would be disastrous. The potential for malfunction and faulty technology poses a serious threat to the integration of AI into military systems.

Even the counterargument – that AI integration would not be faulty but remarkably exact – runs into an entirely different concern. What if that newfound precision escalates warfare between states? By making attacks easier to carry out, it could also make them more frequent.

With all that being said, advocates of the technology point to potential benefits. For example, there is the prospect of significantly reducing both civilian and soldier casualties: AI weapons would reduce the need for human soldiers and are predicted to have a much smaller chance of error than human-operated ones. The possibility of enhanced efficiency and precision is one of the most common justifications offered in defense of AI integration.

Current efforts and proposals for regulation are being led by international bodies such as the UN and the International Committee of the Red Cross, which have called for a treaty to regulate and prohibit autonomous weapons systems by 2026. United Nations Secretary-General António Guterres has stated that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.”

There is no doubt that autonomous weaponry holds significant implications for the future of warfare and possibly humanity itself, and it may also pose a serious threat to international peace and security. While regulations are being discussed, the future of these technologies and the ways they will shape military efforts is not yet clear. The field remains an ethical slippery slope, with both the potential for conflict and some proposed benefits. As we wait to see how these developments unfold, ask yourself: can humanity afford to leave war in the hands of machines?

Written by Saanvika Gandhari
