The Bangladesh Chronicle

It’s AI-powered slaughterbots, not ChatGPT, that should worry us

People take part in a demonstration as part of the ‘Stop Killer Robots’ campaign in front of the Brandenburg Gate in Berlin on March 21, 2019. Photo: Wolfgang Kumm/AFP

Of late, there has been much hullabaloo about ChatGPT, which is harmless and perhaps even fun. We should instead be talking about Artificial Intelligence (AI) powered autonomous weapons – often called killer robots – that can kill anyone, all by themselves.

When science fiction icon Isaac Asimov wrote the laws of robotics in 1942, he didn’t think of killer robots. His first law states: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” But that was the mid-20th century, and we are now well into the 21st century, when the whole paradigm of science, technology, warfare, and most importantly, ethics has changed.

What are killer robots, exactly? In formal parlance, they are called Lethal Autonomous Weapons Systems (LAWS). Also known as slaughterbots, they use AI to identify, select, and eliminate targets without human intervention. Conventional unmanned military drones can fire only when a remote human operator decides to do so. But a slaughterbot can engage a target on its own.

Human-controlled drones have been used in warfare for years. But, combined with readily available image-recognition and autopilot software, they have turned into killer robots. In a depressing show of machines’ superiority over humans, such robots took to the skies in Libya and hunted down adversaries like eagles swooping down on helpless chickens.

In March 2020, when soldiers loyal to the Libyan strongman Khalifa Haftar were retreating, dozens of small, ordinary-looking drones – Turkish-made STM Kargu-2s – came buzzing down from the sky, using cameras to scan the terrain and onboard computers to identify targets. Then they “decided” to attack, dive-bombing trucks and individual soldiers and exploding on contact, massacring the ragtag remains of Haftar’s men.

Turkey has used such AI-powered drones to patrol its border with Syria and to help Azerbaijan in its war with Armenia. Often called “loitering munitions”, these weapons autonomously patrolled an area for enemy movements and dive-bombed on their own initiative, killing men and destroying military hardware as they exploded on impact with their targets.

These killer robots are programmed to attack targets without requiring data connectivity between the operator and the munition. This marks the third time in human history that the nature of warfare has undergone a fundamental transformation. First, there was gunpowder. Then came nuclear arms. Now we are witnessing LAWS, observed Kai-Fu Lee, the influential Taiwanese computer scientist and writer. They come with lethal AI-powered autonomy and can search for, decide to engage, and obliterate a life with no human involvement at all. It is no wonder that in some of today’s wars, LAWS are carrying out reconnaissance, identification, engagement, and destruction.

All of this raises several questions.

First, take the case of identifying a target. Autonomous drones escorting a moving convoy look for any human form close enough to launch an anti-tank weapon at it. Though AI-powered, a drone has no reliable way to determine whether that human is actually carrying such a weapon or is just an innocent bystander, yet it may terminate them as a potential threat. Even if its algorithm can identify a person carrying a slung weapon, it could easily confuse that person with a school kid carrying a backpack. Undeterred by such potentially disastrous AI pitfalls, the American AI company Clearview has provided its facial recognition technology to Ukraine to identify individuals, both dead and alive, on battlefields and elsewhere. Clearview is already facing several lawsuits in the US, and other legal challenges and fines elsewhere. One major concern about facial recognition technology is that it can be used by authoritarian governments against minorities and political opponents. The technology is nonetheless finding wider use, despite ongoing concerns about intrusion into privacy and its well-documented biases.
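To make that pitfall concrete, here is a minimal, purely hypothetical sketch (in Python) of a confidence-threshold targeting rule. The class labels, scores, and threshold below are invented for illustration and do not describe any real weapon system; the point is only that an image classifier can return nearly identical scores for a fighter with a slung rifle and a student with a backpack, and a simple threshold rule will “engage” both.

```python
# Hypothetical sketch: how a naive confidence-threshold targeting rule can
# misclassify. The labels, scores, and threshold are invented for
# illustration only; they do not describe any real weapon system.

ENGAGE_THRESHOLD = 0.70  # assumed cut-off above which the rule "engages"

def decide(detections: dict[str, float]) -> str:
    """Return the action a naive rule would take for one detected person.

    `detections` maps candidate labels to classifier confidence scores.
    """
    label, score = max(detections.items(), key=lambda kv: kv[1])
    if label == "person_with_slung_weapon" and score >= ENGAGE_THRESHOLD:
        return "ENGAGE"  # irreversible action taken by the machine alone
    return "HOLD"

# A fighter with a slung rifle and a student with a backpack can produce
# very similar score distributions from the same classifier.
fighter = {"person_with_slung_weapon": 0.74, "person_with_backpack": 0.22}
student = {"person_with_slung_weapon": 0.72, "person_with_backpack": 0.68}

print(decide(fighter))  # ENGAGE
print(decide(student))  # ENGAGE -- the innocent bystander is misclassified
```

A human operator shown those two frames might hesitate over the second one; the threshold rule cannot.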

The second issue is no less grave, and perhaps more so. AI is advancing fast and is expected to advance even faster, while costs fall as computers and processors become more powerful. This will allow not only smaller nations but also non-state actors to procure, and perhaps develop, their own LAWS. What if a terrorist outfit gathers a swarm of 10,000 drones that could wipe out half a city? Or hires an assassin drone to kill a political opponent? A human suicide bomber would no longer be needed; an autonomous assassin would do the job without any hesitation about self-destruction.

Then there is the serious question of ethics. How would we define the line of accountability and determine who is responsible in case of an error? With conventional systems, critical decisions rest with humans at different levels of authority. But when the killing is assigned to an autonomous weapons system, where accountability should lie becomes unclear. That an algorithm decides whether to take a life goes against the fundamental principles of humanity. UN Secretary-General António Guterres tweeted in March 2019, “Autonomous machines with the power and discretion to select targets and take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.”

But all the big players, such as the US, Russia, and the UK, are against such a ban. LAWS are too convenient, too lethal, too affordable, and too effective not to be used. In 1983, Soviet lieutenant colonel Stanislav Petrov did not follow his computer’s instruction to retaliate when it mistakenly reported an incoming American nuclear attack, thereby saving the world from nuclear war. With LAWS, such fateful decisions will all be taken by algorithms. Humans will have no chance to intervene and can only watch.

Dr Sayeed Ahmed is a consulting engineer and the CEO of Bayside Analytix, a technology-focused strategy and management consulting organisation.
