How Will Killer Robots Affect the Pacific Islands?

Published: February 28, 2022
Author: Jeanne Wills

An arms race in lethal autonomous weapons is underway. Lethal autonomous weapons, also known as killer robots, are a threat that the international community must address now. SafeGround is looking at how these emerging weapons may affect the Pacific Island nations. The region's history of nuclear weapons testing and of munitions dumping during and after WW2 gives ample cause for concern. We do not want to see peaceful Pacific Island nations damaged again by weapons testing, or by wars in which killer robots are used. We need a legally binding treaty that bans killer robots.

The Burden of War & Weapons in the Pacific

Through our work in reducing the impact of war, we have seen how the Pacific region has been repeatedly scarred by conflict. Pacific countries have never started an international war, yet they were invaded by other countries waging wars in their region. And still, more than seventy-five years after WW2 ended, lives and livelihoods in the Pacific are threatened by explosive remnants of war littered across the land and ocean.

Barely a year after WW2 ended, another dark chapter began for many Pacific Islanders: three foreign states began using their land to test nuclear weapons. 2021 marked seventy-five years since the US launched its nuclear testing program in the Marshall Islands, where 67 tests were conducted over twelve years. The UK and France followed suit, and between 1946 and 1996 more than 300 nuclear weapons were detonated in the Pacific region. The test sites are still contaminated with radioactive material. The harm is felt to this day: Pacific Islanders are still denied access to their land and to good health.

What further impact could killer robots have in the Pacific?

Will killer robots recognise Pacific Islanders as civilians?

Lethal Autonomous Weapons explained

Lethal autonomous weapon systems, or killer robots, are defined by the United Nations as weapons that “locate, select, and engage human targets without human supervision”. Here, to ‘engage’ a target is to attack, and potentially kill, that person. In short, killer robots are weapons that use artificial intelligence to decide whether to attack a human target, based on whatever data they receive through sensors such as cameras and other instruments. The international community has expressed concern that killer robots have too many ethical and functional flaws, and that they make the future of warfare look extremely worrying.

Robots cannot be programmed to feel human compassion or to exercise human reasoning. Artificial intelligence systems are vulnerable to cyberattacks. Who is accountable when civilians are killed by killer robots? In times of war, the act of killing should not be left to a robot; it is morally wrong. Killer robots will also lower the threshold for going to war: they make it easier to press a button and begin a conflict, and easier to release vast machine armies. The scale of battle with killer robots will come down to economics, to who can afford the larger arsenal, and arsenals are likely to run into the millions of weapons.

The systems could be programmed to recognise certain shapes and symbols and to make decisions accordingly. But with human compassion and reasoning removed, could a killer robot programmed to ‘engage’ anything gun-shaped in a warzone distinguish a child playing with a toy gun from a combatant? How would a killer robot reason on seeing an armed soldier enter a hospital marked with a red cross? Does the fact that the soldier enters the hospital illegally, against international norms, make the hospital a target?

Killer robots are already being deployed

The Kargu drone, developed by the Turkish company STM, is an autonomous system that has reportedly been deployed in Libya. The Kargu uses AI-driven facial recognition technology and can reportedly kill with no human assistance or oversight. It is similar in size to a domestic drone and is designed to carry 1.5 kg of explosives (STM n.d.); three grams of explosives is enough to kill a person at close range (Russell 2021).

Racial bias in AI

Autonomous weapons, especially those with facial recognition capabilities, would have a disproportionate impact in the Pacific because of the racial algorithmic bias that exists in facial recognition technology (Nouri 2021). Artificial intelligence is increasingly relied upon to make decisions about hiring, criminal sentencing and advertising, to name a few areas. An AI system learns from a set of training data and produces outputs about people and objects. We are seeing that these data sets are faulty: they are unrepresentative and reflect historical inequalities (Lee, Resnick & Barton 2019). A 2018 study conducted at MIT examined three publicly available facial recognition systems and showed that the error rate for identifying light-skinned males was 0.3%, while the error rate for identifying darker-skinned females was over 34% (Buolamwini & Gebru 2018). In short, facial recognition technology performs poorly on historically underrepresented groups.
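To make this concrete, here is a minimal sketch of how an audit like the MIT study arrives at per-group error rates. The code is Python, and the records in it are hypothetical placeholders invented for illustration; they are not data from the study.

```python
# Minimal sketch: computing per-group error rates from audit results.
# The records below are hypothetical, not real study data.
from collections import defaultdict

# Each record: (demographic group, whether the model's prediction was correct)
results = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("lighter-skinned male", True), ("darker-skinned female", False),
    ("darker-skinned female", True), ("darker-skinned female", False),
    # ... a real audit would use thousands of labelled faces per group
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

# A model can look accurate overall while failing badly on one group,
# which is exactly the kind of disparity the study reported.
for group, total in totals.items():
    print(f"{group}: error rate {errors[group] / total:.1%}")
```

Even this toy example shows why an aggregate accuracy figure can hide the failure: the disparity only appears once errors are broken down by group.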

How can we expect autonomous weapons to differentiate between a civilian and a soldier in the chaos of war when facial recognition systems, even with human oversight, cannot reliably identify race and gender in everyday life?

The world needs a treaty

The need for a pre-emptive ban on autonomous weapon systems is urgent. A minority of parties to the United Nations Convention on Certain Conventional Weapons (CCW) is determined to block the majority of countries that support a new legally binding instrument. At the end of last year, the Sixth Review Conference of High Contracting Parties to the CCW deliberated over four days in Geneva on how to regulate killer robots. The Campaign to Stop Killer Robots recognised this as an urgent opportunity “to recognise the need to draw clear legal and moral lines to ensure meaningful human control over the use of force, and respond with a strong and focused mandate to begin negotiations on a legally binding instrument on autonomous weapons” (Jones 2021). However, large military powers already investing in the development of autonomous weapon systems, most notably the US and Russia, used the CCW's consensus rule as an effective veto to block negotiations on a legally binding instrument. The result fell starkly short of what the world wants and needs. The international community supports a pre-emptive ban: the majority of countries, artificial intelligence experts, civil society groups and public opinion are all on the same side. It has become clear that an alternative avenue must be pursued to form a treaty and ban these emerging weapons.

Pacific Voices on lethal autonomous weapon systems

SafeGround's main area of focus has been reducing the impact of legacy and emerging weapons. Our ‘Pacific Voices on AI’ project focuses on increasing the Pacific's involvement in the conversations surrounding the dual use of AI and lethal autonomous weapon systems. Our aim is to engage Pacific nations on the issue of autonomous weapon systems and to build awareness and capacity for Pacific states to join a legally binding instrument. We recognise how AI is being used for good in the Pacific context, assisting with environmental conservation, medicine, education and communication. AI exists in everyday life across the globe, including in the Pacific. For this reason, we need to address the issues of dual-use and emerging weapons technology. These autonomous weapon systems will affect the Pacific. We urge you to stand up against harmful AI before it is too late.