STAznanost

Experts call for AI regulation in defence and security sector

Ljubljana, 28 December - Autonomous weapons based on artificial intelligence (AI) systems used to belong mainly to the realm of speculation and science fiction. But computer vision, robotics and the integration of AI with other technologies have advanced to the point where such weapons are entirely possible. Warning of the ethical dilemmas involved, experts are calling for regulation.

AI-based facial recognition technology has become commonplace, and experts believe route planning and tracking control, the key components of autonomous vehicles, will soon follow suit.

There are practically no technological barriers left standing in the way of the development of autonomous weapons, say Ivan Bratko and Danijel Skočaj, lecturers at the Ljubljana Faculty of Computer and Information Science.

The key is teaching machines to understand what they see

Computer vision is one of the key fields which, in addition to many other applications, enables the development of autonomous weapons and video surveillance systems, according to Skočaj.

It is a field of computer science that deals with algorithms for analysing and interpreting visual data such as photos, videos or 3D point clouds.

Methodology-wise, the solution to most computer vision problems lies in machine learning - over the past decade mainly in deep learning. This means the machine is trained on a vast number of images together with descriptions of what they contain, Skočaj explains. "That way the machine learns to understand what is featured in similar images, and the application of this knowledge is of course very broad."
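
To give a sense of what such a training process looks like in practice, below is a minimal, hypothetical sketch in Python using the PyTorch library; the random images, the ten classes and the small network are illustrative assumptions, not details of any system described in this article.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for a real labelled image set: images plus labels saying what they show
images = torch.randn(256, 3, 32, 32)        # 256 RGB images, 32x32 pixels (random placeholders)
labels = torch.randint(0, 10, (256,))       # 10 hypothetical classes
loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

# A small convolutional network; real systems use much larger architectures
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 8 * 8, 10),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                      # several passes over the labelled data
    for batch_images, batch_labels in loader:
        predictions = model(batch_images)   # what the model currently "sees" in the images
        loss = loss_fn(predictions, batch_labels)
        optimizer.zero_grad()
        loss.backward()                     # adjust the weights to reduce the error
        optimizer.step()

After training on enough labelled examples, such a model can be shown a new image and asked which of the learned classes it most likely contains.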

Examples include the automation of quality control in manufacturing through surface defect detection, and autonomous vessels that rely on detecting objects on the sea surface, an area studied at the Visual Cognitive Systems Laboratory, which Skočaj heads.

Experts at the laboratory have developed a system that predicts sea levels from meteorological data or weather forecasts and is more precise and half a million times faster than existing physical models, Skočaj told the STA.

Universal methodology enables solutions to a vast array of problems

What is appealing about the many different problems in deep learning and computer vision is that the way to solve them is very similar. "The appropriate model architecture is not the same for every problem, but the methodology behind it is universal, as is the process of solving these problems."

This means that the deep learning methodology used, for example, in sea-surface object detection can also be used to teach a model to recognise human emotions, which is then applied in video surveillance systems.
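
As a rough illustration of that universality (again a hypothetical sketch, not code from the laboratory), the same generic training procedure can be reused for very different tasks simply by swapping the data and the number of output classes; the tasks and class counts below are assumptions for illustration only.

import torch
import torch.nn as nn

def train(model, loader, epochs=5):
    # Generic supervised training loop; nothing in it is task-specific
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            loss = loss_fn(model(images), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

def make_classifier(num_classes):
    # The same small backbone for every task; only the final layer differs
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, num_classes),
    )

sea_object_model = make_classifier(num_classes=3)  # e.g. vessel / buoy / debris (assumed classes)
emotion_model = make_classifier(num_classes=7)     # e.g. seven basic facial expressions (assumed)
# train(sea_object_model, sea_loader); train(emotion_model, emotion_loader)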

The use of systems for large-scale facial and emotion recognition is highly restricted in the EU due to privacy protection, but this does not stop the development of such systems elsewhere, for example in China.

In defence and the military industry in general there are many potential uses of computer vision and AI, Skočaj said, and they are most valuable where the efficiency and speed of data processing are concerned.

"The AI system can detect some things much faster and much more precise than a human. For example, it can recognise a swarm of drones, track it and take action as instructed. Moreover, the AI system can process a large amount of data. If we take satellite images, for example, human analysts cannot go over thousands of images and detect changes in them as fast as systems, which do that automatically," he said.

Ethical dilemmas and regulation against the backdrop of a potential new arms race

AI use in various fields comes with risks and ethical dilemmas - ranging from data privacy to the issue of human rights. What triggers most ethical dilemmas and concerns is definitely the issue of integrating AI into weapons systems.

The first time experts addressed this challenge in a major way was in 2015, when it became clear that the technology was sufficiently developed to be used effectively for military purposes, Ivan Bratko said.

At the IJCAI international conference, one of the leading AI expert meetings, an open letter was presented urging a ban on lethal autonomous weapons operating without human control as a way to avert another potential arms race. There were many renowned scientists among the 20,000 signatories, including Stephen Hawking, Kathryn McElroy, Stuart Russell, Demis Hassabis and Steve Wozniak.

The opinion widely accepted in scientific circles at the time was that, compared with other weapons, especially nuclear ones, autonomous weapons are cheaper and more accessible, which poses a major risk of terrorist organisations misusing them or of conflicts spreading between countries, Bratko said.

One of the main reasons for the open letter was ethical concerns. "By definition, autonomous weapons have the ability to pick their targets themselves and decide to attack, and in making this decision they can, of course, make a mistake - for example, instead of the planned targets they may kill children. Some expected that in such a case, since the machine makes the decision, no one would be to blame and no one would be held responsible, unlike the situation following ordinary decisions in warfare."

There were lively debates about a ban on lethal autonomous weapons for a while, according to Bratko, but around 2020 they lost steam. "It seems there was strong pushback from military superpowers, which counted on gaining a military advantage through the development of autonomous weapons. And these countries managed to put the international discussions on the back burner."

As in many other areas of AI, regulation is caught between the challenge of socially responsible restrictions and efforts to preserve an economic or, in this case, military competitive edge, Bratko said. But he believes that despite the quickening pace of development, efforts to reach concrete international agreements and safeguards should remain a global priority.