Balance between human rights and development top challenge in AI regulation

Ljubljana, 23 December - The regulation of artificial intelligence (AI) is necessary, experts say, but the key challenge is to balance the protection of human rights with development and innovation. In the EU, regulation will come in the form of the upcoming AI Act, which will have to be updated to keep up with the rapid development of this technology.

The Council of the EU and the European Parliament reached a framework agreement in December on an act containing the rules for the use of AI in the EU. The agreement was also endorsed by the European Commission.

It aims to ensure the security of AI systems and respect for fundamental rights and values, while promoting investment and innovation in this field.

Although the US and China are more advanced in the development of AI than the EU and have already taken some steps towards regulating such technologies, it is the EU that is currently on the path to more effective and comprehensive regulation, according to Žiga Škorjanc, a researcher at the Department of Innovation and Digitalisation in Law at the University of Vienna.

"The EU AI Act is certainly the first regulation in the West that covers all sectors in the field. It is also the first comprehensive legal regulation at global level," he told the STA.

Marko Grobelnik, an AI researcher and technical partner of the OECD.AI Policy Observatory, agrees and believes that the AI Act marks the start of a period of legally binding regulation. "Until now, only certain recommendations, for example from the Organisation for Economic Co-operation and Development (OECD) and UNESCO, have been in force," he stressed.

The most complex issue is high-risk AI systems

The European regulation does not deal with technology, but its use in a particular sector, Grobelnik said. "We need to bear in mind that the use of ChatGPT in medicine to diagnose a patient, for example, is completely different from its use in the entertainment industry," he noted.

The act will set out obligations for providers and users according to the level of risk posed by the technology. According to Škorjanc, some practices that clearly threaten people's fundamental rights will be banned.

For uses of AI that are not prohibited and do not pose a high risk, only transparency requirements will apply, he said. "The end-user will have to be informed that a particular tool is AI-based," said Škorjanc. AI use in areas where the level of risk is very low or even negligible will not be regulated.

However, the most complex part of the act will be the section on high-risk AI systems related to product security. Examples of such systems include critical infrastructure for water, gas and electricity supply; systems determining access to education or employment; and certain systems used in law enforcement, justice and similar areas.

These systems will therefore have to meet strict requirements, notably implementing risk management methods, providing clear information or instructions to users, ensuring human oversight, and guaranteeing a high level of robustness, accuracy and cybersecurity.

The political agreement on the act now needs to be formally endorsed by the European Parliament and the Council of the EU.

The AI Act will be implemented two years after it enters into force, except for some specific provisions. Prohibitions will apply after six months, while the rules on general-purpose AI will apply after twelve months.

Main challenge is democratisation of AI development

Both Grobelnik and Škorjanc agree that this type of regulation will certainly enable the development of safer and more efficient AI-based systems. However, any regulation has its challenges, they warn, and some issues in the democratisation of AI development remain open.

"We must be aware that the stricter the regulation, the more it favours the largest technology providers, because it imposes the least burden on them in relative terms. They can more easily afford to implement compliance measures and initiate legal action if they disagree with the supervisory institution.

"But it is a different story with small or medium-sized enterprises, where innovation can be hindered because of their lower financial resources. This was a key point in the debate on the act. It is essential that these companies are subject to less strict rules to ensure they have a route to market," Škorjanc said.

According to Vasilka Sancin, a lawyer and professor at the Faculty of Law at the University of Ljubljana, the accessibility of AI is certainly a major challenge and can raise several questions about exercising individual human rights. These range from the right to an effective remedy and a fair trial if AI systems are only accessible to one of the litigants, to the exercise of the right to health if certain health care services, such as neurotechnologies using AI, are only available to the wealthier segments of the population.

"The prohibition of any discrimination, in particular on the grounds of race, gender, language, religion, political or other opinion, etc., must be one of the fundamental guiding principles in any kind of development and use of AI. There is already ample evidence, and numerous studies show, that some existing AI systems are highly discriminatory, which means that they should be subject to a moratorium and their further development should be banned until all biases are eliminated," she stressed.

"It would therefore be sensible to take a precautionary approach: to use AI only once it is proven not to be discriminatory, which must be established at all stages of the AI lifecycle. Transparency is key here," she said.

An additional challenge in the regulation of AI arises, among other things, in the area of copyright, in particular how to define authorship when AI is involved in a creative process, Škorjanc said. Under the current EU copyright law, only a natural person can be the author of content.

"So when an individual uses technical systems, whether AI-based or not, as a tool, he or she is still the author of the product or content. Therefore, if content was generated by a system without any human influence, copyright law would not consider the AI system to be the author. But we know that such systems do not exist yet," he explained.

The key question is not whether the tool or the individual holds the copyright, but whether the user or the provider of the tool holds the copyright in the content created. "There are still some ambiguities that need to be clarified," he noted.

Despite the rapid development, regulation likely to be effective

The extremely rapid development of AI raises the question of whether it is possible to design effective regulation to keep pace with this development. Škorjanc believes that there is no reason why the EU AI Act would not be effective in achieving its objectives. "It is true, however, that smart regulation is harder to make than unwise regulation that simply bans everything or is outdated the day it comes into force," he said.

"Regulation is a cyclical process that is never quite finished." This is why the AI Act will be open-ended and will be updated and upgraded as AI technology develops, he added. "At the same time, it is crucial that the technical concepts defined as AI are set broadly enough so that the legal rules also apply to newly developed technologies."