Slovenian researchers developing tool to detect deepfakes
Ljubljana, 22 October - With deepfakes becoming increasingly easy to create and spreading rapidly on social networks, researchers at the Ljubljana Faculty of Electrical Engineering are developing more advanced technology to detect them. The idea is to train detectors to recognise real videos, instead of training them on deepfakes.
Deepfakes are images, videos or audio which are edited or generated using artificial intelligence tools, and which may depict real or non-existent people. They are a type of synthetic media.
Vitomir Štruc, the head of the DeepFake DAD project at the Laboratory for Machine Intelligence, which was launched last year, told the STA that the purpose of creating deepfakes is usually malicious.
"People often try to use them to blackmail others, to spread false information, and to commit various other abuses, including identity theft," he said, adding that due to the rapid development of the technology, it is now possible to recreate entire scenes rather than just replace people's faces.
"It is not yet possible to create five-minute videos that remain consistent throughout, but it is already possible to create short videos of up to 20 seconds that are very convincing," Štruc said.
He believes it will soon be easy to make longer and more realistic videos, and because the technology has become widely available and simple to use, it can also be easily misused.
For this reason, it is important to develop effective detector technology that will make it easier to recognise deepfakes, but Štruc notes that this is an extremely demanding task.
Machine learning-based detectors are usually trained to recognise specific types of deepfakes and distinguish them from real videos, but the problem is that there are many different ways to create deepfakes, and new ones are constantly emerging.
"Tomorrow someone can create a much better deepfake that we have never seen before. Detectors cannot recognise such deepfakes because they have not been trained on such examples," Štruc said.
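This failure mode can be illustrated with a toy sketch (not the project's code; the one-dimensional "features" and the threshold rule are invented for illustration). A supervised detector trained to separate real clips from one known deepfake style learns a boundary that says nothing about deepfakes it has never seen:

```python
import random

random.seed(1)

# Hypothetical one-number "feature" per clip: real clips cluster near 0,
# a known deepfake style (fake_a) clusters near -5.
real = [random.gauss(0.0, 1.0) for _ in range(200)]
fake_a = [random.gauss(-5.0, 1.0) for _ in range(200)]

# Supervised detector: learn a single threshold midway between the classes.
threshold = (sum(real) / len(real) + sum(fake_a) / len(fake_a)) / 2  # ~ -2.5

def is_fake(feature):
    # "Fake" means "looks like the deepfakes we trained on".
    return feature < threshold

# Tomorrow's generator leaves a completely different artefact signature.
novel_fake = 5.0
print(is_fake(-5.0))        # the known style is caught
print(is_fake(novel_fake))  # the unseen style slips through as "real"
```

The detector is perfectly good at what it was trained for, yet any deepfake that does not resemble the training examples is waved through.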
This prompted the researchers to launch the three-year DeepFake DAD project in collaboration with the Computer Vision Laboratory at the Ljubljana Faculty of Computer and Information Science in a bid to develop more advanced technology to detect deepfakes.
"The idea is to train detectors to recognise real videos, instead of training them on deepfakes. We want to develop a classifier that will determine whether something is real, and everything that will not be recognised as real will be flagged as deepfake," Štruc explained.
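The reversed approach Štruc describes amounts to one-class (anomaly) detection. A minimal toy sketch, again with invented one-dimensional features and a simple Gaussian rule rather than the project's actual models: the detector is fitted only on real videos, and anything it does not recognise as real is flagged.

```python
import random
import statistics

random.seed(0)

# Hypothetical feature per clip; real clips cluster near 0.
real_videos = [random.gauss(0.0, 1.0) for _ in range(500)]

# "Training" = modelling what real footage looks like.
# No deepfakes are needed at training time.
mu = statistics.mean(real_videos)
sigma = statistics.stdev(real_videos)

def looks_real(feature, k=3.0):
    """Accept anything within k standard deviations of the real data;
    everything else is flagged as a potential deepfake."""
    return abs(feature - mu) <= k * sigma

# An ordinary clip is accepted; a clip from a never-seen generator,
# landing far from the real distribution, is flagged.
print(looks_real(0.4))  # accepted as real
print(looks_real(8.0))  # flagged as deepfake
```

Because the boundary is drawn around real footage rather than between real footage and specific fakes, even a generator that appears tomorrow falls outside it, which is the property the researchers are after.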
Several initial models have already been developed and are quite effective, he said, adding that the ultimate goal is a detector able to recognise deepfakes that appear in the future.
The researchers would like the detector to be freely available to the public. They say such tools will increasingly become a necessity because deepfakes are spreading over social networks at an incredible pace.
Štruc noted that this is particularly problematic in politics, where deepfakes can influence public opinion, while real-time deepfakes can pose a serious threat wherever identity is verified by means of video calls.
He noted that increasing the trust level for video calls is being discussed at the EU level and also in Slovenia so as to enable major transactions, such as the sale of real estate, to be performed without the parties being present in person.
"If identity was verified in such a way, someone could use a deepfake to sell your house via video call, which is very problematic," he warned.
According to him, in a bank fraud in the United Arab Emirates two years ago, hackers used fake audio to deceive banks that relied on voice-based identity verification and steal millions in cash. There are many other such examples.
In cases involving important content that can have a major impact on society, experts perform a detailed manual review to determine whether the content is genuine.
"Common sense is often enough. If something seems unusual to us, the content is probably not real," Štruc said, adding that one must pay particular attention to details when identifying deepfakes.
"Usually, a fake character has unnatural facial expressions, the shadows of the body or the object are inconsistent or are projected on the wrong side, the eyes are not round, but slightly deformed, the hands are often distorted, with extra fingers appearing or a finger missing," he explained.
With more than a billion pictures and videos being uploaded to social networks on a monthly basis, it is practically impossible to review them manually.
"It is necessary to develop software that will recognise such content even before it is uploaded to social networks. This is the only way to ensure a safe digital environment," he concluded.