Facebook's new system can tell where deepfakes come from, using a reverse-engineering method.

While we've grown accustomed to seeing deepfakes put to relatively harmless uses, from a cheerleader's mother trying to give her daughter a leg up on her rivals to a viral Tom Cruise impersonator, they can also be used to create genuinely malicious content, such as inserting someone's face into a pornographic scene or sabotaging a politician's career.
Deepfakes are digitally manipulated videos or images, created with the help of artificial intelligence, that show a person's face with deceptive realism. They can be so damaging that the state of California, for example, outlawed their use in politics and pornography in 2019.

To combat the spread of such manipulated footage, researchers at Facebook and Michigan State University say they have developed a new method for recognising deepfakes and determining which generative model was used to create them.

Deepfake detection systems already exist, but because they're typically trained to recognise specific generative models, they can't work out where a deepfake came from when it was produced by a new model the system was never trained on.

The team hopes that its method will provide researchers and practitioners with "tools to better investigate incidents of coordinated disinformation using deepfakes as well as open new directions for future research."

How the system works

The researchers decided to take things a step further, extending image attribution beyond the limited set of models seen during training.

Reverse engineering is at the heart of their approach.

"To build a single deepfake image, our reverse engineering strategy focuses on finding the unique patterns behind the AI model," says the researcher.

"We can estimate attributes of the generative models used to construct each deepfake using model parsing, and even link several deepfakes to the model that produced them. This gives you knowledge about each deepfake, even if you didn't know anything about them before "said the group.

To train their system, the team used a dataset of 100,000 synthetic images generated by 100 different publicly available generative models. Their results outperformed earlier detection methods by a wide margin.

This type of deepfake detection algorithm could prove valuable for government organisations, law enforcement, and social media companies scrambling to stop fake news from spreading on their platforms. There's no word yet on when the model will be available, but it's reassuring to know that researchers are working on the problem.
