AI on AI Action: Gfycat Uses AI to Fight Deepfakes’ Morphed Celebrity Porn Videos

Feb 16, 2018

Machine learning, facial recognition, and AI stand as testaments to human ingenuity and innovation. The technology only recently became publicly available, and one of the first things we did with it was make porn. Thanks to a Reddit user, /u/deepfakes, people are now creating AI-assisted face-swap porn, often featuring a celebrity’s face mapped onto a porn star’s body. However, several sites, including Reddit, Pornhub, and Gfycat, have taken a stand against deepfakes and are actively deleting such content from their platforms.

Most platforms rely on keyword bans and users manually flagging content. The method works, but a lot of material slips through the cracks, and some videos stay up for extended periods before being flagged. It all began on Reddit, and even though the original subreddit has since been banned, the site has a hard time keeping track of the users and subreddits that still host or discuss deepfake material.


Gfycat starts the never-ending cat-and-mouse game

Popular GIF-hosting site Gfycat says it has figured out a way to train an artificial intelligence to spot morphed videos. The technology builds on tools Gfycat already uses to index the GIFs on its platform, letting it automatically detect manipulated content and ban it right off the bat.

Gfycat’s approach leverages two tools it had already developed: Project Angora and Project Maru. Project Maru can spot a deepfake by determining when a GIF only partially resembles a celebrity. Most deepfakes are created by novices and are far from believable; the face doesn’t quite match up from frame to frame, and Project Maru is not nearly as forgiving as the human brain.
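Gfycat hasn’t published Project Maru’s internals, but the frame-by-frame partial-match idea can be sketched with an off-the-shelf face-recognition library. Everything below, from the `face_recognition` library to the thresholds and function names, is an illustrative assumption rather than Gfycat’s actual code:

```python
# Illustrative sketch only: Gfycat has not published Project Maru's internals.
# Idea: a genuine clip of a celebrity matches their face in nearly every frame,
# while a face swap tends to match in some frames and drift badly in others.
import face_recognition

MATCH_THRESHOLD = 0.6  # common face_recognition distance cutoff (assumption)

def partial_match_score(frame_paths, known_encoding):
    """Fraction of frames whose best-matching face resembles the known identity."""
    hits, total = 0, 0
    for path in frame_paths:
        image = face_recognition.load_image_file(path)
        encodings = face_recognition.face_encodings(image)
        if not encodings:
            continue  # no face detected in this frame
        total += 1
        distances = face_recognition.face_distance(encodings, known_encoding)
        if distances.min() < MATCH_THRESHOLD:
            hits += 1
    return hits / total if total else 0.0

def looks_like_deepfake(frame_paths, known_encoding):
    # A near-perfect score suggests genuine footage; a middling score is the
    # "only partially resembles a celebrity" signal described above.
    # The 0.3-0.9 band is purely illustrative.
    score = partial_match_score(frame_paths, known_encoding)
    return 0.3 < score < 0.9
```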



However, Project Maru likely can’t stop all deepfakes on its own. As fakes become better and more believable, the program won’t be able to use the uploaders’ ineptitude against them. And sometimes a deepfake features not a celebrity’s face but that of someone known only to the creator. To combat that variety, Gfycat developed masking technology that works similarly to Project Angora.

If Gfycat suspects that a video has been altered to feature someone else’s face, the company can “mask” the victim’s face and search the internet for existing footage of the body. If the original footage exists, the search will turn it up; if the faces in the new GIF and the source don’t match, the AI concludes that the video has been altered.
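Again, Gfycat’s pipeline isn’t public, but the masking-and-lookup step might be sketched roughly like this: black out the detected face, compute a perceptual hash of what remains (the “body”), and look that fingerprint up in an index of previously seen footage. The `imagehash`/`face_recognition` combination and the distance cutoff below are assumptions for illustration:

```python
# Hypothetical sketch; Gfycat's actual masking pipeline is not public.
# Black out the face, fingerprint what remains (the "body"), and look the
# fingerprint up in an index of previously seen footage to find a source.
import face_recognition
import imagehash
from PIL import Image

def masked_frame_hash(path):
    """Perceptual hash of a frame with every detected face blacked out."""
    image = face_recognition.load_image_file(path)
    pil = Image.fromarray(image)
    for top, right, bottom, left in face_recognition.face_locations(image):
        pil.paste((0, 0, 0), (left, top, right, bottom))  # mask the face
    return imagehash.phash(pil)

def find_source(path, source_index, max_distance=8):
    """source_index maps masked-frame hashes to known clip IDs (assumed to be
    built from footage already crawled). Returns a matching clip, if any."""
    h = masked_frame_hash(path)
    for known_hash, clip_id in source_index.items():
        if h - known_hash <= max_distance:  # Hamming distance between hashes
            return clip_id
    return None
```

If a source clip turns up, the last step would be a face comparison like the one sketched earlier: mismatched faces between the upload and the source suggest the video was altered.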

Some videos can still give the software the slip

The only situation in which both tools fail is one in which neither the face nor the body exists elsewhere online. For example, someone could film a sex tape with two people and then swap in someone else’s face. If the footage isn’t available anywhere else online, the software has no way to determine whether the content has been altered.

For now, that seems like a fairly unlikely scenario, because making a deepfake requires access to a lot of videos and photos of the subject. Still, an ex-lover could make revenge porn and upload it to Gfycat, provided they have enough photos and videos of their former partner; in such cases, the only remedy is manual intervention and strict moderation. The company also uses other metadata, such as where a clip was shared and who uploaded it, to judge whether it is a deepfake.
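Wired doesn’t detail how those metadata signals are weighed, so here is a deliberately toy illustration of the idea: fold weak signals such as referrer, account age, and takedown history into a single review-priority score. The signal names and weights are invented for the example:

```python
# Toy illustration (not Gfycat's actual heuristics): fold weak metadata
# signals into a single review-priority score for human moderators.
SUSPECT_REFERRERS = {"deepfake-forum.example"}  # hypothetical watchlist

def risk_score(upload):
    score = 0.0
    if upload.get("referrer_domain") in SUSPECT_REFERRERS:
        score += 0.5  # shared from a community known for face swaps
    if upload.get("account_age_days", 0) < 7:
        score += 0.2  # brand-new accounts are a weak warning sign
    if upload.get("prior_takedowns", 0) > 0:
        score += 0.3  # uploader has had content removed before
    return min(score, 1.0)

upload = {"referrer_domain": "deepfake-forum.example", "account_age_days": 2}
print(risk_score(upload))  # 0.7 -> queue the clip for manual review
```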

While Gfycat offers a potential solution, it may be only a matter of time until creators learn how to circumvent it. A lasting solution will involve several detection protocols layered together, so that it becomes extremely difficult for a deepfake to slip past them all. We hope more companies invest resources in weeding out morphed content, because we’re headed toward a future where it’s impossible to tell whether a video is real or fake. The slope becomes extremely slippery once people figure out more malicious uses for the software, and frankly, we’re already halfway down it.
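To make the “several protocols” idea concrete, here is a minimal sketch of such a layered pipeline, in which a clip is published only if it slips past every detector. The detectors themselves are placeholders standing in for techniques like the face-match, source-lookup, and metadata checks sketched above:

```python
# Minimal sketch of a layered defense: a clip is published only if it slips
# past every detector. The detectors here are placeholders standing in for
# techniques like the face-match, source-lookup, and metadata checks above.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Verdict:
    suspicious: bool
    reason: str = ""

def run_pipeline(clip: dict, detectors: List[Callable]) -> Verdict:
    for detect in detectors:
        verdict = detect(clip)
        if verdict.suspicious:
            return verdict  # one hit is enough to quarantine for review
    return Verdict(False, "passed all checks")

detectors = [
    lambda c: Verdict(c.get("partial_face_match", False), "partial face match"),
    lambda c: Verdict(c.get("source_mismatch", False), "source footage mismatch"),
    lambda c: Verdict(c.get("metadata_risk", 0.0) > 0.5, "suspicious metadata"),
]

print(run_pipeline({"metadata_risk": 0.7}, detectors).reason)  # suspicious metadata
```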

Source: Wired
