Naked Security reported on September 9 that Facebook has launched a project to detect deepfake videos, and it’s pledging more than $10M to the cause. Of course, that is chump change for Facebook, but still noteworthy. It has pulled in a range of partners for help, including Microsoft.
If you are still uncertain about what deepfakes are, they are videos that use AI to superimpose one person's face onto another's body. They work using generative adversarial networks (GANs) – two neural networks pitted against each other. One network (the generator) focuses on producing a lifelike image. The other network (the discriminator) checks the first network's output by comparing it against real-life images. If it finds inconsistencies, the first network has another go at it. This cycle repeats until the second network can't reliably tell the fakes from the real thing.
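For the technically curious, the adversarial loop can be sketched in miniature. The toy below is purely illustrative and is not how real deepfake software works: a one-parameter "generator" tries to mimic data drawn from a target distribution while a simple logistic "discriminator" learns to tell real samples from fakes. All names here (`train_gan`, `mu`, `w`, `b`) and the setup are my own hypothetical simplification, not anything from Facebook's project.

```python
# Toy 1-D GAN sketch (illustrative only, not production deepfake code).
# The generator's single parameter mu shifts fake samples toward the
# real data, which is drawn from a Gaussian centered at 4.0.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_gan(steps=2000, batch=32, lr=0.05, seed=0):
    rng = random.Random(seed)
    mu = 0.0          # generator parameter: fakes are mu + noise
    w, b = 0.1, 0.0   # discriminator: D(x) = sigmoid(w*x + b)

    for _ in range(steps):
        real = [rng.gauss(4.0, 0.5) for _ in range(batch)]
        fake = [mu + rng.gauss(0.0, 0.5) for _ in range(batch)]

        # Discriminator step: ascend log D(real) + log(1 - D(fake)),
        # i.e. get better at flagging the fakes.
        dw = db = 0.0
        for x in real:
            d = sigmoid(w * x + b)
            dw += (1 - d) * x
            db += (1 - d)
        for x in fake:
            d = sigmoid(w * x + b)
            dw += -d * x
            db += -d
        w += lr * dw / (2 * batch)
        b += lr * db / (2 * batch)

        # Generator step: ascend log D(fake) (non-saturating loss),
        # i.e. get better at fooling the discriminator.
        dmu = 0.0
        for x in fake:
            d = sigmoid(w * x + b)
            dmu += (1 - d) * w
        mu += lr * dmu / batch

    return mu

if __name__ == "__main__":
    # The generator's mean drifts from 0 toward the real data's mean.
    print(f"generator mean after training: {train_gan():.2f}")
```

The same tug-of-war, scaled up to deep convolutional networks and millions of face images, is what makes deepfakes so convincing.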
This leads to some highly convincing pictures and videos. Deepfake AI has produced fake porn videos, fake profile pictures, and (for demonstration purposes) fake presidential statements. They’re also getting easier to create.
Some are just for fun and some are just porn, but imagine a fake clip spreading on Facebook in which Trump says he’s bombing Venezuela. Or perhaps a deepfake where the CEO of a major US company says that it’s pulling out of China and taking a massive earnings hit, tanking its stock. No longer so funny, huh?
As you might imagine, Facebook’s DeepFake Detection Challenge will help people uncover deepfakes. AI relies on lots of data to generate its images, so to create AI that spots deepfakes, Facebook has to come up with its own dataset. It will take unmodified, non-deepfake videos, and then use a variety of AI techniques to tamper with them. It will make this entire dataset available to researchers, who can use it to train AI algorithms that spot deepfakes.
Along with Microsoft, Facebook is working with the Partnership on AI, and academics from Cornell Tech, MIT, Oxford University, UC Berkeley, the University of Maryland, College Park, and the University at Albany. These partners will create tests to see how effective each researcher's detection model is.
One aspect of all this that impresses me is the way Facebook is generating and handling the dataset. Probably wary of the privacy implications of simply using its own user data, the company is making every effort to do it right. It is working with an agency that is hiring actors, who will sign waivers agreeing to let researchers use their images. Facebook says it will only share the dataset with entrants to the contest, so that black hats can't use it to create better deepfakes.
I hope the security on that dataset is better than some of Facebook’s other security. Though I am not a Facebook fan, I wish the company well on this venture. May they do some good here – we need all the help we can get.
Ride the Lightning will be on sabbatical until September 23rd.
Sharon D. Nelson, Esq., President, Sensei Enterprises, Inc.
3975 University Drive, Suite 225 | Fairfax, VA 22030
Email: email@example.com Phone: 703-359-0700
Digital Forensics/Cybersecurity/Information Technology