
April 23, 2018

Fighting fire with fire

Last week, I posted about this Kotaku article, which discussed the use of Machine Learning algorithms to enable feats of video face-swapping that are increasingly difficult to spot. The video example, in which Jordan Peele masquerades as Barack Obama to warn us about the dangers of the coming "fucked up dystopia" while exhorting us all to "stay woke, bitches," was both hilarious and disturbing:


If that video gave you both belly laughs and nightmares, then MIT Technology Review has an antidote for you.... kinda.
The ability to take one person’s face or expression and superimpose it onto a video of another person has recently become possible. [...] This phenomenon has significant implications. At the very least, it has the potential to undermine the reputation of people who are victims of this kind of forgery. It poses problems for biometric ID systems. And it threatens to undermine public trust in videos of any kind.
[...]
Enter Andreas Rossler at the Technical University of Munich in Germany and colleagues, who have developed a deep-learning system that can automatically spot face-swap videos. The new technique could help identify forged videos as they are posted to the web.
But the work also has a sting in the tail. The same deep-learning technique that can spot face-swap videos can also be used to improve the quality of face swaps in the first place—and that could make them harder to detect.
Artificial Intelligence: making your life both better and worse since 2018.
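
For the technically curious: at its heart, a detector like this is just a binary classifier trained on video frames labeled real or fake. Here's a minimal sketch of that idea in PyTorch. To be clear, this is not Rossler's actual system; the backbone choice, the "frames/" folder layout, and the hyperparameters below are all illustrative assumptions on my part:

```python
# Hypothetical sketch: fine-tune a pretrained CNN as a real-vs-fake frame classifier.
# This is NOT the FaceForensics code; the model choice, paths, and hyperparameters
# are illustrative assumptions only.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing for face crops extracted from video frames.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumes a (hypothetical) folder layout like frames/real/*.png and frames/fake/*.png.
dataset = datasets.ImageFolder("frames", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a pretrained backbone and swap the head for two classes: real vs. fake.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(3):  # a few epochs is plenty for a toy demo
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Once trained, the model's softmax output gives a per-frame probability of forgery; and, as the Tech Review piece points out, the same signal that flags a fake can be turned around to train better fakes.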

The fact that the same techniques that make DeepFakes easier to detect also make them easier to create in the first place poses an awkward conundrum. I'm inclined to think that arming individuals with the power to spot fakes more easily is a good thing, but it's not exactly a no-brainer. Do you arm individuals with the tools they need to tell real videos from clever ML-powered forgeries, knowing that some of those individuals will use those same tools to create even cleverer forgeries? Would withholding this power from rule-abiding individuals help prevent the DeepFake apocalypse, or just leave them helpless to protect themselves against society's black hats and bad actors, who will almost certainly be disseminating these tools on darknets anyway?

And this is still just Machine Learning, and not the full-blown Artificial Intelligence that it may well lead to. Count on it: things will only get wilder from here.

We now return you to the Singularity, already in progress....