Showing posts with label Machine Learning. Show all posts

April 23, 2018

Fighting fire with fire

Last week, I posted about this Kotaku article, which discussed the use of Machine Learning algorithms to enable feats of video face-swapping that are increasingly difficult to spot. The video example, in which Jordan Peele masquerades as Barack Obama to warn us about the dangers of the coming "fucked-up dystopia" while exhorting us all to "stay woke, bitches," was both hilarious and disturbing:


If that video gave you both belly laughs and nightmares, then MIT Technology Review has an antidote for you... kinda.
The ability to take one person’s face or expression and superimpose it onto a video of another person has recently become possible. [...] This phenomenon has significant implications. At the very least, it has the potential to undermine the reputation of people who are victims of this kind of forgery. It poses problems for biometric ID systems. And it threatens to undermine public trust in videos of any kind.
[...]
Enter Andreas Rossler at the Technical University of Munich in Germany and colleagues, who have developed a deep-learning system that can automatically spot face-swap videos. The new technique could help identify forged videos as they are posted to the web.
But the work also has a sting in the tail. The same deep-learning technique that can spot face-swap videos can also be used to improve the quality of face swaps in the first place—and that could make them harder to detect.
Artificial Intelligence: making your life both better and worse since 2018.

The fact that the same techniques that make DeepFakes easier to detect also make them easier to create in the first place poses an awkward conundrum. I'm inclined to think that arming individuals with the power to spot fakes more easily is a good thing, but it's not exactly a no-brainer. Do you arm individuals with the tools they need to tell real videos from clever ML-powered forgeries, knowing that some of those individuals will use those same tools to create even cleverer ML-powered forgeries? Would withholding this power from rule-abiding individuals help prevent the DeepFake apocalypse, or just leave them helpless to protect themselves against society's black hats and bad actors, who will almost certainly be disseminating these tools on darknets anyway?

And this is still just Machine Learning, and not the full-blown Artificial Intelligence that it may well lead to. Count on it: things will only get wilder from here.

We now return you to the Singularity, already in progress....

April 19, 2018

Machine Learning is a transformative technology... and its transformations won't all be good ones

This is both hilarious and terrifying. From Kotaku:
Last year, University of Washington researchers used the technology to take videos of things that former President Barack Obama had already said, then generate faked videos of him spitting out those lines verbatim in a machine-generated format. The research team stopped short of putting new words in Obama’s mouth, but Get Out director Jordan Peele and BuzzFeed have done just that in a PSA warning malicious actors could soon generate videos of anyone saying just about anything.
Using technology similar to the University of Washington study and Peele’s (fairly good!) imitation of Obama’s voice, here’s a clip of the former POTUS saying “So, for instance, they could have me say things like, I don’t know, Killmonger was right. Or uh, Ben Carson is in the sunken place. Or how about this, simply, President Trump is a total and complete dipshit.”
[...]
“We’ve covered counterfeit news websites that say the pope endorsed Trump that look kinda like real news, but because it’s text people have started to become more wary,” BuzzFeed CEO Jonah Peretti wrote. “And now we’re starting to see tech that allows people to put words into the mouths of public figures that look like they must be real because it’s video and video doesn’t lie.”
[...]
As colleague Adam Clark Smith noted before, there are countless potential uses of this technology that would qualify as mundane, like improving the image quality of video chat apps, or recreating mind-blowing facsimiles of historic speeches in high-definition video or holograms.
But machine-learning algorithms are improving rapidly, and as security researcher Greg Allen wrote at the time in Wired, it is likely only a matter of years before the audio component catches up and makes Peele’s Obama imitation unnecessary. Within a decade, some kinds of forensic analysis may even be unable to detect forged audio.
Here's the clip:


Machine Learning is only a baby step on the road to Artificial Intelligence, but it's already at least powerful enough to convincingly swap celebrities' faces onto those of porn actors, and the potential chaos that this almost certainly will cause in our public discourse is mind-blowing. It's still a little crude, with FakeApp Obama still lodged firmly inside the Uncanny Valley... but we're also clearly on the upslope that leads out of that valley, and not that far away from the day when even the video that you see on the Internet simply can't be trusted.

This is a lot of power to put into the hands of almost everyone on Earth, and if there's one thing that we know, it's that this power will be used for evil. FakeApp Obama is just a proof of concept; the genuinely malicious fake videos are coming, and you're going to need to be very alert to spot them. Especially since we're living in an era when the actual news of the day is... surreal, to put it lightly. Stay woke, bitches.

We now return you to the Singularity, already in progress.