“The camera never lies” is a thing of the past.
A new computer program can manipulate a video such that the person on-screen mirrors the movements and expressions of someone in a different video. Unlike other film-fudging software, this program can tamper with far more than facial expressions. The algorithm, to be presented August 16 at the 2018 SIGGRAPH meeting in Vancouver, also tweaks head and torso poses, eye movements and background details to create more lifelike fakes.
These video forgeries are “astonishingly realistic,” says Adam Finkelstein, a computer scientist at Princeton University not involved in the work. This system could help produce dubbed films where the actors’ lip movements match the voiceover, or even movies that star dead actors reanimated through old footage, he says. But giving internet users the power to create ultrarealistic phony videos of public figures could also take fake news to the next level (SN: 8/4/18, p. 22).
The algorithm starts by scanning two videos frame by frame, tracking 66 facial “landmarks” — like points along the eyes, nose and mouth — to map a person’s features, expression, head tilt and line of sight. For example, those videos might show former President Barack Obama and Russian President Vladimir Putin. Then, to make Putin mimic Obama’s behavior, the program distorts Putin’s image to adopt Obama’s head pose, facial expression and eye line in each frame. The program can also tweak shadows, change Putin’s hair or adjust the height of his shoulders to match his new head pose. The result is a video of Putin doing an eerily on-point imitation of Obama’s exact motions and expressions.
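For readers curious what that landmark-tracking step might look like in practice, here is a minimal sketch using the open-source dlib and OpenCV libraries. It is not the researchers' code: dlib's stock model tracks 68 points rather than the paper's 66, and the video file names are hypothetical.

```python
# Sketch of per-frame facial landmark tracking, assuming dlib and OpenCV
# are installed and dlib's pretrained 68-point model file is available.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def track_landmarks(video_path):
    """Return one list of (x, y) landmark points per video frame."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector(gray)
        if faces:
            shape = predictor(gray, faces[0])
            frames.append([(p.x, p.y) for p in shape.parts()])
        else:
            frames.append(None)  # no face detected in this frame

    cap.release()
    return frames

# Hypothetical file names: the "input" performance drives the "output" subject.
source_landmarks = track_landmarks("obama.mp4")
target_landmarks = track_landmarks("putin.mp4")
```

In the full system, these per-frame landmarks feed a second stage that re-renders the target face with the source's pose, expression and eye line; that rendering step is far more involved than the tracking shown here.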
Copycat: A new computer program analyzes the appearance of someone in one video (the “input”) and transfers that person’s facial expression, head pose and line of sight onto a person in another video (the “output”). That can generate footage of the second person doing and saying things they never actually did.
Computer scientist Christian Theobalt of the Max Planck Institute for Informatics in Saarbrücken, Germany, and colleagues tested their program on 135 volunteers, who watched five-second clips of real and forged videos and reported whether they thought each clip was authentic. The dummy videos fooled, on average, 50 percent of viewers. But because participants were primed to anticipate fakes, they may have been more critical of doctored footage than they would be if they naturally encountered these clips online. Even when watching genuine clips, 20 percent of participants, on average, still believed the footage was fake.
The new software still has some limits: The program can only fiddle with videos shot by a stationary camera, framed to show someone’s head and shoulders in front of an unchanging background. And the algorithm can’t shift a person’s pose too much from their original video. That is, a clip of Putin speaking directly into the camera couldn’t be edited to make him turn around, because the software wouldn’t know what the back of Putin’s head looks like.
Still, it’s easy to imagine how this kind of digital puppetry could be used to spread potentially dangerous misinformation. “The researchers who are developing this stuff are getting ahead of the curve — in the sense that now [this algorithm] is out there, and people are more aware of the types of manipulations that are possible,” says Kyle Olszewski, a computer scientist at the University of Southern California in Los Angeles. That may encourage people to treat internet videos with more skepticism, he says.
“Learning how to do these types of manipulations is [also] a step towards understanding how to detect them,” Olszewski says. A future computer program, for instance, could study both true and falsified videos to learn how to spot the difference.
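As a rough illustration of that idea, the sketch below trains a small binary classifier to label video frames as real or forged. The tiny network and the random stand-in data are assumptions for demonstration only, not a detector from the study; a real system would train on thousands of labeled genuine and manipulated frames.

```python
# Illustrative fake-frame detector: a small convolutional network trained
# to output one logit per frame (>0 means "forged"). Uses PyTorch.
import torch
import torch.nn as nn

class FakeFrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),  # single real-vs-forged logit
        )

    def forward(self, x):
        return self.net(x)

model = FakeFrameClassifier()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in batch: eight random 64x64 "frames" with real/fake labels.
# In practice these would be frames from genuine and doctored videos.
frames = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()

for _ in range(5):  # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimizer.step()
```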