This is not the first time we have had to wonder whether “seeing” is actually “believing.”
Earlier this week, the YouTube channel Shamook showed off the latest in deepfake technology, using it to markedly improve scenes featuring two notable characters in Rogue One: A Star Wars Story. It was just the latest use of deepfakes in pop culture.
What was impressive about this recent use of the technology was that the character of Governor Tarkin was more lifelike and closer to Peter Cushing, the late actor who originally played the role in the 1977 film Star Wars. The same technique was used on the channel earlier this year to “de-age” Robert De Niro, with results well beyond what was actually seen in the Netflix original film The Irishman.
Both “original” CGI versions look noticeably more artificial and fall into the “uncanny valley” of not seeming entirely real. Even with their own flaws, the deepfakes simply look more realistic than current CGI techniques.
Shamook has also used the same technology to insert Tom Selleck into the Indiana Jones films, letting fans see whether the mustachioed actor could have pulled off the role he had to turn down due to his television commitments. While the YouTube videos run only a few minutes and are not full movies, the fact that Tarkin and Princess Leia look far more convincing suggests that this technology has serious potential for video manipulation.
Deepfake technology is based on autoencoders and other machine learning techniques, and the results are often good enough that it is hard to tell the content has been manipulated at all. It is the latest form of manipulated media, in which a user takes an existing picture or video and replaces one person or object with another by means of artificial neural networks.
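As a rough illustration only (this is not Shamook’s actual pipeline, which would use deep convolutional networks trained on aligned face crops), the classic deepfake layout trains one shared encoder together with a separate decoder per identity; “swapping” a face then means encoding a frame of person A and decoding it with person B’s decoder. A minimal sketch with linear layers and synthetic stand-in data, all variable names hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for face images: flattened 8x8 patches for identities A and B.
DIM, LATENT, N = 64, 16, 200
faces_a = rng.normal(0.0, 0.3, (N, DIM))
faces_b = rng.normal(0.0, 0.3, (N, DIM))

# One shared encoder plus one decoder per identity: the classic deepfake layout.
W_enc = rng.normal(0, 0.1, (DIM, LATENT))
W_dec_a = rng.normal(0, 0.1, (LATENT, DIM))
W_dec_b = rng.normal(0, 0.1, (LATENT, DIM))

def step(x, W_dec, lr=0.05):
    """One gradient-descent step on the reconstruction loss mean((x W_enc W_dec - x)^2)."""
    global W_enc
    z = x @ W_enc            # encode into the shared latent space
    err = z @ W_dec - x      # decode and compare with the input
    W_enc -= lr * (x.T @ (err @ W_dec.T)) / len(x)  # gradient w.r.t. the shared encoder
    W_dec -= lr * (z.T @ err) / len(x)              # gradient w.r.t. this identity's decoder
    return float((err ** 2).mean())

# Alternate training so the encoder learns features common to both identities.
losses = []
for _ in range(500):
    loss_a = step(faces_a, W_dec_a)
    loss_b = step(faces_b, W_dec_b)
    losses.append(loss_a + loss_b)

# The "swap": encode a face of identity A, decode it with B's decoder.
fake = (faces_a @ W_enc) @ W_dec_b
print(f"reconstruction loss fell from {losses[0]:.3f} to {losses[-1]:.3f}")
```

Because both identities pass through the same encoder, the latent space captures shared facial structure (pose, lighting, expression), which is what lets the other identity’s decoder render a plausible swap.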
The dangers of deepfakes being used for nefarious purposes are so great that social media giant Facebook banned such content earlier this year.
“Real-time, photorealistic image creation technology can create amazing, low-cost video, and it can do an incredible amount of damage,” said Rob Enderle, technology industry analyst at the Enderle Group.
“Imagine, for example, that you intercepted a widely watched political event and changed it so that the politician apparently calls for a civil war,” Enderle added. “On the flip side, it only takes a little time on YouTube watching Dust videos to realize that developers can now do amazing things on a budget.”
One of the dangers is that video manipulation technology is already impressive and getting better every day, as the Shamook videos show.
“The real innovation, however, is packaging such tools so that they are generally available,” said Jim Purtilo, associate professor in the University of Maryland’s Department of Computer Science. “Good research could tell us how to deceptively bend video, but it takes hard-core engineering to figure out how to do this at scale. I guarantee you that engineering is now underway.”
However, this is not the first time we have had to wonder whether “seeing” is actually “believing”.
“So it was with audio and photos,” Purtilo added. “Today’s consumers routinely manipulate recordings and images in ways that were previously only possible in a research laboratory. When those technologies were packaged for consumers, we accepted that not everything we heard or saw was necessarily what it seemed. We need to take the same reality into account as video manipulation tools are likewise packaged for widespread use.”
Right now, this is not achievable for everyone with just a desktop or laptop computer.
“These tools are more computationally intensive, but for some it’s not a bug, it’s a feature,” said Purtilo. “Hardware manufacturers looking to increase demand for their products are among the most active players in the world of multimedia research. Deepfakes will sell a lot of computers.”
The technology is not unstoppable, either. There are now AI-based tools that can identify and flag deepfake videos, though unfortunately they are not yet widely known or used.
“That matters as people go back to relying on video evidence to convey facts, especially in a world where more and more questionable websites present themselves as authentic news sources when they are anything but, too often funded by hostile states,” Enderle warned. “Microsoft has developed an AI tool to identify deepfakes, but the company admits it is not enough to prevent serious problems, at least not by itself. Other tools exist, but they are probably too difficult for most people to use, and we need further development in this area if we want to prevent deepfakes from being used effectively against us.”