Deepfake: An Unrealistic Realism
During the 2010 holiday release schedule, Disney released a sequel to a niche film that also introduced an innovative technology to the masses. Tron: Legacy brought to light a technique used to de-age one of the film’s original and sequel stars: Jeff Bridges. The question posed today: what if you used that technology to create a “deepfake” instead of de-aging the actor? Well, people, the future is here. But first… what is a deepfake?
The famous (or infamous) Wikipedia defines deepfakes as “synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. While the act of creating fake content is not new, deepfakes leverage powerful techniques from machine learning and artificial intelligence to manipulate or generate visual and audio content that can more easily deceive.” I sense a Spider-Man moment gone wrong: with great power comes great responsibility. Artificial intelligence and machine learning are powerful tools. More importantly, these tools have become readily available, letting almost anyone build powerful applications. That responsibility is exceptionally apparent with AI and ML. Deepfakes have the capability to do real damage, whether in the form of war, career loss, or influence campaigns of any sort.
Look back at that initial Disney example. The image on the left shows how they de-aged Jeff Bridges’s character, CLU, to look like a 1980s version of himself. This was a great leap forward for Disney and their Imagineers: they successfully captured what a young Jeff Bridges looked like, albeit with some identifiable digital artifacts in the facial structure. But check out the image on the right. Looks a whole lot more real, right? This is a small example of how easily deepfakes can capture a person’s identity.
So, what can we do about deepfakes? Great question. The Defense Advanced Research Projects Agency (DARPA) is tackling this very problem. DARPA runs an entire program called the Media Forensics (MediFor) program that builds algorithms to detect manipulated videos and images. DARPA is fighting AI with AI. DARPA’s defensive models directly target how people move their heads and facial muscles. By analyzing these behavioral data points, DARPA can effectively counter deepfakes and identify whether media is real or fake. A larger program, known as Semantic Forensics (SemaFor), aims not only to detect manipulated media but also to attribute it to specific sources.
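The behavioral-cue idea behind that detection work can be sketched in a few lines of Python. To be clear, this is a toy illustration and not DARPA’s actual model: it assumes facial landmark coordinates have already been extracted per frame by some hypothetical tracker, and it uses a hand-set motion threshold where a real system would use a trained model over many such cues.

```python
# Toy sketch of behavior-based deepfake screening (NOT DARPA's method).
# Idea: face-swapped video often shows unnatural frame-to-frame warping
# around the face; we approximate that with a simple "jitter" statistic.

def frame_jitter(landmarks):
    """Mean Euclidean displacement of landmarks between consecutive frames.

    landmarks: a list of frames, where each frame is a list of (x, y)
    landmark coordinates (assumed already extracted by a face tracker).
    """
    total, count = 0.0, 0
    for prev, curr in zip(landmarks, landmarks[1:]):
        for (x0, y0), (x1, y1) in zip(prev, curr):
            total += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
            count += 1
    return total / count if count else 0.0

def looks_manipulated(landmarks, threshold=3.0):
    """Flag clips whose landmark motion exceeds a hypothetical threshold.

    The threshold here is made up for illustration; a real detector would
    learn the decision boundary from labeled real and fake video.
    """
    return frame_jitter(landmarks) > threshold
```

A smoothly moving face yields a low jitter score and passes, while a clip whose facial landmarks jump erratically between frames gets flagged. Production systems combine many such learned cues (head pose, blink rate, muscle dynamics) rather than one hand-coded statistic.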
Long story short, be cognizant of media in which a person says or does something sharply out of character. Unfortunately, deepfake technology is here to stay, but soon we will have effective ways to defend against these types of manipulated media.