Deepfake: An Unrealistic Realism

During the 2010 holiday release schedule, Disney delivered both a sequel to a niche movie and a piece of technology that introduced something truly innovative to the masses. Tron: Legacy brought to light the technology used to de-age Jeff Bridges, star of both the original film and the sequel. The question posed today: what if you used that technology to create a "deepfake" instead of de-aging the actor? Well, people, the future is here. But first… what is a deepfake?

The famous (or infamous) Wikipedia defines deepfakes as "synthetic media in which a person in an existing image or video is replaced with someone else's likeness. While the act of creating fake content is not new, deepfakes leverage powerful techniques from machine learning and artificial intelligence to manipulate or generate visual and audio content that can more easily deceive." I sense a Spider-Man moment gone wrong. Artificial intelligence and machine learning are powerful tools, and they have become readily available for the masses to build powerful applications. Great power requires great responsibility, and that is exceptionally apparent with AI and ML. Deepfakes have the capability to do real damage, whether in the form of war, career loss, or influence campaigns of any sort.

A photo comparison of a de-aged Jeff Bridges on the left and a deepfake on the right.

Look at that initial example from Disney. The image on the left shows how they de-aged Jeff Bridges' character, CLU, to make him look like a version of himself from the 1980s. This was a great leap forward for Disney and its Imagineers: they successfully captured what a young Jeff Bridges looked like, though with some identifiable digital artifacts in the facial structure. But check out the image on the right. Looks a whole lot more real, right? This is just one small example of how easily deepfakes can capture a person's identity.

So, what can we do about deepfakes? Great question. The Defense Advanced Research Projects Agency (DARPA) is tackling this very problem. DARPA runs an entire program, Media Forensics (MediFor), that builds algorithms to detect manipulated videos and images. In other words, DARPA is fighting AI with AI. Its defensive models target how people move their heads and facial muscles; by analyzing those data points, the detectors can determine whether footage is real or fake. A larger follow-on program, Semantic Forensics (SemaFor), aims not just to detect manipulated media but to attribute it to specific sources.
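To make the movement-based idea a little more concrete, here is a minimal, hypothetical sketch in Python. This is not DARPA's MediFor or SemaFor code; the synthetic landmark data, the jitter metric, and the 0.8 threshold are illustrative assumptions standing in for the far richer head-pose and facial-muscle features real detectors analyze.

```python
import numpy as np


def movement_anomaly_score(landmarks: np.ndarray) -> float:
    """Score how erratic facial-landmark motion is across frames.

    landmarks: array of shape (frames, points, 2) holding (x, y) positions
    of tracked facial landmarks for one face in one video clip.
    Returns the mean frame-to-frame jitter after removing overall head
    translation, a crude stand-in for "unnatural" facial motion.
    """
    # Remove gross head movement by centering each frame on its mean point.
    centered = landmarks - landmarks.mean(axis=1, keepdims=True)
    # Frame-to-frame displacement of each landmark relative to the face center.
    deltas = np.diff(centered, axis=0)
    # Jitter: average magnitude of those residual displacements.
    return float(np.linalg.norm(deltas, axis=2).mean())


def looks_manipulated(landmarks: np.ndarray, threshold: float = 0.8) -> bool:
    """Flag a clip when residual landmark jitter exceeds a chosen threshold."""
    return movement_anomaly_score(landmarks) > threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames, points = 120, 68  # 68-point face mesh over a few seconds of video
    base = rng.uniform(100, 200, size=(points, 2))

    # Natural-looking motion: slow, shared head drift plus tiny per-point noise.
    drift = np.cumsum(rng.normal(0, 0.05, size=(frames, 1, 2)), axis=0)
    real_clip = base + drift + rng.normal(0, 0.1, size=(frames, points, 2))

    # Face-swap-like artifact: landmarks jump around independently between frames.
    fake_clip = base + drift + rng.normal(0, 1.5, size=(frames, points, 2))

    print("real clip flagged:", looks_manipulated(real_clip))
    print("fake clip flagged:", looks_manipulated(fake_clip))
```

The intuition is simple: genuine faces move smoothly from frame to frame, while many face-swapped clips show residual landmark jitter once overall head motion is subtracted out.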

Long story short, be cognizant of media that shows someone saying or doing something strikingly out of line with what you know about them. Unfortunately, deepfake technology is here to stay, but soon we will have effective ways to defend against these types of manipulated media.
