Tech glitches offer clues to exposing disinformation

By Davor Devcic, John Noren, Benjamin Friedman

The Global Engagement Center’s (GEC) Technology Engagement Division hosted a hybrid Tech Engagements Talk on artificial intelligence (AI) tools, July 14, 2022, at the Harry S Truman Building.

Recent advances in generative AI tools have made it dramatically quicker and easier to create convincing synthetic media. Traditional “deepfake” photos and videos relied on algorithmic changes to an existing, original piece of audio or visual content to create new, synthesized media. The next generation of AI tools allows for the creation of entirely synthetic images, video, or audio without any source material or technical knowledge; all that is needed is a brief text prompt. These tools could enable malign actors to unleash sophisticated inauthentic content at a previously unimaginable speed and scale, spreading disinformation that appears to come from trusted sources and posing potentially critical challenges to democracy and national security.

While human detection of video deepfakes is becoming less reliable as deepfake fidelity improves, certain current technical glitches can be helpful indicators, according to University of California Berkeley’s Dr. Hany Farid and Defense Advanced Research Projects Agency’s Dr. William Corvey, who served as GEC’s guest speakers. The Massachusetts Institute of Technology’s Media Lab has also highlighted these techniques. The program examined the following ways to identify synthetic videos of human speakers:

Pay attention to the face: Manipulated visual media almost always feature facial transformations. 

Pay attention to the cheeks and forehead: Does the skin appear too smooth or too wrinkly? Does the apparent age of the skin match that of the hair and eyes?

Pay attention to eyes and eyebrows: Do shadows appear in places that you would expect?

Pay attention to eyeglasses: Is there any glare on the lenses? Is there too much glare? Does the angle of the glare change as the person moves around?

Pay attention to the facial hair or lack thereof: Does the facial hair look real? Deepfakes might add or remove a mustache, sideburns, or a beard.

Pay attention to facial moles: Do they look real?

Pay attention to blinking: Does the person blink too little or too much?

Pay attention to lip movements: Some deepfakes rely on lip syncing. Do the lip movements look natural?
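For analysts triaging many videos, the checklist above can be turned into a simple scoring aid. The sketch below is purely illustrative: the cue names, weights, and threshold are assumptions for demonstration, not GEC's actual methodology or any automated detector.

```python
# Hypothetical triage helper: encodes the manual deepfake checklist as a
# weighted suspicion score. All weights and the threshold are illustrative
# assumptions, not an official GEC tool.

CUE_WEIGHTS = {
    "skin_texture": 2,   # skin too smooth/wrinkly vs. hair and eyes
    "eye_shadows": 2,    # shadows missing or in unexpected places
    "glasses_glare": 1,  # glare absent, excessive, or static
    "facial_hair": 1,    # added/removed mustache, sideburns, or beard
    "moles": 1,          # moles look painted on or inconsistent
    "blinking": 2,       # blink rate too low or too high
    "lip_sync": 3,       # lip movements don't match the audio
}

def suspicion_score(flagged_cues):
    """Sum the weights of the cues a human reviewer flagged."""
    return sum(CUE_WEIGHTS.get(cue, 0) for cue in flagged_cues)

def triage(flagged_cues, threshold=4):
    """Return 'escalate' if the combined score reaches the threshold."""
    return "escalate" if suspicion_score(flagged_cues) >= threshold else "pass"
```

For example, a reviewer who flags both unnatural blinking and mismatched lip movements would score 5 and escalate the video, while a lone glasses-glare flag scores 1 and passes.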

Higher-quality deepfakes may defy human visual detection. To address this, GEC has developed an in-house suite of algorithms that detects the use of computer-generated profile pictures on social media and is currently working to detect synthesized images at scale. GEC’s operational cycle includes a series of complementary programs to identify, assess, test, and rapidly implement commercial and other technologies against foreign propaganda and disinformation. Contact GEC for more information.

Davor Devcic is a counter-disinformation technology advisor on the Technology Engagement Team in the Global Engagement Center.
