Even The AI Behind Deepfakes Can’t Save Us From Being Duped; ‘Believe None Of What You Read & Half Of What You See’

     The title above comes from an article Will Wright posted this week on the technology and security website WIRED.com. Deepfakes, which make someone or something appear different than they really are, are a huge and growing problem. As he notes, “Deepfakes have captured the imagination of politicians, the media and the public. Video and media manipulation and deception have long been possible, but advances in machine learning have made it easy to capture a person’s likeness and stitch it on to someone else. That’s made it relatively simple to create fake [or revenge] porn, surreal movie mashups, and demos that point to the potential for political sabotage,” Mr. Wright wrote. Intelligence agencies around the world have no doubt taken advantage of this burgeoning new genre for offensive intelligence operations and the spread of fake news.
     And the bad news is that the darker angels of our nature have the upper hand for now, as artificial intelligence, big-data mining, and machine learning are all empowering advances in the use and sophistication of deepfake technology, outpacing our ability to determine what is real and what is fake. There is a cottage industry of companies, both here in the U.S. and abroad, trying to perfect AI-enhanced techniques that could quickly and reliably separate fact from fiction; but we aren’t close yet.
     “Researchers are working on automated techniques for spotting videos forged by hand as well as by AI,” Mr. Wright wrote. “These detection tools increasingly rely, like deepfakes themselves, on machine learning, and lots of training data.” The Pentagon’s research arm, the Defense Advanced Research Projects Agency (DARPA), is funding research on automated forgery detection tools increasingly aimed at ferreting out deepfakes, Mr. Wright added.
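     For readers who want a concrete sense of what such machine-learning detectors look like, here is a minimal, purely illustrative sketch of a real-versus-fake frame classifier in Python (PyTorch). The backbone, preprocessing, and single-logit head are assumptions made for the sake of example; this is not DARPA’s, or anyone else’s, actual tooling, and a usable detector would first have to be fine-tuned on a labeled corpus of real and forged frames.

```python
# Purely illustrative sketch of a machine-learning forgery detector of the
# general kind the article describes: a standard image backbone fine-tuned
# to emit one "how likely is this frame synthetic?" score. The backbone,
# preprocessing, and head are assumptions, not any real program's code.
import torch
import torch.nn as nn
from torchvision import models, transforms

def build_detector() -> nn.Module:
    # Start from an ImageNet-pretrained ResNet-18 and replace its
    # classification head with a single real-vs-fake logit.
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    net.fc = nn.Linear(net.fc.in_features, 1)
    return net  # fine-tune on labeled real/forged frames before use

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet stats
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def fake_probability(model: nn.Module, frame) -> float:
    # `frame` is a single video frame as a PIL image.
    model.eval()
    x = preprocess(frame).unsqueeze(0)      # shape (1, 3, 224, 224)
    return torch.sigmoid(model(x)).item()   # near 1.0 = likely synthetic
```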
     “Sam Gregory, Program Director for Witness, a project that trains activists to use video evidence to expose wrongdoing, warns that deepfakes shared in the wild will likely be more difficult to spot quickly, given that they may be compressed or remixed in ways that can fool even the best detectors,” Mr. Wright wrote. “As deepfakes improve, [which they are] Gregory and others say it will be necessary for humans to investigate the origins of a video, or look for inconsistencies — a shadow out of place, or the incorrect weather for a particular location — that may be imperceptible to an algorithm.”
     “Videos can, of course, be manipulated to deceive without the use of AI,” Mr. Wright notes. “A report published last month by Data & Society, a non-profit research group, noted that video manipulation already goes well beyond deepfakery. Simple modifications and edits can be just as effective in misleading people, and are harder to spot using automated tools.”
Arms Race Between Those Creating Deepfakes, And Those Trying To Detect Them
     Sarah Scoles had an interesting article last year on WIRED.com, “These New Tricks Can Outsmart Deepfake Videos — For Now,” describing an arms race between those creating and exploiting deepfakes, and those trying to detect and thwart them. She described how, “for weeks, computer scientist Siwei Lyu had watched his team’s deepfake videos with a gnawing sense of unease. Created by a machine learning algorithm, these falsified films show celebrities doing things they’ve never done. They felt eerie to him, and not just because he knew they had been ginned up [were fake]. “They don’t look right,” he recalled thinking, “but it’s very hard to pinpoint where that feeling comes from.”
     “Finally, one day, a childhood memory bubbled up in his brain,” Ms. Scoles wrote. “He, like many kids, had held staring contests with his open-eyed peers.” “I always lost those games,” he said, “because when I watch their faces and they don’t blink, it makes me very uncomfortable.”
     “These lab-spun deepfakes, he realized, were needling him with the same discomfort,” Ms. Scoles wrote. “He was losing the staring contest with these film stars, who did not open and close their eyes at the same rates as typical [normal] humans.”
     “Deepfake programs pull in a lot of images of a particular person — you, your ex-girlfriend, Kim Jong-Un — to catch them at different angles, with different expressions, saying different words,” Ms. Scoles explained. “The algorithms learn what this character looks like, and then synthesize that knowledge into a video of that person doing something he or she never did.” Make them appear in a porn ‘movie,’ or any number of other character-damaging scenarios.
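     For the technically inclined, the recipe Ms. Scoles describes matches the classic open-source face-swap design: one shared encoder learns a common facial representation from both people’s photos, a separate decoder is trained per identity, and swapping decoders at render time puts person B’s expressions on person A’s face. Here is a hedged, minimal sketch of that idea; the layer sizes and the 64x64 input resolution are arbitrary assumptions, not any real tool’s configuration.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder face-swap
# autoencoder. Training (not shown) reconstructs each person's faces via
# the shared encoder and that person's own decoder; the swap happens at
# inference time. All dimensions here are illustrative assumptions.
import torch
import torch.nn as nn

class FaceSwapAE(nn.Module):
    def __init__(self, latent: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(              # shared by both people
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 32 -> 16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, latent),      # assumes 64x64 inputs
        )
        self.decoder_a = self._decoder(latent)     # trained on person A
        self.decoder_b = self._decoder(latent)     # trained on person B

    @staticmethod
    def _decoder(latent: int) -> nn.Module:
        return nn.Sequential(
            nn.Linear(latent, 128 * 16 * 16),
            nn.Unflatten(1, (128, 16, 16)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def swap_to_a(self, face_b: torch.Tensor) -> torch.Tensor:
        # Encode person B's pose/expression, decode it as person A's face.
        return self.decoder_a(self.encoder(face_b))
```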
     “These fakes, while convincing if you watch for a few seconds on a phone screen, aren’t perfect (yet),” Ms. Scoles wrote. “They contain tells, like creepily ever-open eyes, from flaws in their creation process. In looking into DeepFake’s guts, Lyu realized that the images the program learned from didn’t include many with closed eyes (after all, you wouldn’t keep a selfie where you were blinking, would you?).” “This becomes a bias,” Lyu said. “The neural network doesn’t get blinking,” Ms. Scoles noted. “Programs might also miss other ‘physiological signals intrinsic to human beings,’ said Lyu’s paper on the phenomenon, such as breathing at a normal rate, or having a pulse. While the research focused specifically on videos created with this particular software, it is a truth universally acknowledged that even a large set of snapshots might not adequately capture the physical human experience, and so any software trained on those images may be found lacking.”
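     Lyu’s paper itself trained a recurrent neural network on eye regions, but the blinking ‘tell’ can be illustrated with a far simpler heuristic: measure how open each eye is in every frame, count blinks, and flag faces that blink far less often than people do. The sketch below assumes six (x, y) eye landmarks per frame from any off-the-shelf face-landmark library, and both thresholds are rough assumptions, not values from the paper.

```python
# Simplified blink-rate check inspired by (but not identical to) Lyu's
# eye-blinking work. Humans blink every few seconds; early deepfakes,
# trained mostly on open-eyed photos, blinked far less.
from typing import Sequence, Tuple
import math

Point = Tuple[float, float]

def eye_aspect_ratio(eye: Sequence[Point]) -> float:
    # Classic EAR: vertical eye openings over horizontal eye width,
    # using six landmarks p1..p6. Near zero when the eye is closed.
    def dist(p: Point, q: Point) -> float:
        return math.hypot(p[0] - q[0], p[1] - q[1])
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def looks_blink_starved(ears: Sequence[float], fps: float,
                        closed_thresh: float = 0.2,
                        min_blinks_per_min: float = 4.0) -> bool:
    # Count open-to-closed transitions, then compare the blink rate
    # against a conservative human baseline (both thresholds assumed).
    blinks = sum(1 for prev, cur in zip(ears, ears[1:])
                 if prev >= closed_thresh > cur)
    minutes = len(ears) / fps / 60.0
    return minutes > 0 and (blinks / minutes) < min_blinks_per_min
```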
     “Lyu’s blinking revelation revealed a lot of fakes,” Ms. Scoles wrote. “But, a few weeks after his team put a draft of their paper online, they got anonymous emails with links to deepfaked YouTube videos whose stars opened and closed their eyes normally. The fake-content creators had evolved. Which means deepfakes will likely become (or stay) an arms race between the creators and the detectors. But, research like Lyu’s can at least make life harder for the fake-makers.” “We are trying to raise the bar,” Lyu said. “We want to make the process [of creating deepfakes] more difficult and more time-consuming.”
     At the Los Alamos National Laboratory, cyber scientist Juston Moore envisions a nightmare scenario for an average citizen: “Tell an algorithm that you want a picture of a certain individual robbing a drugstore. Implant it in that establishment’s security footage; and send him to jail.” “In other words,” Ms. Scoles wrote, “Moore is worried that if evidentiary standards don’t evolve” [or keep pace] with the deepfake genre, “people could be easily framed,” or their character and reputation ruined by a well-placed, well-timed, embarrassing, but completely fake video. And even if the video is later proven to be fake, the damage may already be done; a certain percentage of the population may still believe it regardless. Despite the fact that we clearly landed on the Moon in 1969, there is still a certain sector of the population that thinks the landing was faked.
     This is indeed a brave new world we are in, where the cyber wilderness of mirrors has metastasized into both deepfake videos and ‘fake news.’ The darker angels of our nature appear to have the upper hand for now. I can imagine many more horror stories of innocent people suffering personal and professional penalties through no fault of their own, because a jealous spouse, co-worker, etc., takes advantage of a gullible public and corporate structure to plant fake videos or stories that have no basis in fact. It is hard to see how AI alone will stop a sick, twisted, devious but sophisticated individual, nation-state, intelligence entity, and so on, from using deepfakes for their own gain or advantage. No wonder there is a burgeoning off-the-grid network. I remember my mother warning me long ago: “Don’t believe anything you read, and half of what you see.” She didn’t know how right she would be all these years later. Lon Chaney, eat your heart out. RCP, fortunascorner.com
