Military Deception: AI’s Killer App? If You Do Not Practice To Deceive — You Are More Susceptible To Being Deceived
     Deepfake videos and ‘fake news’ are revolutionizing the denial and deception domain — an elegant warfare technique and operation that can be devastating to the enemy. But now, with the emergence and application of artificial intelligence (AI), deception is being taken to a whole new level of sophistication and use across a much wider spectrum than ever before.
     Edward Geist and Marjory Blumenthal posted an October 23, 2019 article, “Military Deception: AI’s Killer App?,” to the national security and military website/blog, WarOnTheRocks. I refer you to that publication for the entire article. Mr. Geist and Ms. Blumenthal observe that “the combined use of AI and sensors to enhance situational awareness could make new kinds of military deception possible. AI systems will be fed data by a huge number of sensors — everything from space-based synthetic-aperture radar to cameras on drones to selfies posted on social media.”
     The authors warn: “If technological progress boosts deception, it will have unpredictable effects. In some circumstances, improved deception benefits attackers; in others, it bolsters defenders. And, while effective deception can impel an attacker to misdirect his blows, it does nothing to shield the defender from those that do land. Rather than shifting the offense-defense balance, AI might inaugurate something qualitatively different: a deception-dominant world in which countries can no longer gauge that balance.”
    Their conclusion: “That’s a formula for a more jittery world. Even if AI-enhanced military intelligence, surveillance, and reconnaissance prove effective [at ferreting out clever deception], states that are aware that they don’t know what the enemy is hiding are likely to feel insecure. For example, even earnest, mutual efforts to increase transparency and build trust would be difficult, because both sides could not discount the possibility their adversaries were deceiving them with the high-tech equivalent of a Potemkin village. That implies more vigilance, more uncertainty, more resource consumption, and more readiness-fatigue will follow. As Paul Bracken observed, “the thing about deception is that it is hard to prove it will really work,” but technology ensures that we will increasingly need to assume that it will.”
     A couple of observations. If you do not practice and train to carry out deception operations, you are more susceptible to being deceived; and pulling off a critical deception operation will be much harder to do. Deception is an elegant and unique skill-set that requires much practice and intricate thought. I do not believe the U.S. Intelligence Community places enough emphasis on developing the talent necessary to conceive of sophisticated deception operations — and it is thus less able and prepared to recognize when it is being deceived. Unless things have significantly changed, analysts are too dependent on big-data-mining algorithms to do their research and critical thinking. A clever adversary employing deception can make mincemeat of big data mining. And AI has empowered even a novice practitioner to employ sophisticated deception on the cheap. Big data mining and machine learning can significantly enhance our understanding of the adversary, and much more quickly than in the past. But these same techniques can be used against us if we do not develop an analytical mindset that understands that the sick and twisted can still outsmart and outmaneuver even the best algorithms. The mantra should be: Don’t believe anything you read, and only half of what you see. RCP