DIA Experiment Shows Artificial Intelligence Can Outperform Human Analysts In A Key Area; But Against A Clever, Sick-And-Twisted Adversary, Denial & Deception Still Trump AI
Patrick Tucker posted an April 29, 2020 article, “Artificial Intelligence Outperforms Human Analysts In A Key Area,” to the national security and technology website DefenseOne.com.
He begins by noting that “in the 1983 movie, WarGames, the world is brought to the edge of nuclear destruction, when a military computer using artificial intelligence (AI) interprets false data as an imminent Soviet missile strike. Its human overseers in the Defense Department, unsure whether the data is real, can’t convince the AI it may be wrong.” Fast forward to today. Mr. Tucker reports that “a recent finding from the Defense Intelligence Agency, or DIA, suggests that in a real situation where humans and AI were looking at enemy activity, those positions would be reversed.”
“AI can actually be more cautious than humans about its conclusions in situations where the data is limited,” Mr. Tucker wrote. “While the results are preliminary, they offer an important glimpse into how humans and AI will complement one another in critical national security fields.”
DIA is the Pentagon’s intelligence entity. Terry Busch, Director for DIA’s Machine-Assisted Analytic Rapid-Repository System, or MARS, recently sat down with Mr. Tucker of DefenseOne to discuss DIA’s efforts to incorporate AI into analysis and decision-making.
Mr. Tucker notes that “earlier this year, Busch’s team set up a test between a human and an AI. The first part was simple enough: use available data to determine whether a particular ship was in U.S. waters.” “Four analysts came up with four methodologies; and the machine came up with two different methodologies and that was cool. They all agreed that this particular ship was in U.S. waters,” Mr. Busch said. “So far, so good. Humans and machines using available data can reach similar conclusions.”
“The second phase of the experiment tested something different,” Mr. Tucker explained: “conviction” in the judgment reached. “Would humans and machines be equally certain in their conclusions if less data was available? The experimenters severed the connection to the Automatic Identification System, or AIS, which tracks ships worldwide.”
“It’s pretty easy to find something if you have an AIS feed, because they’re going to tell you exactly where a ship is located in the world,” Mr. Busch said. “If we took that away, how does that change confidence; and do the humans and the machine get to the same end state?”
“In theory,” Mr. Tucker wrote, “with less data, the human analyst should be less certain of their conclusions, like the characters in WarGames. After all, humans understand nuance, and can conceptualize a wide variety of outcomes. The researchers found the opposite.”
“Once we began to take away sources, everyone was left with the same source material — which was numerous reports, generally social media, open source kind of things, or references to the ship being in the United States — so everyone had access to the same data,” Mr. Busch explained. “The difference was that the machine, and those responsible for machine learning, took far less risk — in confidence — than the humans did. The machine actually does a better job of lowering its confidence than humans do… There’s a little bit of humor in that because the machine still thinks it’s pretty right.”
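The confidence behavior Mr. Busch describes — a machine backing off toward uncertainty as evidence is removed — can be illustrated with a toy Bayesian sketch. This is purely my own illustration, not DIA’s or MARS’s actual method; the prior, likelihood ratio, and report counts are invented for the example:

```python
# Toy illustration (not DIA's method): posterior confidence that a ship
# is in U.S. waters, updated from independent reports via Bayes' rule.
# A well-calibrated model's confidence falls back toward its prior as
# reports are taken away -- the pattern the DIA experiment observed.

def posterior_confidence(prior, likelihood_ratio, num_reports):
    """Posterior P(ship in U.S. waters) after num_reports independent
    reports, each multiplying the prior odds by likelihood_ratio."""
    odds = prior / (1.0 - prior) * likelihood_ratio ** num_reports
    return odds / (1.0 + odds)

prior = 0.5  # no initial lean either way
lr = 3.0     # each report is 3x likelier if the ship really is in U.S. waters

with_ais = posterior_confidence(prior, lr, 6)  # full feed: many corroborating reports
without = posterior_confidence(prior, lr, 2)   # AIS severed: only open-source scraps

print(f"confidence with 6 reports: {with_ais:.3f}")  # near certainty
print(f"confidence with 2 reports: {without:.3f}")   # still leans yes, but weaker
```

With six corroborating reports the posterior is close to 1; with two it drops to 0.9. The machine “still thinks it’s pretty right,” but its confidence has measurably fallen — exactly the calibration humans in the experiment failed to show.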
Mr. Tucker notes that “the experiment provides a snapshot of how humans and AI will team for important analytical tasks. But, it also reveals how human judgment has limits — when pride is involved.”
“Humans, particularly experts in specific fields, have a tendency to overestimate their ability to correctly infer outcomes when given limited data,” Mr. Tucker wrote. Nobel prize-winning economist and psychologist Daniel Kahneman has written on the subject extensively, describing the tendency as the “inside view.” Dr. Kahneman “cites the experience of a group of Israeli educators assigned to write a new textbook for the Ministry of Education. They anticipated that it would take them a fraction of the amount of time they knew it would take another similar team. They couldn’t explain why they were overconfident; they just were. Overconfidence is a particularly human trait among highly functioning experts, one that machines don’t necessarily share,” Mr. Tucker noted.
“The DIA experiment offers an important insight for military leaders, who hope AI will allow [them to make] faster and better decisions, from inferring enemy positions, to predicting possible terror plots,” Mr. Tucker wrote. “The Pentagon has been saying for years that the growing amount of intelligence data that flows from an ever-wider array of intelligence [collection] sensors and sources, demands algorithmic support.”
“DIA’s eventual goal is to have human analysts and machine intelligence complement each other, since each has a very different approach to analysis, or as Busch calls it, ‘tradecraft,’” Mr. Tucker wrote. On the human side, that means “transitioning the expert into a quantitative workflow,” Mr. Busch said. Take that to mean helping analysts produce insights that are never seen as finished, but that can change as rapidly as the data used to draw them. It also means teaching analysts to become data literate, understanding things like confidence intervals and other statistical terms.
Mr. Busch “cautioned that the experiment doesn’t imply that defense intelligence work should be handed over to software,” Mr. Tucker concludes. “The warning from WarGames is still current.” “On the machine side, we have experienced confirmation bias in big data. [We’ve] had the machine retrain itself to error… That’s a real concern for us.”
We are in the early era of actual use of artificial intelligence across almost every domain one can think of. Victory in future warfare may depend as much on algorithms as ‘bullets,’ as has been said before. And the intelligence community is particularly challenged. The number of topics, and the timelines to understand their implications, are severely constrained compared to any previous time in our history: cyber, hypersonic weapons, nanotechnology, space/counter-space, the use and employment of UAVs/UUVs, autonomous systems, miniature and micro robotics, biometrics, genetic manipulation, and so on. Indeed, genetic editing is becoming as easy as editing a Microsoft Word document.

Second, other emerging genres, such as adversaries’ use of social media to spread disinformation and ‘fake news,’ and denial and deception, demand that the IC adopt new intelligence indicators. New warning indicators for technology or capability surprise are needed to address these diverse, evolving, and potentially disruptive technologies, as well as the means and methods used to employ them. AI could certainly help here.

But AI is far from a panacea for the IC and the military. Yes, it is a force multiplier when analyzing troves of data in a compressed time period. But more often than not, rather than missing a strategic or capability surprise, we are far more vulnerable to missing clever and creative ways that technologies we understand very well can be used in ways we did not anticipate. The use and emergence of IEDs early in the Iraq war is an example. Focusing on the new and high-tech can cause us to overlook innovative applications of current capabilities. AI likely would fall short in this area.
And of course, the adversary gets a vote, as General Petraeus likes to remind us. A devious, clever, sick-and-twisted adversary is likely to trump AI at almost every turn. I know there are serious, ongoing efforts to use AI to help ferret out clever, sophisticated ‘fake news’ and deception, but we’re a long way from that as far as I know. Imagination, and the use of surprise by an adversary, is something I doubt AI copes with very well. Sometimes the data is telling you one thing, but your ‘gut’ is telling you something else. The wisdom of the crowd — or in this case, the wisdom of the data — isn’t always right.
Having said all that, I applaud the DIA effort.
Finally, as horror writer Stephen King once wrote, “God punishes us for what we cannot imagine.” RCP, fortunascorner.com