U.S. Army PFC Terry Hollywood, assigned to the 224th Military Intelligence Battalion, conducts maintenance on a Gray Eagle unmanned aircraft in preparation for Project Convergence at Yuma Proving Ground, Arizona. Photo by SGT Marita Schwab
By Patrick Tucker
Science & Technology Editor, Defense One
Unflappable and expendable, unmanned weapons could reduce collateral damage in war, if only U.S. leaders realized it, says a former commander of U.S. Special Operations Command.
But the United States is “unfortunately…dawdling along” in deploying artificial intelligence and unmanned systems in high-stakes scenarios, Tony Thomas said Thursday at the National Press Club.
Thomas recalled the 1988 downing of Iran Air Flight 655 by the U.S. guided missile cruiser Vincennes, which mistook a radar blip for a hostile jet fighter. Its captain “made the fateful decision to shoot down an airliner,” he said. “But put an unmanned capability out there in the Strait of Hormuz, doing what your gray hulls are required to do in terms of monitoring transit and freedom of navigation, that sort of thing. An unmanned capability doesn't have that duress. [It] doesn't have the fear and then the bias. It can offer itself up, get blown out of water. We'll replace it with another one out there.
“Think of that opportunity and flash forward it to any other number of places right now where we have humans in harm's way under a lot of fatigue, a lot of pressure,” he said. A person in such a situation is “potentially bound to make a fatal decision.”
The U.S. Navy is experimenting with unmanned systems in the Central Command region but not in the South China Sea, where many believe a conflict with China could emerge in the next few years.
Thomas’ argument echoed pitches by robot makers who say American police forces would kill fewer people if robots took the place of human officers in some dangerous situations.
Thomas led Special Operations Command when it began to experiment with Maven, an artificial-intelligence tool that helped human analysts with targeting decisions. When Google engineers discovered their company was helping with Maven, some quit in protest, and company leaders eventually withdrew from the program. (Thomas currently serves as an advisor to AI company Primer.)
Thomas said that what too many people in companies like Google don’t understand is that the U.S. military wants to use AI not simply to accelerate operations but also to reduce collateral damage and make operations more precise. He recalled sitting with other commanders making decisions about how and when to undertake operations. They “sat for days, months, weeks in joint operation centers, pondering whether or not to take a shot. And the criteria was always, without exception, zero collateral damage. Did we get it right every time? No. It was war and we got it wrong. There was fog and friction. There were malfunctions of weapons systems. But our U.S. way of war was zero collateral. And [with Maven] we were trying to progress, to pursue that at scale, you know, with modern technology.”
Andrew Moore, who currently advises CENTCOM, was working on Maven at Google when company leaders decided to step away. “I expected to arrive in an environment where like 50% of the engineers were very sophisticated in their thinking about the need for national security and 50% would be super-naive and just sort of thought that all kinds of security-related stuff was bad,” he said during the recent Global SOF event in Tampa, Florida. “Turned out I was wrong. 98% were extremely supportive. 2% probably” were not.
Moore laid the decision to turn away from Maven squarely at the feet of the company’s former leaders, who did not clearly or consistently explain to employees just what they were doing with the U.S. military or why. “If in an attempt to appease all of your employees, by listening to all of them and kind of saying yes to all of them, you can actually cause a catastrophic failure of a place to have a real sense of mission,” he said.