The US Military Needs to Urgently Rethink its Deep Learning Strategy

Carlos E. Perez
Published in Intuition Machine · Mar 18, 2018



A public report from Harvard reveals how unprepared the US military is when it comes to the Artificial Intelligence (AI) technology known as Deep Learning. The study, titled “Artificial Intelligence and National Security”, was written by Greg Allen and Taniel Chan, published in July 2017 by Harvard Kennedy School’s Belfer Center, and conducted with funding from IARPA. I’ve written about the many tribes of AI and about how the term AI is too ambiguous, meaning too many things to too many people. So where do we find Deep Learning in this report from Harvard? It is hard to see: it appears in a small footnote buried on page 63, very easy for the casual reader to miss:

Note: There are many different paradigms of machine learning. Most recent technological progress has been within the neurology-inspired connectionist paradigm, which includes Deep Learning.

In more recent news, Google (the same folks who aren’t supposed to do evil) is helping UAVs analyze video imagery using Deep Learning. The effort, known as Project Maven, was established in April 2017:

Although we have taken tentative steps to explore the potential of artificial intelligence, big data, and deep learning, I remain convinced that we need to do much more, and move much faster, across DoD to take advantage of recent and future advances in these critical areas.

Information about Project Maven was made public by the DoD in July 2017.

So if you follow the paper trail, only ‘tentative’ steps had been taken to explore AI technologies, including Deep Learning. That was a little less than one year ago. Interestingly enough, as reported by The Intercept, Eric Schmidt (whom I quoted in my previous article) was involved in facilitating the collaboration with Google:

The DIB — which is chaired by Eric Schmidt, former executive chair of Alphabet, Google’s parent company — recommended “an exchange program and collaboration with industry and academic experts in the field.”

Greg Allen, a co-author of the earlier Harvard report, added his insider perspective:

Before Maven, nobody in the department had a clue how to properly buy, field, and implement AI

Let me reemphasize what he just said. Nobody had a clue when the April 2017 memo was disseminated department-wide. Nobody had a clue less than a year ago. The word “nobody” is actually not correct; I know of some people in the department who did have a clue.

So if you parse the public information that is out there, the US military’s work with Deep Learning appears to be confined to a single project. To be perfectly fair, DARPA funded projects in 2017 to explore “Context Adaptation”, “Explainability”, and “Biologically Inspired Architectures”, all of which address state-of-the-art ideas. DARPA perhaps has the few folks who did have a clue. I’ve also given a talk to researchers inside the department who have been doing Deep Learning since 2016. However, there is a massive gap between what happens in research and what is ready for deployment. There is a difference between a few folks studying the subject and a real, urgent initiative to dive into Deep Learning.

There is ample evidence that Deep Learning isn’t taken seriously enough by the US military. Here is a job posting from just 3 days ago: the US Army is seeking a Ph.D. in Deep Learning with a whopping salary of $52,000. I can only guess there’s a misprint here; adding an extra digit would make it a more credible offer. In all fairness, Deep Learning is an exponentially fast-moving field, and I can’t expect every department to be up to speed with the latest developments. But seriously now, where is the urgency?

An F-35 fighter jet can cost at least $300 million. To put that in perspective, Google acquired DeepMind for $500 million. Viewed through the lens of current progress in Deep Learning, the F-35 may become as antiquated as the military use of horses before WWII; in the early days of US involvement in WWII, the Army was still using horses on the battlefield. The F-35 is likely expensive because of its sheer complexity. However, a swarm of 10,000 drones costing less than 1/300th the cost of an F-35 (that is, under $1 million for the entire swarm, or roughly $100 per drone) can be equally effective as a weapon.

The F-35 fiasco is evidence of a much larger problem in the development of advanced weaponry. A recent investigation by the Wall Street Journal reveals an inability of military manufacturers to manage big projects, which may be at the heart of the problem:

“There’s a whole generation of German engineers who haven’t worked on a major defense project,” said Mr. Mölling, the defense expert. “It’s not that they lost this skill; they never learned it.”

This is the tip of the iceberg. The bigger problem is in the software. Large defense-oriented companies have never learned how to build software in the agile manner of more nimble companies like Google and Apple. This is hinted at in the same WSJ article:

defense companies have failed to attract the graduates needed to develop sophisticated new systems that are increasingly centered on software

So even at a basic level, the military-industrial complex does not have the software skills to build complex systems that are robust and bug-free. This is of course compounded by the newer reality that “Software 2.0”, Andrej Karpathy’s term for systems whose behavior is defined by learned weights rather than hand-written rules, is emerging in the form of Deep Learning enabled systems. The industrial complex is now two generations behind in understanding how complex software-driven systems should be built.
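To make the “Software 2.0” distinction concrete, here is a minimal sketch in Python. This is a toy illustration only: the threat-flagging task, the features, and the thresholds are all hypothetical, not drawn from any DoD system or from the Harvard report.

```python
import numpy as np

# Software 1.0: behavior is hand-written logic.
# Hypothetical toy task: flag a two-feature sensor reading as a threat.
def classify_v1(speed: float, size: float) -> bool:
    return speed > 0.7 and size > 0.5  # rules chosen by a programmer

# Software 2.0: behavior lives in learned weights, not hand-written rules.
# A toy logistic-regression model trained by plain gradient descent.
rng = np.random.default_rng(0)
X = rng.random((1000, 2))                              # synthetic sensor readings
y = ((X[:, 0] > 0.7) & (X[:, 1] > 0.5)).astype(float)  # labels match the v1 rules

w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    grad = p - y                            # dLoss/dlogit for log loss
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

def classify_v2(speed: float, size: float) -> bool:
    logit = np.dot(w, [speed, size]) + b
    return 1.0 / (1.0 + np.exp(-logit)) > 0.5

# The learned model approximates the rules without anyone writing them down.
print(classify_v1(0.9, 0.8), classify_v2(0.9, 0.8))
```

The point of the toy: in the second version, no engineer ever writes the decision logic; it is distilled from data. Building, testing, and maintaining that kind of system is a fundamentally different engineering discipline from the one the defense industry has failed to master even in its Software 1.0 form.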

There is a stark difference between how a nimble company like SpaceX is able to deliver a revolutionary space launch system and the capabilities of its more traditional competitors. This is the problem with the current industrial complex: the skills are simply missing, and no amount of money thrown at the problem can fix that.

Not only are we designing weapons using obsolete skill sets, we are also preparing for future wars using technology from a bygone era. We cannot continue to spend $300 million per plane on unbounded complexity that will be technologically obsolete the first day it is deployed against AI-inspired weaponry. What is needed is not piecemeal, ‘tentative’ exploration of the capabilities of Deep Learning. Rather, what is needed is an urgent rethink of military strategy and tactics. There is a vast difference in capability when AI is used to invent weaponry as compared to when humans do.

The last thought is just plain scary.

Additional Commentary

On further contemplation, there is grassroots work in Deep Learning; however, it takes an excruciatingly long time for understanding to diffuse up to the leadership and then back down to the rank and file. The US military is very hierarchical, and information dissemination is therefore restricted in inefficient and artificial ways. If leaders are too busy putting out fires on yesterday’s technologies and strategies, then they simply have no bandwidth to address more advanced ways of doing things. Compounding the problem, experts have a harder time absorbing new concepts that run contrary to their expert understanding. Unfortunately, nobody has the luxury of ignoring the developments in Deep Learning. It is happening too fast to wait for large organizations to finally “get the memo”. In addition, there is an issue with an American culture that fears Terminator-style technology.

Updates

Explore Deep Learning: Artificial Intuition: The Improbable Deep Learning Revolution
Exploit Deep Learning: The Deep Learning AI Playbook
