The Pentagon is spending $400 billion developing and building F-35 jet fighters. Lockheed Martin photo

Just How Much Money Should We Let the Pentagon Blow on (Not) Developing Weapons?

Air Force officer Dan Ward asks which military investments pay off — and how we even know

David Axe
Published in War Is Boring
Jul 30, 2013

Too little information — it’s a long-standing critique of military weapons development. There’s a lack of accessible data related to acquisition programs, data that’s essential to understanding and improving our ability to invent and buy new gear.

We spend billions of dollars … and really have no idea how much we’re wasting.

Even though this business produces data by the truckload, as I explained in an article for Small Wars Journal last year, sometimes we have an embarrassingly difficult time answering even basic questions. Like how much time do we spend developing new gear?

We don’t know, because no one is collecting the data. We used to keep track of this sort of thing, as shown in this graph from a 1998 Defense Science Board report.

But confronted with a basic question — are we doing better or worse than we were 10 years ago? — in February 2012 Under Secretary of Defense for Acquisition, Technology and Logistics Frank Kendall admitted he didn't have an answer.

The absence of readily available up-to-date info is an even bigger problem than the troublesome upward trend depicted in the chart.

But there's good news: something is being done about this. A June report signed by none other than Kendall himself provides 126 pages of measurements, metrics, charts and graphs, helping to show how the weapons-development community is doing.

“Data collection is a strategic project well underway,” the report states. “Rigorous analysis is supported by a new analysis cell … as well as the continued use of rigorous and objective internal and external analytic entities.”

“Subsequent reports will extend and expand these findings,” the document asserts.

That’s easier said than done. But lo and behold, the report actually backs up this assertion with honest-to-goodness data.

Wonk that I am, I sincerely love almost everything about this report. The numbers and figures are great on their own, but what really blew my mind was the transparency, honesty and humility in these pages. The authors write about the challenges and limitations of the study, openly and repeatedly acknowledging that the data and accompanying findings are only “a step in that process” of seeking deeper understanding.

There is no triumphal claim to a comprehensive data set. Instead, the authors admit to providing just a glimpse at the overall picture, with a lot more work still to be done. In fact, if this report were a piece of military equipment, it would be the proverbial 70-percent solution so many of us advocate building.

In my eyes, that makes it just about perfect.

No doubt some people would prefer to wait for a more comprehensive set of figures, but they could be in for a pretty long wait. I find wisdom in this iterative approach, sharing the data we have today and rolling out fresh info as it becomes available.

That happens to be a good way to build new gear, as well — one iteration, increment and block at a time.

I also love the way the report invites the reader to participate, challenging independent-minded thinkers and objective analysts to draw their “own conclusions and observations about the performance of the defense acquisition system, its sufficiency and the degree of progress made to date.”

The report offers some analysis, but for the most part “interpretation of performance and the implication for policies are left largely to the reader.” That’s a good thing because it encourages reader engagement. I look forward to many productive — although to outsiders, admittedly boring — discussions based on this one document.

Let me kick off one such debate. Figure 2.3 on page 13 shows just how much money the military services have spent on projects that were subsequently canceled without producing much usable hardware.

As the report unflinchingly shows, that's a ton of cash with no product to show for it. But what should we make of these undeniably large sunk costs? Is it an indication of bad decisions? Or is canceling these under-performing programs a sign of good judgment and strong leadership? Maybe it's both at once.

Maybe it’s something else altogether.

Before we jump to the obvious “OMG how wasteful!” conclusion, consider this: there are situations where cancellations are justified and wise, so zero is almost certainly the wrong target.

Maybe it’s good news that “every year from 1996 to 2010, the Army spent more than $1 billion annually on programs that ultimately were canceled,” according to the June report. Perhaps this is proof the Army is making the tough call and terminating programs that once were good ideas but now aren’t — an admirable show of strong-minded leadership and a willingness to stop throwing good money after bad.

Or maybe not. Spending oceans of money on projects that don’t deliver is hardly a desirable scenario. Maybe the Army would be better off only spending $500 million a year on such efforts. How about $100 million? $47 million? Then again, if the venture capitalist model is what we’re going for, where eight out of 10 projects fizzle but one or two change the world, maybe $2 billion is a better number.
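To make that trade-off concrete, here's a toy back-of-the-envelope sketch in Python. The budgets, success rates and payoff multiples are made-up illustrations for the sake of argument, not figures from the report. All it shows is how a venture-capital-style portfolio of risky bets compares with a smaller pot of safer ones under whatever assumptions you feed it.

```python
# Toy sketch: compares two hypothetical R&D spending strategies to
# illustrate why "zero dollars on cancelled programs" may not be the
# right target. Every number below is an illustrative assumption,
# not a figure from the Pentagon report.

def expected_payoff(budget, num_programs, success_rate, payoff_multiple):
    """Expected value of splitting a budget across risky programs.

    Each program costs budget / num_programs. A successful program
    returns payoff_multiple times its cost; a cancelled one returns
    nothing.
    """
    cost_per_program = budget / num_programs
    expected_successes = num_programs * success_rate
    return expected_successes * cost_per_program * payoff_multiple


# Venture-capital-style portfolio: ten swings, two connect, big payoff.
vc_style = expected_payoff(budget=2.0e9, num_programs=10,
                           success_rate=0.2, payoff_multiple=10)

# Conservative portfolio: fewer, safer bets with modest payoff.
safe_style = expected_payoff(budget=0.5e9, num_programs=3,
                             success_rate=0.8, payoff_multiple=2)

print(f"VC-style expected payoff:     ${vc_style / 1e9:.1f} billion")
print(f"Conservative expected payoff: ${safe_style / 1e9:.1f} billion")
```

Plug in your own assumptions and see where the break-even point lands. That's exactly the kind of reader-driven analysis the report invites.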

Where is the sweet spot? I doubt anyone knows for sure, so additional analysis and research are clearly called for. Personally, I suspect the ideal number is far south of $1 billion, but at least some numbers are on the table now. As the report says, it's a start.

Remarkably, the report does not accompany this chart with explanations, excuses or assertions. It simply presents the unvarnished data and challenges the reader to wrestle with the ambiguity of it all. I think that’s fantastic.

Before we leave this topic, is anyone else as surprised as I am that so many projects actually get terminated? Conventional wisdom says weapons programs linger forever and are nearly impossible to kill, no matter how dysfunctional they might be.

But these numbers seem to tell a different story. This is not the only instance where data runs counter to popular perception, which is precisely why we need this info and precisely why I’m so excited about it.

I’m also glad to see the way the report defines acquisition success. “Our ultimate measure of performance is providing effective systems to the war-fighter that are suitable for fielding, at costs that are affordable, while ensuring taxpayers’ money is spent as productively as possible.”

Historically, definitions of acquisition success tend to be programmatic (“deliver on time, on budget”) or technical (“produce the most advanced systems”). In contrast, this definition is operationally focused and economically pragmatic. Instead of concentrating on program managers or technologists, this definition addresses the interests of the two primary acquisition stakeholders: the troops who fight our wars and the taxpayers who fund them.

That’s important, because the way we define success shapes the way we decide what to do, what to reward and what to cancel.

Look, this report’s not perfect. It’s not complete. Some of the numbers are undoubtedly more meaningful, accurate and useful than others. As the report explains, it’s simply a step towards a still distant goal. However, it’s one of the biggest steps in that direction I’ve seen in a long time.

Now if you’ll excuse me, I have some data to analyze.

Dan is an Air Force acquisitions professional. The views expressed in this article are solely his and do not reflect the official policy or position of the U.S. Air Force or Department of Defense. We previously covered the decline of the Royal Navy. Subscribe to War Is Boring: medium.com/feed/war-is-boring.
