Performance of the Defense Acquisition System: 2016 Annual Report

U.S. Department of Defense

(Click here to read the full report)


Eliminate all other factors, and the one which remains must be the truth.
— Sir Arthur Conan Doyle in “The Sign of the Four”

As this report is being published, I am concluding 5 years of serving as the Under Secretary of Defense for Acquisition, Technology, and Logistics. This fourth report in the series continues my long-term effort to bring data-driven decision making to acquisition policy. This report demonstrates that the Department of Defense (DoD) is making continuing progress in improving acquisition. The overall series presents strong evidence that the DoD has moved — and is moving — in the right direction with regard to the cost, schedule, and quality of the products we deliver. There is, of course, much more that can be done to improve defense acquisition, but with the 5-year moving average of cost growth on our largest and highest-risk programs at a 30-year low, it is hard to argue that we are not moving in the right direction.

Each year we add cumulative data and new analysis to the report. This year is no exception. While that data can show us ways and places to improve, I believe there is no secret to what it takes to achieve good results in defense acquisition. The short form of this is to: (1) set reasonable requirements, (2) put professionals in charge, (3) give them the resources that they need, and (4) provide strong incentives for success. Unfortunately, there is a world of complexity and difficulty in each of these four items.

Creating new — and sometimes well beyond the current state of the art — weapons systems that will give our warfighters a decisive operational advantage far into the future will never be a low-risk endeavor. That risk can be managed, however, and while we should not expect perfection, we should be able to keep the inevitable problems that will arise within reasonable bounds. We should also be able to continuously improve our performance as we learn from our experience and work to improve our ability to make sound acquisition decisions. This volume and its predecessors are dedicated to these propositions.

We open this volume with some accrued insights and an attempt to refute some popular myths about defense acquisition. Too much of our decision making on acquisition policy has been based on cyclical and intuitive conventional wisdom and on anecdote — or just the desire, spurred by frustration, to effect change. As I’ve worked in this field for more than four decades, it has become clear to me that there is no “acquisition magic” — no easy solution or set of solutions that will miraculously change our results. Most attempts to direct or legislate acquisition “magic” in some form have been counterproductive and often have only increased the system’s bureaucracy and rigidity or led to excessive risk taking — neither of which is helpful. What we need, and always will need, is professionalism, hard work, attention to detail, and flexible policies and incentives that the data show align with the results we desire. Improving each of these is a continuous endeavor of which this volume is a part.

A U.S. Air Force B-2 Spirit aircraft navigates into position during an aerial refueling mission with a KC-135R Stratotanker aircraft over the North Atlantic. Photo by U.S. Air Force Tech. Sgt. Paul Villanueva II

Principles for Improving Defense Acquisition

  • People matter most; we can never be too professional or too competent.
  • Continuous improvement will be more effective than radical change.
  • Data should drive policy.
  • Critical thinking is necessary for success; fixed rules are too constraining.
  • Controlling life-cycle cost is one of our jobs; staying on budget isn’t enough.
  • Incentives work — we get what we reward.
  • Competition and the threat of competition are the most effective incentives.
  • Defense acquisition is a team sport.
  • Our technological superiority is at risk and we must respond.
  • We should have the courage to challenge bad policy.

Logistics Specialist Seaman Sanndra Ton, assigned to the Arleigh Burke-class guided-missile destroyer USS Lassen, signals an MH-60R helicopter from the “Jaguars” of Helicopter Maritime Strike Squadron 60, April 6, 2016. Photo by U.S. Navy Mass Communication Specialist 2nd Class Huey D. Younger Jr.

Actionable Insights

The highlights below and detailed data in this report provide useful insights for stakeholders and practitioners.

Myth Busters

Myth: All defense acquisition programs have large cost growth.

Reality: Cost control has improved significantly. Not only is cost growth significantly lower than historical levels, but recent efforts have dramatically lowered cost growth further. Multiple measures summarized below show statistically lower cost growth on major programs: the number of Nunn-McCurdy breaches; Section 828 “overruns” on programs started since 2009; the proportion of programs needing less funding than originally planned; biennial cost growth in development and production; total production cost growth; and annual growth of contracted costs. Historical analyses also show that cost controls are better than in the decades before Goldwater-Nichols. We do still have legacy problems on older programs: total research, development, test and evaluation (RDT&E) cost growth is still rising because of them. We need to do better through continued evolutionary improvements, but recent improvements focused on acquisition fundamentals and an empowered government workforce have been more successful than the laissez-faire acquisition reforms of the mid-1990s or the practices that prevailed before the Goldwater-Nichols Act and the Packard Commission reforms of the late 1980s.

Myth: Defense programs usually cut quantity (e.g., to pay for cost growth).

Reality: Most major programs deliver the original baseline quantity or more. We don’t, as a rule, cut program quantity. As discussed below and on p. 104, most MDAPs actually produce the quantities originally planned at Milestone (MS) B. This runs counter to the impression created by focusing only on certain high-visibility programs, such as the F-22 program or the DDG 1000 Zumwalt-class destroyer, that incurred major cuts in quantity.

Myth: Swings in O&S cost estimates indicate poor program management.

Reality: The dynamics of cost estimates indicate that O&S costs are heavily driven by external inflation factors. Analysis shows that the recent dynamics of program O&S costs estimated during acquisition correlate with the dynamics of labor, health-care, fuel, and maintenance costs. While this aligns with intuition, it also indicates that O&S cost increases involve both factors that the acquisition system cannot control (e.g., wages, health-care costs, and fuel costs) and factors that can, in part, be controlled (e.g., system reliability, fuel efficiency, and ease of maintenance). Operational tempo also affects O&S costs through many of these factors (e.g., the amount of fuel consumed and maintenance costs), and changes in forecasted tempo will affect O&S costs independent of both inflation and weapon system performance. Thus, while the acquisition system needs continued attention to the levers it can control (with full knowledge that their effects often will not be seen for decades), stakeholders need to recognize the strong influence of other factors on O&S costs. (Click here to see the discussion on this topic)

Myth: Program requirements are unstable.

Reality: High-level requirements seldom change on major programs, and very few programs have many changes. About 85 percent of MDAPs showed no changes that we could trace from the original MS B baseline to the latest Selected Acquisition Report (SAR) for the program. Moreover, of the few programs with any traced changes, most had only one. This is consistent with experts’ experience and with GAO’s findings, which also indicate that changes are largely made at the engineering level as development seeks ways to meet high-level requirements. Changes, however, are not always bad. Some changes reflect prudent requirement reductions in response to unforeseen high costs of options uncovered in development or to new affordability pressures. Other changes address new threats that otherwise would render an unmodified system obsolete upon delivery. Thus, flexibility, prudence, and continued tradeoffs, together with ruthless management attention to cost implications, are more important in the end than simple edicts at the extremes of change control. (Click here to see the discussion on this topic)

Myth: The DoD cannot acquire systems quickly.

Reality: DoD acquisition can be timely and responsive. Despite criticism that defense acquisition is too slow, the highlights below show that schedule growth is lower than cost growth in development, and cycle times for major programs have increased only from about 5 years to 7 years since the 1980s despite dramatic increases in weapon system complexity. Also, unpublished analysis indicates that the DoD has successful approaches for rapidly acquiring urgently needed capabilities that leverage mature technology. These approaches generally are limited by available technology and by restrictions on reprogramming appropriated funds. This is not to say that internal processes cannot be improved, so efforts continue to institutionalize streamlining and tailoring.

Myth: Increased bid protests reflect a deteriorating ability to conduct source selections.

Reality: Contracting processes are generally fair, rigorous, and objective — and protests are rarely sustained. Despite concerns arising from increased numbers of protested solicitations and contract awards, GAO data indicate that protests and sustainments remain very low both in number and as a percentage of solicitations and awards. Protests to GAO have averaged about 2.5 percent of solicitations and about 0.25 percent of contracts. The sustainment rate remains very low — about 30 per year, or 2 percent of the approximately 1,300 annual protests. (Click here to see the discussion on this topic)
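The sustainment figure cited above follows directly from the quoted counts. A quick check (using the approximate averages stated in the text, not exact GAO data) confirms the roughly 2 percent rate:

```python
# Approximate annual averages quoted above (illustrative, not exact
# GAO figures).
protests_per_year = 1300    # protests filed with GAO per year
sustained_per_year = 30     # protests sustained per year

sustain_rate = sustained_per_year / protests_per_year
print(f"Sustainment rate: {sustain_rate:.1%}")  # about 2 percent
```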

Myth: The DoD is pursuing cost savings at the expense of contractor profits.

Reality: Major defense companies remain profitable despite the DoD’s increased success at tying profits to performance. Further data build on prior reports to show that the DoD’s efforts to improve cost performance are not a war on profits but a reasonable alignment of industry and government goals. (see fig. H-21 below)

Myth: Defense acquisition is broken.

Reality: The acquisition system for decades has given the United States the most capable military in the world and has been improving both in the past and more recently. While there is no absolute definition for sufficiency, the data in these annual performance reports indicate that the system functions reasonably well compared to the past and continues improving. We cannot look at a single metric to measure the performance of the defense acquisition system, and many metrics work at odds with each other. For example, the so-called “iron triangle” of cost, schedule, and technical performance has long shown that emphasizing one or two dimensions often is done at the expense of the others. While cost (followed by schedule) metrics are the easiest to quantify, data for all three dimensions indicate stability and, in many cases, significant improvement.

The U.S. Air Force RQ-4 “Global Hawk” unmanned aircraft No. 2019 sits on the flight line at Edwards Air Force Base, CA. No. 2019 was the first “Global Hawk” to reach the 10,000-flying-hour milestone. Photo courtesy of the U.S. Air Force.

Insights for Current and Future Leadership

First, let us discuss insights that primarily affect leadership both DoD-wide and in the DoD Components (including the Office of the Secretary of Defense (OSD), the military departments, and all defense agencies, DoD field activities, and other entities within the DoD that are authorized to award or administer contracts, grants, cooperative agreements, and other transactions).

The lack of programs in our “new product pipeline” may be putting technological superiority at risk. Both RDT&E budget levels — particularly Engineering and Manufacturing Development budgets — and program new-start data indicate a slowdown since the mid-2000s. Total budget reductions limit what we can do, but it is important to step back and watch these macro trends in the context of increasing threats (in technology, pace, and diversity). The DoD’s recent response has been to add a number of early-stage experimental prototyping efforts. This is an important and necessary step, but it does not deliver capability or designs that are ready for production and fielding in any substantial quantity.

Be particularly careful to ensure realistic program baselines — especially when budgets are tight. Further analysis published in this report reinforces prior concerns that excessive optimism or risk tolerance may be particularly acute when programs are initiated during tight budget periods (such as at present), leading to the higher cost growth seen on these programs. We should explicitly recognize this and avoid setting up our successors for large overruns. For example, acquisition and DoD Component leadership should ensure adequate risk reduction before MS B and apply healthy skepticism to novel approaches that are marketed as offering substantial cost reductions (i.e., if it sounds too good to be true, it probably is). In a tight-budget climate, industry is motivated to be optimistic and take greater risk in order to win new business. DoD programmers also are motivated to put pressure on acquisition professionals to lower cost estimates and funding requirements. Because of these tendencies, the Defense Acquisition Executive (DAE) is focusing particularly on cost and schedule realism for Acquisition Category (ACAT) I and Major Automated Information Systems (MAIS) programs at milestone decision reviews. (Click here to see the detailed analysis and discussion of this topic)

Be prepared to incur statutory overrun penalties. As shown in Table H‑2 below, the Army and Navy are a few billion dollars away from incurring a Section 828 penalty for Program Acquisition Unit Cost (PAUC) growth on MDAPs started since 2009. Growth on individual programs may be warranted in order to address threats or critical engineering issues, and the prospect of penalties should not deter sound decisions on program content or requirements. The penalties were created to encourage better program planning, but their impact, which will come years after program initiation, is more likely to affect decisions made after cost growth is realized. Penalty avoidance, like Nunn-McCurdy avoidance, should not be the primary decision criterion once cost growth has been realized; the priority should be getting critically needed capability to the warfighter at the best cost possible.


We need a metric for the portion of O&S costs related solely to weapon system design and performance. Analysis shows that many of the factors that correlate with growth of O&S cost estimates reported during acquisition are outside the control of the acquisition system (e.g., wages, health-care costs, and fuel prices). Current O&S metrics do not separate acquisition program effects from these external effects. However, a new metric could be developed to measure the internal program effects by holding the external variables constant from MS B forward (solely for purposes of comparison) so that the effects attributable solely to the acquisition system are revealed.
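One way to build such a metric would be to deflate each reported O&S estimate by a composite index of the external drivers (wages, health care, fuel) held at its MS B level. The sketch below uses invented numbers purely to illustrate the idea; it is not data from this report:

```python
# Hypothetical O&S estimates for one program, in then-year $B, paired
# with a composite index of external cost drivers (MS B year = 100).
estimates = [
    (2010, 40.0, 100.0),   # MS B baseline year
    (2013, 46.0, 110.0),
    (2016, 50.5, 121.0),
]
MSB_INDEX = 100.0

# Holding the external index at its MS B value isolates the portion of
# estimate growth attributable to the weapon system itself.
for year, reported, index in estimates:
    design_driven = reported * MSB_INDEX / index
    print(year, round(design_driven, 2))
```

In this invented example the design-driven component stays nearly flat (about $40B to $42B), indicating that most of the apparent growth came from external inflation rather than the program.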

Listen to feedback from the DoD’s professional acquisition leadership. The annual program manager (PM) assessments sent to the DAE provide useful perspective on the realities of the conditions where acquisition actually takes place — in program offices. Our PMs tended to be positive about strategy, system performance, program cost, and contracting (although the latter was raised often as both a success and an issue). Conversely, funding difficulties, risks, and cyber issues top the list of concerns. Some topics have high levels of both success and problems — especially schedule performance, contractor performance, and the implications of changing technology. (Click here for the complete analysis)

Just as important, our program executive officers (PEOs) raised a number of systemic issues across their portfolios while making insightful suggestions on how we can improve the defense acquisition system. For example, the PEOs note that system improvements (e.g., savings) come at a cost — namely, we need a sufficient workforce to think through and execute more efficient acquisition approaches. Blind “headquarters” or other cuts in the government and contractor workforce can be extremely counterproductive.

U.S. Marine Corps Lance Cpl. Caleb Gonzalez leads 1st Lt. Taylor Zehrung, an F-35B Lightning II pilot, in post-flight checks after his first official flight with Marine Fighter Attack Training Squadron 501 aboard Marine Corps Air Station Beaufort, SC. Zehrung is the only 1st Lt. to fly the F-35B Lightning II directly after flight school. Photo by U.S. Marine Corps Lance Cpl. Kayla L. Douglass.

Program-Level Insights

Focusing on acquisition fundamentals and cost control makes a difference. Proactive management and creative thinking contribute significantly and measurably to cost control. Multiple measures and analyses in this and prior annual reports (see fig. H-4 below for an example) show that fundamentals work in controlling costs. We need to keep up the good work. These savings depend on workforce expertise, sufficiency, and empowerment; the degree to which we can illustrate and prove these linkages will go a long way toward ensuring continued success. The institution of “should cost” management, and its consistent emphasis over the last 6 years by the acquisition chain of command, has been a success and should be a permanent feature of the DoD’s acquisition culture. Staying within budget is not the definition of success.

NOTE: Five-year moving average of annual growth in contracted total costs is shown relative to negotiated cost targets on major contracts of Major Defense Acquisition Programs (MDAPs), as well as MAIS that are MDAPs, in EMD and early production that reported earned-value (EV) data (i.e., almost no firm-fixed-price or full-production contracts). This contract cost measure is different from statutory measures of program cost growth relative to baselines. Such changes reflect added work and overruns after adjusting for inflation. This measure should not be mistaken for the total costs of these programs because it excludes non-contracted costs and the majority of production contracts, which tend to be firm-fixed-price and do not report the data used for this analysis. These data summarize 18,470 EV reports on 1,123 major contracts for 239 MDAPs.
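The five-year moving average used in figures of this kind can be sketched as follows (the annual growth values here are invented for illustration, not taken from the report’s data):

```python
# Invented annual contract cost-growth rates by year.
annual_growth = {2008: 0.09, 2009: 0.07, 2010: 0.05, 2011: 0.04,
                 2012: 0.03, 2013: 0.02, 2014: 0.01}

# Each plotted point averages the current year with the four
# preceding years, smoothing year-to-year noise in the trend.
years = sorted(annual_growth)
for y in years[4:]:                      # need five years of history
    window = [annual_growth[y - k] for k in range(5)]
    print(y, round(sum(window) / 5, 3))
```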

Don’t neglect suitability (reliability, maintainability, etc.) in pursuing system performance. Operational tests show that major programs are often effective when they test as operationally suitable, but the converse is not true (see Table 2–2 below). This correlation by itself does not prove causality, but it reinforces the logic that the so-called “-ilities” (e.g., interoperability, availability, maintainability, reliability) are important to achieving the mission. For example, well-engineered systems that address suitability factors are probably also better positioned to be effective. Also, no matter its features, a weapon system may not serve its function if it is unreliable and unavailable to the warfighter.

NOTE: When columns in a category do not sum to 100 percent, there are small round-off differences in the values of the observed conditional probabilities.
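Conditional probabilities of the kind reported in Table 2-2 can be illustrated with a toy example. The outcome counts below are invented, not the report’s data; they are chosen so that P(effective | suitable) is high while P(suitable | effective) is lower, mirroring the asymmetry described above:

```python
# Invented counts of operationally tested programs by outcome:
# (effective?, suitable?) -> number of programs.
counts = {(True, True): 40, (True, False): 20,
          (False, True): 5, (False, False): 10}

suitable = counts[(True, True)] + counts[(False, True)]      # 45
effective = counts[(True, True)] + counts[(True, False)]     # 60

p_eff_given_suit = counts[(True, True)] / suitable   # 40/45, high
p_suit_given_eff = counts[(True, True)] / effective  # 40/60, lower
print(round(p_eff_given_suit, 2), round(p_suit_given_eff, 2))
```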

Don’t neglect O&S cost implications in early system requirements and design. Analysis shows that many of the factors that correlate with growth of SAR O&S cost estimates (e.g., wages, health-care costs, and fuel prices) are outside of program management control. While PMs cannot control these external factors, they can affect fuel efficiency and maintenance costs through design choices (e.g., system reliability, ease of maintenance, and repair automation). These aspects usually must be addressed very early in the system’s design, so don’t neglect them in early program planning and management just because the external factors are uncontrollable and uncertainties remain. That is why the new affordability process sets goals and caps on life-cycle costs early in the program’s life (e.g., at the Materiel Development Decision (MDD) and MS A, when bigger design changes can still be made).

Don’t let up on ensuring rigorous source selections that align government value structures, source-selection rules, and industry’s goal of winning. While GAO data on source selections provide encouraging news that our practices generally are fair and rigorous, we should not let up on efforts to improve source selections. The basic integrity and fairness of our processes are fundamental to maintaining public confidence in how taxpayer resources are spent.

Use fixed-price contracting judiciously in development. As reflected in our updated guidance on contract incentives, data from prior annual reports and experience indicate that using fixed-price contracts in development can be very risky and counterproductive, while incentive contracts can yield good cost control at lower risk and lower prices. In general, all of the following five criteria should be met before using fixed-price contracts in development.

  1. Requirements are stable
  2. Technologies are mature
  3. The contractor is experienced
  4. The contractor can absorb overruns
  5. The contractor has a business case for absorbing any overruns that occur

Soldiers assigned to the U.S. Army 23rd Infantry Regiment scout out an area during a foot patrol in the Naib Kalay area of Afghanistan, June 23, 2012. Photo by the U.S. Department of Defense

Actions Since the Last Report

This annual report measures institutional performance trends using a variety of metrics. This year we also provide a summary of major actions and events that have occurred since the last report.

Major Program Actions by the DAE

  • Delegation. Milestone Decision Authority has been delegated from the DAE to the respective DoD Component (usually a military department) for 28 ACAT I programs since July 2015.
  • CH-53K. Advanced procurement to support the first CH-53K heavy-lift helicopter’s low-rate initial production (LRIP) lot was approved in April 2016 to position the program for MS C without further impacting the projected Initial Operational Capability (IOC) date. Should MS C ultimately not be approved, these parts and material would be used for other rotary-wing aircraft within the current Navy inventory.
  • DEAMS. Limited Deployment was granted to enable the Air Force to improve over initial operational test results prior to returning for a Full-Deployment Decision of this accounting and management system. The Air Force will also conduct a critical change review to ensure that: the new cost and schedule estimates for the program include plans to reduce defects consistent with a program of this maturity; the Oracle R12 upgrade is completed effectively; and future deployments are tied to performance gains verified through demonstrated software stability and logically sequenced test events.
  • EPAWSS. The high reliance on off-the-shelf components allowed the F-15 Eagle Passive/Active Warning and Survivability System (EPAWSS) acquisition to be highly tailored, have a very short Technology Maturation and Risk Reduction (TMRR) phase, and have Milestone Decision Authority (MDA) delegation to the Air Force.
  • F-35. We continue to negotiate better prices aligned with actual costs and employ strong contract performance incentives on LRIP lots to drive costs down, improve performance, and minimize concurrency problems. The Air Force declared IOC in August 2016 for the conventional F-35A variant fighter jet.
  • FAB-T. The Family of Advanced Beyond Line of Sight Terminals (FAB-T) Command Post Terminal (CPT) subprogram passed MS C in October 2015 to support Presidential and National Voice Conferencing operational transition from existing aging assets and maintain the earliest possible FAB‑T CPT IOC schedule.
  • GBSD. The Ground Based Strategic Deterrent (GBSD) program was approved in August 2016 to enter TMRR, but significant cost and industrial-base uncertainties remain given limited historical data and the long period since the last Intercontinental Ballistic Missile (ICBM) development program. TMRR will produce more current and directly applicable information to support higher confidence cost estimates and inform baselines.
  • GPS OCX. The DAE, Secretary of the Air Force, acquisition chain of command, and prime contractor’s chief executive officer together are conducting quarterly “deep dive” reviews as a result of continued cost increases and schedule slips. The Global Positioning System (GPS) next-generation Operational Control System (OCX) program also breached its critical Nunn-McCurdy threshold in June 2016 (click here for additional information).
  • JAGM. Given the strong potential for future international sales, the Joint Air-to-Ground Missile (JAGM) program is implementing Defense Exportability Features in Engineering, Manufacturing and Development (EMD), and the Army plans to obtain the appropriate Technology Security/Foreign Disclosure approval authorities prior to MS C.
  • KC-46A. MS C and approval for LRIP were authorized in August 2016 at a higher quantity to permit an orderly increase in the production rate of this military aerial refueling and transport aircraft upon completion of operational testing, which was delayed to correct design and manufacturing issues and to complete performance verification and hardware certification. The contractor, Boeing, has now recorded reach-forward losses totaling about $1.7 billion on the EMD phase contract.
  • RMS. The Remote Minehunting System (RMS) was canceled in March 2016 due to unsatisfactory progress on system reliability and availability (click here for additional information).
An aerial view of the Pentagon as seen from a Marine Corps CH-46 Sea Knight helicopter, July 8, 2011. Photo by U.S. Marine Corps Lance Cpl. Tia Dufour.

Institutional and Policy Changes

  • Acquisition Workforce Development. The DoD continues to increase the capabilities of our workforce, leveraging legislated authorities and funding such as the Defense Acquisition Workforce Development Fund (DAWDF) as well as the Force of the Future initiatives (click here for additional information).
  • M&As. The Department of Justice clarified DoD authorities on mergers and acquisitions (M&As), resolving questions about whether we had sufficient authority to address our wider concerns regarding the ongoing consolidation trend in the defense industry.
  • Innovation and Technical Excellence. The DoD essentially has completed the initial implementation of the Better Buying Power (BBP) 3.0 set of acquisition policy initiatives and continues monthly follow-up through the Business Senior Integration Group.
  • Commercial Outreach. The DoD renewed outreach to the commercial sector through the Defense Innovation Unit, Experimental (DIUx).
  • Independent Research and Development (IR&D). In February 2016, the DoD proposed a Defense Federal Acquisition Regulation Supplement requirement that an appropriate DoD official be notified of new IR&D efforts in order to ensure that these investments are of potential interest to the DoD; results would then have to be reported to facilitate utilization. We also issued in February an advance notice of proposed rulemaking that would preclude misusing future IR&D expenditures to reduce evaluated bid prices in competitive source selections.
  • Intelligence Support to Acquisition. To better address emerging threats, we are improving the latency, dissemination, and relevance of intelligence to inform acquisition planning and system updates.
  • Contracted Services. A new DoD Instruction (DoDI) 5000.74 was issued in January 2016 to establish a management structure for the acquisition of contracted services while authorizing DoD Component decision authorities to tailor the procedures to best achieve cost, schedule, and performance objectives (click here for additional information).
  • Affordability. We continue to apply and enforce affordability constraints on MDAPs and smaller programs, driving requirements tradeoff and management decisions during execution (click here to see additional information on affordability).
  • Independent Technical Risk Assessments. We released a new Risk, Issue, and Opportunity Management Guide in June 2015 to help programs better identify risks, quantify their potential effects, and develop strategies to address and mitigate those risks.
  • Cybersecurity. We are ensuring that new cybersecurity regulations are applied to DoD contracts to better secure unclassified controlled technical information resident in the defense industry.
  • Source-Selection Procedures. A common set of principles and procedures for effectively conducting competitively negotiated source selections was updated in March 2016, including new guidance on Value-Adjusted Total Evaluated Price (VATEP) tradeoffs and appropriate uses of Lowest-Price Technically Acceptable (LPTA).
  • Incentive and Other Contract Types. A major Defense Procurement and Acquisition Policy (DPAP) guidebook update provides advice on selecting and negotiating the most appropriate and effective contract type and incentives for a given acquisition situation, emphasizing how to apply judgment and tailor our contracting to improve outcomes and contractor performance.
  • O&S Cost Management. Published in February 2016, a new guidebook for PMs and product-support managers provides tools and best practices for O&S cost analyses to inform early life-cycle decisions, effect reliability trades, and identify Should-Cost initiatives having the greatest effect on future O&S costs.
  • Performance-Based Logistics. A March 2016 update of our Performance-Based Logistics (PBL) Guidebook reviews common myths about PBLs, adds new guidance regarding intellectual property issues, and continues to provide best practices, selection criteria for when PBLs are appropriate, and practical examples to maximize successful outcomes.
  • PM/PEO Assessments. To better understand performance issues and successes in the acquisition system, we expanded the annual PM assessments to include PEO assessments sent directly to the DAE and Service Acquisition Executives (SAEs). (Click here to read the complete section on PM assessments)

Soldier from the 17th Infantry Regiment briefs the Under Secretary of Defense for Acquisition, Technology, and Logistics, Frank Kendall, during a network integration evaluation in October 2015 at Fort Bliss, Texas. Kendall predicts that acquisition improvement will happen best at the operator level — project and product managers, contracting officers, and engineers — because they are most familiar with the needs of the Warfighter. Photo by U.S. Army Spc. Lauren K. Harrah.

Funding Growth and DAEs

Policy, sound planning, and execution decisions by DoD executives should bear on the effectiveness of the overall acquisition system. This is particularly true for the program structure and associated baselines set at MS B, against which future cost performance is measured. Therefore, in our annual reports we track the performance of programs started under different acquisition executives to help reinforce accountability and provide an initial look at possible trends for further analysis.

Figure H‑1 and Figure H‑2 (see below) show growth in MDAP Planned Total Funding in development and procurement (respectively) for active and completed MDAPs against original baselines as reported to Congress in the SARs. Note that SAR funding data reflect current PM estimates of total needs by the end of the program for the current program configuration, including past actual funding, the current budget request, planned funding in the Future Years Defense Program (FYDP), and planned funding beyond the FYDP to the end of the program. Growth is measured against the baseline set at the original MS B and can be positive or negative.

NOTE: This shows total RDT&E funding growth independent of procurement funding and quantity changes; it reflects any work-content changes. These are percentage changes after adjusting for inflation from original MS B baseline of actual past and estimated future funding as reported in each program’s latest SAR. Total RDT&E is an insightful measure because it is necessary, regardless of quantity. White bars between DAE shaded regions represent periods with no confirmed executive. Not shown are relatively new programs that have not spent at least 30 percent of their original EMD schedule.
NOTE: This shows growth in total unit recurring flyaway needed funding after adjusting for quantity changes; it is independent of RDT&E funding but reflects any work-content changes. These are percentage changes after adjusting for inflation and any quantity changes from original MS B baseline of actual past and estimated needed future funding as reported in the programs’ latest SARs. White bars between DAE shaded regions represent periods with no confirmed executive. Not shown are relatively new programs that have not spent at least 30 percent of their original EMD schedule.

These figures also show who served as the DAE at the time of the MDAP’s MS B approval. (Click here to see similar charts for the programs started under different SAEs in the three military departments)

These charts do not reflect the effectiveness of subsequent oversight or major program changes by later DAEs during execution, nor do they reflect statistical analysis to control for other internal and external variables that could have led to a program’s success or problems. Defense acquisition is complex, and each measure has its strengths and weaknesses, so attributing performance to a single measure is subject to the limitations of that measure. For example, some programs may appear to be performing well in terms of total planned RDT&E funding but may be having problems that are reflected in other measures (e.g., total needed procurement funding; estimated operational costs; or cost growth on one of the program’s major contracts). Thus, a combined examination of available data is important before reaching conclusions. Nevertheless, they are a crude indicator of the effectiveness of the decisions made by these officials. (click here to see a detailed discussion of the topic)
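For readers unfamiliar with these measures, the underlying arithmetic can be sketched briefly. The following is an illustrative example only; the deflator and dollar figures are hypothetical and are not drawn from any SAR.

```python
# Illustrative sketch only: computing percentage funding growth against an
# original Milestone B baseline after adjusting for inflation.
# The deflator and dollar figures below are hypothetical.

def to_constant_dollars(then_year_dollars, deflator):
    """Convert then-year dollars to constant (base-year) dollars."""
    return then_year_dollars / deflator

def growth_vs_baseline(current_total, baseline_total):
    """Percentage growth relative to the MS B baseline; may be negative."""
    return (current_total - baseline_total) / baseline_total * 100.0

# Hypothetical program: $10.0B MS B baseline in constant dollars; the latest
# estimate is $11.5B in then-year dollars with a cumulative deflator of 1.08.
current = to_constant_dollars(11.5, 1.08)
print(round(growth_vs_baseline(current, 10.0), 1))  # prints 6.5 (percent)
```

A program whose inflation-adjusted estimate falls below its baseline would show negative growth by the same formula.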

Cost-Related Improvements

Recent data on MDAPs at the program- and contract-level have shown some statistically significant improvement trends in funding, price, and cost control, although complicating factors raise caveats and potential concerns.

More MDAPs are showing program funding reductions in both development and production. Relative to their original MS B baselines, more active MDAPs by proportion are estimated to have total RDT&E and unit-procurement funding reductions (sometimes referred to as “underruns”) as of 2015 than as of 2009 — even after we remove relatively new programs that would be unlikely to currently show growth, as seen in Figure H‑3 below. (click here to read a detailed discussion of program funding growth by start date). The 2015 numbers are slightly lower than we saw last year in the 2014 data, but they remain significant. These data reflect similar results discussed below where biennial cost growth at the program level and the annual growth of contracted costs for MDAPs both have dropped significantly in recent years.

NOTE: Development funding is total RDT&E funding growth independent of procurement funding and quantity changes; it reflects any work-content changes and cost growth relative to targets. Procurement funding is growth in unit recurring-flyaway funding after adjusting for quantity changes; it is independent of RDT&E funding but reflects any work-content changes and cost growth relative to targets. Statistically significant differences between adjacent periods are marked with an oval. A program shows a reduction if current total is below the original MS B baseline. To reduce bias from newer programs, relatively new programs that have not been through at least 30 percent of their original EMD schedule are not included. MS B dates are calendar years.

Improved contract cost control beyond budgetary effects. The growth of contracted costs for major programs has dropped during Better Buying Power from 9 percent in fiscal year (FY) 2011 to a new 30-year low of 3.5 percent in 2015 (see Figure H‑4 below). After accounting for budgetary effects, statistical analysis found an almost 2 percentage-point drop since 2012 and a 1 percentage-point drop after the Goldwater-Nichols and Packard Commission recommendations were implemented. Analysis also found that the defense acquisition system adjusts in the future for unforeseen cost “shocks,” dampening their effects. (click here to read a detailed discussion on the topic)

NOTE: Five-year moving average of annual growth in contracted total costs is shown relative to negotiated cost targets on major contracts of Major Defense Acquisition Programs (MDAPs), as well as MAIS that are MDAPs, in EMD and early production that reported earned-value (EV) data (i.e., almost no firm-fixed price or full-production contracts). This contract cost measure is different than statutory measures of program cost growth relative to baselines. Such changes reflect added work and overruns after adjusting for inflation. This measure should not be mistaken for the total costs of these programs because it excludes non-contracted costs and the majority of production contracts, which tend to be firm-fixed-price and do not report the data used for this analysis. These data summarize 18,470 EV reports on 1,123 major contracts for 239 MDAPs.
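The five-year moving-average smoothing used in this measure can be illustrated with a short sketch; the annual growth rates below are hypothetical, not the report's underlying data.

```python
# Illustrative sketch of a trailing 5-year moving average used to smooth
# annual contract cost-growth rates. The rates below are hypothetical,
# not the report's underlying data.

def moving_average(values, window=5):
    """Trailing moving average; one value per complete window."""
    return [sum(values[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(values))]

annual_growth_pct = [9.0, 8.0, 7.0, 5.5, 5.0, 4.0, 3.5]  # hypothetical FY series
smoothed = moving_average(annual_growth_pct)
print([round(x, 1) for x in smoothed])  # prints [6.9, 5.9, 5.0]
```

Smoothing in this way trades responsiveness for stability: a single unusual year moves the reported figure by only one-fifth of its deviation.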

Program-level improvements. Table H‑1 (see below) summarizes program-level results for MDAPs in development and procurement. Most (but not all) show improvement. For total RDT&E funding growth from original baseline, medians are still flat on a program basis but increasing on a dollar basis. In terms of quantity-adjusted unit procurement costs from original baseline, medians dropped 5 percentage points from SAR year 2014 (on a dollar basis) and 2 percentage points (on a program basis). Biennially, medians are still flat since 2009 at about 0 percent. Note, however, that one measure (cumulative RDT&E growth from original baseline on a dollar basis) has a backlog of cost growth that will likely remain until those older programs exit the portfolio. We will discuss that later in this section.

Note: a SAR year includes information up until the SAR’s submission date, which may reflect events and budgeting decisions from the beginning of the following calendar year (especially from January, when the budget request is being finalized). In other words, SAR years are similar to calendar years but may include data past December 31.
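The distinction between "program basis" and "dollar basis" medians used throughout this section can also be illustrated with a short sketch; the growth percentages and program sizes below are hypothetical.

```python
import statistics

# Illustrative sketch of the "program basis" (each MDAP counted equally)
# versus the "dollar basis" (each MDAP weighted by its size). All growth
# percentages and program sizes below are hypothetical.

def weighted_median(values, weights):
    """Value at which at least half the total weight lies at or below."""
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2.0
    cumulative = 0.0
    for value, weight in pairs:
        cumulative += weight
        if cumulative >= half:
            return value

growth_pct = [20.0, 5.0, -3.0, 1.0]    # per-program funding growth (percent)
size_billions = [40.0, 2.0, 3.0, 1.0]  # program size in dollars (weights)

print(statistics.median(growth_pct))               # program basis: 3.0
print(weighted_median(growth_pct, size_billions))  # dollar basis: 20.0
```

Note how a single very large program dominates the dollar-basis median while counting as only one observation on a program basis; this is why the two bases can move in different directions.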

Lower total MDAP funding growth since original baselines in production. Adjusting for quantity changes and the dollar size of programs, the median quantity-adjusted unit funding growth since original MS B baseline has been statistically lower after 2009 and dropped further in 2015 (see the dollar-basis line in Figure H‑5 below for details). On a program basis, the recent total unit funding appears somewhat lower at the median, but the population differences are not statistically significant (see the more detailed Figure 2‑22). In other words, larger active MDAPs (by dollar) generally have brought their growth in total unit procurement funding needs to levels close to the median for all MDAPs regardless of size. Note that this is not the case in development, where increases are seen by program and dollar in recent years — see Figure H‑16 below. (click here to see a detailed discussion of the topic)

NOTE: This shows growth in unit recurring flyaway funding (i.e., for the production of a single usable end-item) after adjusting for quantity changes; it is independent of RDT&E funding but reflects any work-content changes. The trend on a dollar basis (weighting by program size) is statistically significant, but the lower results on a program basis (unweighted by dollar size) do not yet represent a statistically significant trend (see Figure 2‑22 for details). These are percentage changes after adjusting for inflation and any quantity changes from original MS B baseline of actual past and estimated needed future funding as reported in the programs’ latest SARs. Not included are relatively new programs that have not spent at least 30 percent of their original EMD.
NOTE: This shows growth in unit recurring flyaway funding after adjusting for quantity changes; it is independent of RDT&E funding but reflects any work-content changes. These are percentage changes after adjusting for inflation and any quantity changes from original MS B baseline of actual past and estimated needed future funding as reported in the programs’ latest SARs. Relatively new programs that have not spent at least 30 percent of their original EMD schedule are not included. Boxes show second quartile, median, and third quartile; bars show first and fourth quartiles, minimum, and maximum. The IQR is the difference between the 75th and 25th percentiles.

Lower biennial change in MDAP program funding for both development and production. In addition to measuring total growth against original baselines, we also measure biennial growth to monitor incremental (marginal) growth. Median biennial change in funding growth continues to be lower in recent years both on a program basis and when adjusting for the size of programs (i.e., on a dollar basis) — see Figure H‑6 and Figure H‑7. In both program and dollar bases, biennial changes have been below 1 percent since 2011 for development and essentially zero or below since 2009 for procurement. These are measured using total program RDT&E funding and quantity-adjusted unit procurement (recurring unit flyaway funding), including past and needed future funding. (click here to read a detailed discussion on lower biennial change)

NOTE: This measures biennial changes in total RDT&E funding growth independent of procurement funding and quantity changes; it reflects any work-content changes. Both trends are statistically significant. Total RDT&E is an insightful measure because it is necessary, regardless of quantity. Relatively new programs that have not spent at least 30 percent of their original EMD schedule are not included. The dollar-basis value for 2010–2012 is higher than last year’s report due to an error from not including 2011 F-35 fighter jet engine RDT&E dollars in 2010. Other slight adjustments in 2002–2004 and 2003–2005 reflect the addition of the Chemical Demilitarization (Chem Demil) programs to the dataset on program and dollar bases.
NOTE: This measures biennial changes in unit recurring flyaway funding after adjusting for quantity changes; it is independent of RDT&E funding but reflects any work-content changes. Indicated trends are statistically significant. Not included are relatively new programs that have not spent at least 30 percent of their original EMD schedule.
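The difference between cumulative growth (measured against the original MS B baseline) and biennial marginal growth (measured against the estimate two years earlier) can be sketched as follows, with hypothetical funding levels.

```python
# Illustrative sketch of cumulative growth (against the original MS B
# baseline) versus biennial marginal growth (against the estimate two
# years earlier). All funding levels below are hypothetical.

def pct_change(new, old):
    return (new - old) / old * 100.0

baseline = 10.0  # original MS B total, $B constant dollars
totals = {2009: 12.0, 2011: 12.2, 2013: 12.2, 2015: 12.1}

for year in (2011, 2013, 2015):
    cumulative = pct_change(totals[year], baseline)
    biennial = pct_change(totals[year], totals[year - 2])
    print(year, round(cumulative, 1), round(biennial, 1))
```

In this hypothetical case, biennial growth is near zero in recent periods even though cumulative growth remains above 20 percent, because the growth occurred early in the program's life.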

Lower median MAIS funding growth. As shown in Figure H‑8 (see below), the median funding growth in the MAIS Annual Reports (MARs) compared to the MAR Original Estimate (MOE) dropped in MAR Year (analogous to SAR Years) 2015 after a small rise in 2014. Note that the magnitude of cumulative funding growth for MAIS programs is much smaller than those for MDAPs in development (Figure H‑16) and production (Figure H‑5).

NOTE: The trend is statistically significant. Growth may reflect content changes. Immature programs that have not completed at least 30 percent of their original EMD schedule were excluded to help control for maturity.

No penalties from new statutory overrun calculations on recent MDAPs. Section 828 of the FY 2016 National Defense Authorization Act (NDAA) (Public Law 114–92) enacted a new cost-growth calculation and associated penalty by military department across their MDAPs that started since 2009. Table H‑2 summarizes the results for the 2014 and 2015 SARs. None of the military departments had net positive overruns, and thus none incurred penalties, for either of these years. We also note that each military department improved from its 2014 to its 2015 SAR. (click here to read a detailed discussion)

NOTE: “Overruns” in this case are defined by Section 828 of the FY 2016 NDAA relative to original PAUC baselines at current total quantities. To aid comparison between the two years, all dollars are converted to a common FY 2017 base year (BY) and rounded to the nearest $100 million.

Lower recent rates of Nunn-McCurdy breaches. As shown in Figure H‑9 below, there have been statistically significant downward trends since 2009 of both nonquantity-related critical breaches (shown) and all critical Nunn-McCurdy cost-growth breaches. Nunn-McCurdy “cost” growth thresholds are established by law and trigger reporting to Congress and other specific actions by the DoD (click here for additional discussion). As discussed earlier with respect to PAUC, these “cost” measures reflect funding and include the underlying contractor and government execution costs plus contractor margins (profits and fees). (click here for a detailed discussion on Nunn-McCurdy breaches)

NOTE: This chart includes data through the second quarter of CY 2016 (2016Q2), inclusive. Breaches due to quantity changes are based on Performance Assessments and Root Cause Analyses (PARCA) root-cause analysis or review of information from the program’s DoD Component. JSOW has been recategorized as quantity related (a change from last year’s report). Since PARCA was not established until the Weapon System Acquisition Reform Act (WSARA) of 2009, it is unknown whether quantity changes were a root cause of breaches before 2009. There is a statistically significant downward trend in both total critical breaches and non-quantity-related critical breaches since 2009. Cost breaches are measured after adjusting for inflation. Since it usually takes a few years before a program might breach again, we removed recently breached programs from the portfolio count to avoid a potential bias toward an artificially low breach rate (i.e., this adjustment makes the metric more conservative). Also, relatively new programs that have not spent at least 30 percent of their original EMD schedule are not shown. For the trend analysis, we used breach rates instead of counts to control for changes in portfolio size between years, although the patterns are very similar because the size of the MDAP portfolio is relatively stable.

Schedule-Related Performance

While cost growth eventually affects warfighter capabilities through opportunity cost effects on quantity and other programs, schedule growth directly delays the delivery of capabilities that address operational needs and threats. Thus, we are expanding our analysis of schedule-related performance to look for trends in cycle time and schedule growth. As with cost, schedule-related metrics vary depending on what is included (e.g., whether active or incomplete programs are included, raising questions of remaining maturity bias) and how they are calculated (e.g., whether in years relative to baselines or in percentages of growth, the latter of which are not symmetrical around 0 percent).
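The asymmetry noted above (percentage growth is not symmetric around 0 percent) can be shown with a quick hypothetical example: growth is bounded below by -100 percent but unbounded above, and undoing a slip takes a smaller percentage than the slip itself.

```python
# Illustrative sketch of why percentage schedule growth is not symmetric
# around 0 percent. The planned and actual schedule lengths below are
# hypothetical.

planned_years = 8.0

slip = (10.0 - planned_years) / planned_years * 100.0   # 2-year slip: +25.0
accel = (6.0 - planned_years) / planned_years * 100.0   # 2-year gain: -25.0
undo_slip = (planned_years - 10.0) / 10.0 * 100.0       # back to plan: -20.0

print(slip, accel, undo_slip)  # prints 25.0 -25.0 -20.0
```

This base-dependence is one reason distributions of percentage growth tend to be skewed to the right, which complicates comparisons of medians across periods.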

The following schedule-related metrics show mixed performance; some show increases while others show zero or negative growth. Generally, the magnitudes of any changes are lower in percentage than cost growth relative to baselines.

Historical MDAP schedule growth shows no trend over time. When measuring schedule growth from MS B or C to IOC by start date, we saw no significant trend across all MDAPs. In other words, essentially none of the variation is explained by a trend over time. The median overall growth since 1981 is 1 percent (up from 0 percent last year). We do see a downward trend in the subset of programs that have not yet achieved IOC, but it is too early to tell whether that trend is real or merely an artifact of the programs not yet being complete. (Click here to read the related discussion material)

MDAPs have averaged about 7 years to reach IOC, showing about 3 percent growth overall. The average MDAP that reported between calendar years (CY) 1997–2015 took about 7 years to reach IOC from initiation at MS B or C. This is about 3 percent above the average planned length. The actual variation across the portfolio (standard deviation) grew by about 5 months. In other words, the portfolio of programs that have achieved IOC showed modest schedule growth, measured in months, not years. (Click here to read a detailed discussion)

Recent median schedule growth on active, shorter MDAPs is increasing. In a different measure examining all then-active MS-B-start programs (regardless of whether they have reached IOC), we have seen a marked increase in median schedule growth from program start (MS B) to IOC, from about 4 percent to about 15 percent, on a program basis since 2009 (see Figure H‑10 below). Schedule growth across the distribution in CY 2015 was statistically higher than it was in 1997–1999 and 2004–2005. However, when weighting by program span (so that longer programs weigh more than shorter programs), the trend remains relatively flat at about 2 percent. Thus, recent schedule growth (in percent) on active programs generally appears to be concentrated in shorter programs. We also found (in separate analysis) that shorter programs tended to have higher schedule-growth percentage variation than longer programs. (Click here to read detailed discussion)

NOTE: Program basis weighted each program equally. Schedule growth on a program basis across the distribution in CY 2015 was statistically higher (at the 5-percent level of significance) than it was in 1997–1999 and 2004–2005. Weighting by span increases the contribution of longer programs relative to shorter programs (analogously to weighting cost growth by the dollar size of a program). There were no complete SARs in 2000 and 2008 due to changes in presidential administrations. No programs that started at MS C are included.

MDAP development contract length has grown slowly with system complexity. In comparison to program schedules, when examining development contracts for MDAPs we also see a cycle time of about 7 years. Historically, contract cycle time has grown since 1980 (when it was about 4 years and we had many large overruns on programs in the 1970s) through the 1990s (when it was about 5 years) to the present level of about 6.5 years since about FY 2002. These increases are commensurate with data from our prior reports and probably reflect increases in system complexity and capabilities over the last 35 years. (Click here to read a full discussion)

Schedule growth is declining on major MDAP contracts. In contrast to program-level data, major development contracts for MDAPs are showing a statistically significant decline since 1985. Our model also shows that any random deviations from this trend are corrected in later years, preserving the trend. Figure H‑11 (see below) shows that this analytic model closely fits the actual data. (Click here to see the related discussion)

NOTE: 18,470 earned-value reports on 1,123 major contracts for 239 MDAPs.

Median MAIS cycle time is lower. MAIS cycle time is particularly important given the fast pace of information technology advancement. Here cycle time is measured from either MS B or Funds-First-Obligated (FFO) to the Full-Deployment Decision (FDD). As shown in Figure H‑12 (see below), the median cycle time as reported in the MARs dropped from 5 years before 2009 to 3.2 years since 2009; this result is unchanged from last year’s report. Further data and analysis are needed to determine whether we are faster at acquiring MAIS or are planning MAIS in smaller increments. (Click here to see detailed Figure 2‑21 and related discussion materials)

NOTE: These changes may reflect systemic reductions in how much work is included in an MAIS. Original estimates are those in the MAIS’ first MAR. Included are the latest data on programs that appeared in at least one MAR from 2011 through 2014. Relatively new programs that have not spent at least 30 percent of their original EMD schedule are not included.

MAIS program schedule growth. We also track MAIS schedule growth to understand actual execution. As shown in Figure H‑13, the median schedule growth across then-active MAIS programs has increased slightly from 3 months in 2011 to 5 months in 2015. Again, further data and analysis are needed to determine whether MAIS programs are executing well relative to plans or if requirements and work content are being adjusted to keep programs close to original schedules. (Click here to see a full discussion)

NOTE: These measures do not control for any changes in work content or specifications. Original estimates are those reported in the first MAR for each MAIS. Schedule period is from MS B or FFO to FDD. Relatively new programs that have not spent at least 30 percent of their original EMD schedule are not included.

Improvements in Institutional Inputs

Acquisition workforce capability and quality improvements. Workforce professionalism is central to the performance of the defense acquisition system. With strong support from Congress, we have made strides in improving the capabilities, qualifications, demographics, and leadership of the workforce through various strategic initiatives. The workforce grew by about a quarter after FY 2008, when it was recognized that the DoD had serious deficiencies in this area. This growth was stopped as budgets declined after 2011, but since then the size of the workforce has remained roughly constant. Quantity (the number of people) alone is insufficient. We now focus on improving the quality, experience, and professionalism of the workforce. The percentage of the workforce lacking certifications has dropped from 14 percent in FY 2008 to 3 percent in the first quarter of FY 2016. We have reduced a significant shortfall in late mid-career staff through strategic hiring. Finally, board certification has articulated and applied advanced quality standards for many categories of key acquisition leaders. (Click here to read the related discussion materials)

Increased small-business utilization on prime contracts. Figure H‑14 shows actual DoD-wide small-business utilization (measured as a percentage of dollars obligated) relative to yearly goals. At the prime contract level, recent trends since FY 2011 have been steadily improving; we exceeded our FY 2014 and FY 2015 goals by 2.1 and 3.0 percentage points, respectively, surpassing all prior years except FY 2005. (Click here to see additional information)

NOTE: Closed green squares indicate that the goal for that fiscal year was achieved; open red squares indicate that the goal was not achieved.

U.S. Army 1st Lt. Jordan Springer, right, the contracting officer representative with the 104th Engineer Company, 62nd Engineer Battalion, 36th Engineer Brigade, asks a Liberian worker about making adjustments on a pipe for a well at an Ebola treatment unit in Tubmanburg, Liberia, Jan. 13, 2015. Photo by U.S. Army Sgt. Ange Desinor.

Where Improvement Is Needed

While progress has been made on a number of performance measures, performance is flat or still needs improvement on others.

Initial operational test ratings remain about the same. The whole reason we have defense acquisition is to provide operational capabilities to our warfighters against current and evolving threats. Cost and schedule control are important, but more important is the relative value of operational benefits given costs. Operational performance goes beyond merely meeting technical requirements established before program inception. Threats can change, and those initial requirements may lag operational aspects important to performance in the field. One measure of performance is the operational test results reported by the DoD Director of Operational Test and Evaluation (DOT&E) at the end of LRIP. These initial operational tests, often referred to as Beyond-LRIP (BLRIP) tests, provide independent data on the operational effectiveness and suitability of the system at this point.

Figure H‑15 (see below) summarizes the results of these BLRIP operational tests. While the absolute percentages are slightly lower, the differences between the time periods are not statistically significant, and we are not able to distinguish statistically significant differences based on the incumbent DOT&E. Further analysis discussed in the report found that a program is fairly likely to test out as effective if it tested as suitable, but the converse is not true. In other words, we have a number of effective programs that revealed issues with safety, interoperability, availability, maintainability, and reliability. However, systems that demonstrated suitability also tended to be effective against threats. (Click here for additional information)

Source: DOT&E BLRIP reports. NOTE: Differences are not statistically significant. Sample sizes differ between Effective and Suitable for some DoD Components because effectiveness and suitability could not be determined in all cases.

These data do not reflect subsequent remediation of the issues found during operational testing. Structured data on subsequent operational tests have a much lower sample size and showed no obvious trends. Note, however, that any remaining limitations are subject to tradeoff decisions that weigh remediation cost and performance factors against the benefits of early introduction of advanced capabilities.

Higher total MDAP RDT&E funding growth since original baselines. While biennial changes in total planned and actual RDT&E funding growth have been decreasing recently (click here for additional information), cumulative RDT&E funding over original MS B baselines has continued to increase since 2001 on a dollar basis but has been statistically flat since 2004 on a program basis (see Figure H‑16 below). Since recent biennial changes in planned and actual total funding have been near zero at the median, this metric is unlikely to reverse (even if no more RDT&E growth occurs) until programs with earlier RDT&E growth (e.g., the F-35, which had significant historical development cost growth but has been stable since the Nunn-McCurdy breach in 2010) exit the MDAP portfolio. Negative growth would be required to reduce this metric, absent programs dropping out of the dataset. (Click here to see the full discussion)

NOTE: This shows total RDT&E funding growth independent of procurement funding and quantity changes; it reflects any work-content changes. Both trends are statistically significant. These are percentage changes from original MS B baseline after adjusting for inflation of actual past and estimated future funding as reported in each program’s latest SAR. Total RDT&E is an insightful measure because it is necessary regardless of quantity. Relatively new programs that have not spent at least 30 percent of their original EMD schedule are not included. The dollar-basis value for 2010 is lower than last year’s report due to an error in double counting the F-35 engine RDT&E dollars in that year. There are also slight corrections in 2002–2003 on a program basis adding the Chem Demil program and in 2014 to include three subprograms to the dataset.

Competition rates are falling. Figure H‑17 (see below) plots the percentage of all DoD contract dollars that were competitively awarded from FY 2006 to FY 2015. Since goals were established in FY 2010, actuals declined each year until FY 2014, when we made progress at reversing the trend. However, competition rates declined again in FY 2015 despite an increased goal and strong management emphasis by the DAE through the Business Senior Integration Group for that year. Major drivers of this trend are high-value sole-source Foreign Military Sales, fewer new program starts, and higher percentages of the MDAP portfolio (e.g., shipbuilding and aviation programs) in production and thus sole or dual sourced. Increased bid protesting also forces us to award sole-source bridge contracts until the new contract awards can be made. We anticipate continued challenges from fiscal uncertainties, but this will remain an area of management focus. (Click here to read a detailed discussion)

NOTE: Fraction of contracts competitively awarded is measured on a dollar basis. We did not establish goals until FY 2010. Open symbols indicate that the subcategory goal for that fiscal year was not achieved. Closed green symbols indicate that the subcategory goal was achieved for that year.

Subcontracting utilization of small businesses. As shown in Figure H‑14 above, small-business utilization on subcontracts to our prime contractors has been declining since FY 2010. We have missed the goals for the past 4 years. To address this trend, the DoD continues to apply statutory procedures wherein contractors who fail to comply in good faith with the requirements of their small-business subcontracting plans are in material breach of their contracts and are subject to liquidated damages (see the Federal Acquisition Regulation [FAR], Section 19.7). We also emphasize the importance of small-business subcontracting with senior management at our major primes when reviewing their institutional performance. In addition, we are modifying our acquisition strategies to further open up competitions on components for our large weapons systems.

June 2 — Marines from various units learn how to set up and print designs using an Invent3D printer during a class at Camp Lejeune, N.C. Additive manufacturing, or 3D printing, allows Marines to produce parts quickly, with exact specifications and at almost any location. Photo by U.S. Marine Corps Cpl. Justin T. Updegraff.

Other Observations

Technical superiority concerns continue. Declining investments in both RDT&E and production (click here to see Figure 1‑4) at a time of accelerating threats are delaying and limiting development and production of superior capabilities in quantities that are operationally relevant. Also, trend analysis of RDT&E budget activities (BAs) supports our concern that budget reductions are affecting new system development, which constitutes the programs in the DoD’s new product pipeline. BA 6.5 (System Development and Demonstration, which supports programs after MS B) drops below BA 6.4 (Advanced Component Development and Prototypes) after FY 2014 (click here to see Figure 1‑5) instead of being higher. As part of the Third Offset Strategy and consistent with BBP 3.0, the DoD is increasing its investments in BA 6.4 by funding a number of risk-reduction prototype programs. Without funds to continue these efforts into the more expensive BA 6.5 phase to further develop the systems for production, these demonstrations will not result in fielded capabilities. Figure H‑18 (see below) also shows that the number of new MDAPs has dropped in recent years to about half of what we saw in the mid-1990s and two-thirds of the peak in the mid-2000s.

NOTE: Dates were extracted from CY 1997–2015 SARs, with MS start dates in 1994–1996 extracted from the 1997 SARs. The data points for 1996 reflect the average for calendar years 1994–1996.

MDAP requirements are relatively stable. Preliminary analysis of performance requirements in unclassified baselines and SARs indicates that about 15 percent of 121 MDAPs showed requirements changes that we could trace from the original MS B baseline to the latest SAR for the program. Most of the programs with any traced changes had only one such change. For cases in which we could judge a change as more or less stringent (as opposed to additions, deletions, or cases where it is not readily clear whether the change is harder or easier to implement), about half of the more-stringent requirement changes were in a single program (FMTV, a ground vehicle), and half of the less-stringent changes were in the (subsequently canceled) NPOESS satellite program. Further analysis is needed, but for the most part system requirements appear to be stable. (Click here to see detailed analysis and discussion)

Most MDAPs deliver the original baseline quantity or more. Figure H‑19 (see below) shows the actual number of units procured by completed MDAPs over the last 19 years. More than 80 percent of programs delivered at least 80 percent of their originally planned units, and just over 40 percent of programs delivered more than originally planned. This general pattern also appears to be holding for currently active programs. (Click here to read a detailed analysis and discussion)

NOTE: Completed programs are those that stop reporting after approximately 90 percent of units are delivered or 90 percent of funds are expended. There were n=63 completed programs in our dataset. The bars show the fraction of the 63 programs that procured the indicated range of original quantity percentages (e.g., 35 percent of the 63 programs procured 90–100 percent of their originally planned quantity). The blue line measures the cumulative fraction of programs and is read off the y-axis on the right side of the plot (e.g., 19 percent of the programs procured less than 80 percent of their originally baselined quantity).
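To make the figure note concrete, the following sketch shows how bar heights and the cumulative line of a chart like Figure H‑19 are computed. The bin counts here are hypothetical (invented for illustration), chosen only to be consistent with the aggregates stated in the text (n=63 programs, about 19 percent below 80 percent of baseline quantity, and just over 40 percent above 100 percent).

```python
# Illustrative sketch with HYPOTHETICAL bin counts (not the actual report data):
# computing bar heights (fraction of programs per bin) and the cumulative line.
n = 63  # completed MDAPs in the dataset, per the figure note

# Hypothetical counts of programs per bin of percent-of-original-quantity procured.
bins = {"60-70%": 5, "70-80%": 7, "80-90%": 9, "90-100%": 16, ">100%": 26}

cumulative = 0.0
for label, count in bins.items():
    fraction = count / n          # bar height: share of the 63 programs in this bin
    cumulative += fraction        # blue line: cumulative share, read off the right y-axis
    print(f"{label}: {fraction:.0%} of programs (cumulative {cumulative:.0%})")
```

Reading the hypothetical output the same way as the figure: the first two bins sum to about 19 percent (programs procuring less than 80 percent of baseline quantity), and the final bin exceeds 40 percent (programs procuring more than planned).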

Tight budgets may motivate overly optimistic program baselines. Changes in DoD budgets at the start of a program (MS B) correlate in the opposite direction of the overall economic cycle (i.e., countercyclically) with changes in total contracted costs aligned to MS B for the contract’s parent MDAP. Thus, as budgets go down, total contracted costs (including both work-content growth and cost-over-target overruns) generally increase, and the opposite occurs when budgets go up. Analysis also found three stabilizing correction factors that adjust growth in total contracted costs when actuals vary from what was expected from the model. (Click here to see detailed analysis and discussion)

The countercyclical nature of total growth of MDAP contracted costs by MS B start date seems to imply that in tight budgetary environments resource planners were willing to take risks to maintain program start rates and used unrealistic and optimistic initial cost estimates to fit more content into military department budgets. With optimistic initial estimates on what typically are higher-risk development efforts, we would expect the higher cost growth that the data reflect.

Conversely, in accommodating budgetary environments, there may have been less pressure to assume risk to maintain number of program starts, so DoD Components may have had more realistic program start rates and cost estimates. With realistic initial estimates and low risks, we would expect lower cost growth.

These data (along with prior results from McNicol and Wu, 2014) further support caution about starting programs with overly optimistic program cost baselines and contract cost targets during periods when budgets are contracting or low, as in the current environment.

The guided-missile destroyer USS Stout receives supplies from the fleet replenishment oiler USNS John Ericsson during a replenishment at sea. Photo by U.S. Navy Mass Communication Specialist 3rd Class Bill Dodge.

Schedule growth is lower than cost growth in development. Overall median schedule growth (B/C to IOC) since 1981 for active and completed programs is running at 1 percent, while schedule growth on only active programs has ranged from 0 percent in 1997 to about 15 percent in 2015. In contrast, MDAP program- and contract-level cost growth in development tends to run in the 20–45 percent range, depending on what measures, data, and adjustments are included. Thus, there are indications that the DoD generally may prioritize schedule over cost, which makes sense given that our primary mission is equipping the warfighter against existing threats; however, the data and results on schedule growth are mixed.

Operationally suitable programs are fairly likely to be operationally effective, too. An examination of the coincidence of DOT&E operational effectiveness and operational suitability test results shows that major programs that tested as operationally suitable often also tested as operationally effective, but the converse was not true. In other words, we have a number of effective systems that have suitability issues (e.g., safety, interoperability, availability, maintainability, and reliability), but systems that address these suitability issues tended to also be effective against threats. (Click here to read a detailed analysis and discussion)

Labor, health-care, fuel, and maintenance costs appear to drive O&S cost estimates. DoD-wide, these factors correlate closely with O&S cost estimate changes reported in 2001–2014 SARs after adjusting for inflation. With some differences, these are also the dominant correlated factors for growth in O&S estimates by DoD Component and commodity. In nearly all cases, growth in system service life or quantity did not correlate with growth in O&S cost estimates. Thus, dynamics in these factors appear to be important for controlling O&S costs, and we found these dynamics can cause estimates to vary significantly from year to year (i.e., actual annual growth since 2001 has varied widely, ranging from about −15 to +15 percent). (Click here to see a detailed analysis and discussion)

Bid-protest sustainments remain low despite increased filings. Despite corporate bid protests to GAO nearly doubling to about 1,300 per year, competitive source selections have increased by half, and protests average only about 2.5 percent of solicitations (and about 0.25 percent of contract awards). As for outcomes, the number of sustainments by GAO has remained statistically flat at about 30 per year (see Figure H‑20 below) — about 2 percent of filings. Thus, the increased number of protests appears to reflect, in part, external industry strategies or competitive pressures from declining DoD budgets rather than poor DoD source-selection performance. These results are commensurate with the Congressional Research Service’s recent analysis of bid-protest rates. (Click here to see further discussion)
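The rates quoted above can be cross-checked with simple arithmetic. The sketch below inverts the stated percentages to recover the implied annual volumes of solicitations and contract awards; those implied totals are derived values, not figures reported in the text.

```python
# Back-of-the-envelope check using only figures stated in the text:
# ~1,300 protests/year, ~2.5% of solicitations, ~0.25% of awards, ~30 sustainments/year.
protests_per_year = 1300
sustainments_per_year = 30

# Implied annual volumes, derived by inverting the reported rates (not reported directly).
solicitations = protests_per_year / 0.025      # protests are ~2.5% of solicitations
contract_awards = protests_per_year / 0.0025   # protests are ~0.25% of contract awards

sustain_rate = sustainments_per_year / protests_per_year  # sustainments as share of filings
print(f"implied ~{solicitations:,.0f} solicitations and ~{contract_awards:,.0f} awards per year")
print(f"sustainment rate: {sustain_rate:.1%}")
```

The computed sustainment rate of roughly 2 percent matches the text's characterization that sustainments remain a small fraction of filings.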

Prime defense contractors remain profitable. We monitor operating margins of our prime contractors to ensure that the net effect of our cost-control efforts — combined with other issues such as sequestration — is not negatively affecting the health of our industrial base. In addition to the operating margin data published in our 2014 annual report, Figure H‑21 (see below) plots the trends in earnings before interest, taxes, depreciation, and amortization (EBITDA) since 2010 for the six largest DoD prime contractors. Generally, these primes have performed consistently or slightly better against this measure since before BBP 1.0 was initiated in 2010. We will examine lower tiers of the industrial base in subsequent reports.

NOTES: EBITDA is earnings before interest, taxes, depreciation, and amortization. Years refer to corporate fiscal years (which coincide with calendar years).


The analysis in this report could not have been performed without authoritative structured data archives in the DoD and GAO. The analysis was primarily conducted by Dan Davis, Ken Munson, Douglas J. Buettner, and Philip S. Anton, with much-appreciated assistance and contributions from Joseph Beauregard, Gary R. Bliss, Caroline Chien, Ellen Chou, Karen Cook, Margaret Cregan, Brian Davidson, Terence Emmert, Adrienne Evertson, Randall Fisher, Alan Fu, Susan Gates (RAND), Lynne Giordano, Larry Klapper, Matthias Maier, Andrew Monje, Philip D. Rodgers, Lisa Romney, Garry Shafovaloff, Thomas Sheehan, Nancy Spruill, René Thomas-Rizzo; E. Andrew Long and Catherine Warner (DOT&E); and Edward Goldstein, Jeanette S. McKinney, and Ralph O. White (GAO). Philip S. Anton, Dan Davis, Ken Munson, and Douglas J. Buettner were the primary writers, with very helpful reviews and comments provided by Philip D. Rodgers, Gary R. Bliss, Michael Glennon, Claire Grady, Susan Raps, and Kenyata Wesley. Benjamin Tyree and Michael Shoemaker cheerfully provided excellent editing on a very compressed schedule.