The Work That Bob Work Is Up To

Adam Elkus
Rethinking Security
7 min read · Dec 18, 2015

Phil Schrodt has a typically acerbic take on the new DoD efforts at human-machine teaming.

It was with a mix of deja vu, amusement and resignation that I saw the latest Dept. of Defense (DoD) pronouncements — try here and here — about their intentions to take a very important innovation in machine learning, recurrent neural networks [2], and use this as the centerpiece of a major new machine-human interaction initiative. It’s that word “human” that’s setting me off, as when it comes to technical applications, DoD can’t ever seem to do “human…….

With the intensified exposure to this phenomenon in the past couple of years, I’ve finally figured it out: the Doughboys are the equivalent of the Communist era minders in Soviet puppet states, and, consistent with the tactics of the Old Left, their entire purpose is to make sure that these meetings remain completely pointless [6] and avoid the disastrous possibility that DoD might, say, spend $10-million on some social science [7] research that would prevent a $100-million mistake or even worse, spend $100-million on research that would prevent one or more $1-trillion mistakes, or, worst of all, develop a sophisticated social science research culture within DoD comparable to that found in numerous other parts of the government, to say nothing of academia and the private sector.

I normally find Schrodt’s missives enlightening and humbling as a researcher, but this one was so fundamentally wrongheaded that reading it caused me to explode into a public spectacle of rage on social media. Why? Schrodt sees the entire thing as fairy dust and argues that the money would be better spent on a “sophisticated social science research culture” within DoD. But that’s an apples to oranges comparison. What DoD is trying to do may involve social science, but only indirectly and as just one small component of what could be a far-reaching set of programs. And the reasons why DoD and others are interested in this new suite of technologies frankly have little to do with social science or research at all. It’s called human-machine teaming. That should be a clue that what DoD is looking for is automation, human factors, and a large amount of computer engineering, not really social science per se. And for some fairly particular reasons.

In the article that Schrodt links to, Deputy Secretary of Defense Robert Work lays out these priorities for the Third Offset:

Autonomous “deep learning” machines and systems, which the Pentagon wants to use to improve early warning of events. As an example, Work pointed to the influx of “little green men” from Russia into Ukraine as simply a big data problem that could be crunched to predict what was about to happen.

Human-machine collaboration, specifically the ways machines can help humans with decision-making. Work pointed to the advanced helmet on the F-35 joint strike fighter, which fuses data from multiple systems into one layout for the pilot.

Assisted-human operations, or ways machines can make the human operate more effectively — think park assist on a car, or the experimental “Iron Man” exoskeleton suit DARPA has been experimenting with. Work was careful here to differentiate between this point and what he called “enhanced human operations,” for which he did not offer an example, but warned “our adversaries are pursuing [enhanced human operations] and it scares the crap out of us, frankly.”

Advanced human-machine teaming, where a human is working with an unmanned system. This is already going on with the Army’s Apache and Grey Eagle teaming, or the Navy’s P-8 and Triton systems. “We’re actively looking at a large number of very, very advanced things,” Work said, including swarms of unmanned systems.

Semi-autonomous weapons that are hardened to operate in an electronic warfare environment. Work has been raising the alarm for the past year about weapons needing to be hardened against such attacks, and noted the Pentagon has been modifying the small diameter bomb design to operate without GPS if denied.

Note that forecasting is only one of these applications. And even there, the story is misleading in that deep neural networks are only one of the technologies DoD is investing in. DoD is also investing in probabilistic programming and Bayesian modeling in particular, as well as collaborative intelligence analysis and assessment more broadly (a toy sketch of that kind of Bayesian updating appears after the two points below). One thing that should come across here, especially when the linked Defense News story makes reference to “making coherence out of chaos,” is that this is an exercise in large-scale automation. This is not a social science problem. It is simply an issue of how DoD acts on what it sees as both a problem and a temptation:

  1. The problem is the perceived decline in the ability of humans to control complex sociotechnical systems. This is an issue that has preoccupied both the government and its critics for decades.
  2. The temptation is the equally old promise of “man-computer symbiosis” that allows the intelligence analyst, commander, or decision-maker writ large to offload much of their cognitive processes to machines.
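
To make the contrast with survey-style social science concrete, the Bayesian modeling at issue looks less like fieldwork and more like routine inference engineering. Here is a toy sketch of the kind of updating a warning pipeline would automate; the indicator names and probabilities are invented for illustration and describe no actual DoD model.

```python
# Toy sketch only: a discrete Bayesian update of a warning estimate from noisy
# indicators. All indicator names and probabilities are invented for illustration.

def update(prior, p_indicator_given_event, p_indicator_given_no_event):
    """Return P(event | indicator observed) via Bayes' rule."""
    numerator = p_indicator_given_event * prior
    evidence = numerator + p_indicator_given_no_event * (1.0 - prior)
    return numerator / evidence

# Analyst's prior that an incursion is underway.
p_event = 0.05

# Each observed indicator: (name, P(indicator | event), P(indicator | no event)).
indicators = [
    ("unusual troop movements", 0.70, 0.10),
    ("spike in encrypted traffic", 0.60, 0.20),
    ("leave cancelled for border units", 0.50, 0.05),
]

for name, p_if_event, p_if_no_event in indicators:
    p_event = update(p_event, p_if_event, p_if_no_event)
    print(f"after observing {name}: P(event) = {p_event:.2f}")
```

The arithmetic here is trivial; the hard part is doing it continuously, at scale, over messy incoming data streams, which is an automation and engineering task rather than a social science one.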

We can see several antecedents for how the Third Offset bridges the stated problem and temptation.

  1. During World Wars I and II, a military science of systems emerged from efforts to automate fire control and other forms of techno-tactical decision-making. In the early days of the Cold War, problems with the pace and information processing of tactical decision-making in air defense led to the creation of the SAGE system. Finally, progressively more demanding command and control tasks and faith in technology led to DARPA’s failed attempt to essentially automate the whole of US command and control with an AI expert system. And if centralizing decision-making via automation failed, decentralizing decision-making through network-centric warfare and Transformation (really centralization in disguise, but that’s another story for another time) has had mixed results at best.
  2. The use of computers since the atomic bomb for stochastic simulation and analysis. Because defense problems, as Clausewitz said, involve simple rules and complex probabilities, it is unsurprising that probabilistic methods have been used to automate intelligence analysis, targeting, and command and control. It is also unsurprising that one of the fathers of statistical process control, W. Edwards Deming, served in World War II military planning efforts, and that stochastic models in general have been key to both computing and military operations research. They are also, unsurprisingly, key to target tracking, especially when multiple agents are doing the tracking (a minimal sketch of this kind of tracking follows this list).
  3. Human-machine collaboration as a way of generating an overwhelming military advantage, something that has been of interest to military men and scientists since Licklider’s notes on man-computer symbiosis (which use the hypothetical of a commander trying to plan a battle on a slow and frustrating computer). A large amount of literature has documented how Cold War scientists and decision-makers attempted both to heavily automate decision-making and to replace the human role in it altogether. A large amount of federally funded research goes into the cognitive engineering of decision systems, human-machine collaboration, and autonomous machine technologies and applications.
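
To give a sense of what stochastic models for target tracking look like in practice, here is a minimal, purely illustrative Kalman filter tracking a single target from noisy position measurements. The motion model and noise values are invented for the example and are not drawn from any fielded system.

```python
# Illustrative only: a one-dimensional Kalman filter tracking a target's position
# from noisy measurements. All numbers are invented for the example.
import numpy as np

rng = np.random.default_rng(0)

dt = 1.0                                  # time step
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity motion model
H = np.array([[1.0, 0.0]])                # we only observe position
Q = 0.01 * np.eye(2)                      # process noise covariance
R = np.array([[4.0]])                     # measurement noise covariance

x = np.array([0.0, 1.0])                  # initial state estimate: position, velocity
P = np.eye(2)                             # initial estimate covariance
true_state = np.array([0.0, 1.0])

for step in range(10):
    # Simulate the target and a noisy sensor reading.
    true_state = F @ true_state
    z = H @ true_state + rng.normal(scale=2.0, size=1)

    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q

    # Update with the measurement.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

    print(f"t={step}: true pos {true_state[0]:5.2f}, estimated pos {x[0]:5.2f}")
```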

All of these influences have come together in the idea that, as Work has espoused, we need to invest in the following technologies for military and security success:

At the centre of the US DoD Third Offset is Human-Machine Teaming (HMT), with five building blocks:

Machine Learning

Autonomy / AI

Human-Machine Collaboration

Assisted human operations

Autonomous weapons.

The analogy with Centaur Chess is a powerful one, and potentially offers the best use for both people (H) and machines (M). However, this approach is not easy to implement.

One can critique this from the standpoint of human factors and automation, as one British expert does in the referenced link. One can also wonder about how safe, reliable, and accurate such a system or system of systems can ever be. One can also critique it on legal, moral, and political grounds, or suggest that its moral, legal, and political commitments are instrumentally counterproductive. And so on. I myself have voiced skepticism about the challenges involved here and in other places. Or you could even, as many scientists did with the Strategic Defense Initiative, just point out that there is no way it could technically work outside of science fiction.

But it’s bizarre and ridiculous to not talk about the program in terms of what it is actually trying to achieve. Schrodt makes some vague references to DoD vaporware, but it’s far from clear (see the debates here) that the inspiration for the current offset strategy was vaporware. It might be more helpful to talk specifically about what is wrong with Work’s program, either in terms of its intention or its proposed implementation, rather than simply suggest that it is vaporware to be replaced by social science efforts that may or may not meet all of the policy, strategy, and operational needs that Work has articulated.

It’s far from clear, for example, why a ‘better social science research culture’ within DoD is going to help handle, say, the challenges that have motivated Work and others to look at swarming combat platforms, the processing of massive amounts of raw intelligence, or automated support to real-time combat decision-making in general. It’s as bizarre as, say, criticizing the World War II Office of Statistical Control for doing operations research rather than social science research. Their job was to crunch numbers and optimize the effectiveness of US bombing missions. That sort of job, and its Cold War successors, had a lot more to do with things like mathematical programming and the simplex algorithm than with social science per se.
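
For a flavor of what that sort of work involves, here is a minimal, purely illustrative allocation problem of the kind the simplex method solves. It uses SciPy’s linprog rather than a hand-rolled simplex, and every number in it is invented for the example.

```python
# Illustrative only: the flavor of WWII-era operations research, posed as a small
# linear program. Tonnage, fuel, and crew figures are invented for the example.
from scipy.optimize import linprog

# Maximize expected tonnage on target from two aircraft types, subject to
# fuel and crew constraints. linprog minimizes, so negate the objective.
c = [-3.0, -5.0]            # tons delivered per sortie: type A, type B

A_ub = [
    [2.0, 4.0],             # fuel units consumed per sortie
    [1.0, 1.0],             # crews required per sortie
]
b_ub = [100.0, 40.0]        # available fuel units and crews

result = linprog(c, A_ub=A_ub, b_ub=b_ub,
                 bounds=[(0, None), (0, None)], method="highs")
print("sorties by type:", result.x)
print("expected tonnage:", -result.fun)
```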

Obviously there’s a role for social science in the Third Offset, and Schrodt is, admittedly, a world expert in quantitative social science. If anything that Work and co. are doing overlaps with Schrodt’s area of expertise, they should take every single one of his maxims to heart, however crankily expressed. But… it’s not really clear that Schrodt fundamentally understands what Work and DoD are trying to do, how they are trying to do it, and why they are trying to do it. That is frankly the only conclusion I can reach after reading so puzzling and frustrating a take on Work and human-machine teaming.
