The Work That Bob Work Is Up To Part II: The Work That Needs To Be Done

Adam Elkus
Rethinking Security
11 min read · Dec 22, 2015

So, in my last entry, I expressed my opinion about why I thought quantitative social science god Phil Schrodt (despite a lot of the harsh things I said, intellectual honesty obliges me to note that Schrodt is the G.O.A.T. of all things quantitative social science) didn't do enough work to understand the work that Bob Work is up to with human-machine teaming. To paraphrase an Akon chorus about another kind of work, Work has enough work to feed the whole town when it comes to human-machine teaming, and Schrodt neither addressed it on its face nor really got at how social scientists could contribute to making it less work. Now that I have said my piece, I am going to talk about how social scientists can actually contribute to the Third Offset.

You see, in truth Schrodt got one key thing right, although indirectly. DoD invests a lot of money in the "machine" part and the "human-machine teaming" part. But there are distinctly human and specifically social-scientific things that could make or break the success of the Third Offset, and they have little to do with the human factors of automation or NSA-grade computer science methods for handling lots of graph data at once. A core problem with the current specification of DoD's priorities is that they are mostly oriented around engineering. The social dimensions and ramifications of human-machine teaming are not something that can be left for later; making sense of them will be serious work for Work and others.

Given the significant consequences of human-machine teaming for war, peace, geopolitics, security, norms, and other quintessentially social science topics, Schrodt somehow managed to ignore the most cutting criticism he could possibly make of Bob Work's work: it impinges on social science, but there isn't a commensurate investment in social science research to help anticipate, mitigate, and pathfind through all of the complexities, challenges, dangers, and uncertainties that human-machine teaming poses. And that's only one half of the social science contribution to the questions that Work is interested in. Given that human-machine teaming involves cooperation between humans and agent collectives, there's an opening for social science modeling, much in the way that nuclear strategy gave birth to formal and computational models of deterrence, compellence, and similar subjects.

One thing I also somewhat agreed with Schrodt about is that placing all of the eggs in the deep learning basket is problematic, though I don't think that is what Work and company are doing. Why? Well, AI and cognitive science have long had a link to social science ideas about individual and group decision-making. Herbert Simon, in fact, was a triple threat who revolutionized intelligent systems, the social science of organizations and decisions, and psychology and cognitive science (and that's just a tiny sampling of things Herbert Simon Already Did (TM)). The reason is that there are problems that cannot be solved by one computer alone, or that inherently involve social interaction with other computers and/or humans. The collective dimension of human-machine teaming is also probably the area where the technical risks to peace, security, and norms are arguably the most severe.

So yes, that is the real problem with the human-machine teaming aspect of the Offset as currently formulated. It's certainly an engineering challenge, but it has multiple social science dimensions in context, and even some of the engineering dimensions are arguably social science problems as well. Yet none of this seems to be a particular focus of the Offset's R&D efforts, if a focus at all. [0] Most of the literature referenced by Work in his speeches is scenario planning research done by corporate entities or defense science planning bodies. That's unfortunate, given that autonomy and automation, if Work is correct that they amount to a new regime of conflict, will be one of the most divisive, perplexing, and dangerous areas of security in the 21st century. Without involving social scientists in this process, and without social science writ large as a consideration for investment, DoD is fighting with one arm tied behind its back and a blindfold over its eyes. Historically, transformative eras in conflict have seen multidisciplinary efforts to understand and grapple with them. That does not yet seem to be the case for human-machine teaming and autonomy where social science is concerned.

So given all of this, what was I arguing with Schrodt about? We both seem to agree that social science is being left out, though in highly distinct ways. My beef with Schrodt's argument was not that he was critical of the program. Rather, it was that the critique itself was not useful. Schrodt did not seriously address the context, motivations, and potential implementation of human-machine teaming; he simply wrote it off as a giant DoD boondoggle whose funding would be better spent building a social science research culture within DoD. As a consequence, Schrodt recommends only a narrow course of action that assumes the problem is just social science forecasting, prediction, and modeling. This not only fails to seriously engage with the problem as Work sees it, it also enormously undersells the potential social science contribution.

Given the highly uneven quality of social science work on technology, I cannot blame Work for ignoring us in favor of the techno-geeks. [1] However, as a social scientist with feet planted in various camps, I'm happy to assist the Deputy Secretary of Defense by pointing him and others interested in this topic to useful ideas, people, and tools. I will start by recommending that Work look at Schrodt himself, mostly because Schrodt is The Man for all things quantitative, data-driven, and political. If I were Work, in fact, I would stick Schrodt in the Office of Net Assessment (ONA) as the Chief Political Science and Data Modeling Badass and sit back with some popcorn as he used his quantitative political science chops (and amazing old-school programming skillz) to demolish every sacred cow DoD has, has had, or ever will have. But that's just the start. In the following sections, I define three key areas where social science is relevant to Work's work:

  1. The social context of human-machine teaming. Any kind of human-machine teaming is going to involve implicit yet deeply sociological assumptions about human knowledge and how it is expressed in machine form.
  2. Computational social modeling of human-machine teaming. There are a lot of ways to use agent-based modeling and simulation (and other tools) to examine key questions about the social, political, and security implications of human-machine teaming.
  3. Interdisciplinary social science work on human-machine teaming. Surprisingly, a lot of design methodologies in computer engineering take after ideas originally or mostly developed in social science.

Social Science Component I: The Social Context of Human-Machine Teaming

There is an enormous literature in the social and historical study of science, technology, and computing about how artifacts can both directly express social knowledge and power and correlate with certain expressions of them. Granted, much of this literature is highly uneven in quality, but if you're looking for a start I recommend a book by Harry Collins and Martin Kusch, The Shape of Actions: What Humans and Machines Can Do. A Philosophy of Technology: From Technical Artefacts to Sociotechnical Systems, from the Morgan & Claypool Synthesis Lectures on Engineers, Technology, and Society series, is also a good place to look for the social context, effects, and construction of technology. From a historian's perspective, Thomas P. Hughes' book on our "human-built world" also deserves a look.

If Work is looking for expert insight, I would highly recommend that he ring up Geoffrey Herrera, who wrote a book applying theories from history, philosophy, and social science to technology and the transformation of the international system. RAND's Constantine Samaras is also a policy expert and professor on this topic, housed in RAND's Pittsburgh branch. To top it off, Janet Vertesi has done some great academic ethnography on the social constitution of robotic systems that deserves more play.

Another question is the degree to which the machine can be regarded as a social agent, a question Steve Woolgar raised in the 1980s. More recently, actor-network theory and object-oriented ontology have become popular in the social sciences and humanities, and social scientists and computer scientists are teaming up to study human-agent collectives (as can be seen in this ACM article). Work might also want to be on the lookout for graduates of Carnegie Mellon University's PhD program in Societal Computing, or look more broadly at specialists in similar topics at CMU and other universities. Finally, Miles Brundage is doing his PhD dissertation on the computational modeling of the social dynamics of AI and cognitive systems research progress.

Lastly, the question of how human-machine teaming would impact security and world politics is obviously a social science topic! A shortlist of some of the most advanced people studying this right now from a social science perspective: Antoine Bousquet, Michael Horowitz, Charli Carpenter, Jack McDonald, Stephanie Carvin, Kenneth Payne, and Thomas Rid. I could generate an entire bibliography, but that would be pointless. These people cover an interesting cross-section of the security consequences of human-machine teaming (for example, Horowitz works on military innovation, Carpenter on norms, and McDonald on big data and machine learning). That's a lot of reading, certainly, but it's also more than worth it.

Social Science Component II: Computational Social Modeling of Human-Machine Teaming

Could computational modeling in the social sciences contribute to useful discussions of HMT? Certainly! First off, a lot of social scientists work with agent-based modeling and multi-agent systems toolkits. The former involves lots of agents with very simple behavioral rules; the latter, a few agents with rich and detailed behavioral structures. And this is where I insert the obligatory plug for my department at George Mason University, which has some of the most advanced social science research in both on tap. I mean, we have the most awesome agent modeling toolkit ever, appropriately named MASON, which can run on one computer or in parallel across a bunch of PCs. I should also mention that a professor in our department taught a reconnaissance platform to recognize USMC hand signals.
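
To make the contrast concrete, here's a minimal sketch of the agent-based style, in Python rather than MASON's native Java and deliberately not using MASON's actual API: many agents, one dumb imitation rule apiece, and all the interesting structure living in the aggregate. The model and every name in it are invented for illustration.

```python
import random

class Agent:
    """An agent with one binary choice and one dumb rule:
    copy the local majority, with a little noise."""
    def __init__(self):
        self.choice = random.choice([0, 1])

    def step(self, population, noise=0.05):
        # Sample five "neighbors" and imitate their majority choice.
        neighbors = random.sample(population, 5)
        majority = round(sum(a.choice for a in neighbors) / len(neighbors))
        self.choice = 1 - majority if random.random() < noise else majority

def run(n_agents=500, n_steps=30):
    population = [Agent() for _ in range(n_agents)]
    for t in range(n_steps):
        # Random activation order each step, a common ABM convention.
        for agent in random.sample(population, len(population)):
            agent.step(population)
        share = sum(a.choice for a in population) / n_agents
        print(f"step {t:2d}: share choosing option 1 = {share:.2f}")

if __name__ == "__main__":
    run()
```

A multi-agent systems toolkit flips the ratio: a handful of agents, each carrying planning, communication, and belief-maintenance machinery instead of a one-line imitation rule.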

Now that this bit of obligatory promotion is over, [2] there is an enormous research literature on the agent-based modeling of sociotechnical systems. Carnegie Mellon's Kathleen Carley teaches a course on it, in fact. Uber uses agent-based modeling to optimize its human dispatch system, and agent-based modeling has been used heavily for scenario analysis of the travel, urban, and environmental implications of individual and shared autonomous vehicles. Agent-based modeling is also already heavily used within DoD for operations research, and has appeared in historical simulation and in the modeling of international conflict. [3]

Now, it would be gross overreach to say that agent-based modeling is the best or only way to use computational or quantitative methods to explore the implications, challenges, and potential of human-machine teaming from a social perspective. But, suffice to say, it demonstrates the capacity of a certain kind of social science work to address even highly non-social-science-like problems through programming, modeling, and simulation. Nowhere is this potential clearer than in the problem of incorporating into social science computational models the internal cognitive processes of human managers of machines, as well as the ways in which machines interpret those processes. I'll rattle off a few names and publications.
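
As an entirely hypothetical illustration of what such a model might look like, here is a toy in which a human supervisor's reliance on an automated recommender is governed by a single trust variable that updates with the machine's observed hit rate. Every name, parameter, and update rule here is my own invention.

```python
import random

class Operator:
    """Toy human supervisor of an automated recommender. Trust rises when
    the machine is right and falls faster when it is wrong (an asymmetry
    loosely motivated by the automation-bias literature); reliance on the
    machine is a coin flip weighted by current trust."""
    def __init__(self, trust=0.5):
        self.trust = trust

    def decide(self, machine_call, truth):
        relied = random.random() < self.trust
        decision = machine_call if relied else truth  # unaided human is always right, for simplicity
        # Trust updates on the machine's record, whether or not we relied on it.
        if machine_call == truth:
            self.trust = min(1.0, self.trust + 0.02)
        else:
            self.trust = max(0.0, self.trust - 0.10)
        return decision

def run(machine_accuracy, n_trials=300):
    op = Operator()
    for _ in range(n_trials):
        truth = random.choice([0, 1])
        machine_call = truth if random.random() < machine_accuracy else 1 - truth
        op.decide(machine_call, truth)
    print(f"machine accuracy {machine_accuracy:.2f} -> settled trust {op.trust:.2f}")

# With these update rates, trust collapses unless the machine is right
# about 83 percent of the time (0.02x = 0.10(1 - x) at x ~ 0.83).
for acc in (0.6, 0.8, 0.95):
    run(acc)
```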

Cognitive modeling guru Ron Sun has several edited collections that bridge the gap between the simple agent-based models used in the social sciences and the detailed cognitive models used in computational psychology. And if that's not enough, Joshua Epstein recently came out with a book, Agent_Zero, that melds social science agent-based modeling with cognitive neuroscience. And yes, there are also people in my department who do this too. [4]
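
For flavor, here is a stripped-down sketch in the spirit of Epstein's Agent_Zero: an agent's disposition to act combines an affective term, a deliberative probability estimate, and a weighted sum of its peers' dispositions, and the agent acts when the total crosses a threshold. This is my loose paraphrase of the book's core idea, not Epstein's actual specification or code.

```python
class AgentZeroish:
    """Loosely Agent_Zero-flavored: affect + deliberation + peer influence."""
    def __init__(self, affect, estimate, threshold=1.0):
        self.affect = affect      # V: emotional charge, e.g. from fear conditioning
        self.estimate = estimate  # P: deliberative estimate that acting is warranted
        self.threshold = threshold

    def solo_disposition(self):
        return self.affect + self.estimate

    def acts(self, peers, weight=0.5):
        # Total disposition = own solo disposition + weighted peer dispositions.
        social = weight * sum(p.solo_disposition() for p in peers)
        return self.solo_disposition() + social > self.threshold

# No agent would act alone (all solo dispositions are below the threshold),
# but peer influence tips a and b over it; c still abstains.
a = AgentZeroish(affect=0.4, estimate=0.3)
b = AgentZeroish(affect=0.5, estimate=0.2)
c = AgentZeroish(affect=0.1, estimate=0.1)
for name, agent, peers in (("a", a, [b, c]), ("b", b, [a, c]), ("c", c, [a, b])):
    print(name, "acts:", agent.acts(peers))
```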

Social Science Component III: Interdisciplinary Social Science Work in Human-Machine Teaming

Social science and autonomous agents share many problems, if one looks closely. First, a lot of work in the computer engineering of intelligent agents takes ideas commonly used in social science as inspiration. Game theory shows up quite often in situations where decision-making programs must strategically cooperate and compete with other agents. In fact, game theory shows up in state-of-the-art reinforcement learning approaches too! It seems hard to go anywhere in computing with decision-making programs without game theory coming up at least tangentially. That ought not to be surprising, given that economist and game theorist John Nash wrote a monograph for RAND about parallel programming in the 1950s.
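
To show the flavor of that connection, here's a minimal sketch of fictitious play, one of the oldest game-theoretic learning rules and an ancestor of ideas now circulating in multi-agent reinforcement learning: each player best-responds to the opponent's empirical mix of past moves. The game here is matching pennies, where the empirical frequencies famously drift toward the 50/50 mixed equilibrium; the payoff matrix and names are mine.

```python
# Fictitious play in matching pennies: row wins on a match, column on a mismatch.
# payoff[row_action][col_action] is the ROW player's payoff (zero-sum game).
payoff = [[1, -1],
          [-1, 1]]

row_counts = [1, 1]  # pseudo-counts of the row player's past actions
col_counts = [1, 1]  # pseudo-counts of the column player's past actions

def best_response_row(col_counts):
    total = sum(col_counts)
    # Expected payoff of each row action against the column's empirical mix.
    ev = [sum(payoff[a][b] * col_counts[b] / total for b in range(2)) for a in range(2)]
    return 0 if ev[0] >= ev[1] else 1

def best_response_col(row_counts):
    total = sum(row_counts)
    # The column player minimizes the row player's payoff (zero-sum).
    ev = [sum(payoff[a][b] * row_counts[a] / total for a in range(2)) for b in range(2)]
    return 0 if ev[0] <= ev[1] else 1

for t in range(10000):
    r = best_response_row(col_counts)
    c = best_response_col(row_counts)
    row_counts[r] += 1
    col_counts[c] += 1

print("row empirical mix:", [n / sum(row_counts) for n in row_counts])
print("col empirical mix:", [n / sum(col_counts) for n in col_counts])
```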

Second, social knowledge, organization, culture, cooperation, norms, hierarchies, and decision-making serve as design patterns for the construction of both individual systems and multi-agent systems. Deep learning pioneer Yoshua Bengio has argued that the human brain's capacity for overcoming optimization difficulties owes much to culture and communication with other humans. Futurist John Robb, who I gather is working on a similar topic, has said the same. Crowdsourcing has been used to train robots in complex tasks, and RoboEarth and other cloud robotics platforms create shared knowledge and collaboration structures between robot systems. Finally, shared knowledge ontologies and frameworks are emerging as standards to bootstrap robot development.
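
A hedged sketch of the cloud-robotics idea of shared knowledge as a bootstrap: robots publish what they learn to a common store, and a fresh robot starts from the pooled experience instead of from scratch. The store and its keys are invented for illustration; platforms like RoboEarth define far richer shared ontologies than a success tally.

```python
from collections import defaultdict

# Shared "cloud" store: task name -> pooled [successes, attempts] across all robots.
cloud = defaultdict(lambda: [0, 0])

class Robot:
    def __init__(self, name):
        self.name = name

    def attempt(self, task, succeeded):
        # Publish every attempt to the shared store.
        record = cloud[task]
        record[0] += int(succeeded)
        record[1] += 1

    def estimated_success(self, task):
        # A new robot bootstraps from everyone's pooled experience.
        s, n = cloud[task]
        return s / n if n else None

r1, r2 = Robot("r1"), Robot("r2")
for ok in (True, True, False):
    r1.attempt("grasp_mug", ok)
r2.attempt("grasp_mug", True)

r3 = Robot("r3")  # never tried the task, yet starts with an informed prior
print("r3's bootstrapped estimate:", r3.estimated_success("grasp_mug"))  # 0.75
```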

If this has some overlap with social science, then, as the saying goes, you will not believe what happens next. Artificial societies, institutions, and organizations are all social science-inspired design formalisms used to create systems of agents that solve complex problems through coordination, communication, and cooperation. As more and more economic functions become automated, computer scientists are quite literally turning to economists to help manage the markets and economies composed of these programs. And as someone with a political science BA and MA, I'm also obligated to point out that social choice theory (of Kenneth Arrow vintage) has ended up as yet another social science inspiration for the design of computer agents and systems.
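
And the social choice connection in miniature: several agents rank candidate plans, and a Borda count aggregates the rankings into a collective choice. This is a textbook voting rule applied to made-up agent preferences, not any particular multi-agent framework's API.

```python
from collections import defaultdict

def borda(rankings):
    """Borda count: with m candidates, a candidate earns m-1 points for a
    first-place ranking, m-2 for second, and so on; highest total wins."""
    scores = defaultdict(int)
    for ranking in rankings:
        m = len(ranking)
        for place, candidate in enumerate(ranking):
            scores[candidate] += m - 1 - place
    return max(scores, key=scores.get), dict(scores)

# Three agents rank three candidate plans, best first.
agent_rankings = [
    ["flank", "hold", "withdraw"],
    ["hold", "flank", "withdraw"],
    ["flank", "withdraw", "hold"],
]
winner, scores = borda(agent_rankings)
print("collective choice:", winner, scores)
# flank: 2+1+2 = 5, hold: 1+2+0 = 3, withdraw: 0+0+1 = 1 -> "flank" wins
```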

Conclusion

Obviously that's quite a collection of topics, books, and reading, but so is the mountain of technical and behavioral literature that I imagine will be produced, directly or indirectly, as a consequence of DoD funding for human-machine teaming. And it's relevant to the one aspect of Schrodt's take that I find somewhat congenial: "[i]t's that word 'human' that's setting me off, as when it comes to technical applications, DoD can't ever seem to do 'human.'" DoD does suck at doing human, unfortunately. But if social scientists are to help DoD fix that, they have to understand what DoD's problems are, what kind of work is most relevant to helping with those problems, and what kinds of challenges amenable to social science analysis those problems suggest.

Given that I just wrote all of this off the top of my head while taking a break from finishing up some coursework and exams, [5] I am not going to claim that this is anything but a basic take on what those problems and challenges, and their social science implications and applications, are. Your mileage may vary. That said, I do feel that my suggestions address the nature of the problem as formulated by DoD, as well as the larger implications of that formulation for peace, security, and other things we care about. There's a lot of work that Bob Work and company need to do in this area, but at least knowledge and expertise relevant to that work already exists in spades. Anyone in DoD reading this should treat the specific people I've listed here as a rolodex to call up, if any of what I say strikes you as meaningful or useful. [6]

Footnotes

[0] Obviously it's premature to talk about something that is just starting up, and I could also be wrong.

[1] Let’s not mince words. It’s frequently awful and I’ve wasted far more time than I should writing about why it is awful elsewhere on my Medium profile.

[2] It’s my department, I gotta flog them every chance I get!

[3] Oops, I just flogged more research by people in my department. Can’t help myself.

[4] OK, I’ve given up trying to prevent myself from flogging people in my department.

[5] It has to do with multi-agent systems, actually.

[6] Some of them may be already known to people who follow research in conflict, security, and strategy or research in the social dimensions of science and technology. Others may not be.
