Managing Research Teams — Part I

On Being a ‘Servant Leader’ in Research

Management. A word that, in some of my circles, elicits a strange mixture of sympathy, dread, and occasional envy, but mostly wonder as to how any respectable researcher would ever be sufficiently willing — or ever feel competent enough — to take on the responsibility of becoming a manager.

In my case, it began literally as a case of ‘well, someone’s gotta do it …’, though admittedly I am never one to pass up on an opportunity to tickle my inner impostor syndrome. Ominously, my previous manager had burned out within a year of reluctantly taking the job, and had opted for an early retirement. As a relatively junior member of the team, which included several respected members with a decade or more of experience, I am quite certain that I wasn’t the first person that my director wanted to ‘promote’ to that role. But I guess I was the first one to say yes.

Managing a research team is an unconventional role. Many will immediately bring up the notion of ‘herding cats’ when I mention that I lead a group of research scientists. I firmly reject that characterization: approaching the role with that mindset is not only a great way to set yourself up for failure, it also grossly undervalues what solid management can bring to a research effort.

I will not rehash here what others have said much better than I can about leading technical teams. I warmly recommend the very concise and insightful Debugging Teams from my colleagues Brian Fitzpatrick and Ben Collins-Sussman. In particular, they go to some length describing the servant leadership principles that are at the core of the leadership style practiced in my current workplace, and that particularly befit the research setting.

Instead, I’ll focus on a few aspects of leadership that are most relevant in the context of research teams:

1. Be an adamantium-plated shit umbrella.

Research demands long-term focus, and psychological safety is a must to sustain it. Many forces can intervene and break one’s fragile state of flow.

Administrivia is the most obvious burden that a team lead can remove from team members’ shoulders, but there are many more: from product teams looking for a quick fix to a problem, to opportunities to give talks at one summit or another, to talking to enterprise customers about their own challenges, to giving demos to executives. Much of it is well-intentioned, and some of these activities can be of value, but in my experience it’s an area where liberal application of the polite ‘no’ is a must. I often joke that by the time I’ve gone through my morning email, I’ve already exhausted my emotional quota of ‘no’ for the day. Saying ‘no’ is hard, but I’ve rarely regretted it, while I’ve seen plenty of ‘maybes’ or ‘yeses’ that have turned into onerous commitments in hindsight.

Another class of activity is distilling all the ‘big company toil’ to its strictest minimum: dealing with space, budget and resource planning, keeping abreast of the organizational landscape and company priorities, preparing OKRs and executive reviews. It’s tempting to delegate much of this and focus on the shinier parts of the job, but it’s also how ‘shit funnels’ earn their name.

A more subtle form of ‘emotional toil’ that is often imposed on research teams is uncertainty from above: Does our research matter? Does this new leader a few rungs up in the executive chain care about us? Is our research agenda at risk due to this or that reorganization? Uncertainty is a slow-acting tax on research: it means taking fewer risks and shrinking time horizons. As a research lead, providing stability and articulating a firm long-term commitment to the research agenda have a direct impact on the team’s ability to explore with confidence.

2. Shoot down shiny objects on sight.

There are plenty of opportunities in research to go on tangents that are so appealing that they put researchers at risk of losing track of their own goals. New equipment: ‘niiice, look at what we could do with this’ — with 6 months of development and a whole new software stack, maybe. New ‘hot’ papers, claiming breakthroughs that will take a few months to (fail to) reproduce. New tooling that this other team is using and that seems better than yours — but will it yield better research?

Another one of my favorites is the perennial temptation to turn the code one wrote just for a paper into a ‘framework’ of sorts: after all, you’ve spent several months coding up the new method alongside all the relevant baselines, and it’s starting to feel framework-ish; with just a bit more polish, maybe a quarter of work, couldn’t it become generally useful to others? Often, the answer is: probably not.

I see part of my role as pointing out where these shiny objects lie along the path and calling them out for what they are. In research, one could spend a lifetime shaving yaks, and much of that time would feel like progress. The classic scenario in my immediate world of robotics research is adding better hardware to a robotic setup to improve performance on a task: if what you’re studying is how to make the software more intelligent, better hardware is actually negative progress. Not only have you made the problem easier, lowering your roofline in terms of research impact, but you’ve also made your benchmark less general — and hence less valuable — by adding extra dependencies to it. Progress should be measured in increments of learning, not benchmark performance.

3. Kill what doesn’t work.

Bad ideas refuse to die. As long as they can find a sliver of a handle on a researcher’s ego, they will linger, fester, and occasionally become cult-like belief systems that are impervious to skeptical enquiry. Some may argue that the results are bad because they’re looking at the wrong problem, or the wrong benchmark. This usually leads down the path of committing the capital crime of dataset selection. Maybe the model is too simple, and by throwing the kitchen sink at it we’ll get a positive — albeit useless — result. “But that’s how the brain works, it can’t be wrong!” Calling out when research starts veering into cargo-cult territory can have a huge impact, especially before desperation sets in and people start erecting defensive intellectual shields against criticism. Many of my peers are contrarian by nature, and that quality turns into a definite strength when it comes to research leadership.

Likewise, bad collaborations often thrive on their own toxicity. In their most common form, they may involve brilliant jerk worship, easily managed by … not hiring them in the first place. But they also naturally blossom in otherwise mild-mannered people out of misaligned incentives and goals. At worst, a sort of reciprocal Stockholm syndrome sets in, whereby the parties thrive on each other’s misery. I say at worst because from an outsider’s point of view these dysfunctional relationships can actually yield spectacular outcomes and can remain somewhat closeted in their dysfunction for long periods of time. I am always amazed at people’s innate ability to make themselves miserable. As a leader, it’s important to go past the end products of a collaboration and examine its dynamics, lest you miss out on addressing fundamental issues.

4. Become a patient, caring counselor.

I’ve written in the past about the challenges of being a researcher. Those very stresses and obstacles are at the core of what a research manager can help with. Much of it starts with listening, and being genuinely interested in helping people succeed. Many stresses stem from things a manager cannot control, yet they invariably come up in one-on-one meetings with implied pleas for help. It’s often important to resist the urge to solve the unsolvable, and merely listen.

A fantastic tool for this engaged listening is the GROW model used in the context of one-on-one meetings. Summarized, it’s about asking 1) what’s your Goal? 2) what Reality are you facing? 3) what are your Options? 4) what Will you do? — Very straightforward, but a great way to structure interactions that are more about eliciting questions than providing answers.

5. Build the narrative.

My team’s mission today is to ‘make robots useful in the real world through machine learning.’ Every word of that research statement matters. Each word has been used at one point or another to justify decisions that would otherwise have no strong grounds for being made. It defines a specific, arbitrary-by-design scope for the research agenda, without being prescriptive about it.

People and organizations need an identity: something that can concisely define who they are and what they are about. That narrative doesn’t always come naturally in an academic setting, but it is no less important: it’s a steady goalpost in an otherwise uncertain environment. It’s also something to hang on to when contemplating the vast open space of possible research directions. It’s equally useful to have a clear sense of what you are not about, opportunities you will not chase, for no other reason than that you’ve decided they weren’t part of your team’s mission.

Part II can be found here.