Highly Effective Data Science Teams
For all its hype, Data Science is still a pretty young discipline with fundamental unresolved questions. What exactly do data scientists do? How are data scientists trained? What do career paths look like for data scientists? Lately, I’ve been thinking most about a related question: What are the markers of a highly effective data science team?
We often think first of “is there lots of data?” as the most important criterion for doing great data science work. I want to argue for a broader list that explores the processes of the team, the infrastructure that supports the team, and the boundaries between the team and the rest of the company. If you can organize those in a way that lets the team focus on the problems they own and removes friction around those problems, data scientists will excel.
This approach is inspired by the Joel Test, Joel Spolsky's checklist for software engineering teams. The structure of his framework is simple: you should be able to quickly answer each question with a yes or no, and more yeses are better!
This is a baseline measure of health — great teams might diverge on many other dimensions. These questions are as much about the ecosystem around the team as the team itself, but in my experience data science teams are so embedded that they must be acutely concerned with their organizational environment. You can also read these from a candidate's perspective: what would you ask about a team you were thinking of joining?
1. Do you spend the vast majority of your time on projects that take longer than a day?
2. Does data infrastructure have dedicated engineers working on it?
3. Do people in the organization have ways to access basic data without asking a data scientist?
4. Can you access data without impacting production system performance?
5. Do you spend more time doing analysis than waiting for data?
6. Is there documentation for major schemas?
7. Is instrumentation considered part of a minimum launchable product?
8. Do you have a process for detecting and fixing bugs in data collection?
9. Is past research work documented and available in a central location?
10. Does the team have a regular process for reviewing work before sharing it?
11. Do you run experiments to understand the impact of decisions?
12. Can you report negative results without major political pressure?
13. Can the CEO (or another leader) name at least one way the team contributed that quarter?
14. Are data scientists consulted in product and business planning processes?
Great data science work is built on a hierarchy of basic needs: powerful, well-maintained data infrastructure; protection from ad-hoc distractions; high-quality data; strong team research processes; and access to open-minded decision-makers with high-leverage problems to solve.
The first set of questions (1–3) focuses on whether the data science team is properly protected from tasks that could be better handled by better infrastructure, tools, or other specialists. Because data science is an interdisciplinary field, data scientists have at least basic skills in many adjacent domains (engineering, DevOps, product management, math, research, writing, business, and so on), and one of the easiest failure modes for a team is being unable to focus on work that requires that entire set of skills. Spending most of your time on ad-hoc requests, supporting simple data access, or managing data pipelines displaces data science work. Precisely because data scientists can do that work well, it takes a disciplined organization to make sure they don't have to.
A data team without rich data is flying blind, and questions 4–8 test whether the team has enough data and the associated tooling to work with it efficiently. If working with data is high friction because it conflicts with production systems, is undocumented or inconsistently collected, or is simply not present, then it becomes challenging for a data science team to contribute in a timely fashion. These questions are also a measure of the organizational trust the team has earned; if product teams don't get value from the data science team, building and fixing data collection systems will get de-prioritized.
Internal team processes (covered by questions 9–11) ensure the team is doing the kind of high quality research work that builds and maintains trust in the organization. Validating the work of a data scientist is out of reach for most of the team’s customers, so it is the responsibility of the team to commit to documenting their work, putting it through strenuous peer review, and evangelizing results. It should go without saying, but controlled experimentation is the most critical tool in data science’s arsenal and a team that doesn’t make regular use of it is doing something wrong.
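To make the experimentation point concrete, here is a minimal sketch of how a team might evaluate one controlled experiment: a two-sided z-test comparing conversion rates between a control and a treatment group. The function name and the sample numbers are hypothetical illustrations, not from the original post, and a real team would likely reach for a statistics library rather than hand-rolling the math.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates
    between control (A) and treatment (B). Hypothetical example."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided tail probability of a standard normal
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Made-up experiment: 500/10,000 control vs 560/10,000 treatment conversions
z, p = two_proportion_ztest(500, 10_000, 560, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Note that a p-value near the conventional 0.05 threshold is exactly the kind of ambiguous result a team must be free to report honestly, which is what the next set of questions probes.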
The final questions, 12–14, try to catch rot in the team's relationship with leadership. If there is pressure on the data science team to make products look great even when the evidence doesn't support that view, then leadership is rotten. Teams must be able to report negative results confidently; otherwise everyone will lose trust in the positive ones. Data science teams need access to decision-makers with high-leverage questions, and those decision-makers must have an honest relationship with data and evidence. One good proxy is whether there is demand for the data science team's involvement and whether leaders can quickly identify how data science helped their teams succeed.
This list is clearly not exhaustive or totally generalizable. The boundaries of what is and is not data science are still highly contested. I expect that teams that focus purely on building data products might have a very different perspective, as would those that intentionally blur the lines between data science and data engineering. Is there common ground between all data teams? Feel free to speak up in the comments and suggest new questions, or vote to strike questions that you think aren't broadly applicable!