The benefits of peer review

Leverage your teammates to elevate your work.

Chris Pitler
Data Science at Microsoft
May 15, 2020 · 5 min read


In chemistry, physics, or any other science, the peer review process is essential. For a discovery to be written into the canonical understanding of the world, it must be validated by other scientists who work in the same space. Feedback is exchanged, experimentation and writing may be redone, and the finished product is a published research article. Scientific peer review ensures a high level of quality and integrity before research is shared with the world. A team of data scientists can leverage this same concept to enhance their own work and improve the caliber of their business insights.

How we do it

Peer reviews represent one of the standards areas our team uses to help define and follow best practices in data science. We tackle peer reviews a little differently than academics who submit papers to publishers. We hold weekly peer review sessions where data scientists present the current state of their project, analysis, model, or even slide deck. Bringing work to a peer review requires little overhead — sharing incomplete slides, messy code, or even whiteboarding can all lead to productive feedback sessions. We refrain from establishing a rigid procedure during the actual review because of the diversity of content discussed. However, we’ve found that before diving into the details of a project it’s often helpful to establish a few things:

1. Problem statement: What are you seeking to accomplish with your work?

2. Audience: What teams or individuals will be leveraging your output?

3. Approach: What have you done to date? What have you learned so far?

4. Questions: What kind of feedback are you looking for? What do you want to get out of the peer review?

Data scientists are often guilty of spending a little too much time in the weeds, and so starting with this structure keeps the conversation on task and ensures alignment between the presenter and reviewers. At the end of peer review, it’s not uncommon for the presenter to have a list of next steps to improve their work — try this statistical test, modify these visualizations, or talk to this teammate. These suggestions, and the different perspectives behind them, are how the process improves the quality of our work.

Last year, I was preparing a presentation on measuring instances of Microsoft customers migrating their infrastructure to Azure. After months of being entrenched in the data and meeting with stakeholders well-versed in the migration process, I thought I was ready to synthesize everything for an easy-to-understand presentation. Just in case, I brought my work-in-progress presentation to peer review. I quickly learned two things:

1. Concepts I thought needed only ten seconds of explanation required much more.

2. Concepts I thought needed five minutes of explanation required much less.

I had fallen into the classic trap of being so embedded in my space that I lost sight of the outsider’s perspective. Through just a short practice run of my presentation, my teammates were able to give this feedback, and I was able to rework the presentation. The result was a more informative product that delivered impact. Peer review elevated my work and my audience’s experience.

Share ideas with everyone

A few years ago, before our team established a formal peer review process, feedback came in silos, and any learnings that came out of those sessions were exclusive to the involved parties. This was problematic because it limited knowledge sharing and broader cross-pollination of ideas. If I had brought my Azure migration presentation to my immediate team, I’m not sure I would have received the same feedback, given their familiarity with the subject. Likewise, those teammates who reviewed my presentation would have missed out on learning about Azure migrations. We were limiting our potential by operating like this.

Through our current peer review process, we have sought to create an environment of inclusion, where anyone interested can participate in peer review, and in turn learn more about what the rest of the organization is working on. Peer review signups are visible to the whole team, and so data scientists can decide in advance whether they want to participate. If I want to learn more about the Microsoft for Startups program and a teammate is bringing their Startup-related project to peer review, our participation can benefit both of us.

Practice feedback

Giving and receiving feedback is foundational for growth, but it is often hard to find a safe space to practice. For me, peer review provides ample opportunity. Because our feedback sessions are consolidated to a set time, I can mentally prepare myself to provide or receive feedback. Getting into that headspace can be challenging, and so a recurring timeslot reduces that friction and gives team members who ordinarily might not ask for feedback a chance to do so.

Many resources exist on how best to give and receive feedback, so I won’t go into detail here, but I’ve found it useful to keep feedback constructive and specific. Asking clarifying questions often allows the presenter to realize what they need to modify. During my Azure migration peer review, I knew what I needed to fix after just a few questions (or the absence of them). Building these muscles through participating in peer reviews can make future feedback exchanges a smoother and more familiar process.

Build the outer feedback loop

A benefit of the centralized peer review model we use is the ability to capture metadata on the types of techniques or approaches various data scientists are using across the team. Which datasets are most useful for measuring churn for this segment of customers? Which sampling technique makes the most sense for this population? With enough signals, we’re able to piece together and eventually establish organization-wide best practices. When there are conflicting approaches, or a lack of any viable approaches at all, we often have to take a step back and develop a single standard that can be etched into our team’s analytical playbook. This ultimately results in an outer loop of feedback where peer review reveals opportunities to develop best practices, which are then polished, documented, and eventually adopted as standards in future peer reviews when similar problems surface. However, all of this is easier said than done — we’re always trying to improve and build upon our best practices as part of our defined Standards areas.

Conclusion

A peer review process that encourages inclusivity and open feedback can pay dividends for a data science organization. For our team, we’re only beginning to see the long-term benefits to our business insights, range of knowledge, and culture. After all, everyone has their own strengths and weaknesses — leveraging a teammate’s expertise or experience can elevate work beyond what anyone would be able to do on their own.

Chris Pitler is on LinkedIn.
