Gathering customer feedback is everyone’s job.

Strategies for building and maintaining feedback loops for cross-functional teams.

Greg Nelson
In The Hudl
9 min read · Jul 21, 2014


I wanted to give you a quick primer on the features we have been using this process on over the last few months. The team I work with has been building our Playbook tool, which lets coaches create diagrams of their plays with both written and video content and then share them with their players, so athletes can study the plays anywhere. Most of the examples below start about three months into the Playbook project; before that point, we had validated features like diagram creation and play organization with other MVPs.

The feedback loop is really designed to take our initial MVP and develop it into future MVPs and enhancements. It leverages the same principles of testing assumptions and validating features that building an initial MVP does but provides a platform for us to continuously learn from user stories alongside hard usage data.

Process for building cross-functional feedback loops.

Building feedback loops can be boiled down to these five steps.
(Pictured: the feedback form available at the top of the page in our Playbook feature.)

First, the team must determine what they want feedback on. There are two types of feedback loops you can create: feature-based and task-based. Feature-based feedback loops are broad feedback cycles around a feature that may contain many workflows and pages. Our feedback loop began with a static link at the top of every page that was part of the feature. Clicking the link took the user to a form with general questions about the feature. We used Wufoo to facilitate this.

The other emphasis for us was task-based feedback, which focuses on a specific action we want the user to take. We targeted these forms at sub-features whose usage numbers were under-performing.

Once the area is identified, the next step is tailoring the form or email to what we want to learn. For the feature-based feedback loop, we asked broader questions because we didn't know the exact tasks the user had performed before hitting our form; we just wanted general feedback on the many workflows the user could take. As a team, we identified the most important questions to ask. From there, we were able to include usage information on the form via hidden fields.

Wufoo allows you to hide form fields and we fill those fields with user data that provides context for our team.
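
To make that concrete, here is a minimal sketch of how a feedback link could carry usage context into hidden form fields. It assumes the form accepts pre-filled values through URL parameters; the base URL, field IDs, and user attributes below are illustrative stand-ins, not Wufoo's actual configuration.

```python
from urllib.parse import urlencode

# Hypothetical sketch: build the static feedback link so that hidden fields
# arrive pre-filled with usage context. The base URL, field names, and user
# attributes are made up for illustration.
FEEDBACK_FORM_URL = "https://example.wufoo.com/forms/playbook-feedback/def/"

def feedback_link(user: dict) -> str:
    hidden_fields = {
        "field10": user["user_id"],        # who is giving the feedback
        "field11": user["team_id"],        # which team they coach
        "field12": user["diagram_count"],  # usage context: diagrams created so far
    }
    # The pre-filled values ride along in the URL, so every submission lands
    # in our inbox with the user's context already attached.
    return FEEDBACK_FORM_URL + urlencode(hidden_fields)

print(feedback_link({"user_id": 42, "team_id": 7, "diagram_count": 35}))
# -> https://example.wufoo.com/forms/playbook-feedback/def/field10=42&field11=7&field12=35
```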

In the task-based loop, we laser-focused our questions around our goals. We knew the exact context the user was in and could investigate why they were taking an action we deemed problematic.

(Pictured: the task-based feedback form we developed to figure out why users weren't attaching video clips to their diagrams.)

Once the forms are tailored precisely to the user, they need to be timed appropriately. For task-based feedback loops, it is important to contact the user right after they have used the particular feature; for tasks that take longer to complete (1-2 hours), the timing of that contact matters even more. Feature-based feedback loops can be constantly available or targeted at users after they hit certain usage milestones. It is important to discuss how best to throttle these notifications: be careful not to bombard a user, or the feedback you receive could be curt and un-insightful. Most task-based notifications can be limited to one per user, and depending on how many users a feature has, it may also be important to throttle the percentage of users who receive them.
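
As a rough illustration of those two throttling rules, here is a small sketch. The storage, task names, and sampling rate are hypothetical stand-ins for whatever persistence and notification plumbing a team already has.

```python
import hashlib

# Sketch of the two throttling rules described above. `already_notified` is a
# stand-in for real persistence (a database table, a user property, etc.).
SAMPLE_RATE = 0.25                              # contact only a fraction of eligible users
already_notified: set[tuple[int, str]] = set()  # (user_id, task) pairs already emailed

def should_send_task_feedback(user_id: int, task: str) -> bool:
    # Rule 1: at most one notification per user for a given task.
    if (user_id, task) in already_notified:
        return False
    # Rule 2: deterministically sample a percentage of users so a popular
    # feature doesn't bombard the user base (or the team's inbox).
    digest = hashlib.sha1(f"{user_id}:{task}".encode()).hexdigest()
    if int(digest, 16) % 100 >= SAMPLE_RATE * 100:
        return False
    already_notified.add((user_id, task))
    return True
```

Hashing the user and task deterministically keeps a given user in or out of the sample consistently, rather than re-rolling the dice on every visit.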

Once the forms are out in the wild, it is important to discuss the follow-up plan. By default, any time a user sent in a feedback form, every team member was notified via email (again, it's everyone's job). Whenever a user takes the time to tell you how to improve your product, give them the courtesy of a response. Each feedback notification includes the response number in the subject line (Wufoo does this automatically, e.g. Hudl Playbook Feedback #61). From there, split up which team members cover which responses: person A covers all responses that end in 0 and 1, person B covers responses ending in 2 and 3, and so on. Each user should get, at minimum, a message of thanks, though the expectation is that this is an opportunity to follow up with questions and investigate any issues they encountered.
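
The rotation itself is trivial to encode; here is a sketch with placeholder team member names, using the Wufoo-style subject line from the example above.

```python
# Sketch of the follow-up rotation: the last digit of the Wufoo response
# number decides who owns the reply. Names are placeholders.
OWNERS = ["Person A", "Person A",   # responses ending in 0, 1
          "Person B", "Person B",   # 2, 3
          "Person C", "Person C",   # 4, 5
          "Person D", "Person D",   # 6, 7
          "Person E", "Person E"]   # 8, 9

def assigned_owner(subject: str) -> str:
    # Subject lines look like "Hudl Playbook Feedback #61".
    response_number = int(subject.rsplit("#", 1)[1])
    return OWNERS[response_number % 10]

print(assigned_owner("Hudl Playbook Feedback #61"))  # -> Person A
```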

Finally, it is important to take time to synthesize all of the user feedback. Hold a meeting every two weeks where all of the responses are inventoried and time is dedicated to discussing the priority of any enhancements or fixes to the feature.

Specific challenges and failure stories.

How do I share?

The biggest challenge our feature-based feedback loop helped uncover was our lack of clarity on how to share the diagrams the coaches were creating.

We anticipated that our users would want to organize their plays before sharing them with athletes. Because of this assumed flow, we embedded our sharing tool within an organization entity called an Install, which let users put diagrams in a custom order before sharing them out. In short, you had to create an install before sharing plays.

What we found was that even though users had created some installs, they still didn't know how to share with their athletes.

(Pictured: an email from a user asking how to share plays. The user was so close to the sharing tool, how could they miss it!? More like, how could we have missed that!)

Nothing causes you to look in the mirror more than receiving emails like the one above. The user was in the exact area where they could share their plays, but they still didn't know how to do it!

These types of emails forced us to spend time fine-tuning our sharing workflow. We eventually pulled the sharing workflow out of the install creation (organization) workflow. Unfortunately, it took our users continually calling us out for us to make the change.

How do I attach video?

One big piece of feedback we received from our feature-based feedback loop was that users wanted to attach video clips to their diagrams. Coaches knew that seeing video examples of a diagram would allow their athletes to master those concepts. When we first built this out, we saw very low usage. Only 6% of coaches were attaching clips to their diagrams. But coaches had told us they wanted this!

To investigate this issue, we set up a task-based feedback loop: each time a user chose not to attach a clip, we sent them an email asking why.
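
Mechanically, the trigger was simple. Here is a sketch of that kind of hook, with the event handler, template name, and callables all made up for illustration (and the one-notification-per-user throttling from earlier still applying):

```python
from typing import Callable, Optional

# Hypothetical hook: when a coach saves a diagram without attaching a clip,
# queue the "why didn't you attach video?" email. `should_notify` stands in
# for whatever throttling check the team uses (e.g. one email per user).
def on_diagram_saved(user_id: int,
                     clip_id: Optional[int],
                     should_notify: Callable[[int], bool],
                     send_email: Callable[..., None]) -> None:
    if clip_id is None and should_notify(user_id):
        send_email(to=user_id, template="why-no-clip")  # made-up template name

# Example wiring with dummy callables:
on_diagram_saved(42, None,
                 should_notify=lambda uid: True,
                 send_email=lambda **kw: print("queued email:", kw))
```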

We actually had a similar workflow already built out in a different part of our website and our users were quick to point out that they preferred that method of selecting video. In our minds, that workflow was convoluted and confusing.

That forced us to respond to our feedback emails and dig even deeper into what users were looking for. What were we missing? We found that our users wanted metadata around each clip that they could use to decide whether or not to attach the clip to the diagram. In short, we were not giving the user enough information to make an informed decision.

We started experimenting with adding different pieces of metadata to the clips and continued to prod our users for their thoughts. We eventually found the right mix of data that made the feature impactful for coaches and silky smooth to use. We ended up seeing a video attachment rate of greater than 20%.

One other important thing to mention: not only did this feedback loop allow us to increase our video attachment rate, it also allowed us to set proper goals. Initially we were hoping 70% of diagrams would have video attached to them. By asking coaches what they looked for in clips and figuring out what their ideal playbook looked like, we realized we had been over-aggressive with our initial estimates. A number of factors, including new schemes, missing video, and low-quality video, meant coaches didn't want video on every diagram. 50% turned out to be a more realistic goal for our video attachment feature. Needless to say, there is still work to do!

Tangible Outcomes

The ideal flow of our feature is the following:

Coach Creates Diagrams -> Organizes Diagrams -> Shares Diagrams -> Athlete Consumes Diagrams.

Based on the feedback described above, we found that steps 2 and 3 are interchangeable. After the changes to the sharing workflow, we saw a 2x increase in the amount of sharing coaches were doing. We also saw a 2.5x increase in unique athlete usage while diagram creation remained static. So while there was no further input into the top of the funnel, we ended up with a huge increase in both diagrams shared and the number of engaged users at the end of the funnel.

Sharing and user engagement increased 2.5X while diagram creation remained static.

More engaged athletes meant coaches found more value in our tool, and coaches who find value in our tool pay us to use it. Before this study, we had found that 90% of the customers who cancelled had never completed that usage loop. We have since cut the number of users who don't complete that usage loop in half, which effectively cuts cancellations in half as well.

What systems do you have in place to reinforce these feedback loops, and how big is your team? Was your team resistant at all to any of these processes?

At Hudl, we organize our product team into distinct squads around market verticals. Each team is made up of 5-7 members (1 Project Manager, 2 Developers, 1 Product Designer, 1 Quality Assurance Analyst). Our squads function as the product owner and have the tactical and technological flexibility to continuously test and release features.

It is important that each team has the ability to act on this user feedback. If the team needs to have its decisions rubber-stamped by a higher-up who doesn't participate in the feedback loop, there will be conflict rooted in the lack of shared understanding. We view customer feedback as part of every person's job responsibilities.

At the beginning, our developers were concerned about being tied up in a mass of emails, similar to our support team. Over a nine-month period we answered over 850 emails as a team, which averaged out to about 20 emails per team member per month. The instant validation and feedback we received as a team made that investment seem like a bargain. Screening some of the responses our developers sent made me realize they care about the work they do even more than I imagined. Their responses to our customers demonstrated more attentiveness and sensitivity than I had in my own.

The founders and company leaders had always rooted our development efforts in customer feedback. Even so, this approach raised concerns from them that our team would eschew hard usage data in favor of feedback forms and emails; we had invested heavily in usage data aggregation tools like Splunk and Google Analytics. As we went down this path, they realized that this feedback technique blends users' thoughts, ideas, and pain points neatly with hard usage data.

Other team issues we encountered initially were around the discomfort of hearing raw feedback from users. We seek out and encourage real talk from our users, and hearing it for the first time can be shocking for some. This tongue-in-cheek quote from our product designer shows how important it is to have a system for synthesizing feedback and prioritizing enhancements:

"Sketching up new versions of all the screens we've spent all this time working on, since feedback emails have me convinced that nothing works and everything is terrible everywhere."

Overall, this integrated feedback loop led to instant discussions around enhancements from the team. We were able to talk about problems as our users were reporting them. These discussions led to quicker action and more buy-in on what should be tackled next.

That buy-in led to a closer bond within the team and increased motivation to tackle the problems the team faces. It gives every team member a feeling of ownership in the product.


Greg Nelson is the Head FB Coach at Lincoln Lutheran and works for @Hudl building awesome tools for coaches.