Internal Enablement: How We Ditched Simple Surveys and (Finally) Started to Understand Our Learners

Desiree Kielenz
Published in Inside Personio
12 min read · Jul 25, 2023

The value of quality data can’t be overstated, especially for a scale-up tech company that is focused on uplevelling every product we offer to our customers. For us in Internal Enablement, our customers are our own Customer Experience teammates, and our product is learning solutions that enable them to master onboarding as new hires, understand multi-product releases, and develop CX-specific skills. We customize solutions to team-specific needs, strategically partnering with CX Management and our learners.

And, of course, we gather data to measure if and how our learning initiatives are actually effective. If you’re like us, you may think you’re collecting useful data by sending surveys to your learners. But — and this is a question we really had to dive into — does that data really tell you anything about whether your programs are actually effective?

Our survey approach was fairly typical, asking learners to rate statements on a Likert scale, ranging from “strongly agree” (5) to “strongly disagree” (1). This type of scale is so handy because it doesn’t matter how many statements you put up for rating — you can boil everything down to easily reportable scores or trendlines over time.

And, of course, it’s important to be able to measure the success of each initiative for future planning, learning budget allocation, strategic and workforce planning, or performance growth cycles. But what we realized at Personio is that collecting this type of data means we’re missing the opportunity to actually listen to our learners.

Take this example:

Our learners used to rate an in-person learning course during the last 10 minutes of the session. We provided them with five questions to rate on the 1–5 scale, easily accessible via a QR code. The result could have been, for example, an average of 4.7 across all questions. Success! But, what does the rating of 4.7 out of 5 tell us?

We believe that doesn’t actually tell us anything about how effectively the learning course supported learners in developing new skills and capabilities — because we didn’t ask the right questions at the right time.

It tells us nothing about whether the learners are supported in applying what they have learned in their day-to-day job — because the survey was sent too soon after the session for learners to know yet.

It tells us nothing about how to support high performance in the business and how we can improve our learning assets — because scores do not provide qualitative, actionable insights.

Furthermore, in Learning, we cannot even rely on our 4.7 score, because how a person rates is closely tied to their personality and their mood.

A new approach

After realizing the limitations of our previous approach, we were inspired by Will Thalheimer’s book, “Performance-Focused Learner Surveys: Using Distinctive Questioning to Get Actionable Data and Guide Learning Effectiveness.” As a result, we have moved away from meaningless surveys to using Performance-Focused Learner Surveys (PFLS) and the accompanying philosophy.

Today, we’ll introduce our approach to PFLS, help you understand our goals and motivations, and show you how we brought the theory to life in five steps. Let’s jump right in!

Introducing Performance-Focused Learner Surveys

Performance-Focused Learner Surveys are a set of questions provided to learners, either directly after a learning event or as a follow-up survey about 30 days after learning completion. They were created to provide learning professionals with meaningful feedback on how learners are learning, and what they need in order to succeed.

Specifically, PFLS can help you:

  • Ask distinctive questions with multiple-choice answers that can be translated back into concrete actions
  • Gather meaningful feedback on key components of your learning asset
  • Reveal what keeps learners from delivering high performance
  • Convey messages to learners and nudge them into action

We’ve used our learnings from the book to develop a user-friendly framework for our learning professionals, including templates, guidelines, and resulting data to drive informed decisions.

Setting the goals

Our primary goal in developing this framework was to establish a genuine feedback loop that enables us to understand how learning solutions actually change learners’ behavior in their daily business — and therefore how we contribute to the overall CX department goals.

We aimed to facilitate a more learner-centric and effective approach to learning, and therefore become an integral lever in driving high performance in our Customer Experience teams.

Along the way, we realized that using PFLS is more complicated than using Likert scale-based surveys, and therefore that we needed to foster a mindset shift amongst learning professionals to embrace this new methodology. Overall, we took five specific steps to implement PFLS:

  1. Establish a PFLS project lead
  2. Create foundational documentation
  3. Change the team’s mindset
  4. Collect actionable data
  5. Generate data-informed recommendations

Establish a PFLS project lead

As a first step, we roughly scoped out the project within our team. Helpfully, the book provides tons of tangible examples and templates for understanding and launching the methodology.

One of our teammates, Nadine, volunteered to take on this challenge, using it as a great opportunity for career growth and to gain new experience. (In fact, successfully leading and launching this project was a major factor in her new promotion!)

To kick it off, Nadine looked into our current data to define the status quo and better understand our starting point. She also familiarized herself more with the PFLS theory, brainstormed with her network, and even reached out to Thalheimer himself to consult on initial ideas.

With the foundation set, Nadine then defined three guiding principles to help her and the rest of us stay hyper-focused throughout the project.

Three guiding principles for implementing Performance-Focused Learner Surveys in CX

Create foundational documentation

We are a team of 12 learning professionals, including Learning Specialists, who run learning programs across Customer Experience; Learning Partners, who align closely to the enablement needs of our CX functional teams; and Learning Experience Designers, who consult on, design, and develop learning experiences. Before we implemented PFLS, each team member was following their own approach of surveying learners and building their own routines from there. So we knew that we needed to remove as many barriers as possible for our team in order to ensure the successful adoption of this new methodology.

To set us up for success, Nadine developed two engaging survey templates, including “ready-to-go” formulations as well as a selection of carefully curated questions and answer choices from Thalheimer’s book. We ended up choosing mostly multiple-choice questions, with roughly twenty percent open questions for learners to add more context.

Below are two examples of the multiple choice questions we ask (here for a course about “objection handling”):

How ABLE are you to apply OBJECTION HANDLING in your role?

◻️ My current role does NOT ENABLE me to use OBJECTION HANDLING

◻️ I am STILL UNCLEAR about the topics learned for OBJECTION HANDLING

◻️ I need MORE GUIDANCE on OBJECTION HANDLING

◻️ I need MORE EXPERIENCE to be secure on OBJECTION HANDLING

◻️ I CAN SUCCESSFULLY apply OBJECTION HANDLING (even without more guidance or experience)

◻️ I CAN perform now at an EXPERT level in applying OBJECTION HANDLING

How frequently have you applied OBJECTION HANDLING since completing the LEARNING COURSE?

◻️ NOT YET, my current role does not allow me to use OBJECTION HANDLING

◻️ NOT YET, but I am confident I WILL use OBJECTION HANDLING

◻️ ONCE or TWICE, but I need MORE GUIDANCE to use OBJECTION HANDLING

◻️ A FEW TIMES and I feel I can SUCCESSFULLY use OBJECTION HANDLING

◻️ EVERY DAY, as it is essential to my current role

In addition to the survey templates and an analysis guide, Nadine also delivered documentation that detailed our purpose, goals, and approach to implementing PFLS. She also offered weekly Q&A hours for any teammates who needed further support.

Implement PFLS at formal learning touchpoints

In the realm of Learning, there are three types of touchpoints: social, on-demand, and formal. Social touchpoints involve learning interactions with other people, such as Q&A sessions or mentoring. An on-demand learning touchpoint happens when learners have 24/7 access to assets without the need for extensive searching, such as job aids or one-pagers. A formal touchpoint, on the other hand, is an instance of learning in which the learners experience a clear start and end point, as well as a structured sequence or path to follow. Classroom sessions and mandatory eLearnings are prime examples of these formal touchpoints.

For our purposes, we chose formal touchpoints as our first focus area, because a clear beginning and end point make it much easier to implement PFLS surveys.

To help us measure the impact of these formal touchpoints, we send two different surveys — one directly after the learning session, and one about 30 days after. The first survey focuses on the learner’s perception of the session, while the second better measures how effective the session actually was.

  1. Directly after the learning session: Surveying learners immediately after they have completed a learning session can lead to a biased perception of their level of understanding, as we (and they) don’t yet know if they’ll retain and apply this knowledge over time, or forget it. Therefore, the first survey serves to capture perception: their thoughts, feelings, and opinions on the learning asset and our brand as an Enablement team.
  2. Around 30 days after the learning session: Only after a reasonable amount of time has passed can we actually measure learning effectiveness. To do this, we use survey questions that focus on learners’ ability and opportunities to apply what they have learned, identify any obstacles they may be facing, and evaluate their managers’ support. Through these surveys, we also convey messages and shape learners’ understanding of the learning culture we are aiming to cultivate.

(All of this said, it is essential to acknowledge that surveying is not the only method for measuring learning effectiveness. We also assess behavioral changes within the organization, which manifest as improvements in performance metrics.)
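To make this two-survey cadence concrete, here is a minimal scheduling sketch in Python. It only encodes the send dates described above; the function name and output shape are our own illustration rather than anything prescribed by the PFLS methodology.

```python
from datetime import date, timedelta

def pfls_send_dates(session_end: date) -> dict[str, date]:
    """Return when the two PFLS surveys for a formal touchpoint go out:
    a perception survey right after the session, and an effectiveness
    survey roughly 30 days later."""
    return {
        "perception_survey": session_end,
        "effectiveness_survey": session_end + timedelta(days=30),
    }

# Example: pfls_send_dates(date(2023, 7, 25)) schedules the follow-up
# survey for August 24, 2023.
```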

After making these changes, we didn’t just want to measure the success of the learning session. We also wanted to measure the success of the new surveys themselves (because of data!). So we asked our learners, and received some significant feedback: 85 percent of respondents said they liked the new versions better, and our overall response rate increased by nearly 50 percent, which tells us that the new surveys capture learners’ attention in a way that the prior versions didn’t.

Change the team’s mindset

As I’ve already mentioned (and as probably seems obvious), it is a lot easier and more convenient to rely on Likert-scale surveys and calculate vanity-metric scores than to invest time in comprehensive surveying and evaluation. That, of course, meant we needed a compelling reason to convince the team to embrace a more effort-intensive approach. Moreover, our enablement and communications strategy was significantly influenced by overall capacity issues and high workloads within the team.

Both of these factors shaped the narrative we conveyed and the level of support we provided. For support and enablement, we provided micro-learnings, documentation, interactive Q&A sessions, and educational support in setting up surveys. In communicating the change in the first place, we prioritized two key messages:

Start with your why

While effective learner surveys may not seem as significant as increasing MRR or shipping groundbreaking new product features, they are still essential for us in internal enablement to capture feedback. We explained the benefits of this change to our four major stakeholder groups, and thereby established four layers of relevancy:

  1. Our learners can finally voice their needs
  2. Learning professionals can classify learners’ feedback, combine it with their expertise, and build more effective learning assets
  3. Managers become more aware of their essential role in supporting learning application
  4. The business benefits from higher performance and happier employees

Disrupt deficit thinking

Before rolling out the new approach, it was also important to thoroughly test it with a couple of team members. Testing helped reveal areas for improvement, as well as “aha” moments, highlights, and what success can look like in practice. Additionally, this process helped establish allies who championed the approach during the rollout.

We discovered two specific things during our testing: that we had placed too much emphasis on developing comprehensive survey templates; and that we needed to refine our core purpose and enhance our storytelling around the “why” in order to achieve a successful launch. Once we were able to make a few changes to solve for these, it was time to launch.

In order to combat any deficit thinking in the team (a mindset that focuses on the limitations and challenges of an approach rather than its advantages and potential), we decided to transparently share the results of the testing period and include the learnings and improvements in our launch communication. We acknowledged the potential challenges, and highlighted the exciting opportunities this approach would bring if we could overcome them. We also gave a few concrete examples, including a heterogeneous learning program made up of many different learning assets that PFLS output could massively improve, with the improvement showing up in feedback and public praise.

Collect actionable data

After launching within our team, the next step was to review incoming feedback and prepare it for further use. Here as well, we followed Thalheimer’s approach and classified multiple-choice responses into the standards “alarming,” “unacceptable,” “acceptable,” “superior,” and “brilliant.” We did this in the background, meaning the learners did not see any standards and therefore were not biased in their answer choices.
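As a small illustration of what “in the background” means in practice, here is a Python sketch using the objection-handling answer choices from earlier. Which standard each answer maps to is our illustrative guess for this example, not a canonical assignment from the book.

```python
# Hypothetical mapping from answer choices to standards. Learners only
# ever see the answer texts, never the standard labels on the right.
ANSWER_STANDARDS = {
    "My current role does NOT ENABLE me to use OBJECTION HANDLING": "unacceptable",
    "I am STILL UNCLEAR about the topics learned for OBJECTION HANDLING": "alarming",
    "I need MORE GUIDANCE on OBJECTION HANDLING": "unacceptable",
    "I need MORE EXPERIENCE to be secure on OBJECTION HANDLING": "acceptable",
    "I CAN SUCCESSFULLY apply OBJECTION HANDLING (even without more guidance or experience)": "superior",
    "I CAN perform now at an EXPERT level in applying OBJECTION HANDLING": "brilliant",
}

def classify(responses: list[str]) -> list[str]:
    """Translate raw answer texts into standards for later analysis."""
    return [ANSWER_STANDARDS[answer] for answer in responses]
```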

We analyzed open questions individually. Since we only use two to three open questions per survey, leveraging the outcomes was not unreasonably time-consuming. These open questions gave us more detail on learners’ needs, preferences, and learning environment. One example: “What hindered you from learning?”

Reviewing the common themes of these answers gives us the opportunity to extract actionable insights, which enable us to improve performance in the business.

To enable our team to start this process, we built a user-friendly Google Sheets setup that our learning professionals could use to connect their survey responses to the predefined standards and scan through open-text answers. COUNTIF formulas calculate how many learners rated the learning in a certain range, making it easier for us to understand trends, note down similarities, and define focus areas.
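For readers who prefer logic to spreadsheets: in Sheets, each of those counts is a formula along the lines of =COUNTIF(C2:C200, "superior") (the column range here is invented for illustration), and the equivalent tally in Python is only a few lines:

```python
from collections import Counter

STANDARDS = ("alarming", "unacceptable", "acceptable", "superior", "brilliant")

def tally(classified_responses: list[str]) -> dict[str, float]:
    """Count how many responses landed in each standard and return the
    share of each, mirroring what our COUNTIF columns compute."""
    counts = Counter(classified_responses)
    total = len(classified_responses) or 1  # guard against empty surveys
    return {standard: counts[standard] / total for standard in STANDARDS}

# Example: tally(classify(raw_survey_responses))
```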

Generate data-informed recommendations

The last step for us was to recommend improvements to the learning asset and make the learning professionals aware of potential findings. They were then asked to take action, improve their asset, and pursue an even higher degree of learning effectiveness.

In our first iteration, Nadine analyzed and prepared the feedback for the learning professionals, as her deep dive into the data throughout this project had sharpened her understanding of our learners and our opportunities for improvement.

Since launching, we’ve seen three major kinds of output emerge from our PFLS:

  1. actionable insights to increase learning effectiveness and thereby the performance of learners on the job
  2. an impression of the learning culture within Personio
  3. data to create learner personas, for example, preferences on learning formats or the degree of manager support in the application of learning

We piloted the data-informed recommendations in our month-long product onboarding program. One quick win: we discovered that new joiners lacked knowledge of how to properly filter our knowledge management base and therefore struggled to find meaningful content. We reported the finding back to the Knowledge Management team for further investigation, and recorded a short video tutorial for future new joiners. In Q2 2023, we generated, on average, 2.6 distinct, actionable insights per survey, meaning 2.6 meaningful ideas on how to improve each learning asset.

Conclusion

By now, we’ve adopted PFLS into our learning cycle, and the team uses the approach and toolbox independently. We’ve used the data generated by PFLS to improve many learning assets and drive even higher performance within the department. Additionally, we are about to create five learner personas for CX, partially based on the survey data.

This entire process has also highlighted that, for internal learning and training to be successful, the support of managers is crucial. Whether through vocal support for newly learned skills and knowledge or through the allocation of meaningful, related projects, we can’t do it without close alignment across all teams. (How we went about creating an even closer three-way partnership between learners, managers, and us in CX Enablement is a topic for another blog article.)

If you’re interested in joining a team that implements these types of new data-based approaches — or if you want to try learning from our new programs — head to our careers page and apply today!
