Design Patterns in Our Trainings
After five years of training public sector employees in critical data analytics skills and techniques, as well as the data-driven mindset that makes those skills relevant and valuable, we've settled on a set of simple design patterns that constitute our recipe for success.
These include:
- Dumbshow exercise for mastery
- Learning in context of examples
- Managing the flow of new information
- Showing the result before doing the work
Dumbshow exercise for mastery
In Elizabethan drama, the Dumbshow was a convention for communicating plot through silent mime and exaggerated physical action. It helped convey the main plot points to audience members who might not follow the drama through the spoken words alone.
In our case, we use a simple preliminary exercise to demonstrate a key idea and help participants feel that the tasks can be not only understood but mastered. This builds confidence for later learning and sets participants up for success as we move through the learning experience.
For example, in our Excel for Data Analysis I, we have participants create a simple pivot table from the available data to show the 311 Service Requests by Borough. It’s a simple exercise with a clear objective and utility that demonstrates a useful skill. We do this before getting into any kind of discussion about data, analysis, or the analytics process in order to give everyone a touchstone experience we can refer to in class.
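The dumbshow itself is done in Excel, but the same counting operation can be sketched in a few lines of Python. The records below are made up for illustration; the class uses a real extract of NYC 311 data.

```python
# A rough Python analogue of the pivot-table dumbshow: counting
# 311 service requests by borough. These records are illustrative,
# not from the actual course dataset.
from collections import Counter

requests = [
    {"complaint": "Noise", "borough": "Brooklyn"},
    {"complaint": "Heat/Hot Water", "borough": "Bronx"},
    {"complaint": "Noise", "borough": "Brooklyn"},
    {"complaint": "Illegal Parking", "borough": "Queens"},
]

# Equivalent to a pivot table with borough as rows and a count of records
by_borough = Counter(r["borough"] for r in requests)
```

Like the pivot table, the result is a simple tally per borough, which is exactly the touchstone chart participants build in the exercise.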
Learning in context of examples
While explanations at the beginning of a block of instruction are helpful for introducing a topic, a verbal explanation alone doesn't transfer understanding. The more I teach, the less I believe there is anyone who truly learns by being told rather than by doing.
What this means in our training is that while we endeavor to share the important points in the short lecture preceding an exercise, we don’t try to ensure complete comprehension before moving into the exercise, assuring the participants (and ourselves) the topic will become clearer in the exercise.
Often, participants will say they understand a topic (and sincerely believe they do) until confronted with an exercise asking them to apply the concept.
For example, in our Introduction to Statistical Analysis class, we introduce the idea of the interquartile range and identifying outliers using 1.5 times this range. Everyone nods in agreement, and then we ask them to calculate this with the data we have. Despite saying they understood the concept, they look lost as to where to begin, and only really grasp the concept after we go through the exercise.
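For readers unfamiliar with the rule, here is a minimal sketch of the 1.5 * IQR calculation the exercise asks for. The data values are illustrative, not from the course dataset.

```python
# A minimal sketch of the 1.5 * IQR outlier rule. The data values
# here are made up for illustration.
from statistics import quantiles

def iqr_outliers(data):
    """Return (lower_fence, upper_fence, outliers) using the 1.5 * IQR rule."""
    q1, _, q3 = quantiles(data, n=4)  # first and third quartile cut points
    iqr = q3 - q1
    lower = q1 - 1.5 * iqr  # values below this fence are outliers
    upper = q3 + 1.5 * iqr  # values above this fence are outliers
    outliers = [x for x in data if x < lower or x > upper]
    return lower, upper, outliers

data = [2, 3, 4, 5, 5, 6, 7, 8, 9, 40]
lower, upper, outliers = iqr_outliers(data)  # 40 falls above the upper fence
```

Note that different tools interpolate quartiles slightly differently (Excel's QUARTILE.INC vs. QUARTILE.EXC, for instance), which is itself one of the details we defer until participants have the core idea.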
In virtually every class, participants make the same mistakes when applying this relatively simple concept, and through those mistakes the class learns the concept more fully. We design for this intentionally, not to confuse, but to create opportunities to learn something more fully by applying it rather than just hearing it described.
Managing the flow of new information
It's important to add to the above that too many details or complications too soon in the learning experience are a recipe for disaster. There must be some understanding of the underlying concept (and ideally an opportunity to "get it") before throwing too many complications and details at participants. I've talked before about teaching for comprehension, not completeness, meaning I want to ensure core concepts are introduced before the edge cases.
For example, in our Introduction to Statistical Analysis class, we first do summary statistics on a small and simple dataset of attributes about the participants joining us for class that day. We then move to a more complicated dataset of vehicle collisions that is easily summarized in a pivot table before running summary statistics.
Only after mastering the basics of summary statistics with a small dataset, and then with a larger and somewhat more sophisticated dataset (vehicle collisions), do we move into working with a more complicated dataset (311 Service Requests) that has missing values and distant outliers. These complicate the analysis but are issues participants are likely to encounter in applying these skills to their work. Discussing them at the beginning would be at best distracting and likely ignored, or at worst overwhelming, inhibiting learning.
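A small sketch shows why these two complications matter. The values below are made up: `None` stands in for a blank cell, and 400 plays the role of a distant outlier.

```python
# Why missing values and distant outliers complicate summary statistics.
# Values are illustrative; None stands in for a blank cell.
from statistics import mean, median

raw = [12, 7, None, 15, None, 9, 400]  # 400 is a distant outlier

# Missing values must be handled before any statistic can be computed;
# mean(raw) would raise a TypeError on the None entries.
clean = [x for x in raw if x is not None]

avg = mean(clean)    # pulled far upward by the single outlier
mid = median(clean)  # much more robust to the outlier
```

With the outlier present, the mean lands far above where most of the data sits while the median stays near the center, which is precisely the kind of discussion these messier datasets make concrete.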
There is an optimal rate of new information where participants are gaining knowledge by integrating it with their existing understanding of the topic without being overwhelmed and possibly shutting down. We are constantly working to find that rate for each participant and design for different rates of flow, with moments of pause and reset for both participant and facilitator.
Showing the result before doing the work
This was an insight shared in our partnership with Matt LeMay and Tricia Wang. While there is something natural in having the answer suddenly reveal itself, often people need some idea of what we are doing and why before they fully commit to the exercise.
In our classes, we will preface an exercise with the result we are going for (a chart, graph or other result) and then discuss briefly why this would be useful before starting to walk participants through the experience of creating it for themselves.
For example, in the dumbshow mentioned above, we display the chart of 311 Service Requests by borough before we start the exercise, so everyone knows what we are creating and why.
After starting to implement this, we’ve seen fewer questions about why we’re doing something or what the point of the exercise is, and more focus on understanding the steps to the task itself.
Conclusion
We’re still learning what works and improving our approach each time we deliver a learning experience. I look forward to updating this post as we identify these lessons. In the meantime, if you’re a past student in one of our trainings, I’d love to hear your feedback on your experience. I’d also welcome feedback from other instructors/trainers in this space on their experiences with any of the above.