Is Quality Matters A Cult?

MSU Hub: Design and Innovation in Higher Ed
11 min read · Sep 13, 2021

by Dave Goodrich, Learning Experience Designer at Michigan State University Hub for Innovation in Learning and Technology


Recently, a post titled “The Cult of Quality Matters” came up on my Twitter feed from some of the good folks over at Hybrid Pedagogy. As I write this, I am facilitating a two-week Quality Matters workshop for faculty and staff at Michigan State University. As someone who has had fairly extensive experiences in my past with both cults and Quality Matters, I quickly devoured it. I came away with a few takeaways along with some thoughts, questions, and concerns.

For reference, Quality Matters is a non-profit “quality assurance organization.” The mission of Quality Matters, according to their website, is to promote and improve the quality of online education. When people talk about QM, they are usually talking about QM’s review and certification process, and the rubrics that drive it.

My hope for this post is to keep a conversation and dialogue going around this increasingly important topic for those of us who help design and teach for online and blended learning environments. This post is not meant as a rebuttal or defense of QM so much as it is intended to underscore the importance of viewing it through a critical lens while recognizing both what it is useful for and what it is not intended to do.

First, I’ll share some thoughts that raise some questions and conclude with concerns I have.

Claims that the QM rubric “denies the subjectivity of human judgment” or that it lives “in a world where there are no humans altogether” are not only inaccurate, but they miss a key characteristic of the purpose of the rubric itself.

Let me begin by saying that I have been meaning to write a critical piece about Quality Matters for some time now, but I have been putting it off due to the tricky dance of avoiding unwarranted claims that lack evidence and simply not wanting to come across as unbalanced in my assessment.

I’m glad now that I have waited, as Martha Burtis and Jesse Stommel have beat me to the punch. Their article provides a useful critique and even remixed language for the essential QM standards, written in a humanized tone worth printing out and hanging on the wall for any educator. Seriously, fantastic stuff worth checking out. Still, there are claims made in it that are simply false and that should not go uncontested.

That doesn’t mean there are not many things in their article that I think we all could get behind. Namely, I would agree that we should all “support nuanced, complex conversations about pedagogy and design,” or that we need to “valorize the collaborative (and sometimes messy) work of teachers and designers,” and “design for and with the students who show up to our physical and virtual classrooms.” I am not sure that QM necessarily hinders any of these efforts. In fact, in my experience and research, I have engaged with many examples of QM helping do these very things.

I should note that I have never been a staunch defender of Quality Matters, even as I have used it at arm’s length over the past 12 years in my work as an online instructor and learning experience designer. Instead, I try to recognize the myriad efforts made to enhance these experiences, and I question the supposed need to pit them against one another. I believe that the majority of educators are all, as Martha insightfully describes, “grappling with the messy, often chaotic, human work of teaching and learning.” What I’m less sure of is that “systems” like QM are direct agents geared toward attempting to “cover that work up with neatness,” or that they are what needs resisting, per se. Still, it is important to better understand what QM and similar quality-improvement-oriented systems are capable of helping with, along with their rigid limitations and potential unintended consequences.

In that vein, I appreciated that Burtis and Stommel recognize that “none of this is to say the Quality Matters rubric has never been used to support good pedagogy” and that “there is certainly truth to the point that no tool is perfect, and we all bear a responsibility to think critically not only about the adoption of new tools and techniques but how we choose to implement them within our specific institutional contexts.” I wish they had begun their piece that way, because likening QM to a cult and claiming throughout the post that it unavoidably inhibits innovative and humanizing forms of teaching seemed contradictory and unnecessarily antagonistic.

None of QM is necessarily contradictory to the practice of un-grading or alternative forms of assessment for that matter. It could be argued that the codification of the rubric itself is actually designed out of respect for the messiness of learning.

For instance, their claims that the QM rubric “denies the subjectivity of human judgment” or that it lives “in a world where there are no humans altogether” are not only inaccurate but also miss a key characteristic of the purpose of the rubric itself. Namely, QM strives to be overtly clear that the rubric criteria focus on course design decisions only, not on the teaching itself. This boundary is often misconstrued at first glance as an attempt to remove the human teacher from the equation, but this is untrue. Instead, QM emphasizes the importance of the roles of the educator and the learners in online and blended learning by leveraging sound course design decisions to help both learner and teacher spend more of their time engaging with each other and with the course content. Doing so puts the focus on the learning and the interactions without the unnecessary distraction of basic navigational, technological, institutional, administrative, curricular, and support-related questions. It is an interesting criticism coming from an article that quotes Sean Michael Morris stating that “we must be willing and ready to unseat the teacher,” which could also be misconstrued at first glance as attempting to take the human subject out of the picture.

“We must be willing and ready to unseat the teacher” could also be misconstrued at first glance as attempting to take the human subject out of the picture.

Burtis and Stommel made further claims about QM and the rubric for online course design, too many to fully address here; responding to each could warrant its own blog post. Instead, I’ve tried to capture and categorize the claims in the nested lists below. I omitted ones I have already mentioned or that were generally redundant, but were there any that I missed?

Burtis and Stommel claimed that QM:

Over-structures:

  • “relies upon and encourages overwrought course structures”
  • “encourages a kind of tidiness of design that obstructs learners from choosing their path or embracing emergent activities.”
  • “leaves little space for alternatives to traditional assessment.”
  • “could lead to faculty imposing heavy-handed rules about discussion forums”
  • “could result in a syllabus with a laundry list of expectations of pre-existing knowledge instead of an open conversation with students about past learning and how it dovetails with the goals of the course”

There’s no place where QM recommends longer syllabi or compliance over relationships. Further, exploration and risk-taking should not be threatened by the importance of providing institutionally consistent policy information to students in their courses. I do think it is important to be sensitive to the ill effects of being overly prescriptive in discussion expectations; still, QM encouraging clarity around these expectations does not necessarily mean it requires or emphasizes being overly prescriptive. Further, is it not possible for “expectations of pre-existing knowledge” to be disclosed along with “an open conversation with students about past learning and how it dovetails with the goals of the course”? I do not see why these must be mutually exclusive. Alignment and clarity between objectives and activities could actually greatly enhance students’ ability to make informed choices about learning pathways or emergent activities. And QM strongly encourages diverse forms of assessment.

Over-simplifies:

  • “encourages a paternalistic centering of students and predetermined instructor-prescribed objectives.”
  • “irreconcilable with the complex, multi-faceted, and emergent practice of un-grading”
  • “discourages respect for the messiness and complexity of learning”
  • “obstructs students from being coauthors of their learning by overprescribing and predetermining outcomes at the outset”
  • “incapable of neatly measuring the complex set of human experiences, behaviors, and interactions found in human learning”

What exactly does a “paternalistic centering of students” mean? I fail to see what is wrong with predetermined learning objectives. None of QM is necessarily contradictory to the practice of un-grading or alternative forms of assessment, for that matter. It could be argued that the codification of the rubric itself is actually designed out of respect for the messiness of learning.

Overly-administrative:

  • omits certain words that “do not appear anywhere in the rubric: ‘community,’ ‘agency,’ ‘inclusivity,’ ‘flexibility,’ ‘joy,’ ‘compassion,’ ‘question,’ and ‘human’”
  • “loaded with administrative box-checking for accessibility instead of actual care”
  • “barely scratches the surface of what it means to address accessibility (or diversity)”
  • “the onus is put on students to protect their data and privacy”

Those words may not appear anywhere in the general standards, but that doesn’t mean they do not appear in the in-depth annotations. Of course, just because those words do not appear out front also does not mean that the rubric is against any of those things. The rubric’s focus is on caring for the student through good design. It does not focus in any way on instructors’ behavior or posture in the course. In this way, it respects the autonomy of educators, who cannot be a product of design. Still, I do think this is a relatively fair critique. There is a lot of work to be done in the areas of accessibility and privacy. In fact, these standards alone could be a whole rubric in and of themselves. In that way, I think QM is explicit that these are very basic guidelines that do only scratch the surface.

Rubrics (in general):

  • “are especially damaging to those who would be most successful and to those who are struggling (according to Hurley)”
  • “are authoritarian, even colonizing, at their core.”
  • “are mechanisms for keeping complexity at bay”
  • “are mechanisms for controlling people”
  • “reinforce and rely on systems that oppress students”

These are unfair characterizations of rubrics. Of course, they are tools that can be used in these problematic ways, but they can also be used in ways that are extraordinarily liberating. What evidence is provided for the claim that rubrics are damaging to students? It is important to note here that rubrics can be written in ways that actually enhance creativity and flexibility in alternative assessment. QM makes no claim that all the messiness of learning can easily be captured through a rubric.

QM rubric:

  • “is the least necessary of rubrics”
  • “provides no incentive (and there are risks) to pushing past boundaries”
  • “empty of ethos”
  • “patronizes its users”
  • “empty of practical advice”
  • “gives administrators a safety blanket”
  • “confuses teachers (or gives them a false sense of ‘security’)”
  • “is not the first place we should be turning as we begin to imagine what online and hybrid learning could be”
  • “feels like a crude (and mechanistic) tool for administrators and institutions to police teaching”
  • “exploits the precarity of adjunct and contingent educators by how the rubric is deployed with rhetorical positioning by institutions”
  • “does harm to students”

It is unclear to me how QM is used to patronize its users or exploit the precarity of adjunct and contingent educators. I’m not saying this could not or does not happen. I am simply unaware of any instance in which it has been used in these ways. Of course, how the rubric is used and framed by any institution is extremely important and is actually emphasized as such in QM training and materials. Characterizing the rubric as a crude tool for administrators to police teaching really misses the entire point of why it exists. Further, the QM rubric is not designed to be a confining boundary condition, but rather a foundational framework upon which to build or design one’s course. It is a starting place and not a prescriptive destination.

Now, is the rubric perfect? No, of course not.

Does it oppress or harm students? No; that claim strikes me as entirely disingenuous.

Is it improving over time? Yes, it has and will continue to do so through thoughtful critiques, as demonstrated in both posts and various other educational arenas.

Are there other rubrics to consider that are similar in focus, less expensive, or even free? Absolutely. In a recent blog post called “The Win-Win of Peer Reviews for Online and Hybrid Courses,” Erica Venton and I described MSU’s approach of providing access to several course quality rubrics as guides, while programs and departments decide what is best for their unit in terms of quality measures. MSU has chosen to make QM available to anyone at MSU, in partnership with multiple units across campus, in order to provide a well-recognized benchmark for a course or program to think about quality in its online or hybrid courses. Individual faculty and even full programs who volunteer to use it find that it helps develop a shared vision of a base level of course quality that all members of the community agree on and strive for.

Still, in no way does QM claim that the rubric is all-encompassing or that compliance with it necessarily determines whether a course is successful. Instead, it can provide a baseline framework for an online or blended course, one designed using sound criteria to build on. New and seasoned educators alike can be assured that the criteria within the rubric are based on evidence from the many early studies conducted during the field of remote learning’s adolescence. In Burtis and Stommel’s words, QM is a model that I believe serves “only as a set of possible starting points, opportunities for critique and conversation,” as we are doing right now.

When Burtis and Stommel assert that “there can be no neat and tidy distinction between the kind of design we do for learning on-ground vs. the kind of design we do for learning online,” I am perplexed by the carelessness and potentially dangerous thinking hidden in this statement. First, it makes me wonder if there is something inherently wrong with neatness, tidiness, or clarity, which are called out numerous times in their piece. Succinct, orderly, and pointed language is difficult work, as any writer can attest, and it is work that can provide distinctive value on its own merit even if it is not exhaustive or comprehensive. Beyond this, assuming that design for online or blended education is, or even should be, the same as for face-to-face learning is fraught with its own oversimplifications. Of course, design decisions across these modalities can and should be guided by the same learning theory scholarship, but the modalities themselves require careful and thoughtful design decisions that respect the affordances and constraints of each medium.

What about you? Let’s continue the conversation…

  • Is there another online course design rubric that you prefer?
  • What have been your experiences using the QM rubrics for designing online courses?
  • Where does the rubric excel, if at all?
  • In what areas do you find the rubric challenging and what are your suggestions for improvement?
  • Do you find these claims to be fair as they relate to QM from your experience?

Learn more about QM at MSU.

Try it for yourself. QM course review opportunities are available.

Also, check out another useful checklist from our own College of Education that is worth considering!
