Why Quality Matters

By Bethany Simunich, Ph.D., QM Vice President of Innovation and Research

This piece was written as a response to a growing sense that fears related to loss of academic freedom and creativity are being fueled by misinformation, including articles and blog posts written by individuals who purport to be deeply and experientially familiar with QM and its tools when they are not. It’s my hope that it addresses misunderstandings about what QM is, how it is used, and the intended purpose of QM Rubrics. More broadly, I hope my experience and perspective can promote constructive dialogue about the need for and appropriate use of standards for online course quality.


I remember with painful acuity the first time I taught an online course. Having previously taught the course for years in a face-to-face (F2F) format, when asked if I’d teach it online, I quickly agreed, eager to explore this new-to-me modality. This was over 15 years ago…before many institutions understood the differences (and nuances) of teaching in the online classroom, and way before I understood anything about designing or teaching online courses. I thought my experience teaching the course and passion for the subject matter would be enough, and that the technology I’d need to “migrate” my course online would be relatively simple and straightforward, even for a Luddite like myself. I distinctly remember thinking, “How hard can it be?” 

Teaching my first online course was, as you’ve likely guessed, very hard — much more difficult than I had anticipated. It remains to this day the biggest “I didn’t know what I didn’t know” experience of my career. Prior to teaching online, I had never deliberately and strategically designed a course, nor had I received any training in how to do so. Moving online, I quickly learned that I couldn’t create a course in the learning management system the same way I had created it for the on-ground classroom. Like most faculty, I “taught how I was taught,” which for me meant: selecting the textbook, creating lectures to augment that text, developing in-class activities to engage with students and allow them to engage with the material, and creating assignments and exams. I had never truly examined whether these components supported one another and my pedagogical goals, and I certainly never had to create a well-organized web-based layout for a course. Designing a great online course, I came to realize, was not only a skill set I didn’t yet have, it was also complex and incredibly time-intensive. In person, I could change things quickly and easily, and I never had to purposefully create a web-based learning path for students, or be transparent about the design of the course. Face-to-face design and teaching were concurrent and dynamically intertwined in a way that asynchronous online design and teaching were not. 

Consider this: if our habit is to teach as we were taught, what does that mean for those of us who have never taken an online course? Or have never taken a well-designed online course taught by a great online instructor? I had no models. I had never taken an online course or even seen an example of a good one.

I realized too late in the process that teaching online was not simply digitizing and uploading materials from my F2F class. At this point, I didn’t yet know that uploading a bunch of documents doesn’t equal an online course, and when I realized that I couldn’t even create a “bad” online course easily, I was suddenly struck with an uncomfortable feeling of being unmoored in my teaching and frustrated by the lack of guidance and training. I also felt that I had let my students down before the semester even started. I was woefully unprepared. 

I spent the next decade or so learning what I could about designing and teaching online courses, and continue to learn more every day. I left full-time teaching and, after taking additional courses and certificates (I had two master’s degrees and a doctorate, but had never had a course on instructional design), accepted an entry-level position as an instructional designer. Since that time, I’ve worked with faculty at several institutions, helping design and revise hundreds of online courses. I’ve trained over a dozen instructional designers and led ID teams. I’ve spent many, many years doing faculty development work for online design and teaching, and have created and delivered dozens of workshops, several online courses, and over a hundred conference presentations on online learning topics. I’ve helped with the development of several fully-online degree programs and worked to bring administrators, faculty, and staff together through the entire process. I’ve conducted research focused on many aspects of online learning and spent time learning the history of the field, while keeping abreast of new research. I’ve surveyed, interviewed, and spoken with over 500 online students about their experiences. And I’ve continued to teach online. 

Why am I sharing all of this? Because I believe it is crucial to understand the background and experience that anyone who is talking about quality in online learning brings to the table — what time have they spent striving to increase online quality in a variety of ways, in various institutional contexts, and what roles have they performed in implementing quality assurance at scale? 

Humanizing Quality Assurance

Too often, we limit quality assurance conversations to abstract examples that present quality standards as a phantom menace against creativity, while ignoring the actual people who are harmed by a lack of quality assurance efforts and preparation. Allow me to “humanize” quality for a moment. Achieving online quality means successfully addressing a wide range of real-life challenges, such as:

  • The fantastic face-to-face teacher who is afraid they’ll lose their connection to their students once they move online;
  • The instructor who happily eschewed using technology for many years, but now needs it to support their online teaching;
  • The faculty member who doesn’t have a clear idea of how to use web-based navigation and organization to create a learning path;
  • The instructional designer tasked with revising dozens or hundreds of online courses, none of which may live up to the institutional reputation or the learning quality that students were promised;
  • The administrator who has never taught online or learned online, but now leads the decision-making about online strategy;
  • The colleague conducting a peer review of an online course who has never taught online themselves, and is required to use a form designed for F2F teaching evaluation;
  • The instructional designers and faculty developers who receive panicked calls from faculty the week (or weekend) before the term begins, desperate for help for online courses that were assigned or not thought about until the last minute, and are not yet designed or built;
  • The students who feel as though they’re “teaching themselves”, wondering where the professor is, why they aren’t answering emails, and why the course isn’t visible or ready.

I have experienced every single one of these, either as a faculty member or as online learning staff. It is the last one that especially, and regularly, breaks my heart. All of these reflect people in scary, frustrating, or terrible situations. These real-life situations are why we need to talk about the quality of the online learning we offer our students and get beyond the mistaken assumption that elevating quality can only lead to standardized, “cookie-cutter” courses or a curtailment of academic freedom (neither of which I have seen or been shown evidence of). 

The Community Experience

While diving into online learning best practices over the past decade, I discovered Quality Matters (QM) — an international nonprofit dedicated to promoting and improving the quality of online education and student learning. If you are reading this, you likely know QM and are working with them in some capacity. In my former roles at higher ed institutions, I personally used and found QM’s tools and resources to be incredibly valuable. In 2020, I decided to join the team and currently serve as QM’s Director of Research and Innovation. But it wasn’t just about the tools and resources, it was about what QM truly is:

An organization founded by faculty and educational staff that creates and provides tools and processes for quality online learning, and whose work is continuously informed and improved by its members. 

It was, in fact, the collegial community that first drew me to QM — it differed so much from the other educational organizations I interacted with. Instead of feeling nameless and overwhelmed at a conference, I felt included and mentored. I found a community of people who were passionate about creating the best online learning opportunities we could for students, including so many online faculty and instructional designers who shared their knowledge and tips. I felt connected and respected, even when I was new to online design and teaching. 

Unfortunately, these aspects — and so many other important factors — are often lost in the conversations around QM.

The Quality Matters Rubrics

Those deeply familiar with the QM Rubric know that it inherently provides flexibility, laid out well in each Standard’s Annotation, for how the Specific Review Standards can be met. Over the years, I’ve heard many claims about things that supposedly “can’t be done” in a course that still meets quality standards. 

Some examples include:

  • “I can’t include a video to welcome my students”
  • “I can’t utilize ungrading”
  • “I can’t make this work for my practicum course” 
  • “This doesn’t apply to my doctoral students”
  • “I teach a [hands-on course, math course, science course, public speaking course], so these things don’t apply/can’t be done”
  • “I can’t use a flexible, student-inclusive approach to design my course”
  • “I can’t make changes to my course while it’s running/after it’s done”
  • “This goes against my academic freedom — I can’t create the course I want to create, use the content I want, incorporate elements of small teaching, etc.”

I would say every single one of these is a false assumption, and points to a limited understanding of the Rubric, rather than a limitation of the Rubric itself. I am not saying that it’s impossible that there could be a situation or pedagogical approach that is limited by the Rubric, but I am saying that I have yet to discover one. One of my colleagues, for example, teaches his online course by embracing open pedagogy and allowing the students to co-author the assessment questions, suggest or create activities, curate and share content, and also select specific topics for exploration in the course. Nothing about that instructional approach would be prohibited or hindered by the QM Rubric so long as the intention and/or goal of that type of assignment is apparent to the student. 

I understand that it can be easy and tempting to tear down tools for quality assurance…but my experience shows that these tools can help generate ideas for how we can improve and practice good online education. Too often, ill-informed assumptions and opinions steal valuable time from the conversations of “How can this be done?” and “How can we collectively do this better?”, diverting us into exchanges that often result in defensiveness, posturing, and the marginalization of voices and experiences. I would love to spend more time listening, generating possibilities, and co-creating solutions, and much less time defending online quality assurance from hastily made assumptions. It’s important to understand, though, what the tools can and cannot help us achieve. 

The QM Higher Education Rubric is: 

  • The first rubric developed by faculty specifically for the evaluation of online courses, and developed with the intent of collegiality, continuous improvement, and flexible implementation;
  • The only rubric regularly updated by online faculty, distance learning staff, and online experts to reflect the latest in online learning research and pedagogical practice — over 100 independent educators have participated in updating the Rubric, now in its sixth edition;
  • A rubric continuously informed and improved based on usage and feedback from its community;
  • A tool maintained and supported by an educational nonprofit staffed by 44 truly dedicated people, most of whom are former teachers, instructional designers, and educational staff.

Quality Matters has created a quality assurance tool — five tools, actually — that are usable and adaptable across all disciplines, all institution types, all online modalities, and all class sizes. How is that possible? Because at their core, the QM Rubrics are — more than anything else — flexible. 

QM Rubrics:

  • Do NOT require or prescribe a particular pedagogical approach or philosophy, or specific teaching strategies or methods, nor do they dictate types of instructional materials or assessments;
  • Provide Annotations that offer a myriad of ways, though not exhaustive, to meet each Standard;
  • Provide the opportunity to embed yourself in the student perspective.

In short, you simply cannot create and use inflexible, un-adaptable tools when you are serving over 1,500 unique educational institutions and over 100,000 educators around the globe. If the QM Rubrics were truly rigid, inflexible, or an impingement on creativity and freedom, then we wouldn’t see thousands of QM-Certified courses that span countless disciplines, course types, institutional cultures, faculty, and pedagogical strategies. We wouldn’t see a 99% satisfaction rate among faculty who engage in the QM-Certified review process, or data showing that 98% of faculty who engage with our professional development find the information so valuable that they take it back to their F2F classrooms as well. 

It’s important to note, though, that while QM Rubrics reflect well-researched instructional design principles, they’re not a course design checklist, and to see them as such would likely create the assumption that online course design is prescriptive. For faculty who were looking for a design guide, and who especially needed design assistance during the pandemic, QM developed the publicly available Bridge to Quality Design Guide.

It’s important to view the Rubric through the lens that you are applying it, whether via its original, intended use as a review tool for online quality assurance, or in an adaptation of that use — as a tool for information and ideas as you design your online course. Let me give an example:

Standard 5.3 reads: “The instructor’s plan for interacting with learners during the course is clearly stated.” If you’re reviewing a course, you’d then look to the Annotation, which provides more information, including having a clear plan for interacting with students in primary ways, such as responding to questions and providing feedback. It provides several non-exhaustive examples of information that instructors might give to their online students, as well as several examples of where this information is commonly found. It doesn’t prescribe that you adhere to any particular type of grading approach or that you provide a specific type of feedback. It also includes specific information if one is reviewing a Competency-Based Course.

Let’s say that a given course includes information in the syllabus that lets students know that if they email or post a question, they’ll receive a reply within 24 hours during the week and 48 hours on the weekend. Additionally, the instructor lets students know that they can expect to receive feedback on course activities within a week after they submit their work. Students are also informed that their instructor makes the effort to provide feedback within one week so that they can use that feedback to improve their work on the next activity or assessment. The QM-Certified Peer Reviewer in this case isn’t asked if they agree with the policy — they might, for example, feel that a one-week turnaround time is an unreasonable promise, or that they themselves have a 24-hour response time for questions, even on weekends. The Reviewer can provide feedback and suggestions, but they are only evaluating the Standard in terms of whether they, from the student perspective, would understand some important ways their instructor is going to interact with them and respond to their needs in their asynchronous course. This is a great example of how the QM Rubric is not prescriptive… unless one disagrees with the idea that we should let students know when we’ll answer their questions or provide feedback.  

However, if you’re adapting the Rubric as a guide for design, you might be inspired to ask colleagues how they think through their policy, and even what approaches seem to work better for the cohort of students that typically take your class. The Annotation provides a bit of the “why” as well, which can also prompt some good reflection as you design. One note, for example, says: “Frequent feedback from the instructor increases learners' sense of engagement in a course. Learners are better able to manage their learning activities when they know upfront when to expect feedback from the instructor.” This cues you into how this Standard is grounded in research and best practices for student engagement, as well as methods to elevate teaching presence. There are a variety of ways to meet this Standard in your online course, and there is no requirement that all policies look the same or be standardized. 

Beyond Rubrics

While QM is often synonymous with its Rubrics, QM is actually a comprehensive, multi-faceted quality assurance framework, whose use and implementation are customizable to institutional needs and goals. In addition to the Rubric, QM offers multiple course review options as well as professional development opportunities and a number of publicly available free resources.

Just like there are a variety of ways to use the Rubrics, there’s no “one, right way” to evaluate or review quality. QM does offer a pathway for certified, third-party reviews by faculty specifically experienced with online teaching and evaluation — an option rarely presented for face-to-face courses. But those reviews are only one of many ways to meet institutional and student goals for quality. There are also a variety of other review options and pathways available, including: 

  • Internal reviews that combine institutional standards with QM Standards
  • Internal reviews that combine institutional standards with select QM Standards
  • “Lite” QM Reviews that focus only on select QM Standards, as determined by the institution or faculty member
  • Internal reviews that combine QM Standards with other rubric standards
  • Self-reviews done by the faculty teaching the course

All of these options are available, and QM even has a tool to enable this flexibility called My Custom Reviews (MyCR). This tool, like so many of QM’s other resources, is designed to allow institutions to choose what standards they want to use and what processes they want to create for reviewing courses. 

If you are engaging with official reviews, here are some important facts you need to know:

  • Official Reviews are a collegial, collaborative, faculty-driven process;
  • Review teams are made up of three individuals, all of whom have taught a for-credit online or blended course in the last 18 months;
  • All Reviewers go through a rigorous professional development process;
  • The review process is designed to be diagnostic and collegial, not evaluative and judgmental;
  • The subjectivity of human judgment is embedded within the review process. Reviewers are encouraged to discuss and are not led to a forced agreement or unanimous decision;
  • Instructors receive three independent pieces of feedback for each Standard, which they can choose to apply or not, in a way that works for them and their students. 

The Review process is, in fact, more collegial and collaborative than any classroom-based review that I’ve been a part of or witnessed. I often felt it shortchanged an F2F class to have a peer attend a single class session and make judgments from a templated checklist. The QM Peer Review process, on the other hand, begins with the instructor discussing the course, describing the design, their learning goals, their students, and more. As the review is conducted, Reviewers continue the dialogue with the instructor and ask questions or make suggestions for quick fixes. The Review team itself represents a diversity of experiences and voices, composed of three online teaching faculty — a great improvement, in my opinion, over the too-often singular review voice of an institutionally-based evaluation of an F2F course.

Quality Matters Implementation

The QM framework, consisting of the Rubric, supporting professional development, and options for internal and certified reviews, is a set of resources, processes, and tools implemented by the institution. Quality assurance implementation, however, is often not given the consideration it deserves as the change-management initiative that it is, and institutions may experience missteps or disruptions if it’s not approached as an inclusive process that reflects the institutional culture and goals. QM doesn’t prescribe how an institution implements the QM tools and resources for quality assurance. An individual faculty member has many choices about how they can use the QM Rubric, engage with professional development, or conduct course reviews. However, the most successful implementations occur when they are considered in conjunction with the institutional culture and context — including stated goals — and when the implementation is inclusive, collaborative, and collegial in nature. 

Additionally, we’re supporting this work with research to further explore best practices and key drivers in implementing online quality assurance within higher ed institutions. Current findings highlight the importance of choosing the right person or people to lead the effort, making it inclusive from the start, and embracing a bottom-up approach. If you believe that your institution is not meeting faculty, staff, student, and other stakeholder needs with regard to QA implementation, I encourage you to have crucial conversations about implementation efforts, to connect with campus offices and partners that support QA in online learning, and to connect with the resources and training that organizations like QM provide to help in these efforts. Oftentimes, implementation is intrinsically linked with accreditation efforts, and that is an additional place to begin, or continue, the dialogue. QM also provides professional development opportunities, including two free workshop seats for those coordinating implementation efforts, and additional resources to help faculty and institutions decide how implementation would work best on their campus. 

The Student Experience

In all of the talk about rubrics, reviews, policies, and implementation, however, it’s important we don’t forget the human face of quality assurance, which matters just as much as the academically-embedded conversations that never extend to online students. Students are the ones who are disadvantaged by lower-quality online learning experiences, and they need to be at the heart of the conversation. Consider the following real-life examples:

  1. The online student who struggled with a midterm assignment but did not reach out to his professor for help. Why? Because he felt like he didn’t really know his professor — he was worried that the professor wasn’t nice, and wouldn’t help. In truth, his professor was kind, engaging, and student-focused… but had never thought about creating an instructor introduction video. The student had never even seen his face or heard his voice. 
  2. The online student who emailed me out of desperation, fearing she was failing her class and couldn’t afford to retake it. She accompanied her emails with screenshots of the course, convinced she was “too dumb” to figure things out. Emails to her instructor had gone unanswered. Looking at the screenshots, I could see her struggles were largely a result of how the course was organized. All the files had been uploaded into a single folder with no directions or guidance. I also discovered the instructor had not been in the course for several weeks. 
  3. The online student who I found in tears in the campus library, devastated about their performance on a midterm exam. They thought they had prepared well by watching the instructor videos (on campus because their home internet did not have the bandwidth to handle the hour+ length of each video). They had read all the materials, highlighted key passages, and made review cards and a study guide. When I asked what they felt had “gone wrong,” they replied: “I didn’t know I wasn’t understanding the material. We had some quizzes, but they hadn’t been graded, so I didn’t know how I was doing. It wasn’t until a few questions into the midterm that I realized I had some big misunderstandings, and I wasn’t thinking about things like I should, but by then it was too late.”

This is the other side of online course standards and policies. It might seem like a great, freeing idea to not be clear with students about expectations, to not approach your design by also thinking about how you’ll connect and interact with students, to not learn about good organization and navigation, to not tell your students how they can contact you and when they’ll receive a response, to not give students multiple chances and ways to check their understanding and gauge their learning progress… but the reality is that ignoring best practices such as these frequently disadvantages students.

I ended up meeting with all three of these faculty, and trust me when I say that they absolutely wanted to do the very best for their online students. They didn’t know, however, that it’s important to introduce themselves as the instructor (Standard 1.8), that they needed to consider before the course began how they would interact with students, including responding to questions and providing feedback (Standard 5.3), that navigation and organization are absolutely crucial in online design (Standard 8.1), that posting long videos causes technology and accessibility issues for students and is often too much information to absorb or review at one time (Standard 8.4), or that online students need multiple opportunities, with prompt feedback, to check their understanding and progress (Standard 3.5). 

These were caring, experienced instructors who just didn’t know what they didn’t know.

The Whole Quality Picture

Equally important as what we “don’t know” is defining what it is we are talking about when we discuss quality — because it’s vital to understand that “quality” is not just one thing. It lies not only in the design of an online course but also in:

  • The quality of the content the faculty chooses to create or curate for the course;
  • The effectiveness of their teaching (as well as how well they’re supported in that teaching);
  • The institutional infrastructure and readiness for quality online learning;
  • The preparedness and support of our online students;
  • The technology used, including how faculty and learners are supported in using the technology that supports good online learning. 

Quality online learning is more than a rubric, or any single tool, and it is a privileged perspective to posit that we should not define what quality means for students, nor create tools and processes to support faculty in improvements that lead to greater quality. Such objections fail to address equity, access, student preparedness, and the complex, real-life issues faced when trying to ensure that all students receive a quality learning experience, regardless of whether that course is online or face-to-face. We can’t ignore the very real institutional barriers embedded in the change management required to create and implement quality initiatives at scale, nor the reality that many instructional designers know: across the vast landscape of higher education there are, and long have been, online courses that fail to meet basic student needs, whether for support or learning. 

Be Part of the Conversation

I want to make clear that I am not claiming, nor do I believe, that the QM Rubric is perfect and should never be critiqued or improved. If I, or QM, felt that way, we wouldn’t bring together community experts, combined with expansive survey feedback from all our community members, to regularly revise the Rubric. We already intend, for example, to include more information about inclusive and culturally responsive design in the next Rubric revision.

You, as a member of our community, matter! And we want to invite you to share your thoughts. Please feel free to complete this short survey, which can be filled out anonymously. We continuously provide many avenues for the community to share their comments, ideas, and questions, and use that feedback to inform and improve the resources and services we provide. We also offer options for individual consultations, and we can join the conversation at your campus via a web meeting as well. 

Thank you for all you do to ensure high-quality online learning for your students, and thank you, in advance, for sharing your experiences, ideas, and questions with us. I look forward to our continued collaboration and conversation.


Dr. Bethany Simunich is QM’s Director of Research and Innovation. She has worked in higher education for over 20 years and has over 15 years of experience in eLearning research, instructional design and online pedagogy. As QM's Director of Research and Innovation, she helps provide research-based tools, ideas and solutions to enable individuals and institutions to assess and achieve their quality assurance goals. Her research interests include presence in the online classroom, online student and instructor self-efficacy and satisfaction, and outcomes achievement in online courses. Connect with Dr. Simunich on Twitter or LinkedIn.