I recently had the pleasure of participating in a collaborative professional development program centered on evaluating qualitative assessment in the accreditation process.
The Association of Specialized and Professional Accreditors (ASPA) meets twice each year to bring together thought leaders from the community of U.S. agencies that assess the quality of specialized and professional higher education programs and schools. Many thanks are due to that organization for opening their doors to non-members and interested parties.
Where hard metrics and pass/fail testing are not options for conformity assessment, the question becomes how to provide consistent analysis where subjectivity prevails. This gets complicated in the ever-changing landscape of how teachers teach, how students learn, and how quality organizations measure the effectiveness of those processes.
It begins with the definition of the standards themselves. The quantitative measures are clearly defined in areas such as the number of hours of field education required for baccalaureate or master’s programs. That’s an easy one: 400 hours for baccalaureate and 900 for master’s programs (though this varies from agency to agency). Can the program prove it requires that of its students? Yes? OK, moving on.
It becomes drastically more difficult to define and measure what skills and dispositions are identifiable and valuable for a given profession, or how educational policy engages diversity and differences in practice. Needless to say, these standards are in a constant state of revision. Add to that the background and prejudices of the auditor or site visitor tasked with assessing conformity and the complexity increases.
Even with all of these moving pieces, agencies have been able to establish consistency through the use of qualitative assessment tools including interviews and observations, ethnographies, phenomenology, and review of portfolios, case studies and work samples.
A less-utilized but emerging tool for qualitative assessment is the rubric, which can effectively transform qualitative assessment into quantitative analysis. Even though numbers come out of a rubric or survey, the information can still be considered qualitative because those numbers point back to a level of quality.
For instance, an indicator of strength in a stated objective could range from 1 (Beginning) to 4 (Exemplary). Provided that clear definitions of those indicators of strength are available, qualitative analysis can be streamlined for consistency.
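To make the idea concrete, a rubric of this kind can be sketched in a few lines of code. The level names and the reviewer ratings below are hypothetical placeholders (only "Beginning" and "Exemplary" come from the example above), not drawn from any particular accreditor's standards:

```python
# Illustrative sketch of a rubric that maps qualitative performance levels
# to numeric scores. The intermediate level names ("Developing",
# "Proficient") are hypothetical examples for this sketch.

RUBRIC_LEVELS = {
    1: "Beginning",    # little evidence the objective is met
    2: "Developing",   # partial evidence (hypothetical label)
    3: "Proficient",   # consistent evidence (hypothetical label)
    4: "Exemplary",    # objective fully and consistently met
}

def score_indicator(level_name: str) -> int:
    """Return the numeric score corresponding to a qualitative level name."""
    for score, name in RUBRIC_LEVELS.items():
        if name.lower() == level_name.lower():
            return score
    raise ValueError(f"Unknown rubric level: {level_name}")

def average_score(ratings: list[str]) -> float:
    """Aggregate several reviewers' qualitative ratings into one number."""
    scores = [score_indicator(r) for r in ratings]
    return sum(scores) / len(scores)

# Three hypothetical site-visitor ratings of one stated objective:
ratings = ["Exemplary", "Proficient", "Developing"]
print(average_score(ratings))  # (4 + 3 + 2) / 3 = 3.0
```

The numeric average is only as meaningful as the written definitions behind each level, which is exactly why the clear indicator definitions mentioned above matter so much.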
By no means is any of this easy to accomplish and the implementation requires a great deal of thought. But if you are in the business of accreditation or any other type of conformity assessment, chances are you will come across a need to utilize qualitative measures. And from my experience, higher education accreditors have done their homework on the topic.
Chad Baker has spent 11 years working in compliance management solutions for organizations performing accreditation, certification, and quality assurance across industries including higher education, healthcare, laboratory science, and public service, helping them evaluate performance, measure quality, and analyze outcomes.