Risk Management, Loss Prevention, Higher Education Accreditation, Laboratory Quality Assurance, Product Certification. These sound like completely different disciplines. Oddly enough, I spend time every week learning from people in each of these areas, and the conversation is often the same.
Each of their processes for measuring compliance begins with the development of standards, measures, or policies. These are primarily developed by the assessing body, or by a third-party Standards Development Organization (SDO).
For those seeking compliance, each industry engages in an initial data gathering exercise to determine things like the type or scope of compliance sought, and general readiness or qualifications. This step ranges from filling out a basic application form to producing a heavily narrated self-assessment or self-study.
At this point the assessing body conducts an audit: a ‘desktop review’ of the materials submitted with the application, a site visit in which evaluators see firsthand how business is being conducted, or some combination of the two.
A report of compliance or non-compliance, with findings, recommendations, and so on, is generated.
Finally, this leads to review, corrective action, and ultimately a decision, made either by a committee or by a quality analyst.
Now that the similarities are evident, it becomes clear that there are best practices that can be shared across industries to promote quality.
One key ingredient of many testing-based scientific quality processes is the incorporation of common ‘findings’: agreed-upon reasons for concern or non-compliance. This practice does not need to be applied to every policy or standard. Even used sparingly, it can yield tremendous insight.
For instance, once I compile the reasons for non-compliance on a particular measure, the core issues behind the discrepancies become much clearer. It becomes easier to determine whether the standard needs to be rewritten, enforced differently, or interpreted more clearly. It can also lead me to develop specific training to address the most common problems.
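As a minimal sketch of what compiling those findings can look like, here is a tally of standardized finding codes for a single measure. The codes and data are made up for illustration; a real program would pull them from assessment records.

```python
from collections import Counter

# Hypothetical standardized finding codes recorded against one measure
# across several assessment cycles.
findings = [
    "documentation-incomplete",
    "standard-misinterpreted",
    "documentation-incomplete",
    "training-gap",
    "documentation-incomplete",
    "standard-misinterpreted",
]

# Counting occurrences shows which root causes dominate, which can guide
# whether to rewrite the standard, clarify its interpretation, or build
# targeted training.
tally = Counter(findings)
for reason, count in tally.most_common():
    print(f"{reason}: {count}")
```

The value here comes less from the counting than from agreeing on the finding codes up front, so that results are comparable across assessors and cycles.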
Higher education accreditors are making great strides in tying standards to all of the different areas in which they are used, from self-study to site visit report. Work is being done to ensure that a single standard can be utilized across as many different instruments, reports, and documents as necessary, so that the version is always controlled at the source. This enables remarkable traceability and benchmarking capability, especially when review cycles stretch to a year or more.
Risk Analysis and Loss Prevention programs often use a simple weighting strategy to assign greater value to policies/standards that represent a higher degree of potential hazard. These are often grouped together and tied to a report that identifies problem areas so that immediate action can be taken.
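One way to picture that weighting strategy is the sketch below. The standards, weights, and scoring rule are hypothetical, but they show the basic idea: higher-hazard standards carry more weight, and failures are surfaced worst-first so action can be immediate.

```python
# Hypothetical weighted compliance check. Each standard carries a weight
# reflecting its potential hazard; a failed high-weight standard raises
# the overall risk score more than a failed low-weight one.
standards = [
    # (standard id, hazard weight, compliant?)
    ("fire-suppression", 5, False),
    ("record-retention", 2, True),
    ("chemical-storage", 5, True),
    ("signage", 1, False),
]

# One simple scoring rule: sum the weights of non-compliant standards.
risk_score = sum(w for _, w, ok in standards if not ok)

# List the failures in descending weight, so the highest-hazard problem
# areas appear first in the report.
problem_areas = sorted(
    ((sid, w) for sid, w, ok in standards if not ok),
    key=lambda item: item[1],
    reverse=True,
)
print(risk_score)
print(problem_areas)
```

A real program would tune the weights to the assessing body's own judgment of hazard, but even this crude sum separates urgent findings from routine ones.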
In every quality assurance process there are going to be some measures that carry more weight than others. Gaining insight into the problem areas with some immediacy, without weeding through the less important data or narratives, can be of significant benefit no matter what area of compliance you are involved in.
The point is this: when the time comes to reflect on your own compliance processes, don’t look for benchmarks only in your own industry. The solution to your problem may already be out there, just implemented a bit differently.
Chad Baker has spent 11 years entrenched in compliance management solutions for organizations performing accreditation, certification, and quality assurance across industries including Higher Education, Healthcare, Laboratory Science, and Public Service, helping them evaluate performance, measure quality, and analyze outcomes.