Ask the Experts – MCC Answers Your Metric Questions

With more organizations focusing on metrics, the MCC has received an increasing number of questions, ranging from how to use metrics, why some metrics are better than others, and which types of metrics are best to use, to questions about specific MCC Metrics. This column provides a forum for us to share these questions and answers with you.

MCC Members at the Gold or Platinum benefit level can access the “MCC Ask the Experts – Questions and Answers Archive” by logging into the MCC member website. Interested in becoming a member or have questions about your membership? Contact Membership Director Terry Holland.

December 2019

A: The exercise of managing risks is just that – most risks are managed rather than eliminated and so there is always a possibility for them to materialize as issues. It seems unlikely we will be able to eliminate all issues in clinical trials but good risk management should help reduce them (and their impact). Teams are typically working with imperfect information when they carry out risk assessments – clinical trials have many moving parts after all. Hopefully, many of the important risks have been identified through the risk assessment itself. The prioritization of those risks can be challenging with limited information though. If there is little past experience for a particular risk, then how do you score the likelihood of occurrence, for example? This really does emphasize the importance of an iterative and learning approach. Reviewing the risk assessment at points during the study will allow the team to remove risks that are no longer relevant, identify new risks, and reprioritize based on updated information.

There is another process error that you should watch out for when scoring risks. The MCC Risk Assessment Mitigation Management Tool allows you to score the Likelihood (L), Impact (I) and Detectability (D) of each risk. Sometimes, when scoring risks initially, teams will factor in risk controls they only plan to put in place. For example, due to concern about the reproducibility of a particular lab measurement, the study team determines they will implement additional cross-checks and comparisons. Implementing these cross-checks and comparisons will mean they can detect an issue sooner (i.e. they will have an improved detectability score and a lower risk score). This may mean the risk score is low enough that the risk is not prioritized for risk reduction actions. However, if the study team fails to implement the cross-checks and comparisons, the true risk will be higher than the score assigned during evaluation. Thus, the scores in the risk assessment should be based on the existing processes; otherwise you may end up with risks that are higher than you realized. The previously mentioned cross-checks could instead be introduced as a risk reduction step, and the risk could be reassessed once the new process is in place.
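To make the scoring pitfall concrete, here is a minimal sketch of an RPN-style risk score. The 1–5 scales and the multiplication of the three scores are common conventions, not the exact formula used by the MCC tool; the specific score values below are illustrative only.

```python
# Hypothetical illustration of RPN-style risk scoring (L x I x D).
# The 1-5 scales and multiplication are assumed conventions,
# not the exact MCC RAMMT 2.0 scoring method.

def risk_score(likelihood: int, impact: int, detectability: int) -> int:
    """Return a combined risk score; higher means higher priority.

    A high detectability score here means the issue is HARD to detect,
    so better detection lowers the overall score.
    """
    for score in (likelihood, impact, detectability):
        if not 1 <= score <= 5:
            raise ValueError("each score must be between 1 and 5")
    return likelihood * impact * detectability

# Score the lab-reproducibility risk against the EXISTING process:
current = risk_score(likelihood=3, impact=4, detectability=4)

# Scoring against planned-but-unimplemented cross-checks instead
# (improved detectability) understates the true risk until the
# new process is actually in place:
with_planned_controls = risk_score(likelihood=3, impact=4, detectability=2)

print(current, with_planned_controls)  # 48 24
```

If the team scores the risk as 24 while the cross-checks exist only on paper, the risk may drop below the prioritization threshold even though the process actually being run still carries a score of 48.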

When issues occur, after the initial actions to contain and address the issue, you should take time to reflect. Was the issue recognized as a risk in the risk assessment? If not, should it have been? If it was, was it correctly scored and actioned? How should the risk assessment be updated given that the issue has occurred? What can you learn to help reduce the risk of recurrence?

MCC has resources that can help you:

  • The Risk Assessment and Mitigation Management Tool v2.0 (RAMMT 2.0) is available to those in member organizations. In 2020, we will be carrying out a further review and update of this tool
  • A new web-based course launched this year which is available at a discount to those in MCC member organizations – “MCCe301 Retooling Risk-Based Management Approaches in the Era of ICH E6(R2): Fundamentals of Clinical Trial Risk Management”
  • Study Quality Trailblazer LinkedIn Community where you can raise questions and debates on risk-based quality management with our user community

Meet the Experts

Keith Dorricott, MCC Ambassador and Director,
Dorricott Metrics and Process Improvement, LTD

Linda B. Sullivan, Co-Founder & President,
Metrics Champion Consortium

Click here to submit a question