How many cotton balls fit into a Boeing 747? What an ‘interesting’ thing to measure…


by Monica Horvath, PhD

Individuals working within healthcare know their efficacy is evaluated in part by an alphabet soup of nationally recommended metrics.  In fact, there are almost as many committees of metric creators as there are metrics to follow.  But the guidelines for these measurements are not thrown together lightly.  They are conceived by caring, capable groups of experts who seek to promote the best possible patient care by positioning metric performance as a harbinger of systems gone awry.

So as providers, we know we need to measure ‘stuff.’  Through partnerships involving departments, IT, compliance, and executive teams, ‘stuff’ gets measured.  Sometimes the visualizations of these metrics are beautiful art pieces that add to executive slide decks.  And at other times, the metrics seem to exist in a vacuum.  It is then quite reasonable to wonder whether our national health ecosystem incentivizes healthcare providers to drive improvement based on these process and outcome measures.  A recent study from the Journal of the American Medical Association analyzing improvements associated with NSQIP (National Surgical Quality Improvement Program) participation is one of many that suggest the answer is a simple, hollow, yet deafening ‘No’.

Why? Are healthcare practitioners indifferent to performance data?  Do they even see the data?  Are providers just scapegoats for causal factors that actually originate elsewhere in the care environment?  Do health organizations have the ability to quickly and efficiently track and share metrics to drive business change?  Do people know how to interpret a changing trendline and then create a meaningful action plan?

All of these are reasonable questions to ask in the face of intelligence being ignored.  But they overlook a far simpler, but harder to address, causal factor that forestalls action:

The metrics are ‘interesting’.

(Yep, ‘interesting’.)

What? 

I am saying, quite bluntly, that ensuring stakeholders know of a metric’s trajectory is only important if they have reason to care.

We have had multiple engagements with executives, directors, managers, and even the front-line staff who do the hands-on measurement work.  Disappointment has taught me that when I hear someone respond to a body of work with only ‘well, that is interesting’ and no other comment, I know I have failed to make the data meaningful.  The data has left the realm of practical applicability and is relegated to trivia.  It could be ‘% of cesarean deliveries with appropriate deep vein thrombosis prophylaxis for the mother’ or ‘how many cotton balls can you fit into a Boeing 747.’  My point is that the data often depends on so many interacting factors that no single stakeholder can see enough of their own responsibility in it to drive change.  They don’t know what you are asking of them.  Everyone usually agrees that these ‘cotton ball’-like metrics should improve, but root cause identification isn’t a priority without the proper incentives.

So what does it mean to be incentivized?  Well, the first step is to ensure that the provider has all the information necessary to make decisions that can shape the measured outcome.  Are they provided with that information in the EHR, and if so, can they easily find it?  Without this, the system is failing to produce the information needed to provide the best care.  A second taboo but serious issue is that metrics must have a meaningful tie to the financial bottom line (the ability to survive in our changing reimbursement environment), or else they can be interesting but not actionable.

Weak metrics provide noise and undermine healthcare’s ability to improve.   In designing any new measure for your dashboard, ask the following:

  • Does the metric clearly measure what it is intended to measure?
  • Are you sure that ease of measurement is not the principal reason this metric is used at your organization?
  • Are you certain that your metric isn’t just an overly resilient ‘canary’ in your healthcare coal mine?
  • If the metric increases or decreases by 30%, can you articulate the specific next steps the business should take?  (A brief sketch of flagging such a swing follows this list.)
  • Can you reasonably expect that your past metric is indicative of future performance?
  • And don’t answer too fast—are you sure the metric is driven by a clear business need, and not vanity?
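
To make that 30% question concrete, here is a minimal, hypothetical sketch in Python of how a metric swing might be flagged for review.  The function names, the example compliance rates, and the 30% threshold are illustrative assumptions, not a prescribed method.

```python
# Hypothetical sketch: flagging a metric swing that crosses an action threshold.
# The names, example rates, and 30% threshold are illustrative assumptions.

def percent_change(baseline: float, current: float) -> float:
    """Relative change from baseline to current, expressed as a percentage."""
    if baseline == 0:
        raise ValueError("Baseline must be non-zero to compute a relative change.")
    return (current - baseline) / baseline * 100.0

def needs_action(baseline: float, current: float, threshold_pct: float = 30.0) -> bool:
    """True when the metric has moved past the threshold in either direction."""
    return abs(percent_change(baseline, current)) >= threshold_pct

# Example: DVT prophylaxis compliance falls from 92% to 61% of eligible deliveries.
baseline_rate, current_rate = 92.0, 61.0
print(f"Change: {percent_change(baseline_rate, current_rate):+.1f}%")   # Change: -33.7%
print(f"Needs action: {needs_action(baseline_rate, current_rate)}")     # Needs action: True
```

The arithmetic is the easy part; the flag is only worth raising if someone can name the specific next step it should trigger.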

If the answer to even one of these questions is anything but a confident ‘Yes’, you may be investing in the creation of interesting trivia that is powerless to shape meaningful business change.

At ThotWave, we value a single, well-designed metric more than a dozen ancillary metrics that may be easier to measure but harder to interpret.  In such a sea of numbers, the sheer volume of trended, aggregated data can become a distraction.  Our experience allows us to ‘soothsay’ the impact of a given metric and to coach the creation of visualizations that make important data moments and trends easy to see.

Healthcare leaders just need the courage to question the utility of existing measures, to incentivize the actions that drive those measures toward better outcomes, and to ask for expert help when it is needed.  We are ready, willing, and able!