Six blind men, one elephant and how we measure Quality in Healthcare…

The latest issue of Health Affairs contains a report on the annual cost to US physician practices of reporting quality measures: $15.4 billion. My interest was piqued by the comment that state and regional agencies currently use 1,367 measures of provider quality, of which only 20 percent are used by more than one state or regional program. Furthermore, a study of twenty-three health insurers found that 546 provider quality measures were in use, few of which matched across insurers or overlapped with the 1,700 measures used by federal agencies.

I was thinking about how we measure quality in healthcare these days when this parable came to mind. In ancient India, six blind men came upon an elephant. The first ran into the side of the elephant and said an elephant is like a wall; the second felt the trunk and said an elephant is like a snake; the third felt the tail and said an elephant is like a rope; and so on, each with a “view” that was very far from the whole.

It seems to me that we in healthcare are doing much the same. We started by analyzing “bad outcomes” and at least advanced to blaming the system instead of the individual. Next we began looking at cost per case and advanced at least to adding severity adjustments such as APR-DRGs. Separately, Six Sigma and “lean” management techniques became popular. More recently we have focused intently on checklists and guidelines. The latest is patient satisfaction.

All of which makes me think of the blind men and the elephant. We come at Quality from many directions but often reach highly disparate conclusions. One measurement still lacking is a measure of a patient’s condition before we start doing whatever it is we do, paired with a measure of how our patient is after we finish. Such a measurement would satisfy a basic tenet: you must have a way of measuring the effects of your efforts if you are to have a meaningful quality tool.

Such a tool now exists: the Rothman Index (RI). It uses a patented algorithm to produce a number between 1 and 100 that represents a patient’s “overall physiological condition or reserve,” irrespective of diagnosis and reflective of treatment, and it is based on commonly available data. It is well documented in the peer-reviewed literature, and since the patient serves as his own control it seems particularly reliable. Using this approach would allow a before-and-after comparison applicable to hospitals, practitioners, and various interventions or medications.
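To make the before-and-after idea concrete, here is a minimal sketch in Python, with hypothetical names and made-up scores; the index algorithm itself is patented, so a generic condition score simply stands in for it:

```python
# Hypothetical sketch, not the Rothman Index itself (that algorithm is
# patented and not reproduced here). It only illustrates the idea of a
# before-and-after comparison in which the patient is his own control.

from statistics import mean

def score_change(admission_score: float, discharge_score: float) -> float:
    """Change in a patient-condition score over an episode of care."""
    return discharge_score - admission_score

def mean_improvement(episodes: list[tuple[float, float]]) -> float:
    """Average before/after change across one provider's episodes."""
    return mean(score_change(before, after) for before, after in episodes)

# Illustrative data: each episode is (score at admission, score at discharge)
# on a 1-100 scale, as described above.
provider_a = [(42.0, 78.0), (55.0, 81.0), (30.0, 65.0)]
provider_b = [(44.0, 60.0), (58.0, 70.0), (33.0, 50.0)]

print(f"Provider A, mean improvement: {mean_improvement(provider_a):+.1f}")
print(f"Provider B, mean improvement: {mean_improvement(provider_b):+.1f}")
```

However it is computed, the key property is that each patient’s starting condition is subtracted out, so “my patients are sicker” no longer confounds the comparison.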

I hope it is not a seventh blind man.

What were we trying to do?

In his book, “The Digital Doctor”, Bob Wachter describes speaking to a group of medical students: “‘You folks need to be prepared for a career that will be massively different from mine. You will be under relentless pressure to deliver the highest quality, safest, most satisfying care at the lowest possible cost.’ I spoke these words slowly and gravely, doing my best to shake the students out of their youthful complacency. A clean-cut student raised his hand and asked, with the blinding mixture of naiveté and brilliance that characterizes smart young folks, ‘What were you trying to do?’”

When I reflect back to the early ’70s, I believe I certainly was trying to deliver the above with regard to quality, safety, satisfaction and cost, and I feel my colleagues were trying to do the same. Most of the “guidelines” we followed were what we learned in medical school and training programs, and much of that varied widely depending on locale and the professors who taught us. Very little was “evidence-based” by the current definition. The volume of medical knowledge was increasing at an exponential pace, and keeping up to date in a subspecialty required attendance at national meetings, because the material in textbooks was several years old and that in most journals a year or so. No one ever spoke of patient satisfaction; some of us were imperious, but most valued their patient relationships greatly. Cost was not so much of an issue, but I remember personally wishing I didn’t have to worry about it, because I did.

The quality of care we provided was monitored and measured by our peers, and this was our greatest failing. As a pulmonologist I had a pretty good idea about the primary care providers who referred to me, but not much about my fellow pulmonologists. Actually, I had very little feedback about how I was doing except in the most personal and individual way. Mostly we evaluated a patient’s care and a physician’s performance only after some obviously poor outcome, and we did so with little or no training in the matter. As we “progressed” through the years, hospitals started providing data comparing an individual practitioner’s cost with that of his peers, bringing up the old saw, “my patients are sicker than his”. I’m here to tell you this is true. Until you are able to measure how sick my patients are when I first get them and how they are afterward, you can’t really tell how well I am doing… nor can I.

Today we are measuring quality by how well we follow guidelines: important but incomplete and often misleading. The very latest core measures do indeed look at important data, but they offer little way for practitioners in the upper quartile to get credit when they are already meeting these “core measures”.

To get back to Dr. Wachter’s student’s question: we were trying to do the same but were limited by a lack of tools. Nowadays those students will have a different problem, burdened by the extraneous duties assigned by insurance companies, hospital administrators and Medicare while hampered by inefficient medical record systems, all of which interfere with the patient-physician relationship.
