But take a look anyway, if you have an interest in process improvement in hospitals. This is a collection of my best posts on this topic.

Thursday, March 31, 2011

How the veterans are winning the war

At a seminar last night at the Center for Public Leadership at Harvard's Kennedy School, one of the students asked a question along the lines of, "How do you know when you have done too much with regard to transparency?" My answer was that the question presupposed the wrong approach to transparency, namely one driven by the CEO without proper attention to the efficacy and appropriateness of what was being measured and disclosed. Instead, I suggested that transparency should be driven by the leadership of the organization, but based on metrics that were viewed as useful and appropriate by the clinical staff. In such an instance, transparency serves the function laid out by IHI's Jim Conway, as summarized here in an article discussing the BIDMC experience:

[P]ublic reporting created what management guru Peter Senge calls creative tension, a key in getting an organization to change. Announcing a daring vision — the elimination of patient harm — combined with honestly publicizing the problems, fuels improvement, he said.

I expressed the concern last night that the general recalcitrance of the medical profession about engaging in transparency will inevitably lead to fiats about disclosure from government regulatory agencies. The problem with those fiats is that they will be crudely constructed and will force hospitals and doctors to focus on the wrong things, in a manner not consistent with widely established principles of process improvement. (See, for example, this approach in Maryland.)

Now comes the Veterans Administration, proving the case with panache! You may recall my complimentary post on the VA back in January. Thomas Burton's article this week in the Wall Street Journal -- "Data Spur Changes in VA Care" -- documents this in more detail. Some excerpts:

Hospitals serving U.S. military veterans are moving fast to improve care after the government opened a trove of performance data—including surgical death rates—to the public.

The information was released at the urging of VA Secretary Eric K. Shinseki. Among other things, it presents hospitals' rates of infection from the use of ventilators and intravenous lines, and of readmissions due to medical complications. The details have been adjusted to account for patients' ages and relative frailty.


"Why would we not want our performance to be public? It's good for VA's leaders and managers, good for our work force, and most importantly, it is good for the veterans we serve," Mr. Shinseki said in an emailed statement.

At VA hospitals in Oklahoma City and Salem, Va., the rate of pneumonia acquired by patients on ventilators was shown last fall to be significantly higher than the national VA average. The Salem hospital says a relatively low number of patients on ventilators skewed its infection rate higher, but staff members at both facilities say the numbers prompted action.

Seeing the data helped, says the Salem hospital's chief of surgery, Gary Collin, because "you can become kind of complacent."

In contrast, notes the article:

This unusually comprehensive sort of consumer information on medical outcomes remains largely hidden from the tens of millions of Americans outside the VA system, including many of those in the federal Medicare system.

And, as I reported last month,

A November 2010 report from the Health and Human Services inspector general concluded that one in seven Medicare patients is harmed by medical care, nearly half of those avoidably.

Conway is right. Senge is right. The veterans have figured out how to start winning the war for patient safety and quality and process improvement. The rest of the profession is in retreat and is letting the wrong people design the battle plan.

Monday, March 28, 2011

A mentor hospital

The Institute for Healthcare Improvement gives the following update. How impressive! And how generous of Columbus Regional to offer to share what they have learned. What a shame that The Joint Commission has not followed this lead by making its best practice library available to all.


Mentor Hospital Goes 5 Years Without a VAP

Staff at Columbus Regional Hospital in Columbus, IN, recently celebrated an amazing accomplishment. They have gone five years without a single case of ventilator-associated pneumonia (VAP). These deadly pneumonias used to be considered an unfortunate reality in ICUs. As a participant in IHI's 100,000 Lives and 5 Million Lives Campaigns, the hospital took aim at reducing VAP by implementing the IHI Ventilator Bundle, evidence-based care guidelines that, when reliably applied, can drastically reduce and even eliminate these infections. One of the enduring legacies of the Campaigns is a robust registry of mentor hospitals, facilities with outstanding track records of improvement in Campaign-related topic areas that have generously agreed to provide support and clinical expertise to hospitals seeking help with their implementation efforts. Columbus Regional has been a mentor hospital since 2006 for the topics of VAP, Rapid Response Systems, the Central Line Bundle, and Heart Failure Core Processes. IHI congratulates Columbus Regional on their tremendous achievements.

Saturday, March 19, 2011

Probably right, or wrong

In the post below, I ask you to make a diagnosis of a medical condition. Most people get it wrong, probably because the actual diagnosis is far removed from the setting presented. People apply their inductive forces to a new problem, based on probabilistic inferences from other situations with which they are more familiar.

I attended a seminar on Friday at which MIT's Joshua Tenenbaum presented a theoretical basis for this learning process. If you subscribe to Science Magazine, you can read his recent article on the topic: "How to Grow a Mind: Statistics, Structure, and Abstraction."

It turns out that people are reasonably good at inference, from a very young age, as Joshua notes:

Generalization from sparse data is central in learning many aspects of language, such as syntactic constructions or morphological rules. It presents most starkly in causal learning: every statistics class teaches that correlation does not imply causation, yet children routinely infer causal links from just a handful of events, far too small a sample to compute even a reliable correlation!

In a more theoretical section, the author describes a probabilistic, or Bayesian, model to explain this learning process:

How does abstract knowledge guide inference from incomplete data? Abstract knowledge is encoded in a probabilistic generative model, a kind of mental model that describes the causal processes in the world giving rise to the learner's observations as well as unobserved or latent variables that support effective prediction and action if the learner can infer their hidden state. . . . A generative model . . . describes not only the specific situation at hand, but also a broader class of situations over which learning should generalize, and it captures in parsimonious form the essential world structure that causes learners' observations and makes generalizations possible.
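To make the idea concrete, here is a minimal sketch (my own illustration, not drawn from Tenenbaum's article) of how a simple probabilistic generative model lets a learner generalize from sparse data. The hypotheses, prior, and likelihood values are assumptions chosen purely for the example.

```python
# Illustrative only: a toy Bayesian learner deciding whether a causal link
# exists between a candidate cause and an observed effect. All parameter
# values below are assumptions for the sake of the example.
from math import prod

def posterior_causal(observations,
                     prior=0.5,                  # prior belief that the link is real
                     p_effect_if_cause=0.9,      # P(effect | link real)
                     p_effect_if_no_cause=0.2):  # P(effect | link absent)
    """Return P(causal link | observations) via Bayes' rule.

    observations: booleans, True when the effect followed the candidate cause.
    """
    like_link = prod(p_effect_if_cause if seen else 1 - p_effect_if_cause
                     for seen in observations)
    like_no_link = prod(p_effect_if_no_cause if seen else 1 - p_effect_if_no_cause
                        for seen in observations)
    evidence = like_link * prior + like_no_link * (1 - prior)
    return like_link * prior / evidence

# Three consistent events already push belief close to certainty -- far fewer
# data points than would be needed to estimate a reliable correlation.
print(posterior_causal([True, True, True]))   # ~0.99
print(posterior_causal([True, True, False]))  # ~0.72, belief drops with mixed evidence
```

The same machinery, of course, misleads when the prior or the assumed likelihoods do not fit the situation at hand, which is exactly the trap described next.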

Except when it doesn't work! As several of you demonstrated below, that same probabilistic model can lead to cognitive errors.

I summarized Pat Croskerry's explanation below:

Croskerry's exposition compares intuitive versus rational (or analytic) decision-making. Intuitive decision-making is used more often. It is fast, compelling, and addictive; it requires minimal cognitive effort and mainly serves us well. It can also be catastrophic in that it leads to diagnostic anchoring that is not based on true underlying factors.

Why the dichotomy? How can a learning process that works so well in some cases lead us astray in others? I asked Joshua, and he suggested that it might have to do with the complexity of the issue. For those functions that were important in an evolutionary sense as humans evolved -- e.g., recognizing existential threats, sensing the difference between poisonous and healthy plants -- a quick probabilistic inference was all that mattered.

Now, though, in a complex society, perhaps we get trapped by our inferences. The sense of tribalism that led us to flee from -- or fight -- people who looked different and who might have been seeking to steal our territory or food becomes evident now as unsupported and destructive racial or ethnic prejudice.

Likewise, the diagnostic approach to illness or injury that might have sufficed with simple health threats 10,000 years ago no longer produces the right result in a more complex clinical setting. Think about it. If you were a shaman or healer in a tribe, most conditions or illnesses healed themselves. You recognized the common ailments, and you knew you didn't need to do much, and whatever herbs or amulets or incense you used did no harm. If you couldn't cure the disease, you blamed the evil spirits.

In contrast, as a doctor today, you are expected to apply an encyclopedic knowledge to a variety of complex medical conditions -- cancer, cardiovascular disease, liver and kidney failure -- that were relatively unknown back then. (You were more likely to die from something simpler at a much younger age!) Many cases you see today present a variety of symptoms, multiple interacting causes, and several possible diagnoses. It is no surprise that your mind tries to apply -- in parsimonious form -- a solution. The likelihood of diagnostic anchoring is actually quite high, unless you take care. As I note below:

Croskerry thinks we need to spend more time teaching clinicians to be more aware of the importance of decision-making as a discipline. He feels we should train people about the various forms of cognitive bias, and also affective bias. Given the extent to which intuitive decision-making will continue to be used, let's recognize that and improve our ability to carry out that approach by improving feedback, imposing circuit breakers, acknowledging the role of emotions, and the like.