Statistics are funny things. They can be manipulated, to be sure, in a manner not unlike an impressionist painting -- to give an impression of the facts, without an accurate rendition of the same. For the artist, this allows a wide range of expression and resultant interpretation on the part of the viewer. For the person generating statistics, it allows all sorts of shenanigans, whether intentional or not, that can result in dissemination of "facts" that are a bit shady. Unfortunately, we see this from a variety of sources in medicine today, with reams of data being mined, folded, twisted and mutilated to generate a given data set to support a particular position ... or a particular advertisement. A couple of recent articles may help illustrate this dilemma.
What Constitutes a "High-Volume" Hospital for Pancreatic Resection is a study from Johns Hopkins, itself an excellent facility with a very high volume of pancreatic surgery for malignancy. Basically, these folks looked at a five-year collection of data from the Nationwide Inpatient Sample (a sample of 20% of hospital discharges) to evaluate mortality rates for pancreatic resection --- with an eye towards determining some threshold below which mortality increased. The results (from the abstract):
Based on analysis of 7,558 patients who underwent pancreatic resection, median annual institution resection volume was 15 (range 1 to 254), and overall in-hospital mortality was 7.6%. The best model of “high-volume” centers was an annual institution resection volume of 19 or more, as determined by goodness of fit (r2 of 5.29%). But there was little difference in data variance explained between this best model and other “high-volume” models. The model without any volume variable had a goodness-of-fit r2 of 3.57%, suggesting that volume explains less than 2% of data variance in perioperative death after pancreatic resection.

Let me repeat: "(surgical) volume explains less than 2% of the data variance in perioperative death after pancreatic resection." The best model that fits the data defines "high-volume" centers as those that perform 19 or more pancreatic resections per year --- hardly what an outside observer would consider "high volume." The conclusions of the authors:
Very little difference was observed in the explanatory powers of models of “high-volume” centers. Although volume has an important impact on mortality, volume cut-off is necessary but insufficient for defining centers of excellence. Volume appears to function as an imperfect surrogate for other variables, which may better define centers of excellence.

Hmm. For the last several years, we have been treated to a variety of articles and policy proposals aimed at creating "centers of excellence" based upon the idea, and some supporting data, that higher volumes of certain surgical procedures lead to better outcomes. But when trying to retrospectively identify the threshold for what is "high enough," the data here don't match up well with the preconceived notion. Oops. As much as I suspect this same result might be seen if we looked at other procedures proposed to be regionalized into high-volume centers, I don't expect the captains in charge of this ship to change course. My translation of this study's conclusions would read a bit differently --- we can't show that there is a volume threshold, but we still think it's important, so we'll find something else to focus on to prove our point.
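For the arithmetic-minded, that "less than 2%" figure falls straight out of the two goodness-of-fit values quoted in the abstract. A minimal sketch in Python, using only the numbers reported above:

```python
# Goodness-of-fit (r2) values, as a percentage of variance explained,
# taken from the Hopkins abstract quoted above
r2_with_volume = 5.29     # best "high-volume" model (cutoff of 19 resections/year)
r2_without_volume = 3.57  # the same model with the volume variable removed

# The variance in perioperative death uniquely attributable to volume
# is the difference between the two models
variance_from_volume = r2_with_volume - r2_without_volume
print(f"Volume explains about {variance_from_volume:.2f}% of the variance")
# -> Volume explains about 1.72% of the variance
```

In other words, dropping volume from the model entirely costs only 1.72 percentage points of explained variance, which is the basis for the authors' "less than 2%" statement.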
OK. Bear with me. More statistics, but I promise not to have a quiz at the end. The next article is actually my favorite kind of journal piece -- it validates common sense and sticks it in the eye of folks who manipulate statistics inappropriately. Entitled The Zero Mortality Paradox in Surgery, it addresses the way data can be carved out to create a favorable impression of an institution.
Every surgeon knows that their next potential complication is only an operation away. Every surgeon has also seen journal articles touting things such as "2,000 consecutive whole body-ectomies without a mortality" --- which basically mean that the authors identified their two most recent deaths with a particular operation and counted the patients in between. Well, hospitals play that game too, and tout their successes with zero mortality in certain procedures in given years. These authors went a step further, and looked at such institutions in the years following a run of zero mortality for certain operations (from the abstract):
We obtained national Medicare data on five operations with high operative mortality (> 4.0%): coronary artery bypass grafting, abdominal aortic aneurysm repair, and resections for colon, lung, and pancreatic cancer. For each procedure, we defined zero mortality hospitals as those with no inpatient or 30-day deaths during the 3-year period 1997 to 1999. To determine whether these hospitals actually have lower mortality than other hospitals, we compared their mortality during the next year (2000) with the mortality at all other hospitals.

Now, to be sure, these are good institutions with an admirable track record --- lower than expected operative mortality for some pretty major operations. However, they soon came back down to earth:
For four procedures, operative mortality in zero mortality hospitals in the subsequent year was no different than that in other hospitals: abdominal aortic aneurysm repair (6.3% zero mortality hospitals versus 5.8% other hospitals; adjusted relative risk [RR]=1.09; 95% CI 0.92 to 1.29); lobectomy for lung cancer (5.1% versus 5.3%; RR=0.96; 95% CI 0.80 to 1.15); colon cancer resection (6.0% versus 6.6%; RR=0.91; 95% CI 0.80 to 1.03); and coronary artery bypass surgery (4.0% versus 5.0%; RR=0.81; 95% CI 0.61 to 1.04). In the case of pancreatic cancer resection, zero mortality hospitals had substantially higher mortality than other hospitals (11.2% versus 8.7%; RR=1.29; 95% CI 1.04 to 1.59).

What goes up must come down. Every run of good luck comes to an end. Use whatever cliché you'd like, but a zero mortality rate for these types of procedures is a laudable but impossible standard to sustain. Or, as my stock broker likes to say, past results are not necessarily indicative of future performance.
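A quick sanity check on those numbers: simply dividing the two quoted mortality rates gives a crude relative risk, and it tracks the published figures closely (the published RRs are risk-adjusted, so small discrepancies -- like 0.80 crude versus 0.81 adjusted for bypass surgery -- are expected). A minimal Python sketch, using only the percentages from the abstract:

```python
# Year-2000 mortality rates (%) quoted in the abstract:
# (zero-mortality hospitals, all other hospitals)
rates = {
    "AAA repair": (6.3, 5.8),
    "Lung lobectomy": (5.1, 5.3),
    "Colon resection": (6.0, 6.6),
    "Coronary bypass": (4.0, 5.0),
    "Pancreatic resection": (11.2, 8.7),
}

for procedure, (zero_mort, other) in rates.items():
    # Crude relative risk; the paper reports adjusted RRs,
    # so these ratios only approximate the published values
    crude_rr = zero_mort / other
    print(f"{procedure}: crude RR = {crude_rr:.2f}")
```

Note that only the pancreatic resection ratio (1.29) has a confidence interval excluding 1.0, which is why it is the one procedure where the "zero mortality" hospitals actually fared measurably worse the following year.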
OK. I lied --- it's quiz time!
Question: What did we learn with this little dip into the statistics whirlpool?
Answer: Statistics are like bikinis. What they reveal is suggestive, but what they conceal is vital.
Question: So, how is the unsuspecting public supposed to interpret these kinds of studies?
Answer: With a jaundiced eye. As Mr. Clemens famously stated, there are "lies, damned lies, and statistics." The trick is to ensure you know as much about the data being presented as you can in order to interpret the results.
Question: Does all of this mean that we shouldn't believe data in medical journals?
Answer: Absolutely not. It is important to accurately assess things such as morbidity and mortality, and honestly look at data that show where we can improve. However, creating public policy or planning advertising campaigns based upon such data can prove to be a bit of a problem, because future data can turn around and bite you in the rear.