(NB: There are many ways to gauge “performance”. In this article I focus on the mortality rate, although the ideas presented herein could be applied to other outcomes.)

Hospital A is a large, academic, urban hospital, whose mortality rate in 2015 was 14%. Hospital B is a small, non-teaching, rural hospital with a mortality rate of 9% that same year. At first blush it appears that Hospital B is a safer place for treatment than Hospital A. But is it?

Perhaps a fairer question would be: Do the hospitals receive the same case-mix of patients? The patient case-mix, defined here loosely as the overall severity of illness and diagnostic mix, will differ from hospital to hospital. It would be naïve to expect that a hospital admitting severely ill patients has the same outcomes as a hospital admitting patients of lesser severity. In fact, the latter might be discharging its patients to the former, which could be better equipped to handle such cases.

How then can a comparison be made between two diverse hospitals? The crude method (i.e. CMS’s approach) is to standardize mortality rates using administrative data: gender, age, race, and diagnosis. In this era of electronic medical records and big data, that is unacceptable. A better approach is to create a predictive model based on as much information as can be gleaned electronically. This might include vital signs, laboratory measurements, blood gases, microbiology results, and clinical procedures used, as well as administrative data. The result would be a model that is far superior for comparative purposes to one based on administrative data alone.

The key is how one uses these predictions. Predictive models, even the most accurate ones, are not precise enough to use at the patient level. For example, a patient with a mortality prediction of 17% and a standard error of 3% has a 95% confidence interval of roughly 11% – 23%. That’s a huge range of possibilities.
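To see why averaging over many patients helps, here is a small sketch using the standard binomial formula for the standard error of a proportion (an illustrative assumption on my part; the article does not specify how its ±3% figure was derived). With about 150 patients the interval spans roughly 11% – 23%, as in the example above; with 10,000 patients it shrinks to under a percentage point on either side.

```python
import math

def ci95(p, n):
    """Approximate 95% confidence interval for a proportion p
    estimated from n patients, using the normal approximation."""
    se = math.sqrt(p * (1 - p) / n)          # binomial standard error
    return (p - 1.96 * se, p + 1.96 * se)

lo_small, hi_small = ci95(0.17, 150)          # one patient-level-sized sample
lo_large, hi_large = ci95(0.17, 10_000)       # a hospital-sized sample

print(f"n=150:    {lo_small:.3f} – {hi_small:.3f}")   # ~0.110 – 0.230
print(f"n=10000:  {lo_large:.3f} – {hi_large:.3f}")   # ~0.163 – 0.177
```

The interval width falls with the square root of the number of patients, which is why a hospital-level average is far more trustworthy than any single patient-level prediction.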

Standardized Mortality Ratio

Averaging the mortality predictions across all of a hospital’s patients balances out the unmeasured and random factors plaguing performance, and the large number of patients affords a much smaller error range. This average prediction is then compared with the hospital’s average observed outcome. The resulting ratio is known as the Standardized Mortality Ratio (SMR), a widely used statistic. Its formula is shown below:

SMR = average outcome / average mortality prediction

Suppose a hospital has an observed mortality of 5%, while its predicted mortality is 6%. That hospital’s SMR is 5%/6% = 0.833. SMRs < 1.00 indicate the hospital is doing better than expected. Conversely, an SMR > 1.00 denotes poor performance, as would be the case if the observed mortality is 4% and the average prediction is 3.5%: an SMR = 1.14. An SMR = 1.00 means that a hospital is performing as expected.
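The arithmetic above can be sketched in a few lines; the two calls reproduce the article’s two worked examples.

```python
def smr(observed_rate, predicted_rate):
    """Standardized Mortality Ratio: observed mortality rate
    divided by the average predicted mortality rate."""
    return observed_rate / predicted_rate

print(round(smr(0.05, 0.060), 3))   # 0.833 -> better than expected
print(round(smr(0.04, 0.035), 2))   # 1.14  -> worse than expected
```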

SMRs can be used to compare hospitals. The example in Table One shows one such comparison.

Table One. Comparing hospitals based on the Standardized Mortality Ratio (SMR).


If we looked at observed mortality only, we would rank the hospitals in descending order by Hospital A, Hospital C, and Hospital B. However, armed with knowledge of the predicted mortality, the ranking would be completely reversed. That’s why it is so important to include a predictive model when comparing hospitals.
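Since Table One’s figures are not reproduced here, the ranking reversal can be illustrated with hypothetical numbers of my own (not the article’s data): the hospital with the highest observed mortality can still have the best SMR if its predicted mortality is higher still.

```python
# Hypothetical observed and predicted mortality rates (illustration only;
# not the values from Table One).
hospitals = {
    "Hospital A": {"observed": 0.12, "predicted": 0.15},  # SMR 0.80
    "Hospital B": {"observed": 0.06, "predicted": 0.05},  # SMR 1.20
    "Hospital C": {"observed": 0.09, "predicted": 0.09},  # SMR 1.00
}

# Worst-to-best by raw observed mortality alone.
by_observed = sorted(hospitals, key=lambda h: hospitals[h]["observed"],
                     reverse=True)

# Worst-to-best by SMR (observed / predicted).
by_smr = sorted(hospitals,
                key=lambda h: hospitals[h]["observed"] / hospitals[h]["predicted"],
                reverse=True)

print(by_observed)  # ['Hospital A', 'Hospital C', 'Hospital B']
print(by_smr)       # ['Hospital B', 'Hospital C', 'Hospital A']
```

With these numbers, the raw ranking reverses exactly once the predictions are taken into account.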

The SMR is far from perfect as a metric, and I’ll delve into its problems in a future article. But the takeaway here is that entities such as CMS, Consumer Reports, and U. S. News & World Report need to use sophisticated predictive models in their assessment of hospitals. To err is human, but to err grievously is inexcusable.

For more information you might want to watch:

What is an SMR? Using RPI to Explain