INTRODUCTION

In 1999, the Institute of Medicine (IOM) published its seminal report on medical errors, To Err Is Human: Building a Safer Health System.1 This widely cited analysis extrapolated from two studies of adverse events in hospitals and concluded that between 44,000 and 98,000 Americans die annually due to preventable medical error. The two referenced studies estimated deaths from medical error by first determining the frequency of adverse events in hospitals and then separately judging whether each adverse event was preventable and whether it caused harm.2, 3 More recently, a report incorporating several additional studies concluded that medical error causes more than 250,000 inpatient deaths per year in the USA, making it the third leading cause of death behind only cancer and heart disease.4

Studies that review series of admissions and determine whether adverse events occurred, whether the events were preventable, and what harms resulted have been criticized for indirectness when used to estimate the number of deaths due to medical error.5, 6 In contrast, studies of inpatient deaths offer a more direct way of estimating the rate of preventable deaths. We undertook a systematic review and meta-analysis of studies that reviewed case series of inpatient deaths and used physician review to determine the proportion of preventable deaths.

METHODS

Data Sources and Searches

We followed the Meta-analysis of Observational Studies in Epidemiology (MOOSE) and Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statements for reporting systematic reviews.7, 8 We searched for observational studies that reviewed case series of inpatient deaths and used physician review to determine the frequency of death due to preventable error. A librarian conducted literature searches for relevant articles in Ovid Medline All, Ovid Embase, Web of Science, and the Cochrane Library from database inception through April 8, 2019. Details of the full search strategy are listed in Appendix Table 4. Additional searches were performed for governmental reports using the gray literature search tool produced by the Canadian Agency for Drugs and Technologies in Health (CADTH).9 We also searched for publications by the Institute for Healthcare Improvement (IHI), The Joint Commission, and the World Health Organization. Lastly, we used Web of Science and Scopus to search conference proceedings from the American College of Physicians, the IHI National Forum, and the International Forum on Quality and Safety in Healthcare. The review protocol was registered in the PROSPERO database under registration number CRD42018095140.

Study Selection

We included all studies of case series of adult patients who died in the hospital and whose deaths were reviewed by physicians to determine whether they were preventable. Non-English studies were included and translated using Google Translate, which has been shown to be a viable tool for abstracting data for systematic reviews.10 Studies that evaluated a series of inpatient admissions to determine whether a preventable adverse event occurred, and then determined whether that adverse event contributed to death, such as those included in the 1999 Institute of Medicine report, were excluded. We primarily searched for studies of consecutive or randomly selected inpatient deaths, but we also included studies of cohorts with selection criteria, which we analyzed separately. Studies limited to specific populations such as pediatric, trauma, or maternity patients were excluded because our primary research question was the overall rate of preventable mortality in hospitalized patients, and findings from these populations are less generalizable. Two authors (BR and CG) screened all titles and abstracts to select articles for full text review and then independently reviewed the full texts for inclusion. Citations of the included studies were also screened. Disagreements were resolved by consensus.

Data Extraction and Quality Assessment

A standardized data extraction form was used to document study characteristics, including year of publication; country of study; setting; number of deaths reviewed; cohort description; definition of preventable mortality; method of determination of preventability; whether errors prior to hospitalization were included; and inter-rater reliability for preventability. Data were extracted independently by two authors (BR and CG). Two authors (BR and LB) independently assessed study quality using a modified Newcastle-Ottawa Quality Assessment Scale (NOS) for Cohort Studies (Appendix Table 5).11 Differences were resolved by consensus. Three of the original NOS components were not relevant given the designs of the included studies. Instead, we added components for whether two or more reviewers independently assessed the preventability of the deaths and reported a measure of inter-rater reliability, and whether clear criteria for preventability were defined along with a method of arbitration for differences of opinion between reviewers. In determining the rate of preventable deaths, we included deaths that were determined by a majority of reviewers to have a > 50% likelihood of preventability, as illustrated in the sketch below. Lastly, when possible, we limited the assessment of preventability to events that occurred during the hospitalization rather than events that occurred prior to it.
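The following is a minimal sketch of the decision rule described above (a hypothetical helper written for illustration, not code from any included study): a death is counted as preventable when a majority of independent reviewers rate its likelihood of preventability above 50%.

```python
# Hypothetical illustration of the preventability decision rule: a death is
# counted as preventable when a majority of reviewers judge its likelihood of
# preventability to be greater than 50%.
def is_preventable(reviewer_probs, threshold=0.5):
    """reviewer_probs: each reviewer's judged probability that the death was preventable."""
    votes = sum(p > threshold for p in reviewer_probs)
    return votes > len(reviewer_probs) / 2

print(is_preventable([0.7, 0.6, 0.3]))  # True: two of three reviewers rate > 50%
print(is_preventable([0.7, 0.4, 0.3]))  # False: only one reviewer rates > 50%
```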

Data Synthesis and Analysis

The preventable mortality rates and 95% confidence intervals (CIs) for the included studies were calculated using the score method.12 Mortality rates were pooled using a random-effects model as described by DerSimonian and Laird13 with the Freeman-Tukey double arcsine transformation.14 Between-study heterogeneity was estimated using the I2 statistic. Statistical analysis was performed using the metaprop command in Stata/MP, version 15.1 (StataCorp, College Station, Texas).
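To make the analytic steps concrete, the sketch below (our illustration; the published analysis was run with the metaprop command in Stata, and the study counts here are hypothetical) shows the score (Wilson) confidence interval for an individual study's proportion, the Freeman-Tukey double arcsine transformation, DerSimonian-Laird random-effects pooling, and the I2 statistic.

```python
# Illustrative sketch (not the authors' code) of the methods described above.
import numpy as np

def wilson_ci(x, n, z=1.96):
    """Score (Wilson) 95% CI for a single proportion x/n."""
    p = x / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def freeman_tukey(x, n):
    """Double arcsine transform of each proportion and its approximate variance."""
    t = 0.5 * (np.arcsin(np.sqrt(x / (n + 1))) + np.arcsin(np.sqrt((x + 1) / (n + 1))))
    return t, 1.0 / (4 * n + 2)

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate, its variance, and I2 (in %)."""
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)          # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                   # between-study variance
    w_star = 1.0 / (variances + tau2)
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    i2 = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, 1.0 / np.sum(w_star), i2

# Hypothetical counts: preventable deaths x out of n reviewed deaths per study.
x = np.array([10, 25, 40])
n = np.array([400, 800, 1200])

t, v = freeman_tukey(x, n)
pooled_t, var_t, i2 = dersimonian_laird(t, v)

# Back-transform with the simple sin^2 inversion (an approximation; metaprop
# uses a harmonic-mean inversion of the double arcsine).
lo = np.sin(pooled_t - 1.96 * np.sqrt(var_t)) ** 2
hi = np.sin(pooled_t + 1.96 * np.sqrt(var_t)) ** 2
print(f"Pooled rate {np.sin(pooled_t)**2:.3f} (95% CI {lo:.3f}-{hi:.3f}), I2 = {i2:.0f}%")
print("Study-level score CIs:", [wilson_ci(xi, ni) for xi, ni in zip(x, n)])
```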

RESULTS

The initial electronic search found 3835 manuscripts, of which 731 were duplicates (Fig. 1). Of the 3104 manuscripts reviewed, 3011 were excluded based on title and abstract review, leaving 93 for full text review. Of these, 16 met criteria for study inclusion. The main reasons studies were excluded after full text review were that they did not use the correct study design or did not report original data. Of the 16 included studies, eight were of random or consecutive groups of patients and eight were of cohorts with selection criteria. The characteristics of the eight random or consecutive cohorts are shown in Table 1. The 2015 study by Hogan included patients who were previously reported by the same group in 2012; only the new patients in the 2015 study are presented in Table 1 to avoid duplication. Of the eight studies, three were from the UK,16, 17, 19 two from Norway,20, 22 and one each from Denmark,15 the Netherlands,18 and Canada.21 Four of the studies included patients from multiple hospitals.16, 18, 19, 20 All studies defined preventable deaths as those rated as having a greater than 50% chance of being preventable. Seven of the studies used a Likert scale to define preventability15,16,17,18,19,20, 22 while one used a scale of 0–100%.21 Five studies used multiple reviewers, of which three used consensus to arbitrate differences of opinion,15, 17, 20 one used a third reviewer,18 and one used latent class analysis.21 Six of the studies included adverse events prior to admission.16,17,18,19, 21, 22 Only the study by Hogan in 2012 separately reported the number of preventable deaths that occurred during the admission alone versus the number related to events prior to admission. That study found that 13 of 52 (25%) preventable deaths were related to prior outpatient events.

Figure 1 Flow diagram of study selection.

Table 1 Characteristics of Included Studies from Random or Consecutive Cohorts

Table 2 describes the eight included studies of cohorts with selection criteria. Three of these studies were from the USA,23, 24, 28 two from France,25, 26 and one each from the UK,27 the Netherlands,30 and Taiwan.29 These cohorts had varied selection criteria. Four included only patients with certain diagnoses, generally selected to increase the number of possible preventable deaths.23, 24, 27, 28 Two studies included only emergency department admissions: one was limited to patients not admitted to an intensive care unit (ICU) who died within 72 h,26 and the other included only patients who died within 24 h of admission.29 One study was limited to ICU deaths.25 Lastly, one study included only deceased patients who were positive for certain “trigger tool” events such as unplanned readmissions, unplanned transfer to intensive care, or healthcare-related adverse events.30 Five studies defined preventable deaths as those rated as having a greater than 50% chance of being preventable.23,24,25,26,27 The three remaining studies used different definitions: one included deaths that were “somewhat likely” to have been preventable,28 one included deaths that were “potentially” preventable,30 and one used a yes/no determination of preventability.29

Table 2 Characteristics of Included Studies of Cohorts with Selection Criteria

Table 3 describes the study quality assessment based on a modified NOS. Study quality was assessed on seven elements, each of which could earn one point if certain standards were met. The eight studies with random or consecutive cohorts all received 6 or 7 points, depending on whether two reviewers independently assessed deaths for preventability and inter-rater reliability was reported. The studies of cohorts with selection criteria had more varied NOS scores, ranging from 4 to 6. All lost points for lack of representativeness, and several lost points for using only one reviewer or not reporting inter-rater reliability.25, 27, 30 Lastly, two studies lost points for lack of clarity around the determination of preventability.29, 30

Table 3 Study Quality Assessment

Figure 2 shows the pooled and individual study preventable mortality rates for hospitalized patients, using random-effects meta-analysis of the eight studies with consecutive or randomly selected cohorts. The overall pooled rate was 3.1% (95% CI 2.2–4.1%). Individual studies ranged from 1.4 to 4.4% preventable mortality, with statistically significant evidence of heterogeneity (I2 = 84%, p < 0.001). The eight studies with selection criteria reported rates of preventable mortality ranging from 0.5 to 26.9%. One study from 1988 reported that 26.9% of 182 deaths from myocardial infarction, stroke, or pneumonia were > 50% likely to have been preventable.23 A study which evaluated 124 patients from the emergency department who died within 24 h of admission found that 25.8% of these deaths could have been prevented.29 Another study, from 1994, reported that 21.6% of 22 deaths from certain diagnostic groups were at least “somewhat likely” to have been preventable.28 A large recent study from the Netherlands reported 9.4% of 2182 deaths as “potentially preventable.”30 The remaining studies with selection criteria reported rates of 0.5–6.2% preventable deaths.

Figure 2 Differences in rates of preventable inpatient mortality.

One study of consecutive patients21 and one with selection criteria24 reported rates of preventable mortality limited to patients expected to live longer than 3 months. The rate of preventable mortality for these patients ranged from 0.5 to 1.0%.

Appendix Table 6 lists the types of error which led to preventable death from the included studies. Six of the studies listed the frequency of different types of error which led to preventable death,16,17,18, 22, 23, 29 and four provided a narrative summary of the possibly preventable events leading to death but did not further categorize them.15, 20, 21, 26 The remaining studies provided no information on the types of errors which led to preventable deaths. The most common errors which led to preventable deaths were clinical monitoring and management issues, diagnostic errors, surgery/procedural complications, drug- or fluid-related complications, and errors related to infections or antibiotic choice.

DISCUSSION

Overall, our systematic review identified eight studies that reviewed case series of consecutive or randomly selected inpatient deaths; pooled across these studies, 3.1% of 12,503 deaths were judged to have been preventable. Additionally, two studies reported rates of preventable deaths for patients with at least 3 months' life expectancy, finding that between 0.5 and 1.0% of these deaths were preventable. If these rates are multiplied by the number of annual deaths of hospitalized patients in the USA, our estimates equate to approximately 22,165 preventable deaths annually, and up to 7150 preventable deaths among patients with greater than 3 months' life expectancy.31
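As a back-of-the-envelope check (our arithmetic, not the authors' calculation), the reported figures imply a denominator of roughly 715,000 annual US inpatient deaths:

```python
# The ~715,000 annual US inpatient deaths is the denominator implied by the
# reported figures (22,165 / 0.031); the two rates are then applied to it.
annual_inpatient_deaths = 22_165 / 0.031        # ~715,000 deaths per year
print(round(0.031 * annual_inpatient_deaths))   # ~22,165 preventable deaths per year
print(round(0.010 * annual_inpatient_deaths))   # ~7,150 among patients with >3 months' life expectancy
```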

These results contrast with earlier estimates of medical error, which reported higher rates of preventable mortality. The IOM report and similar subsequent reviews reported much higher estimates.4 Numerous authors have criticized these prior estimates on varied methodologic grounds,5, 6 including poorly described methods for determining preventability and causality of death, as well as indirectness: these studies primarily attempt to define the incidence of adverse events in series of hospitalized patients and only secondarily estimate the likelihood that the adverse event was preventable and the likelihood that the adverse event, rather than underlying disease, caused the patient's death. The studies we reviewed have the advantage of using as their denominator a series of inpatient deaths rather than admissions and of directly assessing those deaths for preventability.

Our finding that only a small fraction of hospital deaths is preventable, and that many preventable deaths occur in patients with less than 3 months' life expectancy, has several implications. The first relates to patient safety. We agree with others that recent higher estimates of preventable deaths are not plausible and may have undermined the credibility of the patient safety movement as well as diverted attention from other patient safety priorities.5

Another important implication of our study relates to the use of hospital mortality rates as quality measures. Overall hospital mortality rates and disease-specific mortality rates continue to be reported in many countries in Europe and the USA.32, 33 In the USA, overall hospital mortality rates are reported by the Veterans Health Administration and disease and procedure-specific mortality rates are used by the Centers for Medicare and Medicaid Services (CMS). Disease-specific mortality rates are also used to determine hospital reimbursement as part of CMS’ Hospital Value-Based Purchasing Program. Our results show that the large majority of inpatient deaths are not due to preventable medical error. Given this finding, variation in hospital mortality rates is more likely due to variation in disease severity and non-disease-related factors that affect the location of a patient’s death. Although disease severity is taken into account through the reporting of adjusted mortality rates, numerous critiques have pointed out the limitations of this approach.34,35,36,37

Even if disease patterns and severity were uniform, however, there would likely be variation in hospital mortality rates because of variation in the use of hospitals at the end of life.28, 37 If it is assumed that the vast majority of hospital deaths are unavoidable, then variation in inpatient mortality should be seen as a measure of where patients die, rather than whether they die. Numerous studies have found that many non-disease-related factors affect location of death, including referral to palliative care, home support, living situation, functional status, and patient and family preferences.38

Our study has several limitations. The most important is that the included studies rely on physician judgment to determine whether a death was preventable. The true accuracy of this method is unknown because there is no alternative gold standard. Physician review has multiple potential biases that could affect the estimated rate of preventable death, such as hindsight bias after poor outcomes.16, 24, 39 Physician reviews are also based on retrospective chart review, which may lack important information relevant to management and death. Another well-described limitation of physician review is poor inter-rater reliability.32 Indeed, the inter-rater reliability for preventable death in the included studies was generally in the fair-to-moderate range.

Another limitation of our study was the finding of high statistical heterogeneity, which is likely due to clinical and methodologic differences among the included studies. The study by Flaatten et al., for example, had the lowest rate of preventable mortality among the studies in the meta-analysis. The authors speculated that this was because Norway has no separate inpatient hospice system, so patients are frequently admitted for end-of-life care. These patients were routinely found to have had non-preventable deaths, which may have contributed to the low overall rate of preventable mortality compared with studies from countries that have separate hospice systems. The limited number of studies in our review precluded subgroup analysis or meta-regression.

A further limitation is the limited geographic representation, owing to a lack of studies from the USA. The eight studies included in the meta-analysis are from Europe and Canada. The three studies from the USA were not included in the meta-analysis because they used selected cohorts of patients with an oversampling of specific conditions, and thus per protocol were not pooled with studies of consecutive or randomly selected cohorts. Another limitation of the included studies is that six of the eight studies in the meta-analysis included medical errors from prior admissions or prior outpatient care. This could lead to an overestimation of the number of preventable deaths resulting from events during the index hospitalization. Only the study by Hogan in 2012 separately reported the number of preventable deaths that occurred during the admission alone versus the number that included preventable events prior to the admission.16 That study found that 25% of preventable deaths were related to prior outpatient events, meaning that our estimate may overstate the number of deaths preventable through in-hospital care by as much as 25%. In light of policy changes that focus on episodes of care rather than isolated admissions, further research is needed on preventable errors prior to and after hospitalization that subsequently result in death.

Lastly, our study is limited only to preventable mortality and does not include estimates of preventable morbidity, another critical aspect of patient safety. While the rate of preventable mortality in hospitalized patients is lower than is often reported, it still represents what would be the 15th leading cause of death in the USA40 and deserves the continued attention of clinicians, hospital administrators, and policy makers.

In conclusion, we found that 3.1% of inpatient deaths are judged by physician review to have been preventable. This rate is lower than previous estimates and implies that the vast majority of inpatient deaths are due to underlying disease and not due to variation in quality of care. This has implications for the use of hospital mortality rates for quality reporting and reimbursement.