Open Access (CC BY-NC-ND 3.0 license). Published by De Gruyter, December 6, 2014

Missed diagnoses of acute myocardial infarction in the emergency department: variation by patient and facility characteristics

  • Ernest Moy, Marguerite Barrett, Rosanna Coffey, Anika L. Hines, and David E. Newman-Toker
From the journal Diagnosis

Abstract

Background: An estimated 1.2 million people in the US have an acute myocardial infarction (AMI) each year. An estimated 7% of AMI hospitalizations result in death. Most patients experiencing acute coronary syndromes, such as unstable angina, visit an emergency department (ED). Some patients hospitalized with AMI after a treat-and-release ED visit likely represent missed opportunities for correct diagnosis and treatment. The purpose of the present study is to estimate the frequency of missed AMI or its precursors in the ED by examining use of EDs prior to hospitalization for AMI.

Methods: We estimated the rate of probable missed diagnoses in EDs in the week before hospitalization for AMI and examined associated factors. We used Healthcare Cost and Utilization Project State Inpatient Databases and State Emergency Department Databases for 2007 to evaluate missed diagnoses in 111,973 admitted patients aged 18 years and older.

Results: We identified missed diagnoses in the ED for 993 of 111,973 patients (0.9% of all AMI admissions). These patients had visited an ED with chest pain or cardiac conditions, were released, and were subsequently admitted for AMI within 7 days. Higher odds of a missed diagnosis were associated with younger age and Black race. Hospital teaching status, availability of cardiac catheterization, high ED admission rates, high inpatient occupancy rates, and urban location were associated with lower odds of missed diagnoses.

Conclusions: Administrative data provide robust information that may help EDs identify populations at risk of experiencing a missed diagnosis, address disparities, and reduce diagnostic errors.

Introduction

An estimated 1.2 million people in the US have an acute myocardial infarction (AMI) each year [1]. In 2009, inpatient hospital costs for AMI were nearly 12 billion dollars [2]. Although overall AMI mortality rates are declining [3], about 7% of all AMI hospitalizations in 2007 resulted in death [1, 4].

Most patients experiencing an acute coronary syndrome (ACS) (i.e., AMI or a precursor such as unstable angina) come through an emergency department (ED). EDs commonly evaluate patients presenting with chest pain and other symptoms suggestive of ACS with electrocardiograms and biochemical diagnostic tests. After evaluation, patients considered at high risk of AMI are hospitalized or held for observation until an AMI can be diagnosed or excluded, and patients considered at low risk are released with outpatient follow-up.

The decision to hospitalize or release is not always clear. Previous studies have estimated that 2% to 8% of patients with AMI are not diagnosed in the ED and are inadvertently released home [5–9]. These studies have been small – usually including fewer than a dozen hospitals. They have also focused on the characteristics of patients who are more likely to have missed diagnoses of AMI: women younger than 55 years, patients who are not White, and those who present with atypical features of cardiac ischemia [7, 10, 11]. Additional research with a larger number of patients would yield more generalizable estimates.

Some patients hospitalized with AMI after a treat-and-release ED visit likely represent missed opportunities for correct diagnosis and treatment [12]. Although the effects of missed AMI diagnoses are not completely understood, some studies have found a nearly two-fold increase in the risk of death [7]. Tracking rates of these missed diagnoses might allow providers to target specific patients and policy makers to target specific facilities for improvement. Furthermore, the ability to track missed diagnoses across a range of symptoms and problems would facilitate public health prioritization efforts to reduce misdiagnosis and mitigate harms [13].

The purpose of the present study is to estimate the frequency of missed AMI or its precursors (e.g., unstable angina) in the ED by examining use of EDs prior to hospitalization for AMI. We focus on patients evaluated for chest pain or cardiac conditions within 1 week of hospitalization; these patients were the most likely to have had missed opportunities for diagnosis and intervention that might have reduced their risk for AMI. We use administrative data from the Healthcare Cost and Utilization Project (HCUP) – a family of databases that encompasses discharge data for over 95% of US hospitalizations [14]. We estimate the overall rate of missed diagnoses and examine the association between missed diagnoses and patient, ED, and hospital characteristics.

Materials and methods

Definition of misdiagnosis

Definitions and standards for describing diagnostic failures vary [13]. We defined misdiagnosis as a diagnostic error; that is, a diagnosis that is “missed, wrong, or delayed, as detected by some subsequent definitive test or finding” [15]. We focused on probable missed AMI, using hospital admission with a discharge diagnosis of AMI as the subsequent definitive test. We looked back in time from these index admissions for patients whose symptoms were probably missed or misdiagnosed at a recent ED visit. We did not distinguish among missed, wrong, or delayed diagnoses or differentiate between misdiagnosis and diagnostic error. Because we did not have detailed clinical data, we could not examine potential diagnostic process failures, preventability of the missed AMI, or potential harm resulting from the diagnostic error.

Study design

We conducted a retrospective, cross-sectional analysis of probable missed AMI using linked inpatient discharge records and ED visit records. We identified patients hospitalized for AMI in inpatient data. Then, we identified the patients who had been treated and released from an ED in the preceding 7 days in linked ED data. Data were prepared and analyzed consistent with Health Insurance Portability and Accountability Act (HIPAA) privacy rules as described below.

This is a retrospective study using administrative data with synthetic person identifiers. No human subjects were involved in the preparation of this manuscript, and no IRB approval was required.

Data source

Our analysis used the 2007 HCUP State Inpatient Databases (SID) [16], a census of inpatient discharge records, and the 2007 HCUP State Emergency Department Databases (SEDD) [17], a census of hospital-affiliated ED visits that did not result in hospitalization. ED visits that resulted in hospitalization are captured in the SID. We linked individuals across inpatient and ED settings of care using a synthetic person identifier that state data organizations had derived from each patient’s personal information. Synthetic person identifiers can be used to track patients across hospitals and settings while satisfying strict privacy guidelines [18]. We included in the study the states with SID and SEDD data as well as reliable, encrypted person identifiers and race and ethnicity coding. The resulting data came from nine states (Arizona, Florida, Massachusetts, Missouri, New Hampshire, New York, South Carolina, Tennessee, and Utah) comprising 797 EDs.

Study population

We included records of 111,973 patients aged 18 years and older who had an AMI index admission between February and December 2007. We identified AMI admissions using Clinical Classifications Software (CCS), which groups International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes into clinically meaningful categories [19]. We required that patients have a principal diagnosis of AMI, as determined by CCS code 100. To adhere closely to the aforementioned definition of “diagnostic error”, we excluded from the analysis patients who left the ED against medical advice (<1% of all AMI index admissions with a prior ED visit and 13% of AMI index admissions with a prior ED visit for cardiac symptoms). We also excluded patients with missing ZIP-Code-level income information (2.9% of AMI index admissions).

Variables

Missed diagnoses

Hospitalization for AMI following an ED visit does not necessarily indicate that an opportunity for diagnosis or treatment was missed. A patient’s ED diagnosis could be unrelated to the subsequent AMI. To minimize this problem, we focus on patients who visited an ED with chest pain or cardiac conditions, were released from the ED, subsequently returned to a hospital within 0 to 7 days, and were admitted with a principal diagnosis of AMI. Patients were not counted in the category of missed diagnoses if, on their initial ED visit, they were admitted to the same hospital through the ED or transferred to another hospital. Consistent with previous studies [5–9] and to make a conservative attribution of the AMI to the earlier symptom, we applied a maximum cutoff of 7 days between the ED visit and an inpatient admission.
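The linkage rule described above can be sketched in a few lines of pandas. The field names here (person_id, ed_date, admit_date, released) are illustrative, not actual HCUP SID/SEDD variable names:

```python
# Sketch of the missed-diagnosis flag: a treat-and-release ED visit
# 0-7 days before an AMI admission for the same (synthetic) person ID.
import pandas as pd

ed = pd.DataFrame({
    "person_id": [1, 2, 3],
    "ed_date": pd.to_datetime(["2007-03-01", "2007-03-05", "2007-02-20"]),
    "released": [True, True, False],   # treat-and-release vs. admitted/transferred
})
ami = pd.DataFrame({
    "person_id": [1, 2, 3],
    "admit_date": pd.to_datetime(["2007-03-06", "2007-03-20", "2007-02-21"]),
})

linked = ami.merge(ed, on="person_id", how="left")
gap = (linked["admit_date"] - linked["ed_date"]).dt.days
# Flag only released patients whose ED visit fell within the 7-day window
linked["missed"] = linked["released"] & gap.between(0, 7)
print(linked["missed"].tolist())  # [True, False, False]
```

Patient 2 falls outside the 7-day window and patient 3 was admitted at the initial visit, so neither is counted as a missed diagnosis.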

We based our list of cardiac conditions on those used in previous clinical studies and on preliminary examination of CCS codes. Using CCS codes for the first-listed diagnosis in the SEDD, we identified ED diagnoses preceding admission for AMI. Table 1 lists the ED diagnoses for patients with an ED visit in the 7 days before their AMI admission. About 70% of these patients were diagnosed with one of four cardiac conditions: (1) nonspecific chest pain (45.5%); (2) coronary atherosclerosis and other heart disease (15.4%); (3) congestive heart failure (5.4%); and (4) cardiac dysrhythmias (3.3%). Another 15% were diagnosed with other lower respiratory disease (8.9%) or abdominal pain (6.6%). Each remaining condition accounted for <3% of these ED visits. Although some ED visits for lower respiratory disease, abdominal pain, and other diagnoses may represent atypical presentations of heart disease, we restricted our definition of missed diagnoses to ED visits for the four cardiac conditions.

Table 1

Emergency department diagnoses among patients released from an emergency department in the week before hospitalization for acute myocardial infarction.

CCS Code   Description                                        Frequency, n   Frequency, %
102a       Nonspecific chest pain                                  649           45.48
101a       Coronary atherosclerosis and other heart disease        220           15.42
133        Other lower respiratory disease                         127            8.90
251        Abdominal pain                                           94            6.59
108a       Congestive heart failure; non-hypertensive               77            5.40
106a       Cardiac dysrhythmias                                     47            3.29
138        Esophageal disorders                                     36            2.52
245        Syncope                                                  32            2.24
155        Other gastrointestinal disorders                         30            2.10
98         Essential hypertension                                   30            2.10
252        Malaise and fatigue                                      25            1.75
93         Conditions associated with dizziness or vertigo          24            1.68
100        Acute myocardial infarction                              20            1.40
140        Gastritis and duodenitis                                 16            1.12
Total                                                             1427          100.00

CCS, Agency for Healthcare Research and Quality Clinical Classifications Software. aED code that was considered a missed diagnosis. Source: Agency for Healthcare Research and Quality, Healthcare Cost and Utilization Project (HCUP), State Inpatient Databases and State Emergency Department Databases, nine states combined (Arizona, Florida, Massachusetts, Missouri, New Hampshire, New York, South Carolina, Tennessee, and Utah), 2007.

As a validation check to ensure coherence between our misdiagnosis construct and the results we report, we assessed the temporal profile of missed diagnoses and controls within the 7-day window prior to index admission for AMI. We hypothesized that revisits for symptoms designated as probable missed diagnoses would be clustered in the days immediately following an ED visit. We expected revisits for designated control symptoms (i.e., other lower respiratory disease, abdominal pain, esophageal disorders, syncope, other gastrointestinal disorders, essential hypertension, malaise and fatigue, dizziness or vertigo, or gastritis) to be more evenly dispersed across the 7-day period.
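As an illustration of this check, once each treat-and-release visit is labeled by its CCS code group, the day-gap distributions for the two groups can be compared directly (toy data; codes as in Table 1):

```python
# Toy temporal-profile check: days between the ED visit and the AMI admission,
# compared between "missed" cardiac codes and control codes.
import pandas as pd

visits = pd.DataFrame({
    "ccs": [102, 102, 102, 101, 251, 251, 133, 138],
    "days_before_admit": [1, 1, 2, 1, 6, 3, 5, 7],
})
cardiac = {100, 101, 102, 106, 108}   # CCS codes counted as missed diagnoses
visits["group"] = visits["ccs"].isin(cardiac).map({True: "missed", False: "control"})

# Missed-diagnosis visits should cluster close to the admission date;
# control visits should be spread across the 7-day window.
print(visits.groupby("group")["days_before_admit"].mean())
```

In this toy example the mean gap is 1.25 days for the cardiac codes versus 5.25 days for the controls, mirroring the clustering pattern shown in Figure 1.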

Patient characteristics

We categorized patients by age, sex, race and ethnicity (non-Hispanic White, non-Hispanic Black, Hispanic, other non-Hispanic), and expected primary payer (private insurance, Medicare, Medicaid, other insurance, uninsured). We also classified patients by the national quartile of median household income based on the patient’s ZIP Code. We used Elixhauser Comorbidity Software to adjust for patient comorbidities [20].

Facility characteristics

We assigned hospital and ED characteristics based on the facility that initially treated the patient for AMI symptoms. For patients with an ED visit prior to the AMI admission, we used the characteristics of the ED that treated and discharged the patient. For patients without a previous ED visit, we used the characteristics of the admitting facility. Facility-specific characteristics included: (1) region (Northeast, Midwest, South, West); (2) population size (large metropolitan area, small metropolitan area, micropolitan area, and rural area); (3) hospital ownership (public, private not-for-profit, private for-profit); (4) the availability of a cardiac catheterization lab, as reported to the American Hospital Association; and (5) the facility’s teaching status (teaching or non-teaching). A teaching facility was defined as having an approved American Medical Association accredited graduate medical education program, being a member of the Council of Teaching Hospitals, or having a ratio of full-time-equivalent interns and residents to beds that was 0.25 or higher.

Facility volume characteristics

We characterized hospitals according to the volume of patients seen in the ED and inpatient settings. To assess volume, we divided each of the following into tertiles: (1) total volume of ED visits during the year, (2) the proportion of patients admitted to an inpatient setting from the ED, and (3) the annual occupancy rate as reported to the American Hospital Association.
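A tertile split of, for example, annual ED visit volume can be produced with pandas `qcut` (the volumes below are illustrative, not the study's actual cut points):

```python
# Divide annual ED visit volumes into tertiles (low / medium / high).
import pandas as pd

ed_volume = pd.Series([12_000, 30_000, 45_000, 58_000, 70_000, 90_000])
tertile = pd.qcut(ed_volume, q=3, labels=["low", "medium", "high"])
print(list(tertile))  # ['low', 'low', 'medium', 'medium', 'high', 'high']
```

`qcut` places an equal number of facilities in each bin, which is how tertile categories such as "Low (<34,681 visits)" in Table 2 arise.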

Visit characteristics

We also accounted for visit-specific characteristics in the analyses. These included: (1) visit occurrence on a weekend or weekday; (2) visit occurrence during the first half (July–December) or second half (January–June) of the traditional resident training year; and (3) the relative ED volume on the day of the visit. We calculated relative ED volume as the total number of visits to the ED on the day of the patient’s visit, divided by the maximum number of visits to the ED on any day during the year.
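The relative-volume measure is a simple normalization; a minimal sketch (dates and visit counts hypothetical):

```python
# Relative ED volume: visits on the day of the patient's visit divided by
# the maximum daily visit count for that ED during the year.
daily_visits = {"2007-03-05": 140, "2007-07-04": 200, "2007-11-20": 90}
max_daily = max(daily_visits.values())
relative = {day: n / max_daily for day, n in daily_visits.items()}
print(relative["2007-03-05"])  # 0.7
```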

Hospital disposition

We examined rates of mortality for patients with and without a missed diagnosis of AMI based on the patient’s discharge disposition.

Statistical analysis

We performed statistical analyses using SAS (SAS Institute, Inc; Cary, NC, USA) statistical software Version 9.2. We used hierarchical multi-level modeling [21–23] to examine the likelihood of missed diagnoses, adjusting for patient and facility characteristics. We fit the simplest form of a hierarchical model using SAS PROC GLIMMIX, where we investigated patient visits nested within the EDs. We also included hospital fixed effects to control for unobservable characteristics, such as ED technology, infrastructure, culture, personal biases, and other factors that could not be measured directly at each ED.
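The study's model was fit in SAS PROC GLIMMIX. As a rough, simplified sketch of the same idea in Python, one can regress the missed-diagnosis indicator on patient and facility covariates with ED-level fixed effects; the synthetic data and variable names below are illustrative, and this stand-in omits the random-effects (visits-nested-within-EDs) structure of the published analysis:

```python
# Simplified stand-in for the hierarchical logistic model: ordinary logistic
# regression with ED fixed effects on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "missed": rng.integers(0, 2, n),                              # 1 = probable missed diagnosis
    "age_group": rng.choice(["18-44", "45-64", "65-74", "75plus"], n),
    "teaching": rng.integers(0, 2, n),                            # 1 = teaching hospital
    "ed_id": rng.integers(0, 5, n),                               # ED identifier (5 EDs here)
})

# C(ed_id) adds ED fixed effects, standing in for unobserved facility factors.
model = smf.logit("missed ~ C(age_group) + teaching + C(ed_id)", data=df).fit(disp=0)
odds_ratios = np.exp(model.params)   # exponentiated coefficients, as in Table 3
```

With real data, the coefficient on a covariate such as `teaching` would be exponentiated to give the adjusted odds ratio reported in Table 3.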

Results

Figure 1 displays the results of the temporal profile analysis of probable missed diagnoses and controls. As hypothesized, treat-and-release ED visits for missed diagnoses were clustered in the few days before the hospital admission for AMI. In contrast, visits for control symptoms were more evenly distributed throughout the 7-day period.

Figure 1

Temporal profile of initial emergency department treat-and-release visits for probable misdiagnosesa vs. control conditionsb in the 7 days prior to an index acute myocardial infarction admission.

aProbable misdiagnosed cardiac conditions: non-specific chest pain, coronary atherosclerosis and other heart disease, non-hypertensive congestive heart failure, and cardiac dysrhythmias. bControl conditions: other lower respiratory disease, abdominal pain, esophageal disorders, syncope, other gastrointestinal disorders, essential hypertension, malaise and fatigue, conditions associated with dizziness or vertigo, and gastritis and duodenitis. Abbreviations: AMI, acute myocardial infarction; ED, emergency department; GI, gastrointestinal; IP, inpatient. Source: Agency for Healthcare Research and Quality, Healthcare Cost and Utilization Project (HCUP), State Inpatient Databases and State Emergency Department Databases, nine states combined (Arizona, Florida, Massachusetts, Missouri, New Hampshire, New York, South Carolina, Tennessee, and Utah), 2007.

Table 2 displays the descriptive characteristics of the patients with missed diagnoses in the ED and patients with AMI and no ED visit for chest pain or a cardiac condition within the previous 7 days. We identified 993 patients (0.9% of all AMI admissions) with missed diagnoses. State-level estimates ranged from 0.29% to 1.96%. Compared to patients without a preceding ED visit for a cardiac condition, patients with missed diagnoses were younger; more likely to be Black; less likely to be Hispanic; more likely to have private insurance, Medicaid, or no insurance; more likely to reside in areas with the lowest household incomes; and less likely to die in the hospital. There were 15 significant differences in comorbidities between patients with and without missed diagnoses; of these, 13 comorbidities were more common in patients who did not have missed diagnoses. Also, most patients with missed diagnoses were seen at non-teaching hospitals and visited EDs without cardiac catheterization capabilities.

Table 2

Descriptive characteristics of patients with and without missed diagnoses before acute myocardial infarction admission and hospitals where they were treateda in 2007.

Data elements                                        Had missed diagnoses (n=993)   Did not have missed diagnoses (n=110,980)   p-Value
                                                     Mean, %   SD                   Mean, %   SD
Patient characteristics
 Age, years                                          62.82     15.34                68.06     14.76                             <0.001
 Female                                              39.48     48.90                39.94     48.98                             0.667
 Race/ethnicity
  White (non-Hispanic)                               75.53     43.01                76.85     42.18                             0.162
  Black (non-Hispanic)                               10.88     31.15                 8.26     27.53                             <0.001
  Hispanic                                            4.33     20.36                 6.53     24.71                             <0.001
  Other (non-Hispanic)                                9.26     29.01                 8.36     27.68                             0.155
 Expected primary payer
  Private insurance                                  33.13     47.09                27.41     44.61                             <0.001
  Medicare                                           46.02     49.87                58.48     49.28                             <0.001
  Medicaid                                            8.36     27.69                 5.16     22.12                             <0.001
  Uninsured                                           9.47     29.29                 6.27     24.24                             <0.001
  Other                                               3.02     17.13                 2.67     16.13                             0.355
 Median household income by ZIP Code
  Highest (>$61,000)                                 12.69     33.30                19.07     39.28                             <0.001
  Moderate ($46,000–$60,999)                         18.63     38.95                21.97     41.41                             <0.001
  Low ($37,000–$45,999)                              25.18     43.42                25.97     43.84                             0.408
  Lowest (<$36,999)                                  43.50     49.60                33.00     47.02                             <0.001
 Patient unrelated comorbidities
  Paralysis                                           1.01      9.99                 1.55     12.36                             0.013
  Other neurological disorder                         4.83     21.46                 5.97     23.70                             0.016
  Diabetes without chronic complications             24.57     43.07                27.07     44.43                             0.008
  Diabetes with chronic complications                 3.93     19.43                 4.96     21.72                             0.015
  Hypothyroidism                                      7.55     26.44                 8.99     28.61                             0.013
  Renal failure                                      12.39     32.96                16.08     36.73                             <0.001
  Liver failure                                       1.31     11.37                 1.05     10.18                             0.294
  AIDS                                                0.10      3.17                 0.10      3.12                             0.961
  Lymphoma                                            0.50      7.08                 0.51      7.10                             0.985
  Metastatic cancer                                   0.60      7.75                 1.00      9.93                             0.022
  Solid tumor without metastasis                      1.31     11.37                 1.49     12.11                             0.473
  Rheumatoid arthritis                                1.91     13.71                 2.06     14.20                             0.631
  Coagulopathy                                        3.12     17.40                 4.00     19.60                             0.022
  Obesity                                            11.08     31.40                 8.94     28.54                             0.002
  Weight loss                                         0.91      9.48                 1.54     12.33                             0.002
  Fluid and electrolyte disorder                     12.79     33.41                18.29     38.66                             <0.001
  Chronic blood loss anemia                           0.81      8.94                 1.36     11.59                             0.005
  Alcohol disorder                                    2.42     15.37                 2.75     16.35                             0.325
  Deficiency anemias                                 10.17     30.24                14.56     35.27                             <0.001
  Drug abuse                                          2.92     16.85                 1.75     13.10                             0.001
  Psychoses                                           1.31     11.37                 1.86     13.52                             0.028
  Depression                                          5.24     22.29                 5.90     23.56                             0.175
Facility characteristics
 Hospital region
  Northeast                                          26.99     44.41                33.95     47.36                             <0.001
  Midwest                                            16.82     37.42                 9.89     29.85                             <0.001
  South                                              49.85     50.02                47.22     49.92                             0.017
  West                                                6.34     24.39                 8.93     28.52                             <0.001
 Population size
  Large metropolitan area (residents ≥1 million)     33.84     47.34                56.13     49.62                             <0.001
  Small metropolitan area (50,000 < residents < 1 million)  31.32  46.40            36.30     48.09                             <0.001
  Micropolitan area (10,000 < residents < 50,000)    20.24     40.20                 6.28     24.25                             <0.001
  Non core-based area (residents <10,000)            14.60     35.33                 1.30     11.34                             <0.001
 Hospital ownership
  Private, not-for-profit                            62.03     48.55                69.24     46.15                             <0.001
  Government                                         18.93     39.20                13.15     33.79                             <0.001
  Private, for-profit                                19.03     39.28                17.61     38.09                             0.099
 Availability of catheterization lab
  No                                                 49.04     50.02                13.49     34.16                             <0.001
  Yes                                                35.05     47.74                71.24     45.26                             <0.001
  Missing                                            15.91     36.60                15.27     35.97                             0.422
 Hospital teaching status
  Teaching hospital                                  21.95     41.41                46.71     49.89                             <0.001
Facility volume characteristics
 Emergency department volume
  Low (<34,681 visits)                               60.02     49.01                32.70     46.91                             <0.001
  Medium (34,681–60,921)                             22.36     41.68                34.59     47.57                             <0.001
  High (>60,921)                                     17.62     38.12                32.71     46.92                             <0.001
 Proportion of admissions from emergency department
  Low (<19%)                                         71.60     45.12                32.45     46.82                             <0.001
  Medium (19%–26%)                                   19.54     39.67                34.01     47.38                             <0.001
  High (>26%)                                         8.86     28.43                33.54     47.21                             <0.001
 Hospital occupancy rate
  Low (<50%)                                         19.94     39.97                 3.44     18.22                             <0.001
  Medium (50%–70%)                                   52.97     49.94                53.01     49.91                             0.971
  High (>70%)                                        27.09     44.46                43.55     49.58                             <0.001
Visit characteristics
 ED crowding on day of visit
  Low (<71%)                                         50.55     50.02                32.76     46.93                             <0.001
  Medium (71%–80%)                                   28.20     45.02                34.08     47.40                             <0.001
  High (>80%)                                        21.25     40.93                33.17     47.08                             <0.001
 Weekend/weekday visit
  Visit during weekend                               27.69     44.77                25.74     43.72                             0.047
 Month of visit
  January–June                                       48.24     49.99                53.08     49.91                             <0.001
Hospital disposition
 Mortality rate                                       4.23      0.64                 6.68      0.08                             <0.001

aFor missed diagnoses, the hospital characteristics relate to the ED where the patient was treated, whether or not the patient was admitted to the same or a different hospital. Source: Agency for Healthcare Research and Quality, Healthcare Cost and Utilization Project (HCUP), State Inpatient Databases and State Emergency Department Databases, nine states combined (Arizona, Florida, Massachusetts, Missouri, New Hampshire, New York, South Carolina, Tennessee, and Utah), 2007.

Table 3 displays the estimates, odds ratios (ORs), and probability values from the hierarchical model of missed diagnoses. Compared to younger patients, older patients had lower odds of missed diagnoses (aged 45–64 years, OR=0.70, p=0.001; 65–74 years, OR=0.61, p<0.001; 75+ years, OR=0.49, p<0.0001). Compared to White patients, Black patients (OR=1.31, p=0.025) and patients identified as other races or ethnicities (OR=1.45, p<0.005) had higher odds of missed diagnoses. The odds of missed diagnoses were 20% lower for patients covered by Medicare compared to those who were privately insured (OR=0.80, p=0.04). However, the associations between missed diagnoses and expected payers (other than Medicare), household income, and most comorbidity characteristics were not significant when other demographic and clinical conditions were controlled.

Table 3

Odds ratios of factors associated with missed diagnoses before acute myocardial infarction admission in 2007.

Variables                                                   Estimate   Odds ratio   p-Value
Patient characteristics
 Age group, years (reference=18–44 years)
  45–64                                                     -0.356     0.700        0.0013
  65–74                                                     -0.501     0.606        0.0007
  75 and over                                               -0.710     0.492        <0.0001
 Sex (reference=female)
  Male                                                      -0.013     0.988        0.8634
 Race/ethnicity (reference=White)
  Black                                                      0.273     1.314        0.0245
  Hispanic                                                   0.177     1.193        0.3181
  Other                                                      0.373     1.452        0.0048
 Expected primary payer (reference=private)
  Medicare                                                  -0.222     0.801        0.0389
  Medicaid                                                   0.117     1.124        0.3938
  Uninsured                                                 -0.138     0.871        0.2798
  Other                                                     -0.244     0.784        0.2294
 Median household income by ZIP Code (reference=highest)
  Moderate                                                   0.065     1.067        0.6111
  Low                                                        0.006     1.006        0.9606
  Lowest                                                    -0.099     0.906        0.4550
 Patient unrelated comorbidities (reference=absence of specific comorbidities)
  Paralysis                                                 -0.493     0.611        0.1388
  Other neurological disorder                               -0.155     0.856        0.3247
  Diabetes without chronic complications                    -0.200     0.819        0.0123
  Diabetes with chronic complications                       -0.355     0.701        0.0469
  Hypothyroidism                                             0.001     1.001        0.9956
  Renal failure                                             -0.090     0.914        0.4083
  Liver failure                                              0.414     1.512        0.1607
  AIDS                                                      -0.668     0.513        0.5257
  Lymphoma                                                   0.401     1.493        0.3874
  Metastatic cancer                                         -0.465     0.628        0.2767
  Solid tumor without metastasis                             0.032     1.032        0.9142
  Rheumatoid arthritis                                      -0.056     0.945        0.8179
  Coagulopathy                                               0.026     1.026        0.8942
  Obesity                                                    0.113     1.120        0.3113
  Weight loss                                               -0.426     0.653        0.2229
  Fluid and electrolyte disorder                            -0.277     0.758        0.0069
  Chronic blood loss anemia                                 -0.292     0.747        0.4250
  Alcohol disorder                                          -0.396     0.673        0.0750
  Deficiency anemias                                        -0.235     0.790        0.0411
  Drug abuse                                                 0.093     1.097        0.6616
  Psychoses                                                 -0.527     0.590        0.0713
  Depression                                                -0.202     0.817        0.1795
Facility characteristics
 Hospital region (reference=Northeast)
  Midwest                                                    0.774     2.169        0.0003
  South                                                      0.010     1.010        0.9534
  West                                                      -0.410     0.664        0.1074
 Hospital location (reference=large metropolitan area with at least 1 million residents)
  Small metropolitan area with <1 million residents         -0.155     0.856        0.2488
  Micropolitan area                                          0.677     1.968        <0.0001
  Not metropolitan or micropolitan areas                     0.619     1.858        0.0002
 Hospital ownership (reference=private, not-for-profit)
  Government                                                -0.010     0.990        0.9371
  Private, for-profit                                       -0.058     0.944        0.6354
 Availability of cardiac catheterization lab (reference=not available)
  Yes                                                       -1.680     0.186        <0.0001
  Unknown                                                   -0.252     0.777        0.0538
 Hospital teaching status (reference=non-teaching)
  Teaching                                                  -0.506     0.603        0.0002
Facility volume characteristics
 Emergency department volume (reference=low)
  Medium                                                    -0.050     0.951        0.6801
  High                                                       0.077     1.080        0.6451
 Percent admitted from ED (reference=low)
  Medium                                                    -0.698     0.497        <0.0001
  High                                                      -1.899     0.150        <0.0001
 Occupancy rate (reference=low)
  Medium                                                    -0.553     0.576        <0.0001
  High                                                      -0.470     0.625        0.0018
Visit characteristics
 ED crowding on day of visit (reference=low)
  Medium                                                    -0.094     0.910        0.2590
  High                                                      -0.247     0.781        0.0085
 Day of week (reference=weekend)
  Weekday                                                   -0.006     0.994        0.9407
 Month of visit (reference=July–December)
  January–June                                              -0.367     0.693        <0.0001

Source: Agency for Healthcare Research and Quality, Healthcare Cost and Utilization Project (HCUP), State Inpatient Databases and State Emergency Department Databases, nine states combined (Arizona, Florida, Massachusetts, Missouri, New Hampshire, New York, South Carolina, Tennessee, and Utah), 2007.
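A note on reading Table 3: the odds ratio column is simply the exponential of the estimate column (OR = e^β), which can be verified directly:

```python
# Odds ratios in Table 3 are exp(estimate); e.g., the two age-group rows:
import math

print(f"{math.exp(-0.356):.3f}")  # 0.700 (reported OR for ages 45-64)
print(f"{math.exp(-0.710):.3f}")  # 0.492 (reported OR for ages 75 and over)
```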

The odds of missed diagnoses also varied with facility and visit characteristics. Hospitals in the Midwest had more than twice the odds of missed diagnoses as hospitals in the Northeast (OR=2.17, p=0.0003). Compared to hospitals in large population centers, those in areas with between 10,000 and 50,000 residents (OR=1.97, p<0.0001) and those in areas with fewer than 10,000 residents (OR=1.86, p=0.0002) demonstrated higher odds of missed diagnoses. The odds of missed diagnoses were about 80% lower for facilities with available cardiac catheterization laboratories (OR=0.19, p<0.0001). Teaching hospitals had lower odds of missed diagnoses compared to non-teaching hospitals (OR=0.60, p=0.0002). EDs that admit a higher proportion of patients to the hospital had about 85% lower odds of missed diagnoses (highest category, OR=0.15, p<0.0001) compared to hospitals with lower admission rates from the ED. Hospitals with high occupancy rates (OR=0.63, p=0.002) demonstrated lower odds than those with low occupancy. ED visits in January through June had lower odds of missed diagnoses than visits in July through December (OR=0.69, p<0.0001).

Discussion

We retrospectively evaluated patients who presented to an ED with chest pain or a cardiac condition and for whom an AMI diagnosis was confirmed on an inpatient admission within 1 week – that is, those who had a probable missed diagnosis in the ED. Our study indicates an overall rate of 0.9% for missed diagnoses of AMI. This rate is lower than the 2% rate reported in recent decades [6–8], and it is considerably lower than earlier estimates of 3.8% in 6 U.S. hospitals [5] and of 7.7% in an Israeli hospital [24]. Differences could reflect progress in cardiac care over time or methodological differences between the studies. We used a conservative approach by counting only a few symptoms as suspected misdiagnoses and leaving others (e.g., esophageal disorders, abdominal pain) uncounted. Some of these uncounted patients may have had missed AMI diagnoses. Also, our study could count only patients who were subsequently hospitalized for AMI within 7 days. Therefore, we would have missed patients who did not seek further medical care, sought care more than 7 days later, sought care in another state, or died of AMI at home. Inpatient mortality was lower among those with missed diagnoses, which could imply that patients with missed diagnoses presented with less extensive disease than patients whose AMI was recognized at the initial visit. Although previous studies with prospective clinical data could track and identify AMIs in all patients, the total numbers of patients were limited relative to those included in the HCUP data. Thus, our findings are substantially more robust with regard to subgroup analyses of demographic and facility characteristics.

Younger patients and Black patients experienced higher odds of missed diagnoses than older and White patients, respectively. These findings are consistent with previous studies suggesting that patients’ demographic characteristics influence diagnosis [13] and treatment [25]. Black patients have a higher risk for coronary artery disease [26, 27], which should lead to a heightened awareness of cardiac symptoms. However, these patients are more likely to present with atypical symptoms, which may increase the odds of missed diagnoses. Young patients may receive a missed diagnosis because they are viewed as unlikely to have AMI – similar to missed diagnosis in young patients who have a stroke [13]. These findings underscore the need for clinician education in care of patients with lower baseline prevalence of AMI and atypical presentations.

Hospitals in locales of fewer than 50,000 residents demonstrated higher odds of missed diagnoses than hospitals in larger population centers. Resources, including medical staff and modern technologies, are more limited in smaller areas [28], which could affect diagnostic accuracy. For example, hospitals with cardiac catheterization facilities demonstrated lower odds of missed diagnoses. In addition, missed diagnoses were less likely to occur in teaching hospitals. Teaching hospitals may have more ready access to cardiologists and diagnostic tests that support the accurate detection of AMI, and medical students and residents may encourage the application of evidence-based algorithms and clinical decision support software. It should also be noted that missed diagnoses happened less often from January through June (which corresponds with the second half of the traditional residency training year) than from July through December. Further investigation of the interaction between the timing of ED rotations for residents and missed diagnoses is warranted. Another explanation may be the preponderance of summer and winter holidays during July through December, which may attenuate ED effectiveness.

In our analyses, we did not find that EDs with high volumes missed fewer diagnoses, as previously reported [9]. However, hospitals with higher proportions of admissions from the ED and higher occupancy were less likely to miss diagnoses. These findings may be tautological – fewer diagnoses are missed because more patients with chest pain are admitted from the ED to the inpatient setting. Alternatively, this may suggest that busy hospitals that are oriented toward emergency care, rather than elective and direct admissions from physician offices, are less likely to miss diagnoses in the ED.

Implications

Although diagnostic errors – including missed, incorrect, or delayed diagnoses – are common [12] and can be devastating and costly for patients [13], they have received relatively little attention and study from the patient safety community [15, 29–31]. The rapid pace and short duration of observation may make patients in EDs particularly vulnerable to diagnostic error. Missed diagnoses in the ED often result in additional acute care services, repeated testing, delays in appropriate treatment, increased mortality [32, 33], and larger malpractice recoveries than any other type of medical error [34–37].

Extensive previous research on missed diagnoses has focused on clinical reasoning and provider-related causal factors. Contributors to diagnostic error include gaps in provider knowledge or memory (e.g., the ability to recall a similar case), mistakes in judgment, fatigue, poor referral choices, incomplete history-taking, misinterpretation of laboratory tests, and provider overconfidence [36, 38–42]. Cognitive [43] and system-related interventions [44] to reduce the likelihood of diagnostic errors have also been examined.

Because of the complexity of recognizing and analyzing missed diagnoses, they are not monitored regularly, and new methods are needed to detect and track these events [15, 45]. It is probably not possible to eliminate all missed diagnoses [46]; however, measuring and monitoring their incidence across EDs could help administrators identify ranges of acceptable missed diagnosis rates and target training for facilities with extreme rates. Knowledge about variation in missed diagnoses across populations should help health care teams identify patients who are most vulnerable to these events. Previous work showed that hospital administrative data can be used to identify missed diagnoses of stroke in EDs [13]. The present study demonstrates that rates of missed diagnoses in the ED before an AMI may also be calculated using hospital data. This approach may prove to be generalizable across symptoms (at least for acute diseases), which could lead to missed diagnosis dashboards and prioritization of clinical symptoms, conditions, or populations at greatest risk.
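The linkage logic described above can be sketched in a few lines. The following is an illustrative sketch, not the authors' code: it flags AMI admissions preceded by a treat-and-release ED visit for a cardiac symptom within 7 days, using records joined on a patient-level identifier. The column names, diagnosis groupings, and toy data are hypothetical stand-ins for the HCUP SEDD/SID fields and Clinical Classifications Software categories used in the study.

```python
import pandas as pd

# Hypothetical treat-and-release ED records (stand-in for HCUP SEDD).
ed_visits = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "ed_date": pd.to_datetime(["2007-03-01", "2007-03-01", "2007-03-01"]),
    "disposition": ["released", "released", "admitted"],
    "dx_group": ["nonspecific chest pain", "headache", "chest pain"],
})

# Hypothetical inpatient AMI admissions (stand-in for HCUP SID).
admissions = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "admit_date": pd.to_datetime(["2007-03-05", "2007-03-20", "2007-03-01"]),
    "principal_dx": ["AMI", "AMI", "AMI"],
})

# Illustrative cardiac symptom groupings (the study used CCS categories).
CARDIAC_GROUPS = {"nonspecific chest pain", "chest pain", "cardiac dysrhythmia"}

# Join each AMI admission to the same patient's earlier ED visits.
linked = admissions.merge(ed_visits, on="patient_id")
days_between = (linked["admit_date"] - linked["ed_date"]).dt.days

# A "probable missed diagnosis": released from the ED with a cardiac
# presenting symptom, then admitted for AMI within 1-7 days.
missed = linked[
    (linked["disposition"] == "released")
    & linked["dx_group"].isin(CARDIAC_GROUPS)
    & days_between.between(1, 7)
]

print(sorted(missed["patient_id"]))  # only patient 1 qualifies here
```

In this toy data, patient 2 is excluded because the ED visit was not for a cardiac symptom, and patient 3 because that visit ended in admission rather than release; only patient 1 matches the definition.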

Based on our findings, non-teaching hospitals without cardiac catheterization facilities that are in less-populated locales might benefit from tracking rates of missed diagnoses before AMI. If the results are high, these hospitals may want to review their diagnostic protocols, including standardized triage, serial electrocardiogram (ECG), and troponin testing [47]. Tools to aid in ECG interpretation are known to be effective in improving evaluation and potentially reducing errors [48]. Facilities without cardiologists may consider access to specialists through telemedicine links for help with diagnosis and treatment [49]. In addition, efforts to reduce the number of patients leaving emergency departments against medical advice when presenting with cardiac symptoms may help improve diagnosis of AMI.

This study has limitations. First, we used a cross-sectional analysis, which identifies associations but cannot establish causality. We also lacked granular clinical data, so some of our missed cases were likely coincidental. Second, administrative data cannot reveal clinical detail. We used ICD codes as a surrogate for presenting symptoms, which might lead to some miscoding or misclassification, although two-thirds of the missed cases were coded as non-specific chest pain at the initial visit. Furthermore, potentially useful data such as ED triage level were not available in the study data, and the reporting of secondary diagnoses on ED records is limited. Third, a retrospective approach cannot account for patients who were lost to follow-up (including those who died after discharge from the ED), which would underestimate the number of potentially missed diagnoses. In addition, a hospital admission does not necessarily indicate that a correct diagnosis was made. Some patients whose AMI diagnoses were missed in the ED might have been admitted for a wrong diagnosis, which would underestimate our rate of misses in the ED. Conversely, planned admissions for patients returning to the hospital with intensifying symptoms would overestimate missed diagnoses in this study. Fourth, we focused on available hospital and ED measures; staffing and workflow measures would provide a more complete analysis. Nevertheless, the fixed effects employed in this model adjust for unobservable factors related to hospital environment, culture, and personnel. Finally, this work is based on data from nine states and is not generalizable to all states. Despite these limitations, this study remains the largest on missed diagnoses associated with AMI to date, including nearly 112,000 patients and 797 hospitals.

Conclusions

Missed diagnoses are complex, costly, underemphasized, and difficult to detect. There is a need to increase recognition of these events. Our results reinforce previous findings that AMI diagnosis is missed in a small percentage of visits to EDs when the patient has cardiac symptoms. The results suggest that the likelihood of missed diagnoses is related to both patient and ED factors, especially the patient’s age and race and the facility’s resources for detecting AMI.

We also demonstrate that it is feasible to assess missed diagnoses using linked ED and inpatient administrative data. This method allows examination of missed diagnoses across a broader spectrum of facilities and geographic areas than previous methods, which rely on more costly methods to abstract clinical information. Findings from this study support a systemic approach across inpatient and ED settings to understand where missed diagnoses occur. Future work should examine the roles of staffing and workflow on missed diagnoses in the ED and study trends in missed diagnoses over time as new diagnostic technologies are developed and adopted.


Corresponding author: Anika L. Hines, Truven Health Analytics and ML Barrett, Inc, 7700 Old Georgetown Road, Suite 650, Bethesda, MD 20814, Phone: +301-547-4374, E-mail:

Acknowledgments

The analysis included data from the following Healthcare Cost and Utilization Project Partner organizations: Arizona Department of Health Services, Florida Agency for Health Care Administration, Massachusetts Division of Health Care Finance and Policy, Missouri Hospital Industry Data Institute, New Hampshire Department of Health & Human Services, New York State Department of Health, South Carolina State Budget & Control Board, Tennessee Hospital Association, and Utah Department of Health. We would also like to acknowledge Arpit Misra and Minya Sheng for their technical support on the study and Linda Lee, PhD, for editorial review.

  1. Author contributions: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission. Ernest Moy: Led study design and analysis from conception through completion; contributed to all major revisions; and approved final manuscript. Marguerite Barrett: Contributed to study design, analysis, and interpretation; led data preparation; revised methods section; and approved final manuscript. She had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Rosanna Coffey: Contributed to study design and analysis; oversaw drafts and interpreted results; contributed to all major revisions; and approved final manuscript. Anika Hines: Interpreted results; drafted and revised the manuscript; and approved final manuscript. David Newman-Toker: Provided review and clinical context; revised manuscript; and approved final manuscript.

  2. Research funding: The analysis and preparation of this manuscript were funded by the Agency for Healthcare Research and Quality.

  3. Employment or leadership: The authors from Truven Health Analytics were under contract to the Agency for Healthcare Research and Quality (Contract No. HHSA-290-2013-00002-C).

  4. Honorarium: None declared.

  5. Competing interests: The funding organization(s) played no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the report for publication.

  6. Important disclaimers: The views expressed in this article are those of the authors and do not necessarily reflect those of the Agency for Healthcare Research and Quality or the U.S. Department of Health and Human Services.

References

1. National Heart, Lung and Blood Institute. What is a heart attack? Available at: http://www.nhlbi.nih.gov/health/dci/Diseases/HeartAttack/HeartAttack_WhatIs.html. Accessed May 26, 2014.

2. Stranges E, Kowlessar N, Elixhauser A. Components of growth in inpatient hospital costs, 1997–2009. HCUP Statistical Brief #123. Rockville, MD: Agency for Healthcare Research and Quality, 2011. Available at: http://www.hcup-us.ahrq.gov/reports/statbriefs/sb123.pdf. Accessed June 25, 2014.

3. Krumholz HM, Wang Y, Chen J, Drye EE, Spertus JA, Ross JS, et al. Reduction in acute myocardial infarction mortality in the United States. J Am Med Assoc 2009;302:767–73. doi:10.1001/jama.2009.1178.

4. Hines A, Stranges E, Andrews RM. Trends in hospital risk-adjusted mortality for select diagnoses by patient subgroups, 2000–2007. HCUP Statistical Brief #98. Rockville, MD: Agency for Healthcare Research and Quality, 2010. Available at: http://www.hcup-us.ahrq.gov/reports/statbriefs/sb98.pdf. Accessed June 4, 2014.

5. Lee TH, Rouan GW, Weisberg MC, Brand DA, Acampora D, Stasiulewicz C, et al. Clinical characteristics and natural history of patients with acute myocardial infarction sent home from the emergency department. Am J Cardiol 1987;60:219–24. doi:10.1016/0002-9149(87)90217-7.

6. McCarthy BD, Beshansky JR, D’Agostino RB, Selker HP. Missed diagnoses of acute myocardial infarction in the emergency department: results from a multicenter study. Ann Emerg Med 1993;22:579–82. doi:10.1016/S0196-0644(05)81945-6.

7. Pope HJ, Aufderheide TP, Ruthazer R, Woolard RH, Feldman JA, Beshansky JR, et al. Missed diagnoses of acute cardiac ischemia in the emergency department. N Engl J Med 2000;342:1163–70. doi:10.1056/NEJM200004203421603.

8. Schull MJ, Vermeulen MJ, Stukel TA. The risk of missed diagnosis of acute myocardial infarction associated with emergency department volume. Ann Emerg Med 2006;48:647–55. doi:10.1016/j.annemergmed.2006.03.025.

9. Storrow AB, Gibler WB. Chest pain centers: diagnosis of acute coronary syndromes. Ann Emerg Med 2000;35:449–61. doi:10.1016/S0196-0644(00)70006-0.

10. Goldman L, Kirtane AJ. Triage of patients with acute chest pain and possible cardiac ischemia: the elusive search for diagnostic perfection. Ann Intern Med 2003;139:987–95. doi:10.7326/0003-4819-139-12-200312160-00008.

11. Maynard C, Beshansky JR, Griffith JL, Selker HP. Causes of chest pain and symptoms suggestive of acute cardiac ischemia in African-American patients presenting to the emergency department: a multicenter study. J Natl Med Assoc 1997;89:665–71.

12. Singh H, Meyer AN, Thomas EJ. The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations. Brit Med J Qual Saf 2014. doi:10.1136/bmjqs-2013-002627. [Epub ahead of print].

13. Newman-Toker DE, Moy E, Valente E, Coffey R, Hines AL. Missed diagnosis of stroke in the emergency department: a cross-sectional analysis of a large population-based sample. Diagnosis 2014;1:155–66. doi:10.1515/dx-2013-0038.

14. HCUP Nationwide Databases. Healthcare Cost and Utilization Project (HCUP), 2006–2009. Rockville, MD: Agency for Healthcare Research and Quality. Available at: www.hcup-us.ahrq.gov/databases.jsp. Accessed June 25, 2014.

15. Graber M. Diagnostic errors in medicine: a case of neglect. Jt Comm J Qual Patient Saf 2005;31:106–13. doi:10.1016/S1553-7250(05)31015-4.

16. HCUP State Inpatient Databases (SID). Healthcare Cost and Utilization Project (HCUP), 2005–2009. Rockville, MD: Agency for Healthcare Research and Quality. Available at: www.hcup-us.ahrq.gov/sidoverview.jsp. Accessed June 25, 2014.

17. HCUP State Emergency Department Databases (SEDD). Healthcare Cost and Utilization Project (HCUP), 2009. Rockville, MD: Agency for Healthcare Research and Quality. Available at: www.hcup-us.ahrq.gov/seddoverview.jsp. Accessed June 25, 2014.

18. HCUP Supplemental Variables for Revisit Analyses. Healthcare Cost and Utilization Project (HCUP), 2009. Rockville, MD: Agency for Healthcare Research and Quality. Available at: http://www.hcup-us.ahrq.gov/toolssoftware/revisit/revisit.jsp. Accessed June 25, 2014.

19. HCUP Clinical Classifications Software (CCS) for ICD-9-CM. Healthcare Cost and Utilization Project (HCUP), 2009. Rockville, MD: Agency for Healthcare Research and Quality. Available at: www.hcup-us.ahrq.gov/toolssoftware/ccs/ccs.jsp. Accessed June 25, 2014.

20. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care 1998;36:8–27. doi:10.1097/00005650-199801000-00004.

21. Goldstein H, Browne W, Rasbash J. Multilevel modeling of medical data. Stat Med 2002;21:3291–315. doi:10.1002/sim.1264.

22. Rasbash J, Steele F, Browne WJ, Goldstein H. A user’s guide to MLwiN, Version 2.0. Bristol, UK: Centre for Multilevel Modeling, University of Bristol, 2009. Available at: http://www.cmm.bristol.ac.uk/MLwiN/download/userman_2005.pdf. Accessed April 14, 2014.

23. Raudenbush SW, Bryk AS. Hierarchical linear models. Thousand Oaks, CA: Sage Publications, 2002.

24. Schor S, Behar S, Modan B, Barell V, Drory J, Kariv I. Disposition of presumed coronary patients from an emergency room: a follow up study. J Am Med Assoc 1976;236:941–3. doi:10.1001/jama.1976.03270090035024.

25. Schulman KA, Berlin JA, Harless W, Kerner JF, Sistrunk S, Gersh BJ, et al. The effect of race and sex on physicians’ recommendations for cardiac catheterization. N Engl J Med 1999;340:618–26. doi:10.1056/NEJM199902253400806.

26. Cooper RS, Ford E. Comparability of risk factors for coronary heart disease among blacks and whites in the NHANES-I Epidemiologic Follow-up Study. Ann Epidemiol 1992;2:637–45. doi:10.1016/1047-2797(92)90008-E.

27. Maynard C, Fisher LD, Passamani ER, Pullum T. Blacks in the coronary artery surgery study: risk factors and coronary artery disease. Circulation 1986;74:64–71. doi:10.1161/01.CIR.74.1.64.

28. Ricketts TC. The changing nature of rural health care. Annu Rev Public Health 2000;21:639–57. doi:10.1146/annurev.publhealth.21.1.639.

29. Altman DE, Clancy C, Blendon RJ. Improving patient safety: five years after IOM report. N Engl J Med 2004;351:2041–3. doi:10.1056/NEJMp048243.

30. Newman-Toker DE, Pronovost PJ. Diagnostic errors – the next frontier for patient safety. J Am Med Assoc 2009;301:1060–2. doi:10.1001/jama.2009.249.

31. Wachter RM. Why diagnostic errors don’t get any respect – and what can be done about them. Health Aff (Millwood) 2010;29:1605–10. doi:10.1377/hlthaff.2009.0513.

32. Collinson PO, Premachandram S, Hashemi K. Prospective audit of incidence of prognostically important myocardial damage in patients discharged from emergency department: commentary: time for improved diagnosis and management of patients presenting with acute chest pain. Brit Med J 2000;320:1702–5. doi:10.1136/bmj.320.7251.1702.

33. Sequist TD, Bates DW, Cook EF, Lampert S, Schaefer M, Wright J, et al. Prediction of missed myocardial infarction among symptomatic outpatients without coronary heart disease. Am Heart J 2005;149:74–81. doi:10.1016/j.ahj.2004.06.014.

34. Herren KR, Mackway-Jones K. Emergency management of cardiac chest pain: a review. Emerg Med J 2001;18:6–10. doi:10.1136/emj.18.1.6.

35. Vukmir RB. Medical malpractice: managing the risk. Med Law 2004;23:495–513.

36. White AA, Wright SW, Blanco R, Lemonds B, Sisco J, Bledsoe S, et al. Cause-and-effect analysis of risk management files to assess patient care in the emergency department. Acad Emerg Med 2004;11:1035–41. doi:10.1197/j.aem.2004.04.012.

37. Saber Tehrani AS, Lee H, Mathews SC, Shore A, Makary MA, Pronovost PJ, et al. 25-Year summary of US malpractice claims for diagnostic errors 1986–2010: an analysis from the National Practitioner Data Bank. Brit Med J Qual Saf 2013;22:672–80. doi:10.1136/bmjqs-2012-001550.

38. Berner ES, Graber ML. Overconfidence as a cause of diagnostic error in medicine. Am J Med 2008;121:S2–23. doi:10.1016/j.amjmed.2008.01.001.

39. Croskerry P. The cognitive imperative: thinking about how we think. Acad Emerg Med 2000;7:1223–31. doi:10.1111/j.1553-2712.2000.tb00467.x.

40. Guly HR. Diagnostic errors in an accident and emergency department. Emerg Med J 2001;18:263–9. doi:10.1136/emj.18.4.263.

41. Rusnak RA, Stair TO, Hansen K, Fastow JS. Litigation against the emergency physician: common features in cases of missed myocardial infarction. Ann Emerg Med 1989;18:1029–34. doi:10.1016/S0196-0644(89)80924-2.

42. Wears RL, Perry SJ. Human factors and ergonomics in the emergency department. Ann Emerg Med 2002;40:206–12. doi:10.1067/mem.2002.124900.

43. Graber ML, Kissam S, Payne VL, Meyer AN, Sorensen A, Lenfestey N, et al. Cognitive interventions to reduce diagnostic error: a narrative review. Brit Med J Qual Saf 2012;21:535–57. doi:10.1136/bmjqs-2011-000149.

44. Singh H, Graber ML, Kissam SM, Sorensen AV, Lenfestey NF, Tant EM, et al. System-related interventions to reduce diagnostic errors: a narrative review. Brit Med J Qual Saf 2012;21:160–70. doi:10.1136/bmjqs-2011-000150.

45. Schiff GD, Kim S, Abrams R, Cosby K, Lambert B, Elstein AS, et al. Diagnosing diagnosis errors: lessons from a multi-institutional collaborative project. Adv Patient Safety 2005;2:255–78.

46. Graber M, Gordon R, Franklin N. Reducing diagnostic errors in medicine: what’s the goal? Acad Med 2002;77:981–92. doi:10.1097/00001888-200210000-00009.

47. Croskerry P. Achieving quality in clinical decision making: cognitive strategies and detection of bias. Acad Emerg Med 2002;9:1184–204. doi:10.1197/aemj.9.11.1184.

48. Selker HP, Beshansky JR, Griffith JL, Aufderheide TP, Ballin DS, Bernard SA, et al. Use of the Acute Cardiac Ischemia Time-Insensitive Predictive Instrument (ACI-TIPI) to assist with triage of patients with chest pain or other symptoms suggestive of acute cardiac ischemia: a multicenter, controlled clinical trial. Ann Intern Med 1998;129:845–55. doi:10.7326/0003-4819-129-11_Part_1-199812010-00002.

49. Terkelsen CJ, Norgaard BL, Lassen JF, Gerdes JC, Ankersen JP, Romer F, et al. Telemedicine used for remote prehospital diagnosing in patients suspected of acute myocardial infarction. J Intern Med 2002;252:412–20. doi:10.1046/j.1365-2796.2002.01051.x.

Received: 2014-8-4
Accepted: 2014-10-1
Published Online: 2014-12-6
Published in Print: 2015-2-1

©2014, Anika L. Hines et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.
