ERRATUM

Am J Med. 2002;113:87–90.

From the Department of Medicine, Texas Tech University School of Medicine, Lubbock, Texas. Requests for reprints should be addressed to Joel Kupersmith, MD, Department of Medicine and Health Services Research/Management, Texas Tech University School of Medicine, 3601 Fourth Street, Lubbock, Texas 79430. Manuscript submitted December 7, 2001.

The implantable cardioverter defibrillator was a major advance in the prevention of sudden death. However, since its introduction in 1980, it has also been the subject of considerable scrutiny, especially by insurers and policy makers concerned about its cost and potential for widespread use. First used strictly in post-cardiac arrest situations (secondary prevention), its indication has since expanded to include prophylaxis for high-risk patients (primary prevention), with 50,000 implantations performed in the United States in 1999 (1).
In the 1990s, several randomized controlled trials (2–8) reported that implantable cardioverter defibrillators significantly improved (5) or trended toward (3,4) improving mortality in patients with ventricular tachycardia/ventricular fibrillation. Subgroup analyses showed greater efficacy in patients with low ejection fraction (<35%) (5), or with low ejection fraction plus factors such as age ≥70 years and a history of cerebrovascular disease (9,10). In the case of primary prevention, the results of three trials involving post-myocardial infarction patients with low ejection fraction were positive (6,8,10a), including one that entered patients with evidence of previous myocardial infarction, ejection fraction ≤30%, and no further selector (10a). By contrast, those of a study that involved patients who had undergone coronary artery bypass graft were negative (11). In these trials, use of a defibrillator was compared with electrophysiologic testing, empiric amiodarone therapy, "conventional treatment" (left to the clinician's discretion), and map-guided surgical ablation.
Randomized clinical trials, while the gold standard for efficacy, are conducted in a limited universe, with selection and enrollment bias and nonrepresentative characteristics. They tend to involve patients with few comorbid conditions, a lack of compliance problems, and similar demographic characteristics; academic physicians; and a low number of events (which may have contributed to the underpowered analyses in some studies). In some instances, screened, high-risk patients may be sent directly to the index therapy rather than be assigned randomly to treatment. However, once trials have established efficacy, medical interventions move into the practice setting, with broader populations and patient groups and variation in team and local expertise. The next step, therefore, is determination of effectiveness, an importantly different measure, which Weiss et al. ably address in this issue of The American Journal of Medicine (12).
Since the implantable cardioverter defibrillator is expensive, it is also important to estimate its cost-effectiveness in dollars per life-year gained or per quality-adjusted life-year gained. Early cost-effectiveness analyses, which were derived from nonrandomized models (historical controls, time of first discharge with sensitivity analysis), compared implantable defibrillators with electrophysiologically guided antiarrhythmic drugs, empirical amiodarone, or map-guided surgery. They found defibrillators to be generally (13,14) but not always (15) cost-effective, at <$50,000 per life-year gained. However, two studies from trials (13,16) reported less favorable cost-effectiveness (>$100,000 and $200,000 per life-year gained), whereas cost-effectiveness was <$30,000 per life-year gained in a primary prevention trial (17). The device has also been shown to be more cost-effective when targeted to high-risk patients who had ejection fractions <35% (16).
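The figures quoted above are incremental cost-effectiveness ratios: the extra cost of the defibrillator strategy divided by the extra life-years it yields. A minimal sketch of the arithmetic follows; the dollar amounts and survival figures are hypothetical, chosen only for illustration, and are not drawn from the cited trials.

```python
def icer(cost_treated, cost_control, life_years_treated, life_years_control):
    """Incremental cost-effectiveness ratio: incremental cost per
    incremental life-year gained by the treated strategy."""
    delta_cost = cost_treated - cost_control
    delta_effect = life_years_treated - life_years_control
    if delta_effect <= 0:
        raise ValueError("treatment must add life-years for a meaningful ICER")
    return delta_cost / delta_effect

# Hypothetical arms: defibrillator strategy costs $90,000 and yields
# 5.0 life-years; control costs $51,000 and yields 4.5 life-years.
print(icer(90_000, 51_000, 5.0, 4.5))  # 78000.0 dollars per life-year gained
```

Whether a given ratio counts as "cost-effective" depends on the benchmark applied, such as the $50,000 per life-year threshold cited above.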
Weiss et al. (12) examined the effectiveness and cost-effectiveness of implantable cardioverter defibrillators using Medicare administrative data for a cohort of 125,892 patients hospitalized from 1987 through 1995. Using a propensity score adjustment, patients with ventricular tachycardia or ventricular fibrillation (n = 7789) who had defibrillators implanted were matched with an equal number of controls.
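Propensity-score matching of this kind pairs each treated patient with a control whose estimated probability of receiving treatment, given observed covariates, is closest. A minimal sketch of 1:1 greedy nearest-neighbor matching on precomputed scores is shown below; the scores, caliper, and patient counts are hypothetical, and the actual model and matching algorithm used by Weiss et al. may differ.

```python
def match_on_propensity(treated, controls, caliper=0.05):
    """Greedily pair each treated patient's propensity score with the
    nearest unused control score within the caliper; unmatched treated
    patients are dropped. Returns a list of (treated, control) pairs."""
    pairs = []
    unused = sorted(controls)
    for t in sorted(treated):
        if not unused:
            break
        best = min(unused, key=lambda c: abs(c - t))  # nearest available control
        if abs(best - t) <= caliper:
            pairs.append((t, best))
            unused.remove(best)  # each control is used at most once
    return pairs

# Hypothetical propensity scores for 4 defibrillator recipients
# and 6 potential controls.
treated = [0.62, 0.55, 0.80, 0.30]
controls = [0.58, 0.33, 0.61, 0.90, 0.54, 0.10]
print(match_on_propensity(treated, controls))
# [(0.3, 0.33), (0.55, 0.54), (0.62, 0.61)]
```

Note that the recipient with score 0.80 finds no control within the caliper and goes unmatched, which is one way such designs trade sample size for comparability.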
The authors found that patients who received a defibrillator had reduced mortality, thus supporting the generalizability of results from randomized controlled trials; this was their most important observation. However, the survival curves of the two groups converged over 8 years. Attenuation of mortality gain was also reported in the Cardiac Arrest Study Hamburg (3), whereas the one large, statistically significant secondary prevention trial had only 3-year data (5). Weiss et al. noted that recipients of defibrillators were more likely to undergo other invasive interventions, such as angiography, coronary artery bypass graft surgery, and electrophysiologic study, but not percutaneous transluminal coronary angioplasty, perhaps because of the earlier time window of analysis. These patients were more likely to be white and male, and to have a history of ischemic heart disease, cardiomyopathy, or heart failure, and ventricular fibrillation rather than ventricular tachycardia.
Although the cost-effectiveness of the defibrillator ($78,000 per life-year gained) did not fall into the "cost-effective" range of <$50,000 per life-year gained (14), it was at a reasonable level, with some of the money spent on other procedures. The higher costs in the defibrillator group persisted during the 8 years, presumably due to battery replacement and lead and other complications.
There were several limitations to the study, as the authors note. Despite the use of a propensity score adjustment, which improved the post hoc methodology, matched controls remain an imperfect method and do not ensure the balance achieved by randomization. There may have been differences between groups below the limits of detection, especially in administrative data collected for billing, rather than clinical, purposes. An important missing parameter was ejection fraction. Also, recipients younger than those in this Medicare analysis may show less long-term attenuation of the defibrillator's effect on mortality, among other differences.
In addition, devices have since improved substantially. Cardioverter defibrillators are now implanted transvenously in the electrophysiologic laboratory rather than by thoracotomy. They require shorter hospital stays (2 to 5 days vs. 14 to 24 days), have longer-lasting batteries, are associated with lower long-term morbidity, and have sophisticated computer features, including tiered treatment, electrocardiographic (ECG) interpretation, and atrial capabilities. In our study, the difference in cost decreased from $34,500 (in 1999 U.S. dollars) per life-year gained (with thoracotomy) to $28,500 (without thoracotomy) to $15,800 (without thoracotomy and with no preceding electrophysiologic study, as is now a common approach) (18). In this regard, it is interesting that in the study by Weiss and colleagues, the survival advantage improved in later entry years. On the other hand, interventions for ischemic disease and many pharmacologic treatments (e.g., angiotensin-converting enzyme inhibitors, beta-blockers, and aspirin), which are now used more commonly, should improve survival in patients who do not receive an implantable defibrillator and blunt its effect on mortality, though such was not the case in one recent trial (10a). Weiss et al. give us a snapshot of the "average" elderly patient with a defibrillator in a somewhat earlier technologic period.

Where do we go from here, and what are the current and future issues in implantable cardioverter defibrillator therapy? At present, the device is indicated for the ventricular tachycardia/ventricular fibrillation patient, and preventively in other, less common conditions, such as long QT syndrome, Brugada syndrome, hypertrophic cardiomyopathy, and arrhythmogenic right ventricular dysplasia. It is preferred over other strategies, including electrophysiologically guided drug therapy, map-guided surgery, catheter ablation, and various antiarrhythmic drugs such as nonamiodarone class III agents.
Empiric amiodarone therapy is an alternative, although the strategy of starting with amiodarone and switching to the implantable cardioverter defibrillator, "if necessary," is expensive (15).

THE FUTURE
Perhaps the most important question in determining the use of a defibrillator is the identification of risk and the need for implantation. In patients with ventricular tachycardia or ventricular fibrillation, despite the above recommendations, those who also have coronary heart disease and an ejection fraction >35% may be triaged to empiric amiodarone therapy. However, caution should be exercised when considering recommendations based on subgroup analyses, especially in studies that are not statistically significant overall.
In the case of primary prevention, the issue is more complex. Even apart from sudden death survivors, the large majority of patients who die suddenly have had a previous cardiac event, known cardiac disease (19), or potential clues to cardiac disease, all of which may help to identify patients at high risk.
In this regard, the recent trial showing mortality benefit in patients with previous myocardial infarction and ejection fraction ≤30% (10a) is important. However, unless further definition is possible, it also creates a very large pool of eligible patients, many of whom will never use the device. Other predictors of risk (nonsustained ventricular tachycardia, electrophysiologic testing, signal-averaged ECG, and age) may be useful for additional screening in such patients, though they are far more problematic when screening patients without a known ischemic event.
Regarding cost, in the U.S. health care system, cost-effectiveness influences policy makers, but cost per se and reimbursement are the main drivers. Even with more efficient methods, use of the implantable cardioverter defibrillator for primary prevention will be expensive, especially with the large pool of candidates now established in coronary artery disease (10a). One approach to cost is the development of a cheaper, basic device (19), which would be appropriate for primary prevention and enable more widespread and cost-effective use.
Thus, the objective in the future will be to match the patient to the device, and vice versa. In the ideal, it will include cheaper, customized, and improved devices; a routinely efficient health care system; and improved patient identification for primary prevention. Finally, data will be available for effectiveness analyses to address both targeted and average or base-case patients, and in this way be more helpful in individual clinical as well as policy decisions.