


Original Article

Reliability of the Interpretation of Coronary Angiography by the Simple Visual Method

Jorge Augusto Nunes Guimarães, Edgar Guimarães Victor, Maria do Rosário de Britto Leite, José Maria Pereira Gomes, Edgar Victor Filho, Jesus Reyes Liveras

Recife, PE - Brazil

OBJECTIVE: To evaluate the inter- and intraobserver reproducibility of the visual interpretation of coronary cineangiograms in a clinically based context.

METHODS: Five interventional cardiologists analyzed 11 segments of 8 coronary cineangiograms in two sessions held two months apart. The percent luminal reduction caused by the lesions was analyzed by two different classifications: in one (A), lesions were graded as 0% = absent, 1-50% = mild, 51-69% = moderate, and ≥70% = severe; the other (B) was dichotomous: <70% = nonsignificant and ≥70% = significant lesions. Agreement was measured by the kappa (k) index.

RESULTS: Interobserver agreement was moderate for classification A (1st measurement, k = 0.36-0.63, km = 0.49; 2nd measurement, k = 0.39-0.68, km = 0.52) and good for classification B (1st measurement, k = 0.55-0.73, km = 0.63; 2nd measurement, k = 0.37-0.82, km = 0.61). Intraobserver levels of agreement were k = 0.57-0.95 for classification A and k = 0.62-1.0 for classification B.

CONCLUSION: The higher level of reproducibility obtained by adopting the dichotomous criteria usually considered for ischemic limits demonstrates that in the present clinical context, the reliability of the simple visual method is adequate for the identification of patients with clinically significant lesions and candidates for myocardial revascularization procedures.

Key words: coronary angiography, observer variation, kappa statistic

Any measurement procedure has some degree of variability in its results. In scientific investigation, one criterion for evaluating measurement variability is reliability (reproducibility, precision), defined as the capacity to produce agreeing results when a measuring procedure is repeated over time or when the same phenomenon is measured by several individuals at the same time 1,2. Reliability can be estimated from the agreement between analyses of a phenomenon by different examiners (interobserver agreement) or from the consistency of results obtained by repeated analysis of the phenomenon by the same examiner (intraobserver agreement) 1.

In the field of cardiology, cinecoronariography remains the main method for the diagnosis of atherosclerotic coronary disease and for the definition of its therapeutic strategies 3; however, relevant questions exist about its reproducibility and accuracy. The observation and estimation of the magnitude of an obstructive lesion usually depend to a great extent on the opinion of a single clinician who analyzes the results of his own procedures. Such involvement can lead to underestimation of some factors that influence the operator and may limit the credibility of the results obtained 4.

Studies on the reliability of simple visual interpretation of cinecoronariography arose in the 1970s but are still relatively scant. Methodological differences between the data and the indexes used to evaluate reliability render effective comparisons between them difficult 5-9. A large margin of variability, especially for results expressed as percent values, has become evident. Although some authors 10-13 have suggested alternative approaches aimed at increasing the precision of visual estimates, quantitative digital analysis has become the standard in the scientific literature 13-18. Yet the use of this method in clinical practice is not automatic. As a diagnostic method, the most important information derived from cinecoronariography (CCG) concerns the presence of obstructive coronary atherosclerotic disease capable of evoking myocardial ischemia. Gould et al 19-21 identified ischemic limits by correlating the degree of coronary lumen obstruction with coronary flow reserve. These results favored the tendency to downgrade the precision of the quantification of lesions of less than 50% obstruction 4,22,23 by describing lesions in classes according to their magnitude as mild, moderate, and severe. Furthermore, several studies have revealed important limitations of the routine application of quantitative digital analysis to diagnostic procedures 4,23-27.

At present, although equipment for quantitative digital cineangiographic analysis is available in the majority of catheterization laboratories, the great majority of examinations are still interpreted, in the traditional visual manner, by the same specialist who performed them. An incongruity between the evidence on the scientific reliability of the method and its use in clinical practice thus becomes noticeable. Furthermore, the reliability of the way in which the visual method is routinely applied for diagnostic purposes has not been adequately evaluated.

The purpose of the present study was to evaluate the reproducibility of the interpretation of CCG by the simple visual method performed by interventional cardiologists, using a model capable of estimating inter- and intraobserver agreement in a clinically based context.

Methods

Five of the 17 interventional cardiologists active in nine public and private hospitals in Recife were chosen as observers by one of the authors, who also selected from his personal files 23 CCGs of patients who had not undergone myocardial revascularization. The only prerequisites for the chosen CCGs were that they show multiarterial coronary atherosclerotic disease of any degree, be of adequate technical quality regarding arterial opacification, and include projections sufficient to enable clear identification of the various arterial segments.

All examinations had been performed by the Judkins 28 technique with 6F catheters in a Philips Poly Diagnostic U.P.I. cineangiographic apparatus with a 6.5-inch image intensifier, at the Real Hospital Português de Beneficência in Recife, Pernambuco. Images were recorded on 35 mm Kodak® CFT cinefilm at 30 frames/s. Eight exams were chosen to obtain an ample spectrum of lesion sizes in all segments to be studied.

Coronary arteries were divided into 11 segments for film analysis (chart I).


The five observers were coded and kept anonymous during the study. Each received a form containing a table with rows for recording the 8 films and columns for each of the 11 segments referred to in chart I. No clinical information on the patients was provided, and the identification records of the cinefilms were obscured.

Analyses were performed using a TAGARNO® 35CX film projector, individually and independently, without time limitation. Each observer recorded a single 0 to 100 percent obstruction value for each segment. When more than one lesion was observed in a segment, the highest value was recorded. To avoid visual fatigue and to minimize the natural tendency of observers to dedicate greater attention to the first cinefilms examined, possibly leading to a biased interpretation of the last ones, analyses were made in two sittings, with four films interpreted in each. The sequence of the films was chosen informally.

Using the same protocol described above, each observer re-evaluated the films after a minimum interval of two months. None knew the values recorded by the other observers in either step, nor his own values from the 1st step, before initiating the 2nd step.

Observers were instructed to avoid merely categorizing lesions or placing them at borderline values. In cases where they were unable to establish a percentage value for a given lesion, they were allowed to record, at the appropriate site, a question mark (?) followed by an interpretation of the degree of obstruction as mild (Mi), moderate (M), or severe (S).

For comparative analysis, two charts, one for each manner of classification of the lesions according to the grade of obstruction, were set up (chart II).


The frequency distribution of the lesions indicated by each observer was analyzed by Friedman's 29 test, with four classes of variables, for classification A and by Cochran's 30 test, with binary variables, for classification B.

Pairing of the 5 observers permitted the formation of 10 pairs of combinations, whose contingency tables were used for the statistical analysis of each classification of lesions in both phases of the study. The general rate of agreement, defined as the proportional agreement between observers relative to the whole sample, was calculated for each classification in both steps. The kappa (k) statistic, defined as the proportion of observed agreement beyond that expected by chance, is expressed by the formula 30,31:

k = (po - pe) / (1 - pe)

where po = proportion of observed agreements and pe = proportion of agreements expected by chance.
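As an illustration, the formula can be computed directly from two observers' paired ratings. The sketch below is not the study's software, and the ratings are hypothetical, chosen only to show the arithmetic:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Unweighted kappa: (po - pe) / (1 - pe)."""
    n = len(ratings_a)
    # po: proportion of segments on which the two observers agree.
    po = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # pe: agreement expected by chance, from each observer's marginal frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    pe = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical ratings of 10 segments ("NS" = nonsignificant, "S" = significant):
obs1 = ["NS", "NS", "S", "NS", "S", "NS", "NS", "S", "NS", "NS"]
obs2 = ["NS", "NS", "S", "NS", "NS", "NS", "NS", "S", "NS", "S"]
print(round(cohens_kappa(obs1, obs2), 2))  # raw agreement is 80%, kappa only 0.52
```

Note how the two observers agree on 8 of 10 segments (80%), yet kappa is 0.52, because much of that agreement would be expected by chance given how often each calls a segment nonsignificant.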

This calculation treats all discrepancies alike. When more than two categories are involved, however, discrepancies between contiguous categories and those between more distant ones may have different clinical relevance. To account for these degrees of discrepancy, the weighted kappa index 31 was used to measure agreement between the four categories defined in classification A.
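A weighted variant can be sketched as follows; the text does not state which weighting scheme was applied, so the linear weights and the example ratings below are assumptions for illustration only:

```python
from collections import Counter

def weighted_kappa(ratings_a, ratings_b, categories):
    """Weighted kappa with linear weights over ordered categories (an assumed scheme)."""
    idx = {c: i for i, c in enumerate(categories)}
    n, k = len(ratings_a), len(categories)
    # Disagreement weight grows linearly with the distance between categories,
    # so contiguous categories are penalized less than distant ones.
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    obs = sum(w[idx[a]][idx[b]] for a, b in zip(ratings_a, ratings_b)) / n
    fa, fb = Counter(ratings_a), Counter(ratings_b)
    exp = sum(w[idx[a]][idx[b]] * fa[a] * fb[b] for a in fa for b in fb) / (n * n)
    return 1 - obs / exp

cats = ["absent", "mild", "moderate", "severe"]
a = ["absent", "mild", "severe", "absent", "moderate", "mild"]
b = ["absent", "mild", "moderate", "mild", "moderate", "mild"]
print(round(weighted_kappa(a, b, cats), 2))  # 0.67
```

With unweighted kappa the severe-vs-moderate and absent-vs-mild disagreements above would count as heavily as absent-vs-severe; the linear weights soften them.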

Criteria for the interpretation of values of kappa are described in chart III.


The statistical significance of the differences between kappa indexes of pairs of observers, and between the values of each observer in the two classifications, was analyzed by the paired t test, with α = 0.05.

Charts were developed and statistical calculations performed with the aid of the computer programs Microsoft® Excel version 8.0, Epi Info 6.02, and SPSS for Windows 6.0.

Results

Each observer individually interpreted 11 coronary segments in the eight cinefilms on two independent occasions, for a total of 440 observations in each step of the study. In only 8 (0.9%) of the 880 evaluations was no percent obstruction value assigned.

Frequency distributions of the values determined according to the degree of obstruction defined by classification A in the 1st step of the study are described in table I. Observers 1, 2, 3, and 5 determined that the majority of the segments had no obstructive lesions (grade Z), with incidences between 62.5% and 73%, whereas for observer 4 the majority of the segments had mild lesions (grade Mi, 60.2%). All five observers determined that moderate (grade M) lesions were least frequent, with an incidence varying between 0% (observer 2) and 4.6% (observers 4 and 5). Severe lesions (grade S) were estimated at 8% (observer 1) to 17% (observer 5). These differences were statistically significant (Friedman's test, P<0.0001). In the 2nd step of the study (table II), the average frequency of segments considered grade Z decreased owing to the smaller proportions attributed by observers 1, 2, 3, and 5 (46.6%, 68.2%, 64.8%, and 67.1%, respectively). Again, observer 4 interpreted the majority of the segments as grade Mi lesions (55.7%), and all observers considered grade M lesions to occur at a low frequency, with an incidence varying between 1.1% (observer 2) and 5.7% (observers 4 and 5). Grade S lesions were considered to fall between 10.2% (observer 4) and 22.7% (observer 5). These differences were also statistically significant (Friedman's test, P=0.00015).

The distribution of the frequencies of the lesions attributed by the five observers according to classification criterion B in the first step is described in table III. Observers 1, 2, 3, and 4 indicated an incidence of significant (grade S) lesions between 8% and 12.5%, whereas observer 5 described them in 17% of the cases. These differences were statistically significant (Cochran's test, P=0.03). In the 2nd step of the study (table IV), this pattern repeated itself, with a more marked difference between the results of observers 1, 2, 3, and 4 (grade S in 10.2 to 11.4%) and those of observer 5 (grade S = 22.7%). These differences were highly significant (Cochran's test, P=0.0008).

Interobserver agreement - Table V describes the weighted kappa indexes for the 10 combinations of pairs of observers, calculated to measure agreement regarding grade of lesion according to classification A in the study's two steps, and the respective general agreement rates (GAR). In the 1st step, the GAR of the 10 possible combinations between the 5 observers varied between 38% (observer 4 vs. observer 5) and 81% (observer 1 vs. observer 2). Weighted kappa indexes varied between 0.36 (observer 4 vs. observer 5) and 0.63 (observer 1 vs. observer 2). In the 2nd step, the GAR varied between 42% (observer 4 vs. observer 5) and 78% (observer 2 vs. observer 3). Weighted kappa indexes varied between 0.39 (observer 3 vs. observer 5) and 0.68 (observer 1 vs. observer 4). Differences between the kappa indexes of the 10 combinations in the two steps were not statistically significant (paired t test, P=0.62).

Kappa indexes for the 10 combinations of pairs of observers, calculated to measure agreement regarding lesion grade by classification B in the two steps of the study, and their respective GAR are described in table VI. In the first step, the GAR of the 10 combinations varied between 91% (observer 1 vs. observer 5; observer 2 vs. observers 3 and 5; observer 3 vs. observers 4 and 5; observer 4 vs. observer 5) and 95% (observer 1 vs. observers 2 and 4), and kappa indexes varied between 0.55 and 0.73. In the 2nd step, the GAR varied from 82% (observer 3 vs. observer 5) to 97% (observer 1 vs. observer 4), and kappa indexes varied from k = 0.37 (observer 3 vs. observer 5) to k = 0.82 (observer 1 vs. observer 4). Differences between the kappa indexes of the 10 combinations in the two steps were not statistically significant (paired t test, P=0.65).

With the objective of estimating a general index of agreement among all observers for each classification, the average of the kappa indexes of the 10 combinations was considered for each step. For classification A, the averages of the weighted kappa indexes were km = 0.49 and km = 0.52 in the first and second steps, respectively. By the criteria defined in chart III, the level of agreement between observers when interpreting the absence of lesions or one of the three possible degrees (mild, moderate, or severe) was characterized as moderate in both steps. For classification B, the averaged kappa indexes were km = 0.63 and km = 0.61 in the first and second steps, respectively. Therefore, agreement between observers when evaluating the presence or absence of a clinically significant lesion reached a good level in both steps.

The extent of the variation of the kappa indexes of the 10 pairs of observers, for both classifications and both steps of the study, is represented in figure 1. Note that for classification A the variation was similar in the two steps, differing from the pattern observed for classification B, whose range was noticeably wider in the 2nd step.


The performance of each observer with respect to the presence or absence of a significant lesion (classification B) was subsequently analyzed using the kappa indexes of the pairs in which that observer was included. Each observer participated in four pairs of combinations, and the averaged kappa (km) of each relative to the others, in both steps of the study, is described in figure 2. In the first step, these averages varied between 0.59 (observer 3) and 0.67 (observer 1). In the second step, they varied between 0.47 (observer 5) and 0.67 (observers 1 and 4). These data confirm the homogeneous pattern of interpretation among the observers in the first step and indicate that one observer (observer 5) was responsible for the broader amplitude of the kappa indexes verified in the 2nd step of the study.


Intraobserver agreement - The kappa indexes calculated to determine the reproducibility of the evaluations of each observer in the two steps of the study are shown in table VII. For classification A, results varied between k = 0.57 (observer 1) and k = 0.95 (observer 4). For classification B, results varied between 0.62 (observer 3) and 1.0 (observer 4). Analysis of the differences between the values obtained by each observer in the two classifications demonstrated that the level of intraobserver agreement was significantly higher when observers judged the presence or absence of clinically significant lesions than when they graded lesions according to a greater number of categories of degree of obstruction (paired t test, P=0.03).

Regarding classification A, observers 2, 3, and 5 achieved good levels of agreement, observer 4 reached an excellent kappa level, and observer 1 obtained a moderate level. Regarding classification B, observers 3 and 5 maintained their good level, while the others improved the reproducibility of their own evaluations (observer 1: good; observer 2: excellent; observer 4: perfect).

With the objective of analyzing whether the pattern of intraobserver agreement influenced interobserver agreement in the identification of clinically significant lesions (classification B), the following results were grouped for each observer: the kappa indexes of intraobserver agreement and the averaged kappa indexes relative to the other observers in both steps of the study (table VIII). The level of interobserver agreement was relatively homogeneous in both steps, with the exception of that obtained by observer 5 in the 2nd step. The indexes measuring intraobserver agreement varied more broadly, from good to perfect. We did not find a relationship between the level of intraobserver consistency and the level of interobserver agreement. Observer 4, who reached maximal reproducibility in his interpretations in both steps, obtained an average similar to those of the others regarding interobserver agreement. On the other hand, observer 5, despite having an average lower than the others in the 2nd step, had a good level of reproducibility of his own interpretations, similar to the level reached by observers 1 and 3.

Discussion

Few studies have been designed specifically to analyze the reliability of the visual interpretation of cinecoronariography 5-8,32,33. Our results are similar to those observed in other studies 9,11,22,34; however, their objectives and designs were quite different, hampering the determination of a suitable pattern for estimating the precision of the method. Under the impact of the appearance of coronary angioplasty, several studies proposing to enhance the reliability of the interpretation of CCG were published 10-13,15-18. Technological developments led to improvements in quantitative digital analysis methodology and in its indexes of reproducibility 14,27,35,36. Unlike the visual method 22,37, digital quantification yields normal distribution curves for its measurements. At present, even considering the limitations pointed out in the medical literature 4,23-26, the demand for digital quantification in research based on the angiographic interpretation of coronary atherosclerosis has become consensual.

The use of CCG as a complementary diagnostic tool has, however, a different meaning. In clinical practice, its major informational value concerns the presence or absence of obstructive disease able to cause myocardial ischemia, even though a tendency to disregard lesions of less than 50% exists 4. According to Fleming et al 22, observers tend to categorize lesions by the visual method even when their objective is to quantify percent obstruction. In that study, this was shown to result in greater variability for lesions interpreted as mild (<50%) and in a tendency toward underestimation in comparison with the results of quantitative digital analysis. A similar conclusion was formulated by Gurley et al 23.

What should be compared, and how should it be collected? - The percent obstruction value, being a continuous variable, permits the calculation of variability indexes based on the standard deviation of its means. Some authors have suggested that the standard error of the estimated percent value of a given lesion could be used as a numerical parameter of the variation of the method, describing variability indexes of 28-36% 7,11,32.

However, when we analyze, for example, the results of Derouen et al 7, in which the standard deviations of the segments analyzed varied between 0 and 51.3%, a generalization based on the obtained mean (18%) appears to be of debatable practical utility.

The present study was planned to evaluate visual CCG interpretation in the way it is currently routinely performed, favoring comparison based on class variables and adopting as major references the best-known values for ischemic limits - obstructive lesions ≥50% in the left main coronary artery and ≥70% in the other arteries. In fact, simple observation in tables I and II of the high standard deviations of the frequencies attributed to grades Z and Mi, compared with the lower frequencies given to grades M and S, indicates that the observers varied more when quantifying clinically insignificant lesions (≤50% obstruction), confirming the results commented on above.

Studies evaluating variability in CCG interpretation according to observers' length of activity or experience either did not demonstrate significant differences related to these criteria 11,22 or found a positive correlation with the maintenance of regular activity in the area 5. The five observers chosen for the present study make up 29.4% of the total number of professionals qualified by the Brazilian Society of Hemodynamics and Interventional Cardiology regularly active in the various clinics of the city of Recife.

The chosen protocol adopted a model similar to daily practice. Cinefilms were selected without major restrictions, and the observers handled them freely. With the criteria defined by classification A, with four classes of variables, we tried to establish the level of agreement for the most detailed evaluation of the degree of obstruction. The main objective of the present study, however, was to analyze the agreement rate when the dichotomous criterion for classifying lesions (classification B) is used.

Which is the most adequate index of reliability? - The use of different indexes to measure agreement is an important limitation for comparison between studies. The general agreement rate, the simplest way to evaluate agreement between categorical variables, has been adopted by some authors 5,6,9 in spite of important restrictions concerning the significance of its results: it does not identify the proportion of cases in which chance was responsible for the agreement, it is influenced by the proportion of positive findings, and it cannot be compared with degrees of agreement from other studies 31,38. In the present work, these rates are shown for merely descriptive purposes. Indeed, values such as those obtained for classification B (table VI), with a general degree of agreement between 91% and 95% in the first step and between 82% and 97% in the second, do not permit clear interpretation of the quality of the level of agreement between observers.
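A small numerical sketch (hypothetical counts, not the study's data) makes this limitation concrete: when significant lesions are rare, chance agreement is high, so a flattering general agreement rate can coexist with a much more modest kappa:

```python
def kappa_from_2x2(both_pos, a_only, b_only, both_neg):
    """Kappa for a dichotomous rating from the four cells of a 2x2 agreement table."""
    n = both_pos + a_only + b_only + both_neg
    po = (both_pos + both_neg) / n            # general agreement rate
    p_a = (both_pos + a_only) / n             # observer A's rate of positive calls
    p_b = (both_pos + b_only) / n             # observer B's rate of positive calls
    pe = p_a * p_b + (1 - p_a) * (1 - p_b)    # agreement expected by chance
    return (po - pe) / (1 - pe)

# 88 hypothetical segments: both observers call 5 significant, they disagree on 6,
# and both call 77 nonsignificant. GAR = 82/88, about 93%, yet kappa is only 0.59.
print(round(kappa_from_2x2(5, 3, 3, 77), 2))  # 0.59
```

Because roughly 91% of each observer's calls are negative, most of the raw agreement is expected by chance alone, which is exactly the distortion the kappa index is designed to remove.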

The kappa index is a coefficient that excludes chance when agreement between paired observations is calculated, enabling qualification of the degree of agreement and comparison with indexes obtained in other studies 2,31. The main criticism of its application in the present study is the impossibility of estimating the agreement of all observers at once, because it is an index that measures agreement between pairs of observers. To obtain a general idea of the agreement of all observers, we adopted the mean of the indexes, a resource applied in other studies cited in the medical literature 7,34.

The weighted kappa index is recommended to grade the discrepancies between levels of disagreement in situations in which more than two categories are considered 30,31. The manner of weighting the indexes in classification A, however, does not correct for distortions regarding the clinical relevance of these disagreements. Disagreement over attributing a moderate or a severe degree to a given lesion has greater therapeutic and prognostic implications than disagreement over a moderate or a mild degree. Only an arbitrary intervention in the application of the weights would correct such discrepancies; this, however, would compromise the comparison of the results of this study with those of others.

Interobserver agreement - Detre et al 5, considering a significant lesion to be an obstruction ≥50%, applied indexes derived from the standard deviation of the positive findings and concluded that the levels of agreement among observers lay midway between perfect agreement and that due to chance. In terms of the kappa function, such an index of 0.50 would be considered moderate. The same level (k = 0.55) was reported by Derouen et al 7, whose criterion was similar to that adopted by us in classification B (significant lesion ≥70%).

Although intended to compare the visual method with the use of calipers, the study reported by Holder et al 34 yielded results amenable to comparison with ours. Using means of weighted kappa indexes, agreement between the five observers classifying lesions into three categories was of a good level (k = 0.62). The kappa averages of each observer relative to those of the others were similar for all observers.

In the CASS study, there was a moderate level of agreement about the number of significant lesions per exam (k = 0.57) 8. Although the criterion for significant lesions was similar to ours, this index actually reflected the agreement between the interpretations of one of fourteen participating clinical centers and those of four quality control centers.

In our study, the repetition of the analyses by the five observers allowed levels of interobserver agreement to be established between 10 pairs of observers in each step, as well as between these same pairs across both steps. The results demonstrated that, for identifying the degree of coronary obstruction in greater detail (classification A), the simple visual method of CCG interpretation achieved a merely moderate level of reproducibility between the results of different observers. However, it achieved a good level of reproducibility in estimating the existence or not of obstructions capable of causing ischemia (classification B).

Intraobserver agreement - In the study by Detre et al 5, intraobserver agreement was estimated from the general rate of agreement and varied between 72% and 91%. Holder et al 34, applying weighted kappa indexes, described intraobserver levels of consistency from moderate to good (k = 0.57 to 0.79).

In our study, observers were more consistent in their own evaluations when the dichotomous criterion (B) was applied. The results demonstrated that all observers reproduced their interpretations regarding the presence or absence of a significant lesion at a minimum level considered good. In fact, as described in table VII, one observer obtained an excellent level (observer 2, k = 0.89) and another reached maximal agreement (observer 4, k = 1.0). This qualitative difference in intraobserver reproducibility did not match the more homogeneous pattern presented by the observers regarding the agreement among them (table VIII). At the other extreme, the lower index of observer 5 relative to the others in the 2nd step did not prevent him from achieving a good level of reproducibility of his own evaluations. These observations demonstrate that no relationship exists between the level of intraobserver consistency and the levels of interobserver agreement.

Final considerations - In the medical literature, quantitative digital angiography has become the standard procedure for the angiographic interpretation of the coronary arteries. In clinical practice, however, its routine application has important limitations and does not eliminate the operator's subjectivity in various stages of the examination and in the selection of the images and segments to be interpreted. The tendency to classify lesions according to the degree of obstruction, and to define conduct based on dichotomous criteria, renders irrelevant the level of precision in estimating lesions below the limits considered capable of evoking myocardial ischemia.

The present study demonstrated, based on kappa statistics, that interpretation of cinecoronariography (CCG) by the simple visual method reached a good level of reproducibility among interventional cardiologists only when the adopted criterion was the one routinely used to consider the indication of myocardial revascularization treatment (clinically significant lesion or not). When opinions were expressed about the grades of obstruction classically considered in clinical practice (absence of obstruction; mild, moderate, or severe lesions), only a moderate level of reliability between observers was noted. As expected, because it is more plausible for each individual to agree with himself than with others, the level of intraobserver agreement was higher for each of the criteria adopted, although we did not find a relationship between the degree of precision among observers and the level of consistency of their own opinions.

In the clinical field, therefore, where the main objective is to diagnose and define the extent of coronary atherosclerotic disease in order to set up a therapeutic program, the present study demonstrated that the simple visual method, still the most used in clinical practice, fulfills the requisites regarding the reliability of its results.

Acknowledgements

To Profs. Eulálio Cabral and José N. Figueiroa for aid in the analysis of the results and to Prof. Sandra N. Coelho, MD for patience and judicious analyses during the elaboration and execution of this study.

Hospital das Clínicas UFPE and Real Hospital Português, Recife.

Mailing address: Jorge Augusto Nunes Guimarães - Rua Alfredo Fernandes, 136/401 - 52060-320 - Recife, PE - Brazil. e-mail: jang@realcor.com.br


Publication Dates

  • Publication in this collection
    08 Jan 2002
  • Date of issue
    Apr 2000