Print version ISSN 0066-782X
Arq. Bras. Cardiol. vol.99 no.4 São Paulo Oct. 2012 Epub Oct 02, 2012
Marcelo Souza HadlichI,II,IV,V,VI; Gláucia Maria Moraes OliveiraI,II,IV; Raúl A. FeijóoIII; Clerio F. AzevedoIV,V,VI; Bernardo Rangel TuraV; Paulo Gustavo Portela ZiemerIII; Pablo Javier BlancoIII; Gustavo PinaI; Márcio MeiraI; Nelson Albuquerque de Souza e SilvaI,II
IUniversidade Federal do Rio de Janeiro (UFRJ). Rio de Janeiro, RJ - Brazil
IIInstituto do Coração Edson Saad. Rio de Janeiro, RJ - Brazil
IIILaboratório Nacional de Computação Científica (LNCC). Rio de Janeiro, RJ - Brazil
IVRede Labs Dor. Rio de Janeiro, RJ - Brazil
VInstituto Nacional de Cardiologia. Rio de Janeiro, RJ - Brazil
VIInstituto Dor de Pesquisa e Ensino. Rio de Janeiro, RJ - Brazil
BACKGROUND: Images used in Medicine were standardized in 1993 by means of the DICOM (Digital Imaging and Communications in Medicine) standard. Several tests use this standard, and it is increasingly necessary to design software applications capable of handling this type of image; however, such applications are usually neither free nor open-source, which hinders their adaptation to the most diverse needs.
OBJECTIVE: To develop and validate a free and open-source software application capable of handling DICOM coronary computed tomography angiography images.
METHODS: We developed and tested the ImageLab software in the evaluation of 100 tests randomly selected from a database. We carried out 600 analyses divided between two observers using ImageLab and another software sold with Philips Brilliance computed tomography appliances in the evaluation of coronary lesions and plaques in the left main coronary artery (LMCA) and the anterior descending artery (ADA). To evaluate intraobserver, interobserver and intersoftware agreements, we used simple agreement and kappa statistics.
RESULTS: The agreements observed between software applications were generally classified as substantial or almost perfect in most comparisons.
CONCLUSION: The ImageLab software agreed with the Philips software in the evaluation of coronary computed tomography angiography tests, especially in patients without lesions, with lesions < 50% in the LMCA and < 70% in the ADA. The agreement for lesions > 70% in the ADA was lower, but this is also observed when the anatomical reference standard is used.
Keywords: Diagnostic imaging; tomography scanners, X-Ray computed; coronary vessels; software validation.
In the medical area, a specific imaging standard called DICOM (Digital Imaging and Communications in Medicine) is used. This standard was developed to facilitate communication between the software and hardware involved in this process.
The DICOM format, standardized1 in 1993 by the RSNA (Radiological Society of North America) Congress, aimed at standardizing the rules with which medical information is transmitted and stored2-4.
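The DICOM standard mentioned above defines, among other things, the file format in which images are stored. As a minimal sketch of that format, a DICOM Part 10 file announces itself with a 128-byte preamble followed by the ASCII marker "DICM"; the check below is an illustration only, and production code should rely on a dedicated library (such as pydicom) rather than hand-rolled parsing.

```python
def is_dicom_part10(raw: bytes) -> bool:
    """Return True if the byte stream carries the DICM magic marker
    after the 128-byte preamble, as required by DICOM Part 10."""
    return len(raw) >= 132 and raw[128:132] == b"DICM"

# Fabricated example stream: 128 filler bytes, then the marker.
sample = bytes(128) + b"DICM"
print(is_dicom_part10(sample))        # True
print(is_dicom_part10(b"not dicom"))  # False
```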
Medical images are viewed and processed by means of specific software applications. Many of these resources are not available for free or are sold together with equipment, and it is uncommon to find free and open-source software applications for that image format. Making such medical software publicly available could reduce medical costs, homogenize multicenter use, allow continued development and facilitate several lines of research; some believe it could also help reduce the social gap between countries5.
The National Institute of Science and Technology in Medicine Assisted by Scientific Computing (INCT-MACC) involves 33 national institutions from 11 states, totaling 128 researchers. One area of research of the INCT-MACC is the processing of medical images, enabling the acquisition of information to improve the computational modeling of the human cardiovascular system, also under development by the INCT-MACC. For this purpose, a free and open-source software application named ImageLab was designed. The software is intended to be user-friendly, but that characteristic requires further evaluation.
The purpose of this study was to develop the new software and to evaluate its reliability, reproducibility and agreement compared with another application currently used with coronary computed tomography angiography equipment.
In a concerted effort undertaken by the Universidade Federal do Rio de Janeiro (UFRJ) together with the National Laboratory for Scientific Computing (LNCC), a software application was developed to analyze DICOM images. This free and open-source software is called ImageLab and has some tools created to analyze and process coronary computed tomography angiography images.
The inclusion criteria were applied to a population of 6,216 patients who underwent coronary computed tomography angiography between May 2005 and December 2010, i.e., patients with complete data recorded in the database created in 2008; 2,895 patients were selected. Subsequently, the exclusion criteria were applied, i.e., presence of stents in the left main coronary artery or anterior descending artery (ADA), surgical clips resulting from coronary artery bypass grafting, and absence of images stored on the server, leaving a total of 534 cases (Figure 1).
A random selection was carried out on the random.org website, which drew 120 of the 534 patients; the first 100 cases were then used. All patients had their identifiers replaced by consecutive numbers by people not involved in the image analysis, making the study blind.
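The sampling step described above can be sketched as follows: draw 120 of the 534 eligible cases without replacement and keep the first 100. The study used random.org; Python's pseudorandom generator stands in here, seeded only so the sketch is reproducible (the seed value is arbitrary).

```python
import random

rng = random.Random(2012)                # arbitrary seed for reproducibility
drawn = rng.sample(range(1, 535), 120)   # 120 distinct case numbers, 1..534
selected = drawn[:100]                   # the first 100 cases are analyzed

print(len(selected), len(set(selected)))  # 100 100
```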
Raw data from coronary computed tomography angiography images obtained through CT scanners available in the service were used (GE Lightspeed VTC 64 and 32, Philips Brilliance 64, 40 and 16). All images followed a standard acquisition protocol using similar techniques.
All 100 tests were analyzed by two software applications. The Philips Brilliance software was purchased along with the cardiovascular package (available for a specific workstation for analyses of DICOM images), and the ImageLab software, developed in this project, was installed in the same workstation (to neutralize potential variations of processing capacity if they were installed in computers with different capacities).
We chose the left main coronary artery and the ADA due to their clinical importance and to make the analysis feasible, as using all 16 segments would result in a large number of variables. We followed the nomenclature proposed by the AHA regarding segments 5, 6, 7 and 86. The evaluation of the presence of coronary lesions greater than or smaller than 50% was used for segment 5 (left main coronary artery); for the other segments (anterior descending artery), we used a cutoff point of 70%. Additionally, the following variables were also recorded: presence of calcified, partially calcified and non-calcified atherosclerotic plaques (in each segment); subjective quality of images; and time of image analysis (recorded by observer 1).
Image analysis was performed by viewing orthogonal planes selected from image blocks in three-dimensional space, starting with the chest analysis in the transverse plane from head to foot, in order to view the aorta, the origin of the left main coronary artery and the ADA. Evaluations on the degree of stenosis followed the standard analysis, where the filling failure in the contrasted vessel defines the lesion. The analysis of the type of plaque responsible for the lesions was performed subjectively. Plaque characterization was performed according to their morphology and signal intensity following the traditional scale adopted in CT tests, the scale of Hounsfield (HU)7. The plaques were classified as: calcified (tissue adjacent to the vessel with more signal strength than the contrasted vessel - signal >130 HU); non-calcified (tissue adjacent to the vessel with lower signal than the contrasted vessel - signal < -50 HU), and partially calcified (heterogeneous content).
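The plaque-classification rule stated above can be written out directly from the Hounsfield-unit cutoffs in the text: tissue brighter than the contrasted lumen (signal > 130 HU) is called calcified, tissue darker than the lumen (signal < -50 HU) non-calcified, and heterogeneous content in between partially calcified. The function name and the handling of borderline values are our assumptions; this is a sketch of the stated criteria, not the authors' implementation.

```python
def classify_plaque(hu: float) -> str:
    """Classify a plaque by its Hounsfield-unit signal, following the
    cutoffs described in the text (> 130 HU and < -50 HU)."""
    if hu > 130:
        return "calcified"
    if hu < -50:
        return "non-calcified"
    return "partially calcified"

for value in (400, -80, 60):
    print(value, classify_plaque(value))
```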
The two software applications feature a tool designed to find and analyze any points in the three-dimensional space. This tool was used to evaluate the vessel in its axial plane (viewing from within the vessels) throughout their extension, and when coronary lesions and plaques are present, all the orthogonal planes may be used simultaneously at the exact topography where these findings are located.
Analysis of the Images
Two observers with more than five years of experience, equivalent to level 3 of clinical competence8, used the commercial Philips software and ImageLab, installed on the same computer, to analyze the segment of the ADA according to the variables described in Table 1. To avoid bias, the analyses of the two software applications occurred at intervals of more than 15 days.
A total of 600 analyses were performed to assess the intraobserver, interobserver and intersoftware agreements.
Numerical variables were described as means ± standard deviations. Categorical variables were presented as numbers (n) and percentages (%). Kappa statistics were used to calculate interobserver and intraobserver reproducibility, all with 95% confidence intervals, as well as simple agreement for cases with a small number of disagreements. The software used was R for Linux. We used Table 1 to assess the degree of consistency between the agreements9.
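The two agreement measures used in the study can be sketched for a 2 x 2 table [[a, b], [c, d]] (rows: rater 1, columns: rater 2). Simple agreement is the share of concordant pairs on the diagonal; Cohen's kappa corrects that share for the agreement expected by chance. The example table is fabricated for illustration; the study itself computed these measures in R.

```python
def simple_agreement(table):
    """Share of concordant pairs in a 2 x 2 agreement table."""
    (a, b), (c, d) = table
    return (a + d) / (a + b + c + d)

def cohens_kappa(table):
    """Cohen's kappa: chance-corrected agreement for a 2 x 2 table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    po = (a + d) / n                                        # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)

# Fabricated table: 82 concordant "no lesion", 10 concordant "lesion",
# 8 discordant readings.
table = [[82, 5], [3, 10]]
print(round(simple_agreement(table), 3))  # 0.92
print(round(cohens_kappa(table), 3))      # 0.668
```

Note that the paper reports both measures multiplied by 100 (e.g., a kappa of 47.8 corresponds to 0.478 on this scale).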
After the random selection of 120 cases, 7 of the drawn cases were excluded due to lack of contrast examination (calcium score tests without coronary computed tomography angiography). A total of 100 tests, taken from the first 107 drawn, were included in the 600 analyses divided by observers and software. (All results of agreements obtained with the R program for Linux can be viewed at http://cl.ly/3c193E0J1o1M0f360u1d)
The version of the ImageLab software used in this project proved appropriate, demonstrating convenient format and usability (Figure 2).
All 100 patients analyzed were tested between July 2009 and November 2010, most of whom were male (65) and mean age was 58 years (23-85). Mean body mass index was 27 kg/m2, which classified the population as overweight (25% were classified as obese). Only 15% of the population did not have any risk factor, and 35% were asymptomatic (Table 2).
The most frequent symptom observed was the presence of atypical pain for CAD, and most test indications were performed to evaluate any symptoms. It is noteworthy that 36% of patients were referred due to the presence of abnormal functional test, and 16%, for risk stratification (routine evaluation unrelated to symptoms or abnormal tests); only 2%, to exclude the diagnosis of atherothrombotic disease as the etiology of a cardiomyopathy; and 2%, due to preoperative evaluation (assessment of surgical cardiovascular risk).
Observer 1 used a pad-shaped electronic device (Apple® tablet - Wi-fi 64 Gb iPad) to record the analyses using the software application Bento® for iPad. This allowed registering the time of the analyses in a simple way (Table 3).
The total time of the analyses performed by observer 1 was approximately 6.8 hours over 14 days. The average time spent on each analysis was 2 minutes and 4 seconds (range: 21 to 612 seconds). The mean time to analyze all cases was 226.2 minutes with the ImageLab software and 180.9 minutes with the Philips Brilliance software (20% less). The second analysis with each software application was faster, with total time decreasing by 11.9 minutes for ImageLab and 10.7 minutes for Philips Brilliance.
We recorded the subjective quality of images on a 1-3 scale (1 = low quality, 2 = intermediate quality, 3 = high quality). Of the images, 24% were classified as low quality, 64.2% as intermediate and 11.8% as high quality (76% with intermediate or high quality).
To evaluate the intraobserver and interobserver agreements, we used the simple agreement (sum of agreeing findings in relation to the total number of cases), in the presence of few or no cases of disagreement (heterogeneous distribution in the 2 x 2 table with few or no cases in one of the cells), and using kappa statistics (Table 1) as a reference to the degree of agreement between the results8.
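The degree-of-agreement labels used throughout the paper ("moderate", "substantial", "almost perfect") follow the conventional Landis and Koch scale, which we assume is what Table 1 contains; the boundaries below are the standard published ones, restated here as a sketch.

```python
def landis_koch(kappa: float) -> str:
    """Map a kappa value (0-1 scale) to the conventional
    Landis & Koch strength-of-agreement label."""
    if kappa < 0:
        return "poor"
    for bound, label in [(0.20, "slight"), (0.40, "fair"),
                         (0.60, "moderate"), (0.80, "substantial")]:
        if kappa <= bound:
            return label
    return "almost perfect"

print(landis_koch(0.478))  # moderate
print(landis_koch(0.92))   # almost perfect
```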
The average observations of the 600 analyses (Table 3) showed no lesion in the left main coronary artery in 82% and no lesion in the ADA in 49.3%. Less than 1% had lesions >50% in the left main coronary artery, and 9% had lesions >70% in some segment of the ADA. The most frequently observed plaque type was the calcified one.
It was possible to perform the analysis of agreement using Kappa, for all types of comparison (intraobserver, interobserver, and intersoftware), in the following variables: absence of lesion and lesion > 50% in the left main coronary artery; absence of lesion in the ADA and > 70% in the middle third of this artery (Table 4). For the evaluation of plaques, Kappa may be performed only with the variable calcified plaque in the left main coronary artery (Table 5). The other agreements were performed by measuring the simple agreement.
To evaluate the intraobserver agreements, we compared the analyses performed by observer 1 with ImageLab at time 1 versus ImageLab at time 2, as well as with Philips Brilliance at time 1 versus Philips Brilliance at time 2.
All evaluations measured by Kappa or by simple agreement in left main coronary artery were >60. Only the variable lesion >70% in the mid third of the descending artery, in the interobserver evaluation by Philips Brilliance, presented moderate agreement (49.7 - 13.8 to 85.6). All the other agreements observed in the left ADA were >609.
The observation of absence of lesions and lesion <50% in the left main coronary artery, as well as lesion >70% in the mid-third of the left ADA showed greater intraobserver agreement by the software Philips Brilliance. The others were similar.
The intersoftware analysis showed agreement >80 for all measurements performed by Kappa, except for the observation of lesion >70% in the mid-third of the ADA (79.6 - 63.6 to 95.7), while all simple intersoftware agreements were >80.
When the most frequent observations with greater clinical significance are analyzed (Table 6), it was possible to use the Kappa, which showed higher agreement for the identification of absence of lesion (Kappa >90 for all comparisons) and then, the identification of calcified plaques that also displayed a substantial degree of agreement. The agreement decreases in the evaluation of significant lesions in the ADA (>70%), in which the worst kappa was 47.8 for the interobserver evaluation with the ImageLab software.
Our study analyzed patients referred by their physicians for coronary computed tomography angiography due to different clinical indications. These patients, seen routinely at private services and referred for this type of test, generally have a low to intermediate likelihood of having the disease, and the high negative predictive value of the test10 is used to rule out the diagnosis. We observed that in the population studied, only 9% had chest pain typical of coronary artery disease, and when we evaluated the calcium scores of the same population (not used in the assessments), only 20% had values above the 75th percentile for sex and age, which is associated with a higher overall risk of cardiovascular events11.
The use of patients referred for cardiological indications makes this study feasible, but it may also pose a problem. The absence of some types of lesions and plaques in certain coronary segments is surely a limitation for the analysis of agreement. In these cases, it was only possible to compute the simple agreement, as kappa requires a more varied distribution of findings to be calculated. One way to correct this problem would be to enroll patients stratified by lesions and plaques in order to force the entry of all kinds of findings. This approach, however, would change the profile of the population studied, which would certainly differ from the population seen in everyday practice, referred by physicians for clarification of an uncertain diagnosis.
The assessment of only part of the test (contrasted phase) without the patients' clinical data, possibly diminishes the observers' capacity to interpret the images. This may interfere with the test report and perhaps with medical management, but our goal was not to evaluate how accurate the new software was in relation to the reference standard (coronary angiography), but to compare it with another widely used software application.
We did not find any specific scientific validation for the Philips' software in our review, but we believe that its regular use in clinical practice and in scientific works allows us to conclude that there is some sort of validation for that software application.
This study was motivated by the research area funded by Faperj, concerning computer modeling of the cardiovascular system. It aims to use ImageLab in coronary angiography tests in DICOM format and possibly in other tests using this format.
The subjective evaluation of the degree of lesion, from a representative image of the artery that is constantly in motion and is generally smaller than 3 mm, can be difficult. Moreover, viewing a complex structure in three-dimensional space expressed by a two-dimensional shape image is a challenge to accuracy. The lesion degree estimated with these limitations will define the clinical conduct to be followed in several cases.
Despite the lower accuracy compared with cine angiocardiography (CA), it is possible to analyze the image in four dimensions (three dimensions plus the time of one cardiac cycle) when using coronary computed tomography angiography and that possibly facilitates agreement with CA, which is reflected in the high sensitivity and specificity of coronary computed tomography angiography, with values above 90%9.
We obtained good agreement in most analyses and in the direct evaluation of the software applications. This agreement was greater in the intraobserver evaluation, which was expected. Another predictable finding was greater agreement in cases of absence of lesion. The worst evaluation occurred between observers with the ImageLab software in the analysis of lesions >70% in the ADA (kappa 47.8). This discrepancy could be partly explained by the difficulty of subjective quantification in some cases and by different interpretations of the site where the lesion is found, or even of whether a particular lesion was correctly assessed, as this information was not recorded precisely enough to state whether it occurred in different segments (a lesion >70% in the ADA at the junction of the proximal and middle segments was sometimes interpreted as proximal, sometimes as middle).
Despite the limitations of the study population described above, the simple agreements performed in cases with few observations of lesion and plaques were always very good. The lack of such information may jeopardize the external validity of the software for rarer findings, but the vast majority of patients who are referred to this test do not have these alterations.
In the analysis of agreements with greater clinical significance and more frequent observations (Table 6), we also observed greater agreement for the observation of absence of lesions in the ADA, with kappa always >90. In the same table, we observed lower agreement for lesions >70% and for the identification of calcified plaques. These data are consistent with the characteristics of coronary computed tomography angiography, whose use is more appropriate to rule out the presence of lesions and plaques, i.e., the method is more accurate and reliable when no coronary alterations are present.
When we searched the literature for evaluations of agreement with CA, even though the anatomical reference standard is used, we found great variation, with kappa ranging from 36 to 63 for more detailed categorical measures of lesion degree (0, 1-50, 51-69 and >70) and kappa from 37 to 82 for the less detailed dichotomous measure (<70 or >70)12.
Another interesting finding obtained with two observers is the change of interpretation of a previous CA report compared with a recent evaluation, with moderate agreement and kappa ranging from 54 to 60 among observers. This occurs even when the recent agreement between these observers is good (kappa 69)13. These data may be related to the information available to the image observer at the time of analysis (images associated with clinical history, the test context, previous examinations and symptoms), compared with the analysis of images without prior information about the patients.
In our study, the absence of patient information, including identification, characteristics of precordial pain, result of functional tests and evaluation of calcium score, may have influenced the interpretation of images, but our goal was not to approve a clinical report, but to compare software applications.
We observed very good agreements between the two software applications, both in the intraobserver and the interobserver evaluation.
In the evaluation of intersoftware agreements, the agreement in the analysis of lesions and plaques was generally greater than 80.
The software ImageLab agrees with the Philips software in the evaluation of coronary computed tomography angiography, has a great potential to help clinical practice and research in this area, despite the need for a more robust evaluation. It can be used as a tool in the research area of computer modeling of the cardiovascular system and possibly in other areas related to images in DICOM format.
Potential Conflict of Interest
No potential conflict of interest relevant to this article was reported.
Sources of Funding
There were no external funding sources for this study.
This article is part of the dissertation of master submitted by Marcelo Souza Hadlich, from Universidade Federal do Rio de Janeiro.
1. Yamauchi T, Yamazaki M, Okawa A, Furuya T, Hayashi K, Sakuma T, et al. Efficacy and reliability of highly functional open source DICOM software (OsiriX) in spine surgery. J Clin Neurosci. 2010;17(6):756-9.
2. Pianykh OS. A brief history of DICOM. In: Digital Imaging and Communications in Medicine. Philadelphia: Springer; 2008. p. 17-22.
3. Gibaud B. The DICOM standard: a brief overview. Philadelphia: Springer; 2008. p. 1-10.
4. Graham RNJ, Perriss RW, Scarsbrook AF. DICOM demystified: a review of digital file formats and their use in radiological practice. Clin Radiol. 2005;60(11):1133-40.
5. Kon F. O software aberto e a questão social [Open source software and the social question]. Relatório Técnico RT-MAC-2001-07. São Paulo: Departamento de Ciência da Computação, IME-USP; 2001. p. 1-10. [Cited 2011 Jan 10]. Available at: http://www.ime.usp.br/r Kon/publications.html
6. Raff GL, Abidov A, Achenbach S, Berman DS, Boxt LM, Budoff MJ, et al. SCCT guidelines for the interpretation and reporting of coronary computed tomographic angiography. J Cardiovasc Comput Tomogr. 2009;3(2):122-36.
7. Brooks RA. A quantitative theory of the Hounsfield unit and its application to dual energy scanning. J Comput Assist Tomogr. 1977;1(4):487-93.
8. Budoff MJ, Cohen MC, Garcia MJ, Hodgson JM, Hundley WG, Lima JA, et al. ACCF/AHA clinical competence statement on cardiac imaging with computed tomography and magnetic resonance: a report of the American College of Cardiology Foundation/American Heart Association/American College of Physicians Task Force on Clinical Competence and Training. J Am Coll Cardiol. 2005;46(2):383-402.
9. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159-74.
10. Janne d'Othée B, Siebert U, Cury R, Jadvar H, Dunn EJ, Hoffmann U. A systematic review on diagnostic accuracy of CT-based detection of significant coronary artery disease. Eur J Radiol. 2008;65(3):449-61.
11. Raggi P, Callister TQ, Cooil B, He ZX, Lippolis NJ, Russo DJ, et al. Identification of patients at increased risk of first unheralded acute myocardial infarction by electron-beam computed tomography. Circulation. 2000;101(8):850-5.
12. Guimaraes JA, Victor EG, de Britto Leite MR, Gomes JM, Victor Filho E, Reyes Liveras J. Reliability of the interpretation of coronary angiography by the simple visual method. Arq Bras Cardiol. 2000;74(4):300-8.
13. Banerjee S, Crook AM, Dawson JR, Timmis AD, Hemingway H. Magnitude and consequences of error in coronary angiography interpretation (the ACRE study). Am J Cardiol. 2000;85(3):309-14.
Manuscript received November 4, 2011; manuscript revised November 4, 2011; accepted June 13, 2012.