
CLINICAL SCIENCE

Evaluation of machine learning classifiers in keratoconus detection from orbscan II examinations

Murilo Barreto Souza; Fabricio Witzel Medeiros; Danilo Barreto Souza; Renato Garcia; Milton Ruiz Alves

Faculdade de Medicina da Universidade de São Paulo, Ophthalmology, São Paulo, São Paulo, Brazil. E-mail: murilobsouza@gmail.com. Tel.: 55 71 3203-3466

ABSTRACT

PURPOSE: To evaluate the performance of support vector machine, multi-layer perceptron and radial basis function neural network as auxiliary tools to identify keratoconus from Orbscan II maps.

METHODS: A total of 318 maps were selected and classified into four categories: normal (n = 172), astigmatism (n = 89), keratoconus (n = 46) and photorefractive keratectomy (n = 11). For each map, 11 attributes were obtained or calculated from data provided by the Orbscan II. Ten-fold cross-validation was used to train and test the classifiers. Besides accuracy, sensitivity and specificity, receiver operating characteristic (ROC) curves for each classifier were generated, and the areas under the curves were calculated.

RESULTS: The three selected classifiers provided a good performance, and there were no differences between their performances. The area under the ROC curve of the support vector machine, multi-layer perceptron and radial basis function neural network were significantly larger than those for all individual Orbscan II attributes evaluated (p<0.05).

CONCLUSION: Overall, the results suggest that support vector machine, multi-layer perceptron and radial basis function neural network classifiers, trained on Orbscan II data, could represent useful techniques for keratoconus detection.

Keywords: Neural networks; Artificial intelligence; Clinical decision support systems; Corneal topography; Diagnosis.

INTRODUCTION

Keratoconus (KC) is a bilateral, non-inflammatory condition characterized by progressive thinning, protrusion and scarring of the cornea.1 The disease usually becomes clinically evident at puberty, and its etiology remains unknown.2 Although it has well-described clinical signs, early forms of the disease may go undetected, even when computer-assisted videokeratography or other methods are used to evaluate the cornea.3

Prior to the development of refractive surgery, diagnosing clinically evident keratoconus was considered sufficient.4 However, given the spread of refractive surgery,5 careful differentiation between normal and keratoconus cases is essential to avoid postoperative complications such as keratectasia.6

Classification represents an important process in medical care. To help with this task, predictive models are used in a variety of medical domains, including diagnosis. These models are usually based on knowledge acquired from actual cases stored in databases. The data used to build these models can either be preprocessed and expressed as a set of rules or serve as training data for statistical or machine learning models.7

Machine learning models have already been used in keratoconus detection. Previous papers have focused on the assessment of neural networks in keratoconus diagnosis; however, only multi-layer perceptron (MLP) and anterior topographic data have been used.3,5,8,9

Like the MLP, the most popular artificial neural network, the support vector machine (SVM) and the radial basis function neural network (RBFNN) also represent supervised learning methods that can be used for regression or classification.10

The Orbscan II™ (Bausch & Lomb) is a hybrid system that acquires data through slit-scanning and Placido ring technology. This instrument is able to map multiple ocular surfaces beyond the anterior corneal surface.11 A well-known theorem in prediction theory states that, when more variables describing an event can be measured, the model can predict the outcome more precisely.12 Thus, we hypothesized that a high accuracy in the classification of keratoconus subjects can be reached when Orbscan II data are used to develop supervised learning methods.

In this study, we evaluated the performance of SVM, MLP and RBFNN to detect keratoconus apart from all other corneal patterns, using Orbscan II data.

METHODS

This study was composed of three phases. First, Orbscan II data were retrospectively collected from medical records. In the second phase, these data were preprocessed in order to properly present them to the classifiers. SVM, MLP and RBFNN classifications were applied in the third phase. Subjects were enrolled from patients examined at the private practice of one of the authors (M.B.S.) between January 2004 and January 2009. Research followed the tenets of the Declaration of Helsinki, and Institutional Review Board approval was obtained.

Only one eye of each patient was randomly included in the study. Diagnostic classification for all patients was obtained from medical records and Orbscan II data review.

The examinations were classified into four different corneal categories: normal, astigmatism, keratoconus (KC) and photorefractive keratectomy (PRK).

The maps were classified as keratoconus if they had a central corneal power greater than 48.7 D, an inferior-superior asymmetry (I-S) above 1.9,13,14 or at least one of the following biomicroscopic findings: Vogt's striae or Fleischer's ring.

Clinically diagnosed normal eyes, with no abnormal flattening or steepening on the tangential map and no irregular astigmatism, were included in the normal (<1.5 D cylinder) or astigmatism (>1.5 D cylinder) groups.

Orbscan II maps with poor corneal coverage, missing data points, poor fixation or lid artifacts were excluded.

The machine classifiers were developed to detect the presence of KC apart from other cornea patterns.

WEKA software15 version 3.6.2 was used to implement the SVM and RBFNN classifiers, and NETLAB16 software was used to implement the MLP model. Although the holdout method is the simplest technique for "honestly" estimating error rates, a single random partition can be misleading for small or moderately sized samples, and multiple train-and-test experiments can do better. In order to find the best classifier parameters and to evaluate their generalization ability, 10-fold cross-validation was used. In 10-fold cross-validation, the cases are randomly divided into 10 mutually exclusive test partitions of approximately equal size.17 In each train-and-test experiment, nine partitions are used for training and one partition for testing the performance.17

Pooled examinations from the four corneal categories were randomly divided into each of the 10 partitions used to train and test the classifiers. The performance of the classifiers reflected the ability to detect keratoconus apart from the other non-keratoconus patterns in the test partitions.

We also applied receiver operating characteristic (ROC) analysis to obtain a ROC curve for each classifier and calculated the area under the curve (AROC).18-21
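For illustration only (the study itself used WEKA and NETLAB rather than Python), the sketch below shows how a 10-fold cross-validation with ROC/AROC evaluation of this kind could be reproduced with scikit-learn; the feature matrix X (318 examinations × 11 attributes) and the binary label vector y (keratoconus vs. non-keratoconus) are assumed to have been loaded elsewhere, and the classifier parameters are placeholders.

```python
# Hypothetical sketch (not the WEKA/NETLAB code used in the study):
# 10-fold cross-validation with ROC/AROC evaluation.
# X: n_samples x 11 Orbscan II attributes; y: 1 = keratoconus, 0 = other.
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate_classifier(clf, X, y, n_splits=10, seed=0):
    """Return cross-validated keratoconus probabilities and the AROC."""
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    # Normalization is fitted inside each training fold to avoid leakage.
    model = make_pipeline(StandardScaler(), clf)
    proba = cross_val_predict(model, X, y, cv=cv, method="predict_proba")[:, 1]
    return proba, roc_auc_score(y, proba)

# Example usage with an RBF-kernel SVM; the parameter values are placeholders
# (scikit-learn's gamma only loosely corresponds to the sigma in the text):
# proba, aroc = evaluate_classifier(SVC(C=0.5, gamma=1e-6, probability=True), X, y)
# fpr, tpr, thresholds = roc_curve(y, proba)
```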

Data collection

All Orbscan II tests were performed by experienced examiners using the acquisition protocol recommended by the manufacturer. The center of the map was the apex determined by Placido data. Floating alignment and a cornea fit zone of 9 mm were applied for best-fit spheres in all cases.

Eleven quantitative attributes from each Orbscan II examination were used as input data for the algorithms: anterior best-fit sphere, posterior best-fit sphere, astigmatism, maximum and minimum simulated keratometry, index of irregularity of the central 5 mm, thinnest point pachymetry, central corneal power in diopters, I-S, maximum anterior elevation and maximum posterior elevation (Table 1).

The I-S value was calculated as the difference between the superior and inferior average powers of 15 data points, located approximately 2.5-3.0 mm peripheral to the corneal vertex, at 30° intervals.13,14

The central corneal power was obtained by averaging the dioptric power points on rings 2, 3 and 4, based on sagittal topography.13,14

The maximum anterior and posterior elevations were defined by the highest elevation point over the best-fit sphere within the central 5 mm of the Orbscan II map.
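The helper functions below are a hypothetical sketch of how three of these derived attributes could be computed; the point-wise dioptric powers and elevation values are assumed to have already been extracted from the Orbscan II export, and the function names are ours, not the instrument's.

```python
# Hypothetical helper functions for three of the derived attributes above.
# The dioptric powers and elevation values are assumed to have been
# extracted from the Orbscan II export beforehand; names are ours.
import numpy as np

def inferior_superior_asymmetry(inferior_powers, superior_powers):
    """I-S value: mean inferior power minus mean superior power for points
    sampled ~2.5-3.0 mm from the corneal vertex at 30-degree intervals."""
    return float(np.mean(inferior_powers) - np.mean(superior_powers))

def central_corneal_power(ring2_powers, ring3_powers, ring4_powers):
    """Average dioptric power of the points on rings 2, 3 and 4 of the
    sagittal topography."""
    return float(np.mean(np.concatenate([ring2_powers, ring3_powers,
                                         ring4_powers])))

def maximum_elevation(elevations_within_central_5mm):
    """Highest elevation over the best-fit sphere within the central 5 mm."""
    return float(np.max(elevations_within_central_5mm))
```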

Data preprocessing

In order to avoid significant differences between variable magnitudes, all features were normalized to have zero mean and unit standard deviation. To normalize the data, we treated each input variable independently and, for each variable $x_i$, calculated its mean $\bar{x}_i$ and variance $\sigma_i^2$.16 The rescaled variables were given by:

$$\tilde{x}_i = \frac{x_i - \bar{x}_i}{\sigma_i}$$
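A minimal sketch of this rescaling, assuming the attributes are held in a NumPy array, is shown below; in a cross-validation setting, the mean and standard deviation would be estimated on the training partition and reused on the corresponding test partition.

```python
# Minimal sketch of the rescaling above: each attribute is treated
# independently and transformed to zero mean and unit standard deviation.
import numpy as np

def rescale(X_train, X_test):
    """Standardize columns using statistics estimated on the training set."""
    mean = X_train.mean(axis=0)
    std = X_train.std(axis=0)
    return (X_train - mean) / std, (X_test - mean) / std
```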

RBFNN

The RBFNN is a universal approximator and the main practical alternative to the MLP for non-linear modeling. It is characterized by a layer of input nodes, a layer of output nodes and one intermediate or hidden layer.16 The hidden layer performs a non-linear transformation from the input space into a high-dimensional space. The output layer applies a linear transformation from the hidden space to the output space. The idea behind a non-linear transformation followed by a linear transformation is that a complex pattern classification problem cast in a high-dimensional space is more likely to be linearly separable than in a low-dimensional space.10

Each processing unit in the hidden layer implements a radial basis function. Among the various functions tested as activation functions for RBFNN, we chose the Gaussian function, as this function is preferred in pattern classification applications.22,23

The RBFNN available in the WEKA system uses a k-means clustering algorithm to determine the centers and widths of the radial basis functions; the weights are determined by logistic regression. The adjustable parameters included the number of clusters and the ridge parameter of the logistic regression.23 These parameters were experimentally determined. The numbers of clusters tested were 2, 3, 4, 5, 6, 8, 10, 15, 30 and 50, and the ridge parameters tested were 1×10⁻⁸, 1×10⁻⁷, ..., 1×10¹. Accuracy was used for model selection.
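The sketch below is a rough Python approximation, not the WEKA implementation: k-means supplies the Gaussian centers, the widths are set from the spread of each cluster, and a ridge-penalized logistic regression forms the linear output layer. The class and parameter names are ours.

```python
# Rough Python approximation of the RBF network described above (this is
# NOT the WEKA implementation): k-means sets the Gaussian centers, the
# widths come from the spread of each cluster, and a ridge-penalized
# logistic regression forms the linear output layer.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

class SimpleRBFNetwork:
    def __init__(self, n_clusters=8, ridge=1e-8):
        self.n_clusters = n_clusters
        self.ridge = ridge

    def _activations(self, X):
        # Gaussian activation of each hidden unit.
        d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.widths_ ** 2))

    def fit(self, X, y):
        km = KMeans(n_clusters=self.n_clusters, n_init=10).fit(X)
        self.centers_ = km.cluster_centers_
        # One width per hidden unit: mean distance of its members to the center.
        self.widths_ = np.array([
            np.linalg.norm(X[km.labels_ == k] - c, axis=1).mean() + 1e-12
            for k, c in enumerate(self.centers_)])
        # The ridge parameter maps (roughly) to an inverse L2 penalty strength.
        self.output_ = LogisticRegression(C=1.0 / max(self.ridge, 1e-12),
                                          max_iter=1000)
        self.output_.fit(self._activations(X), y)
        return self

    def predict_proba(self, X):
        return self.output_.predict_proba(self._activations(X))

# Grid from the text: clusters in {2,3,4,5,6,8,10,15,30,50},
# ridge in {1e-8, 1e-7, ..., 1e1}, selected by cross-validated accuracy.
```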

SVM

The support vector machine is a learning method developed from statistical learning theory. Like the previous approach, it can be applied to both classification and regression. The SVM uses a kernel function to implicitly map the input space into a high-dimensional feature space, in which it finds a hyperplane that maximizes the separation between the two classes.10,24

The SVM was implemented using Platt's sequential minimal optimization algorithm24 with a radial basis function kernel. Two parameters were experimentally optimized: the complexity parameter (C) and the width of the Gaussian function (σ). We varied C over 2⁻⁵, 2⁻⁴, 2⁻³, ..., 2⁴, and σ over 1×10⁻⁸, 1×10⁻⁷, ..., 1×10¹. Accuracy was used for model selection.

Because the outputs of the SVM are binary decisions, we used the option that fits logistic regression models to the SVM outputs in order to obtain proper probability estimates.
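An analogous, though not identical, setup can be sketched with scikit-learn's SVC: an RBF kernel, a grid search over C and the kernel width, and Platt-style probability outputs via probability=True. Note that scikit-learn's gamma parameterization only loosely corresponds to the Gaussian width σ reported here.

```python
# Analogous (not identical) setup with scikit-learn: RBF-kernel SVM,
# grid search over C and the kernel width, Platt scaling via probability=True.
# X and y as in the earlier cross-validation sketch.
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

param_grid = {
    "svc__C": [2.0 ** k for k in range(-5, 5)],        # 2^-5 ... 2^4
    "svc__gamma": [10.0 ** k for k in range(-8, 2)],   # 1e-8 ... 1e1
}
pipe = Pipeline([("scale", StandardScaler()),
                 ("svc", SVC(kernel="rbf", probability=True))])
search = GridSearchCV(pipe, param_grid, scoring="accuracy",
                      cv=StratifiedKFold(n_splits=10, shuffle=True,
                                         random_state=0))
# search.fit(X, y)   # best parameters afterwards in search.best_params_
```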

MLP

A standard multi-layer perceptron neural network is characterized by a layer of input nodes, a layer of output nodes and one or more intermediate or hidden layers.25 In our study, we evaluated neural networks with a single hidden layer, with 11 units in the input layer and a single output neuron.

To determine the number of neurons in the hidden layer, we experimentally evaluated the performance of different neural network configurations, measuring the accuracy achieved on the validation set. The number of hidden neurons tested varied from a minimum of 1 to a maximum of 70. Weights and biases were initially generated from a spherically symmetric Gaussian distribution with zero mean16 and, as any training run is sensitive to the initial connection weights, accuracy was measured and averaged over a total of 20 runs for each hidden layer configuration.

The hyperbolic tangent activation function was used for neurons in the hidden layer, and a logistic activation function was used for the output neuron. The cross-entropy error function simplifies the optimization process when the logistic activation function is used in the output layer; thus, we considered this an appropriate choice.26 The scaled conjugate gradient27 was the training algorithm, as it generally shows faster convergence when compared with gradient descent-based techniques.28

It is useless to design a classifier that accurately models the sample data used during development but performs poorly on new cases; this problem is called over-fitting of the classifier to the data. In order to avoid over-fitting during training, a validation set and weight-decay regularization were used. A penalty term ($E_W$), proportional to the sum of squared weights, was added to the cross-entropy error function ($E_D$). The resulting error function can be expressed as:

$$E = E_D + \alpha E_W, \qquad E_W = \frac{1}{2}\sum_{j} w_j^{2}$$

A large or small value of the regularization parameter α can lead to under-fitting or over-fitting, respectively. The values of α evaluated ranged from 0 to 0.4, in steps of 0.05.

In order to find the best neural network architecture, we chose the MLP that achieved the highest accuracy with the simplest architecture.
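The sketch below is a rough illustration of this architecture and weight-decay search. The original model was trained in NETLAB with scaled conjugate gradient, which scikit-learn does not offer, so 'lbfgs' is used as a stand-in and cross-validated accuracy replaces the separate validation set; 'alpha' plays the role of the weight-decay parameter.

```python
# Approximate sketch of the architecture/weight-decay search described above.
# scikit-learn's MLPClassifier has no scaled conjugate gradient, so 'lbfgs'
# stands in for it, and cross-validated accuracy replaces the separate
# validation set; 'alpha' plays the role of the weight-decay parameter.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def mean_accuracy(n_hidden, alpha, X, y, n_runs=20):
    """Average 10-fold accuracy over several random weight initializations."""
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = []
    for seed in range(n_runs):
        clf = MLPClassifier(hidden_layer_sizes=(n_hidden,), activation="tanh",
                            solver="lbfgs", alpha=alpha, max_iter=2000,
                            random_state=seed)
        scores.append(cross_val_score(make_pipeline(StandardScaler(), clf),
                                      X, y, cv=cv).mean())
    return float(np.mean(scores))

# Grid from the text: hidden units 1..70, alpha from 0 to 0.4 in 0.05 steps.
# best = max(((h, a, mean_accuracy(h, a, X, y))
#             for h in range(1, 71) for a in np.arange(0.0, 0.45, 0.05)),
#            key=lambda t: t[2])
```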

RESULTS

A total of 318 subjects were enrolled in the study, 129 males (41%) and 189 females (59%). The mean age was 38.1 ± 9.7 years. Subjects were classified into four categories: normal (n = 172), astigmatism (n = 89), keratoconus (n = 46) and photorefractive keratectomy (n = 11).

The parameters that reached the best performance for the RBFNN were 8 clusters and a ridge of 1×10⁻⁸. For the SVM classifier, a C value of 0.5 and a σ value of 1×10⁻⁶ were used. The MLP reached its best performance with a regularization parameter (α) of 0.15 and 17 hidden units.

ROC curves for classifying eyes as keratoconus or non-keratoconus were determined for each machine learning technique and each individual attribute.

Sensitivity, specificity and AROC values given by each individual Orbscan II attribute and by the machine learning classifiers are shown in Table 2. To ease the comparison of the results, we displayed the sensitivity at defined specificities; specificities of 75% and 90% were chosen arbitrarily to represent moderate and high specificity, respectively (Table 2).
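As a generic illustration of this kind of reporting (not the exact software used by the authors), sensitivity at a fixed specificity can be read off a ROC curve as sketched below, using the cross-validated probabilities from the earlier sketch.

```python
# Illustrative sketch: read the sensitivity at a target specificity off a
# ROC curve (specificity = 1 - false-positive rate). 'y' and 'proba' are
# the labels and cross-validated probabilities from the earlier sketch.
from sklearn.metrics import roc_curve

def sensitivity_at_specificity(y, proba, target_specificity):
    fpr, tpr, _ = roc_curve(y, proba)
    mask = (1.0 - fpr) >= target_specificity
    return float(tpr[mask].max()) if mask.any() else 0.0

# Example: sensitivity_at_specificity(y, proba, 0.90)
```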

The individual attributes with the highest ROC areas were I-S (0.96), followed by 5 mm irregularity (0.95), maximum anterior elevation (0.95) and maximum posterior elevation (0.94). The areas under the ROC curves of these attributes showed no statistical difference, but were significantly larger than the areas of the other individual attributes (p<0.05).30

There were no differences between the performances of SVM, MLP and RBFNN. The ROC curves of the three classifiers are shown in Figure 1. The AROC of the SVM (0.99), MLP (0.99) and RBFNN (0.98) classifiers were significantly larger than those for all the individual attributes evaluated (p<0.05).29


DISCUSSION

Early forms of keratoconus can be detected without any slit-lamp sign of keratoconus.30 In these cases, evaluation of the anterior topography of the cornea is essential.31 Corneal topography maps provide useful information about the corneal surface; however, their interpretation may represent a difficult task, especially because of the many forms in which keratoconus may present.32 Thus, the ability to automatically screen for KC corneal topographic patterns would be a useful aid in screening candidates for refractive procedures.5

In order to help clinicians, numerical methods and quantitative parameters, calculated from corneal maps3,31,32 or Orbscan II examinations,6,33,34 have been proposed.

Machine learning methods, such as artificial neural networks and discriminant analysis, have already been used to identify the topographic patterns of KC.5,8,35,36 Unlike the majority of previous publications, in this study we used Orbscan II examinations instead of anterior topography data alone. In addition to anterior topography, the Orbscan II examination provides important information, such as pachymetry and elevation maps. As analysis of Orbscan II data has already been demonstrated to be useful in KC detection,6,35-39 we hypothesized that processing Orbscan II data could provide high accuracy in the classification of keratoconus examinations.

Maeda et al.,3 Smolek and Klyce8 and Accardo and Pensiero5 have already demonstrated the value of a neural network approach in identifying keratoconus patterns from corneal topography, and our work agrees with their results. However, besides the use of Orbscan II data, we also used machine learning models that, although already described in other fields of ophthalmology,40-42 have not been used for keratoconus detection. Thus, despite similar results, methodological differences and different populations make a direct comparison of our results with previous studies impossible.5,8,35,36

In the absence of a definitive or genetic test to detect patients with KC, computer-assisted corneal analysis represents the most effective method.

Although the 5 mm irregularity, I-S, maximum anterior elevation and maximum posterior elevation showed good performance, the results of this study indicate that SVM, MLP and RBFNN classifiers, trained on combined Orbscan II measurements, are superior to all of the single parameters evaluated for detecting keratoconus. This is in accordance with previous publications that recommended the use of anterior and posterior corneal data, or the combination of Orbscan II measurements, to improve keratoconus detection.39

In our study, simulated astigmatism showed the worst performance of the individual attributes evaluated. Smolek and Klyce8 also reported this observation.

SVM, MLP and RBFNN were effective in detecting keratoconus. There were no differences between the classifiers' performance. It is important to highlight, however, that the performance of the classifiers is always influenced by the datasets used to develop and test the model. Thus, our results may be somewhat overestimated, as we used very similar train-and-test sets, reflecting the characteristics of our clinic population.

Although we trained and tested the classifiers on different data, each data input was generated from the same rather homogeneous pool.

Although similar previous studies have concentrated on MLP, some studies have encouraged the use of SVM and RBFNN classifiers.

RBFNN has some advantages over MLP. In general, RBFNN is more resilient to a poor training set than MLP. In addition, the simple linear transformation in the output layer can be optimized using traditional linear modeling techniques, which are fast and do not suffer from problems such as local minima that plague MLP training techniques. Using only a single hidden layer also removes some design decisions about the number of layers.43

MLP error surfaces are complex and are characterized by a number of unhelpful features, such as local minima, which correspond to a partial solution for the network in response to the training data. Like RBFNN, a significant advantage of SVM is that, although MLP can suffer from multiple local minima, the solution to a SVM is global and unique. Besides that, fewer samples are required to prevent over-fitting.10

On the other hand, a disadvantage of RBFNN and SVM, in contrast to MLP, is that they give every attribute the same weight. Hence, they cannot deal effectively with irrelevant attributes.23

It is not known beforehand which parameters are best for a given problem; consequently, some kind of model selection (parameter search) must be performed. In this study, we used a grid search. Although time-consuming, the time required to find good parameters with this strategy is not much greater than that required by approximation or heuristic methods, as there were only two parameters in each classifier. Another advantage of this method is that the grid search can be easily parallelized, since each pair of parameters tested is independent.44 However, as it is impossible to try all possible combinations, any model can provide only a suboptimal result.

The evaluation criteria used to report results directly affect the apparent performance of a classifier. As our study focuses on whether it is possible to distinguish one class of data from others based on the same set of measurements, we used only discrimination ability to assess model performance.45

Orbscan II can provide plenty of data,11 and it is known that, as the number of variables used to train a learning method increases, so does the amount of information available. However, as features are added, more samples are needed to prevent over-fitting. In order to avoid this situation, known as "the curse of dimensionality",46 we limited the data used. However, despite the satisfactory performance, we believe that the use of a different combination of attributes, selected with a data-mining strategy from a larger database, could be associated with performance improvement.

The ability to screen automatically for keratoconus patterns would be a helpful tool in clinical practice, especially if the classifier is able to detect early cases, since clinically evident KC is already easy to identify from clinical signs.

In accordance with previous studies, we did not include maps classified as suspected keratoconus.5,39 This strategy was adopted to allow more precise criteria in the assessment of the results, as the main purpose of this study was to evaluate keratoconus detection.

However, we believe that further investigation, including suspected keratoconus or other confounding patterns, would be desirable, since there is a wide range of corneal patterns in clinical practice. Although the inclusion of these patterns may increase the false-positive rate, for keratoconus screening a method with high sensitivity would be more appropriate than one with high specificity, as the risk of misclassifying a keratoconus subject is greater than that of misclassifying a normal subject.

In general, physicians will not accept and act on the advice of a computer system without knowing the basis for the system's decision,47 and one of the greatest disadvantages of the methods tested in this study is their inability to produce meaningful explanations for their decisions.48 However, some factors in the keratoconus detection task represent favorable indicators for applying them: 1) an outcome influenced by multiple factors; 2) the need for results that apply to an individual rather than to a population; and 3) the desirability of constructing composite indices from multiple measurements.49

A good KC screening tool should identify the largest number of cases, with the minimum possible number of false-positives. Overall, our results suggest that SVM, MLP and RBFNN classifiers, trained on Orbscan II data, could represent useful techniques for keratoconus detection. We believe that future work, with larger databases and the use of different combinations of attributes, would probably be associated with better results.

Received for publication on June 27, 2010

First review completed on July 27, 2010

Accepted for publication on September 2, 2010

  • 1. Krachmer JH, Feder RS, Belin MW. Keratoconus and related noninflammatory corneal thinning disorders. Surv Ophthalmol. 1984;28:293-322, doi: 10.1016/0039-6257(84)90094-8.
  • 2. Rabinowitz YS. Keratoconus. Surv Ophthalmol. 1998;42:297-319, doi: 10.1016/S0039-6257(97)00119-7.
  • 3. Maeda N, Klyce SD, Smolek MK, Thompson HW. Automated keratoconus screening with corneal topography analysis. Invest Ophthalmol Vis Sci. 1994;35:2749-57.
  • 4. Belin MW, Khachikian SS. Keratoconus: it is hard to define, but. Am J Ophthalmol. 2007;143:500-3, doi: 10.1016/j.ajo.2006.12.030.
  • 5. Accardo PA, Pensiero S. Neural network-based system for early keratoconus detection from corneal topography. J Biomed Inform. 2002;35:151-9, doi: 10.1016/S1532-0464(02)00513-0.
  • 6. Fam HB, Lim KL. Corneal elevation indices in normal and keratoconic eyes. J Cataract Refract Surg. 2006;32:1281-7, doi: 10.1016/j.jcrs.2006.02.060.
  • 7. Stephan C, Meyer HA, Cammann H, Lein M, Loening SA, Jung K. Re: Felix K.-H. Chun, Markus Graefen, Alberto Briganti, Andrea Gallina, Julia Hopp, Michael W. Kattan, Hartwig Huland and Pierre I. Karakiewicz. Initial biopsy outcome prediction - head-to-head comparison of a logistic regression-based nomogram versus artificial neural network. Eur Urol. 2007;51:1236-43. Eur Urol. 2007;51:1446-7; author reply 8., doi: 10.1016/j.eururo.2006.11.035
  • 8. Smolek MK, Klyce SD. Current keratoconus detection methods compared with a neural network approach. Invest Ophthalmol Vis Sci. 1997;38:2290-9.
  • 9. Klyce SD, Karon MD, Smolek MK. Screening patients with the corneal navigator. J Refract Surg. 2005;21:S617-22.
  • 10. Haykin SS. Neural networks : a comprehensive foundation, 2nd ed. Upper Saddle River, NJ: Prentice Hall; 1999.
  • 11. Cairns G, McGhee CN. Orbscan computerized topography: attributes, applications, and limitations. J Cataract Refract Surg. 2005;31:205-20, doi: 10.1016/j.jcrs.2004.09.047.
  • 12. Holladay JT. Standardizing constants for ultrasonic biometry, keratometry, and intraocular lens power calculations. J Cataract Refract Surg. 1997;23:1356-70.
  • 13. Maeda N, Klyce SD, Smolek MK. Comparison of methods for detecting keratoconus using videokeratography. Arch Ophthalmol. 1995;113:870-4.
  • 14. Rabinowitz YS, McDonnell PJ. Computer-assisted corneal topography in keratoconus. Refract Corneal Surg. 1989;5:400-8.
  • 15. Hall M, Frank E, Holmes G, Pfahringer B, Reutemann P, Witten IH. The WEKA Data Mining Software: An Update. SIGKDD Explorations 2009;11, doi: 10.1145/1656274.1656278.
  • 16. Nabney IT. Netlab: Algorithms for Pattern Recognition, 4th ed. London: Springer; 2004.
  • 17. Weiss SM, Kulikowski CA. Computer systems that learn : classification and prediction methods from statistics, neural nets, machine learning, and expert systems. San Mateo, CA: M. Kaufmann Publishers; 1991.
  • 18. Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982;143:29-36.
  • 19. Scheipers U, Perrey C, Siebers S, Hansen C, Ermert H. A tutorial on the use of ROC analysis for computer-aided diagnostic systems. Ultrason Imaging. 2005;27:181-98.
  • 20. Prati RC, Batista GEAPA, Monard MC. Evaluating classifiers using ROC curves. IEEE America Latina. 2008;6:215-22, doi: 10.1109/TLA.2008.4609920.
  • 21. Altman DG, Bland JM. Diagnostic tests 3: receiver operating characteristic plots. BMJ. 1994;309:188.
  • 22. Bors AG, Pitas I. Median radial basis function neural network. IEEE Trans Neural Netw. 1996;7:1351-64, doi: 10.1109/72.548164.
  • 23. Witten IH, Frank E. Data Mining: Practical Machine Learning Tools and Techniques, 2nd ed. Amsterdam; Boston, MA: Morgan Kaufman; 2005.
  • 24. Platt J. Fast training of support vector machines using sequential minimal optimization. In: Scholkopf C, Burges C, Smola A, eds. Advances in Kernel Methods: Support Vector Learning. Cambridge, MA: MIT Press; 1998.
  • 25. Reed RD, Marks RJ. Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks. Cambridge, MA: The MIT Press; 1999.
  • 26. Garcia M, Sanchez CI, Lopez MI, Abasolo D, Hornero R. Neural network based detection of hard exudates in retinal images. Comput Methods Programs Biomed. 2009;93:9-19, doi: 10.1016/j.cmpb.2008.07.006.
  • 27. Moller M. A scaled conjugate gradient algorithm for fast supervised learning. Neural Networks. 1993;6:525-33, doi: 10.1016/S0893-6080(05)80056-5.
  • 28. Bishop CM. Neural Networks for Pattern Recognition. Oxford; New York: Clarendon Press; Oxford university Press; 1995.
  • 29. Hanley JA, McNeil BJ. A method of comparing the areas under receiver operating characteristic curves derived from the same cases. Radiology. 1983;148:839-43.
  • 30. Zadnik K, Barr JT, Gordon MO, Edrington TB. Biomicroscopic signs and disease severity in keratoconus. Collaborative Longitudinal Evaluation of Keratoconus (CLEK) Study Group. Cornea. 1996;15:139-46.
  • 31. Rabinowitz YS, Rasheed K. KISA% index: a quantitative videokeratography algorithm embodying minimal topographic criteria for diagnosing keratoconus. J Cataract Refract Surg. 1999;25:1327-35, doi: 10.1016/S0886-3350(99)00195-9.
  • 32. Rabinowitz YS. Videokeratographic indices to aid in screening for keratoconus. J Refract Surg. 1995;11:371-9.
  • 33. Pflugfelder SC, Liu Z, Feuer W, Verm A. Corneal thickness indices discriminate between keratoconus and contact lens-induced corneal thinning. Ophthalmology. 2002;109:2336-41, doi: 10.1016/S0161-6420(02)01276-9.
  • 34. Tanabe T, Oshika T, Tomidokoro A, Amano S, Tanaka S, Kuroda T, et al. Standardized color-coded scales for anterior and posterior elevation maps of scanning slit corneal topography. Ophthalmology. 2002;109: 1298-302, doi: 10.1016/S0161-6420(02)01030-8.
  • 35. Maeda N, Klyce SD, Smolek MK. Neural network classification of corneal topography. Preliminary demonstration. Invest Ophthalmol Vis Sci. 1995;36:1327-35.
  • 36. Carvalho LA. Preliminary results of neural networks and zernike polynomials for classification of videokeratography maps. Optom Vis Sci. 2005;82:151-8, doi: 10.1097/01.OPX.0000153193.41554.A1.
  • 37. Lim L, Wei RH, Chan WK, Tan DT. Evaluation of keratoconus in Asians: role of Orbscan II and Tomey TMS-2 corneal topography. Am J Ophthalmol. 2007;143:390-400, doi: 10.1016/j.ajo.2006.11.030.
  • 38. Rao SN, Raviv T, Majmudar PA, Epstein RJ. Role of Orbscan II in screening keratoconus suspects before refractive corneal surgery. Ophthalmology. 2002;109:1642-6, doi: 10.1016/S0161-6420(02)01121-1.
  • 39. Sonmez B, Doan MP, Hamilton DR. Identification of scanning slit-beam topographic parameters important in distinguishing normal from keratoconic corneal morphologic features. Am J Ophthalmol. 2007;143:401-8, doi: 10.1016/j.ajo.2006.11.044.
  • 40. Bowd C, Medeiros FA, Zhang Z, Zangwill LM, Hao J, Lee TW, et al. Relevance vector machine and support vector machine classifier analysis of scanning laser polarimetry retinal nerve fiber layer measurements. Invest Ophthalmol Vis Sci. 2005;46:1322-9, doi: 10.1167/iovs.04-1122.
  • 41. Bowd C, Goldbaum MH. Machine learning classifiers in glaucoma. Optom Vis Sci. 2008;85:396-405, doi: 10.1097/OPX.0b013e3181783ab6.
  • 42. Goldbaum MH, Sample PA, Chan K, Williams J, Lee TW, Blumenthal E, et al. Comparing machine learning classifiers for diagnosing glaucoma from standard automated perimetry. Invest Ophthalmol Vis Sci. 2002;43:162-9.
  • 43. StatSoft, Inc. Electronic Statistics Textbook. Tulsa, OK: StatSoft; 2010.
  • 44. Hsu C, Chang C, Lin C. A Practical Guide to Support Vector Classification. Taipei: Department of Computer Science, National Taiwan University; 2010.
  • 45. Dreiseitl S, Ohno-Machado L. Logistic regression and artificial neural network classification models: a methodology review. J Biomed Inform. 2002;35:352-9, doi: 10.1016/S1532-0464(03)00034-0.
  • 46. Bellman R. Adaptive Control Processes: a Guided Tour, 1 ed. Princeton, NJ: Princeton University Press; 1961.
  • 47. Teach RL, Shortliffe EH. An analysis of physician attitudes regarding computer-based clinical consultation systems. Comput Biomed Res. 1981;14:542-58, doi: 10.1016/0010-4809(81)90012-4.
  • 48. Kahn CE, Jr. Artificial intelligence in radiology: decision support systems. Radiographics. 1994;14:849-61.
  • 49. Dayhoff JE, DeLeo JM. Artificial neural networks: opening the black box. Cancer. 2001;91:1615-35, doi: 10.1002/1097-0142(20010415)91:8+<1615::AID-CNCR1175>3.0.CO;2-L.
