Content Based Geographic Image Retrieval using Local Vector Pattern

A. Jenitta, R. Samson Ravindran

ABSTRACT

Large image archives produced by satellite remote sensing missions are becoming an increasingly valuable source of information in Geographic Information Systems (GIS). The need to retrieve a required image from a huge image database is growing significantly for the purpose of analyzing resources in GIS. Content Based Geographic Image Retrieval (CBGIR) is the image processing solution best suited to this requirement. In this work, we use the Local Vector Pattern (LVP) to extract the fine features present in geographical images and to retrieve relevant images from a large remote sensing image database. The primary idea of the method is to generate the micro-patterns of LVP from the vectors of each pixel, which are constructed by computing the differences between the centre pixel and its neighbourhood pixels at various distances and directions. The proposed method then concatenates these vector patterns to produce more distinctive features of geographical images, and the results are compared with the Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Tetra Pattern (LTrP). The extensive analysis carried out on different geographical image collections shows that the proposed method achieves improved classification accuracy and better retrieval results.

Key words:
Geosciences; Geographic Information Systems; Content Based Geographic Image Retrieval; Local Vector Pattern

INTRODUCTION

Accumulating information from geographical images is a prominent research activity in the geosciences, supporting the study of climate change and natural disasters. As Geographic Information Systems (GIS) develop rapidly, effective and accurate methods for retrieving remote sensing images are needed. Fast and intelligent Remote Sensing (RS) image acquisition increases the quantity of available data exponentially, slowly approaching the zettabyte (a billion terabytes) scale. For example, a single in-orbit Synthetic Aperture Radar (SAR) sensor accumulates about 10-100 Gbytes of data per day and 10 Tbytes of data per year [22]. Processing these geographical images manually is an extremely time-consuming and tedious task. Content Based Image Retrieval (CBIR) is a technology that effectively combines image understanding and machine learning. Conventional CBIR depends on difficult and prolonged methods to extract features from a general image. In this study, we focus on the use of texture data, a description of image content, to produce an advanced and effective geographic image retrieval system. This process is named Content Based Geographic Image Retrieval (CBGIR). The CBGIR technique has been used effectively to retrieve relevant images on the basis of features derived automatically from the geographical images. Since researchers in the remote sensing field have started to recognize the effectiveness of local feature-based analysis of high-resolution images, in this paper the Local Vector Pattern (LVP) is applied to compute features from geographical images.

Numerous works have investigated different feature extraction techniques for image retrieval in large collections of geographical images. A hierarchical approach has been used to cluster geographic images and to measure distances between feature vectors in the feature space, using wavelet-based multi-resolution decomposition with two different sets of texture features [6]. Geographic position, date of acquisition and the spectral properties of acquisition devices are some of the conventional keys for retrieving images from geographic image databases [12]. With minimum cost, classification speed has been increased by recursively splitting a set of categories into two minimally confused subsets with a greedy algorithm [9]. The Color Co-occurrence Matrix (CCM) has also been used for remote sensing image retrieval [4]. An automated semantic labeling framework was developed for multispectral geospatial imagery [7].
An unsupervised semantic labeling framework based on latent Dirichlet allocation has been used for nuclear proliferation monitoring with high resolution remote sensing images [21]. Since the conventional Scale Invariant Feature Transform (SIFT) approach offers little advantage when applied directly to remote sensing images, it has been combined with an image segmentation method for automatic registration of remote sensing images [8]. An enhanced SIFT algorithm was developed by modifying the conventional SIFT algorithm: an initial cross-matching process and a projective transformation model were used to process the extracted features. Because this algorithm extracted high-quality features with better accuracy and with uniform, robust and scale-invariant feature matching, especially for optical remote sensing images, its authors demonstrated it to be an efficient algorithm [20]. The k-nearest neighbor classifier and the support vector machine classifier have been used to evaluate local features for scene classification on Very High Resolution (VHR) satellite images [3]; the authors showed that combined feature extraction methods consistently improve the results of experiments conducted on VHR satellite images. Neuro-fuzzy and fuzzy logic methods have also been used for the retrieval of satellite images [11].

Local patterns are used to extract important attributes from a given image and to increase the recognition rate of CBIR systems. Initially, the Local Binary Pattern (LBP) was designed and applied in the computer vision and pattern recognition field [15]. LBP features were first designed for texture description and were later extended to a rotation-invariant version for texture classification [18]. A signed gray-level difference, instead of absolute differences, was then introduced for texture description, together with the LBP operator [17]. A generalized grayscale- and rotation-invariant operator was designed to represent uniform patterns for images of any spatial resolution [16]. Derivative-based LBPs were introduced to calculate face alignments [10]. Subsequently, LBP was used to extract illumination-insensitive feature sets directly from the given image for face recognition applications.
In particular, the uniform pattern method for the LBP code, which is robust to noise, improves recognition performance [1,2]. The combination of LBP and Gabor features is an effective approach to face recognition; these combinations improve performance considerably compared with the individual representations and yield a more stable representation [24-26]. The Volume Local Binary Pattern (VLBP) was developed to recognize dynamic texture; VLBP is an extension of the LBP operator and of the co-occurrences of LBP on Three Orthogonal Planes (LBP-TOP) [27]. Subsequently, the Local Derivative Pattern (LDP) [23], the Local Tetra Pattern (LTrP) [13] and the Local Vector Pattern [5] were introduced for facial feature extraction. Thus, local descriptors have gained much attention in the facial recognition area.
Since geographical images contain textural information just as face images do, we decided to apply the recently proposed local descriptor, the Local Vector Pattern (LVP) [5], to geographical images. Even though LVP is used extensively in face recognition, achieving more impressive classification results on representative texture databases than other local pattern descriptors, its use in content based geographic image retrieval is a novel approach.

The paper is organized as follows. Section 1 presents a short introduction to CBGIR and a concise review of local patterns. Section 2 describes the methods used in this paper. Section 3 presents the extensive analysis conducted on geographic image databases, along with results and discussion. Section 4 draws conclusions and proposes future work.

METHOD

The proposed system overview is presented in Figure 1. The steps followed in this work are given below.

  1. Load the geographic image databases in the MATLAB workspace.

  2. Resize the image and convert it from RGB to grayscale.

  3. Perform preprocessing operations.

  4. Compute the local vector pattern value for each pixel.

  5. Construct the LVP histogram for each pattern over local image regions.

  6. Construct the feature vectors and store the attribute values in the database.

  7. Load the Query image.

  8. Apply steps 2-6 to compute the values of the query image.

  9. Compare the values of the query image with the local features obtained in step 6.

  10. Retrieve the best-matched images using the Canberra distance measure.
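The ten steps above can be sketched as a two-phase (training/testing) pipeline. The sketch below is illustrative only: `build_feature_db`, `query_db` and their `extract` / `distance` arguments are hypothetical placeholders standing in for the LVP histogram extractor and the Canberra distance measure named in the steps (the actual system was implemented in MATLAB).

```python
def build_feature_db(images, extract):
    """Steps 1-6: compute and store one feature vector per database image."""
    return [extract(img) for img in images]

def query_db(query_img, feature_db, extract, distance, top=10):
    """Steps 7-10: extract query features and rank the database images
    by increasing distance to the query."""
    q = extract(query_img)
    ranked = sorted(range(len(feature_db)),
                    key=lambda k: distance(q, feature_db[k]))
    return ranked[:top]
```

Any histogram extractor and distance function with these shapes can be plugged in; the paper's choices are the LVP histogram and the Canberra metric.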

Figure 1
Overview of the system.

The processes shown in Figure 1 are discussed in the following sections.

Pre-Processing

The given images are pre-processed prior to computation to enhance the fine details in them. Resizing, noise removal and reflection removal are performed in this step. A median filter is used for the filtering operation: the median is the value of the center pixel of the rank-ordered distribution of pixels covered by the filter window. Since the median filter removes salt-and-pepper noise effectively, it is well suited to geographic images.
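As an illustration of this filtering step, a minimal 3 × 3 median filter in Python with NumPy (a sketch of the operation described above, not the MATLAB implementation used in the paper; border handling by edge replication is an assumption):

```python
import numpy as np

def median_filter_3x3(img):
    """Apply a 3x3 median filter to a 2-D grayscale image.

    Border pixels are handled by replicating the edge rows/columns.
    """
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            # Rank-ordered 3x3 window; the median replaces the center pixel.
            out[y, x] = np.median(padded[y:y + 3, x:x + 3])
    return out

# Demonstrate removal of an isolated salt-noise impulse.
img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255  # "salt" impulse
clean = median_filter_3x3(img)
print(clean[2, 2])  # the impulse is replaced by the local median (100)
```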

Feature Extraction

The objective of feature extraction is to extract fine details that can unambiguously distinguish the different types of textures present in an image. In this work we extract features using the Local Vector Pattern (LVP). The LVP descriptor was developed to provide various 2-D spatial structures of micro-patterns, using various pair-wise direction vectors of the referenced pixel and its neighborhood [5]. Consider a local sub-region R; let Pc denote the center pixel in R, I the index angle of the variation direction, and D the distance between the referenced pixel and its neighbourhood pixels along the I direction. The distances D = 1, 2 and 3 are taken into consideration. The direction value of a vector at the referenced pixel Pc is defined as

$$V_{I,D}(P_c) = R(P_{I,D}) - R(P_c) \qquad (1)$$

where $V_{I,D}(P_c)$ is the direction value of a vector. When D = 1, the vector can be considered as the first-order derivative value of LDP and LTrP. When D > 1, the implicit characteristics of 1-D direction information can be acquired. $\mathrm{LVP}_{K,L,I}(P_c)$, the pattern in direction I of the vector at Pc, is encoded as

$$\mathrm{LVP}_{K,L,I}(P_c) = \{\, T(V_{I,D}(P_{1,L}),\, V_{I+45^\circ,D}(P_{1,L}),\, V_{I,D}(P_c),\, V_{I+45^\circ,D}(P_c)),\ \ldots,\ T(V_{I,D}(P_{K,L}),\, V_{I+45^\circ,D}(P_{K,L}),\, V_{I,D}(P_c),\, V_{I+45^\circ,D}(P_c)) \,\},\quad K = 1, 2, \ldots, M;\ L = 1 \qquad (2)$$

where T(·,·) adopts a transform ratio, calculated from a pair-wise direction vector of the referenced pixel, to transform the I-direction value of a neighborhood pixel to the (I+45°) direction; this is compared with the original (I+45°)-direction value of the neighborhood to label the binary micro-pattern. T(·,·) is formally defined as

$$T(V_{I,D}(P_{K,L}),\, V_{I+45^\circ,D}(P_{K,L}),\, V_{I,D}(P_c),\, V_{I+45^\circ,D}(P_c)) = \begin{cases} 1, & \text{if } V_{I+45^\circ,D}(P_{K,L}) - \dfrac{V_{I+45^\circ,D}(P_c)}{V_{I,D}(P_c)}\, V_{I,D}(P_{K,L}) \ge 0 \\[4pt] 0, & \text{otherwise} \end{cases} \qquad (3)$$

Finally, $\mathrm{LVP}_{K,L}(P_c)$ at the referenced pixel Pc is defined as the concatenation of the four 8-bit binary LVPs:

$$\mathrm{LVP}_{K,L}(P_c) = \mathrm{LVP}_{K,L,I}(P_c)\ \big|\ I = 0^\circ, 45^\circ, 90^\circ, 135^\circ \qquad (4)$$

In order to extract more discriminative 2-D spatial structures of micro-patterns, the local pattern descriptor encodes the LVPs using the four pair-wise direction vectors of the referenced pixel and its neighborhood. In particular, each pair-wise direction vector of the referenced pixel generates the transform ratio used to design the weight vector of the dynamic linear decision function that encodes the distinct 8-bit binary pattern of each LVP. The LVPs are concatenated to form a 32-bit (4 × 8) binary pattern for each referenced pixel in the local sub-region.
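The encoding of Equations (1)-(4) can be sketched as follows for D = 1 and the eight nearest neighbours. This is an illustrative Python reading of the descriptor, not the authors' code; in particular, the zero-denominator case of the transform ratio is handled here by treating the ratio as 0, an assumption not specified above.

```python
import numpy as np

# Offsets (dy, dx) for the index directions used by LVP, including
# 180 deg, which arises as I + 45 deg when I = 135 deg.
DIRS = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1), 180: (0, -1)}

# The eight neighbours of the referenced pixel, clockwise from the right.
NEIGHBOURS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
              (0, -1), (1, -1), (1, 0), (1, 1)]

def V(img, y, x, angle, d=1):
    """Direction value of the vector at pixel (y, x): Eq. (1)."""
    dy, dx = DIRS[angle]
    return int(img[y + d * dy, x + d * dx]) - int(img[y, x])

def T(v_i_n, v_i45_n, v_i_c, v_i45_c):
    """Comparator of Eq. (3), built on the transform ratio v_i45_c / v_i_c.
    A zero denominator is treated as ratio 0 (implementation assumption)."""
    ratio = v_i45_c / v_i_c if v_i_c != 0 else 0.0
    return 1 if (v_i45_n - ratio * v_i_n) >= 0 else 0

def lvp(img, y, x, d=1):
    """32-bit LVP code at (y, x): Eq. (4), four 8-bit patterns for
    I = 0, 45, 90, 135 degrees (Eq. 2)."""
    bits = []
    for angle in (0, 45, 90, 135):
        v_i_c = V(img, y, x, angle, d)
        v_i45_c = V(img, y, x, angle + 45, d)
        for dy, dx in NEIGHBOURS:
            ny, nx = y + dy, x + dx
            bits.append(T(V(img, ny, nx, angle, d),
                          V(img, ny, nx, angle + 45, d),
                          v_i_c, v_i45_c))
    return bits  # histogram these codes over local sub-regions

img = np.arange(49, dtype=np.uint8).reshape(7, 7)
print(len(lvp(img, 3, 3)))  # 32 bits = 4 directions x 8 neighbours
```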

Image Retrieval

The image retrieval process uses the knowledge obtained from the similarity measurement (SM) step. Let Q be a query image and I an image in the database. The closeness of Q and I is measured by computing the distance between their feature vectors in the SM step. The feature vector for the query image Q, represented as $V_Q = \{V_{Q_1}, V_{Q_2}, \ldots, V_{Q_m}\}$, is obtained from feature extraction. Likewise, each image in the database is represented by a feature vector $V_{DB_k} = \{V_{DB_{k1}}, V_{DB_{k2}}, \ldots, V_{DB_{kn}}\}$, where k = 1, 2, ..., |DB|. The goal is to select the R best images that resemble the query image by measuring the distance between the query image and the images in the database. We used the Canberra metric as the similarity distance. The Canberra distance $d_1$ between vectors $V_Q$ and $V_{DB}$ in n-dimensional real vector space is given as follows:

$$d_1(V_Q, V_{DB}) = \sum_{i=1}^{n} \frac{|V_{Q_i} - V_{DB_i}|}{|V_{Q_i}| + |V_{DB_i}|} \qquad (5)$$

The retrieved images are ranked as follows. For each query image $Q_z$, 1 ≤ z ≤ q, a set of retrieval results within a cut-off number is formed. Let $R_{Q_z} = \{R_1, R_2, \ldots, R_p\}$ denote the set of images retrieved for the query image $Q_z$, where p is the number of images in the retrieval set and q is the number of queries. The number of times an image appears in the retrieval sets is counted, and the images are re-ranked accordingly: an image appearing in all the sets attains a higher position and is displayed to the user, while images with fewer appearances take lower positions.
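A small Python sketch of the Canberra-based matching of Equation (5); the convention that a 0/0 term (both histogram bins empty) contributes nothing to the sum is an implementation assumption:

```python
import numpy as np

def canberra(vq, vdb):
    """Canberra distance of Eq. (5). Coordinates where both entries are
    zero contribute nothing to the sum (0/0 taken as 0)."""
    vq = np.asarray(vq, dtype=float)
    vdb = np.asarray(vdb, dtype=float)
    num = np.abs(vq - vdb)
    den = np.abs(vq) + np.abs(vdb)
    terms = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return float(terms.sum())

def retrieve(query_vec, db_vecs, top=5):
    """Rank database feature vectors by Canberra distance to the query."""
    order = sorted(range(len(db_vecs)),
                   key=lambda k: canberra(query_vec, db_vecs[k]))
    return order[:top]
```

An identical feature vector gives distance 0, so an exact duplicate of the query is always ranked first.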

RESULTS AND DISCUSSION

Experimental Setup and Results

In total, one hundred images of bridge overpasses, cyclones, beaches and high-density residential areas were collected and resized to 256 × 256 pixels. The images were then organized into four databases of twenty-five images each, named DB1, DB2, DB3 and DB4 respectively. Figure 2 shows examples of the collected satellite images.

Figure 2
Four different land use classes (bridge overpass, cyclone, beach and high-density residential area) were collected and named DB1, DB2, DB3 and DB4. Nine samples from each class are shown in Figure 2 A, B, C and D.

The proposed CBGIR system has two modules, namely training and testing. In the training module, all images were first pre-processed by converting them from RGB to grayscale, and noise was removed with a median filter. Feature extraction with LVP was then performed, and the histograms were stored as feature vectors. In the testing module, the same processes were repeated on a given query image. Finally, the similarity was computed between the reference feature vectors stored during training and the feature vector obtained from the query image. The relevant images with the minimum (or zero) distance were ranked and displayed to the user. These processes were implemented in MATLAB and executed successfully.

Validation was performed on all four databases, ten times with randomly chosen queries, for the different techniques, and the results were compared. The relevant cyclone images retrieved from database DB2 by the proposed system are shown in Figure 3.

Figure 3
Examples of retrieved images of a cyclone query image using LVP.

Performance Measures and Discussion

The performance of an image retrieval system can be evaluated using Precision (P) and Recall (R) [19]. The standard definitions of these two measures are as follows: precision is the ratio of the number of relevant images retrieved to the total number of retrieved images, and recall is the number of relevant images retrieved over the total number of relevant images in the database [14]. Precision measures the exactness, or quality, of the retrieval; recall measures its completeness, or quantity. For each query image, both precision and recall were estimated. Ten experiments were conducted separately on the four classes of databases. Finally, the average retrieval precision (ARP) and average retrieval rate (ARR) were calculated for the query images $Q_i$ by Equations (6)-(7), for $a \le N_{DB}$ and $a \ge N_{DB}$ respectively, where $N_{DB}$ is the number of relevant images in the database and a is the number of top matches considered.

$$\mathrm{ARP} = \frac{1}{|DB|} \sum_{i=1}^{|DB|} P(Q_i) \qquad (6)$$

$$\mathrm{ARR} = \frac{1}{|DB|} \sum_{i=1}^{|DB|} R(Q_i) \qquad (7)$$
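The bookkeeping behind Equations (6)-(7) can be sketched as follows; the image ids and per-query tuples are illustrative, not the paper's data:

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for a single query, given collections of image ids."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

def arp_arr(per_query):
    """Eqs. (6)-(7): average the per-query precision and recall values."""
    precisions, recalls = zip(*per_query)
    return sum(precisions) / len(precisions), sum(recalls) / len(recalls)
```

For example, retrieving images {2, 3} among four results when three images are relevant gives precision 2/4 and recall 2/3 for that query.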

Figure 4 compares the proposed method with the average retrieval rate of the LBP, LDP and LTrP methods for a given cyclone query image. The results show that LVP retrieves the most relevant images with a higher degree of relevance.

Figure 4
Comparison of proposed method with other existing methods on cyclone database in terms of ARR.

Table 1
Performance of various methods in terms of ARP on all databases

The findings are as follows: in Table 1, the average retrieval precision values of LVP are greater than those of the LBP, LDP and LTrP methods. The high precision values mean that more relevant images are retrieved by the proposed method.

CONCLUSION AND FUTURE WORK

In this paper, we have shown that content based geographical image retrieval can be performed using LVP. A novel approach, referred to as LVP for CBGIR, was successfully implemented, and the performance of this local pattern descriptor was compared with the existing feature extraction techniques LBP, LDP and LTrP. We conclude that the proposed LVP method produces better retrieval performance than the other three techniques. This work may open a new line of research in geographical computer vision. The results could be further improved in the future by combining local patterns with fuzzy logic methods, aiming at a faster and more accurate image matching, searching and retrieval system for geographical images.

REFERENCES

  • 1
    Ahonen T, Hadid A, Pietikäinen M. Face recognition with local binary patterns. In: Proceedings of European Conference on Computer Vision; Prague, Czech Republic. 2005. p. 469-481.
  • 2
    Ahonen T, Hadid A, Pietikäinen M. Face description with local binary patterns: application to face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2006; 28: 2037-2041.
  • 3
    Chen YW, Xu K, Xu T. Evaluation of local features for scene classification using VHR satellite images. In: Proceedings of Joint Urban Remote Sensing Event (JURSE); Munich, Germany: 2011. p. 385-388.
  • 4
    Chuen L, Rong C, Yung C. A smart content-based image retrieval system based on color and texture feature image and vision computing. Image Vis. Comput. 2009; 22(6): 658-665.
  • 5
    Fan KC, Hung TY. A Novel Local Pattern Descriptor - Local Vector Pattern in High-Order Derivative Space for Face Recognition. IEEE Trans. Image Process. 2014; 23(7): 2877-2891.
  • 6
    Gholamhosein S, Aidong Z, Ling B. A Multi-resolution content-based retrieval approach for geographic images. Geoinformatica. 1999; 3: 109-139.
  • 7
    Gleason S, Ferrell R, Cheriyadat A, Vatsavai R, De S. Semantic information extraction from multi spectral geospatial imagery via a flexible framework. In: Proceedings of IEEE International Geoscience and Remote Sensing Symposium (IGARSS); Honolulu, HI: 2010. p. 166-169.
  • 8
    Goncalves H, Corte RL, Goncalves J. Automatic image registration through image segmentation and SIFT. IEEE Trans. Geosci. Remote Sens. 2011; 49(7): 2589-2600.
  • 9
    Griffin G, Perona P. Learning and using taxonomies for fast visual categorization. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Anchorage, AK. 2008. p. 1-8.
  • 10
    Huang X, Li SZ, Wang Y. Shape localization based on statistical method using extended local binary pattern. In: Proceedings of the Third International Conference on Image and Graphics; Hong Kong, China: 2004. p. 184-187.
  • 11
    Jose APF, Gloria O, James ZW, Manuel CG. Fuzzy content-based image retrieval for oceanic remote sensing. IEEE Trans. Geosci. Remote Sens. 2014; 52(9): 5422-5431.
  • 12
    Li Y, Bretschneider T. Semantic-sensitive satellite image retrieval. IEEE Trans. Geosci. Remote Sens. 2007; 45(4): 853-860.
  • 13
    Murala S, Maheshwari RP, Balasubramanian R. Local tetra patterns: A new feature descriptor for content-based image retrieval. IEEE Trans. Image Process. 2012; 21(5): 2874-2886.
  • 14
    Nidhi G, Priti S. A refined hybrid image retrieval system using text and color. International Journal of Computer Science Issues (IJCSI). 2012; 9(4): 48-56.
  • 15
    Ojala T, Pietikäinen M, Harwood D. A comparative study of texture measures with classification based on feature distributions. Pattern Recognit. 1996; 29: 51-59.
  • 16
    Ojala T, Pietikäinen M, Mäenpää T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002; 24: 971-987.
  • 17
    Ojala T, Valkealahti K, Oja E, Pietikäinen M. Texture discrimination with multidimensional distributions of signed gray level differences. Pattern Recog. 2001; 34(3): 727-739.
  • 18
    Pietikäinen M, Ojala T, Scruggs T, Bowyer KW, Jin C, Hoffman K, et al. Rotational invariant texture classification using feature distributions. Pattern Recognit. 2000; 33(1): 43-52.
  • 19
    Ritindera D, Dhiraj J, Jia L, James ZW. Image retrieval: ideas, influences, and trends of the new age. ACM Comput. Surv., 2008; 40(2): Article 5.
  • 20
    Sedaghat A, Mokhtarzade M, Ebadi H. Uniform robust scale invariant feature matching for optical remote sensing images. IEEE Trans. Geosci. Remote Sens. 2011; 49(1): 4516-4527.
  • 21
    Vatsavai RR, Cheriyadat A, Gleason S. Unsupervised semantic labeling framework for identification of complex facilities in high resolution remote sensing images. In: Proceedings of IEEE International Conference on Data Mining Workshops (ICDMW); Sydney, NSW: 2010. p. 273-280.
  • 22
    Veganzones MA, Maldonado JO, Graña M. On Content-Based Image Retrieval Systems for Hyperspectral Remote Sensing Images. In: Graña M, Duro RJ, editor. Computational Intelligence for Remote Sensing. Springer-Verlag Berlin Heidelberg; 2008. p. 125-144.
  • 23
    Zhang B, Gao Y, Zhao S, Liu J. Local derivative pattern versus local binary pattern: Face recognition with higher-order local pattern descriptor. IEEE Trans. Image Process. 2010; 19(2): 533-544.
  • 24
    Zhang B, Shan SG, Chen XL, Gao W. Histogram of Gabor phase patterns: A novel object representation approach for face recognition. IEEE Trans. Image Process. 2007; 16(1): 57-68.
  • 25
    Zhang WC, Shan SG, Qing L, Chen XL, Gao W. Are Gabor phases really useless for face recognition? Pattern Anal Applic. 2009; 12(3): 301-307.
  • 26
    Zhang WC, Shan SG, Gao W, Zhang HM. Local Gabor binary pattern histogram sequence: a novel non-statistical model for face representation and recognition. In: Proceedings of Conf. Comput. Vis. IEEE; 2005. p. 786-791.
  • 27
    Zhao G, Pietikäinen M. Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Trans. Pattern Anal. Mach. Intell. 2007; 27(6): 915-928.

Publication Dates

  • Publication in this collection
    2018

History

  • Received
    03 Feb 2016
  • Accepted
    14 June 2016
Instituto de Tecnologia do Paraná - Tecpar Rua Prof. Algacyr Munhoz Mader, 3775 - CIC, 81350-010 Curitiba PR Brazil, Tel.: +55 41 3316-3052/3054, Fax: +55 41 3346-2872 - Curitiba - PR - Brazil
E-mail: babt@tecpar.br