
MEASURING PHOTOGRAMMETRIC CONTROL TARGETS IN LOW CONTRAST IMAGES


Abstract:

This paper presents an experimental assessment of photogrammetric targets and subpixel location techniques for use with low contrast images, such as those acquired by hyperspectral frame cameras. Eight target patterns of varying shape, background, and size were tested. The aim was to identify an optimal, distinctive pattern to serve as a control point in aerial surveys of small areas with hyperspectral cameras, where suitable natural points are difficult to find. Three automatic techniques for locating the target point of interest were compared: weighted centroid, template matching, and line intersection. For the assessment, hyperspectral images of the set of targets were collected in an outdoor 3D terrestrial calibration field. RGB images were also acquired for reference and comparison. Experiments were conducted to assess the accuracy at the sub-pixel level. Bundle adjustment with several images was used, and vertical and horizontal distances were measured directly in the field for verification. An experiment with an aerial flight was also performed to validate the chosen target. The analysis of residuals and discrepancies indicated that a circular target is best suited as ground control in aerial surveys, under the condition that the target appears with only a few pixels in the image.

Keywords:
Photogrammetric target; hyperspectral camera; low contrast image; target location


1. Introduction

Remote sensing applications using high resolution images require suitable geometric accuracy, which can be achieved with an image orientation step usually performed by bundle adjustment. This task can be solved using ground control points (GCPs), which require special targets when distinct natural points are not available. Different patterns for flat targets, such as crosses, checkerboards, and circles, have been tested, as demonstrated by Luhmann (2014), who assessed the eccentricity of circular and spherical targets in convergent images. Some targets can also be embedded with a code for automatic labeling (Garrido-Jurado et al. 2014).

Special targets are usually used in camera calibration, and several different types have been studied. Habib et al. (2013) reported on various types of targets, such as crosses, black dots on a white background, feature-encoded targets, retro-reflective targets, laser-projected targets, and color-coded targets, as well as circular and checkerboard targets. In contrast to the numerous trials carried out for close range photogrammetry, comparatively few studies have addressed targets for aerial imagery. Wandresen et al. (2003) used neural networks and correlation to locate and recognize signalized targets for photogrammetric applications.

For aerial imagery over large areas, natural points are typically used because signalized targets are costly and difficult to install in suitable areas. Moreover, natural points can be selected after the aerial survey. In urban environments, natural features or man-made objects are available as control points, even though they may not always be located in suitable geometric positions within an image block. In small areas with a limited number of control points, e.g., agricultural (Honkavaara et al. 2013) and forestry areas (Govindaraju et al. 2014), unmanned aerial vehicle (UAV) platforms can be used to acquire high-resolution images, and targets can easily be installed. Additionally, if they fulfill certain requirements, distinctive patterns can provide high accuracy and be automatically identified and located. These requirements include designing targets that provide high contrast and are suitably shaped for the location algorithms. To be visible and identifiable, the size of the targets must be compatible with the image scale, so as to generate a suitable number of pixels (Trinder et al. 1995). Smaller areas require fewer targets, and the costs of manufacturing and installation are not a hindrance, as these steps can be performed when the GPS base for the UAV operation is installed.

Hyperspectral frame cameras are currently available for UAV applications in vegetated areas, e.g., the Rikola model (Rikola Ltd, 2015). This camera has two CMOS sensors and is based on a Fabry-Perot interferometer (FPI) that produces spectral bands as a function of the interferometer air gap. A set of images with user-selected wavelengths between 500 and 900 nm is acquired as a data cube (Mäkeläinen et al. 2013). Photogrammetric studies using this type of UAV-borne hyperspectral sensor generally involve flight altitudes of 100-200 m, which provide a ground sample distance (GSD) of 5-15 cm, sufficient for agricultural and forest monitoring applications (Mäkeläinen et al. 2013). A review of remote sensing applications using UAVs was published by Pajares (2015).

Due to their narrow spectral bands, hyperspectral images usually have low contrast and high noise (Pal and Porwal, 2015). Moreover, during image acquisition, movement of the imaging platform or sensor instability can cause blurring effects. Depending on the variation in illumination, saturation of white targets can also affect the point location algorithms. Such effects can reduce the accuracy of target measurement in the images, especially with manual/interactive techniques, but also with certain automatic techniques.

The objective of this investigation was to assess the spatial accuracy of the location of artificial targets to be used as GCPs in photogrammetric surveys conducted with high resolution, low contrast images, in particular those acquired with a hyperspectral frame camera. This type of sensor enables the collection of images with forward overlap; thus, image blocks can be simultaneously oriented by bundle adjustment, generating accurate products from hyperspectral images (e.g., digital surface models and ortho-mosaics).

The following requirements were taken into consideration: the types of flat targets and their backgrounds, extraction techniques, transportation and installation issues, and assessment strategies. Another important requirement was the number of pixels used to define the targets (Trinder et al. 1995). Galo et al. (2012) showed the relevance of image measurement with subpixel accuracy for achieving accurate results in photogrammetric processes, e.g., camera calibration, and how this depends on several parameters, including the number of pixels used in the extraction algorithm. However, increasing the number of pixels requires assembling larger targets, which are more challenging to transport and install in suitable locations in the flight area, especially in the absence of open and flat places. Thus, a condition imposed in this study was to minimize target sizes while maintaining image extraction accuracy. Figure 1 presents the types of targets and their respective identification numbers.

Figure 1:
Eight targets with identification numbers used for assessment, considering different shapes and backgrounds

Only monochromatic targets were considered due to their higher contrast in sunlight. Black, white, and gray backgrounds were tested. Using different backgrounds with the same shapes made it possible to assess the influence of image contrast and saturation on the target center location. Two circles were used instead of a single one to facilitate template matching and refinement using the least squares method. A single circle limits the analytical possibilities because its scale and rotation cannot be determined by least squares matching, as reported by Gruen (1996). When the image stations have good initial approximate coordinates, coded targets are not necessary, because the targets can be located by projecting the ground coordinates into the images. Nowadays, initial positions can be obtained even with navigation-grade GPS receivers, which provide enough accuracy for target location.

2. Techniques for target center measurement

Other authors have assessed the precision of target location using different algorithms, such as the centroid technique (Trinder et al. 1995) and least squares matching (Gruen and Baltsavias, 1988). These two techniques, as well as a third using line intersection, introduced by Jain et al. (1995) and adapted by Bazan et al. (2008), were used to measure the target centers.

2.1 Weighted centroids

A centroid can be calculated as the arithmetic mean position of all pixels within a segmented area. For grayscale images, effects from the acquisition process can cause blurring and displacement of the center position. Thus, weighting based on pixel intensity is used to improve the accuracy of the center location, because blurring affects border pixels more than central pixels, as depicted in Figure 2.a.

In this assessment, to minimize blurring and saturation effects, the centroid technique was applied to double circular targets. The threshold separating the circles from the background was estimated using Otsu's method (Otsu, 1979), and the weighting was defined by subtracting the pixel intensity from the threshold (Trinder et al. 1995). The midpoint between the two weighted centroids determines the center of the target (Figure 2.b). This easily implemented technique usually provides good accuracy (Trinder et al. 1995).

Figure 2:
(a) Plot of a target with two circles showing stability in their centroids. (b) The target center is determined by the midpoint between both centroids.
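The steps above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' C/C++ implementation: it assumes bright circles on a dark background and weights each pixel by its intensity excess over the Otsu threshold (for dark circles on a white background, as in the paper, the weight is mirrored to threshold minus intensity).

```python
def otsu_threshold(pixels):
    # Otsu's method: pick the threshold that maximizes the
    # between-class variance of the gray-level histogram
    hist = [0] * 256
    for v in pixels:
        hist[v] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_b, w_b, best_t, best_var = 0.0, 0, 0, -1.0
    for t in range(256):
        w_b += hist[t]            # pixels with value <= t
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b, m_f = sum_b / w_b, (sum_all - sum_b) / w_f
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def weighted_centroid(patch, t):
    # centroid weighted by the intensity excess over the threshold
    sw = sx = sy = 0.0
    for r, row in enumerate(patch):
        for c, v in enumerate(row):
            w = v - t
            if w > 0:
                sw += w
                sx += w * c
                sy += w * r
    return sx / sw, sy / sw

# synthetic patch: two bright discs (radius 2) on a dark background
img = [[20] * 20 for _ in range(10)]
for xc in (5, 15):
    for r in range(10):
        for c in range(20):
            if (c - xc) ** 2 + (r - 5) ** 2 <= 4:
                img[r][c] = 200

t = otsu_threshold([v for row in img for v in row])
cl = weighted_centroid([row[:10] for row in img], t)   # left circle
cr = weighted_centroid([row[10:] for row in img], t)   # right circle
cx = (cl[0] + cr[0] + 10) / 2.0   # shift right-half column back by 10
cy = (cl[1] + cr[1]) / 2.0
# midpoint of the two weighted centroids = target center: (10.0, 5.0)
```

Because the synthetic discs are symmetric, the weighted centroids recover their centers exactly; on real imagery the weighting mainly compensates for blur at the circle borders.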

2.2 Template matching

Area-based matching techniques compare gray level values (or digital numbers of spectral bands) from two image patches using a similarity measure, such as the normalized cross-correlation coefficient (Wolf et al. 2014, p. 355). Least squares matching (LSM) can be used for further refinement, providing coordinates with sub-pixel precision. LSM is a generalization of correlation used to compare the gray levels between two images, such that their position and shape parameters can be determined via an adjustment process. Relative to a reference template (e.g., a synthetic target), the conjugate image (e.g., a collected image) is deformed to minimize the gray level differences (Gruen, 1996).
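The pixel-level correlation search, the first stage of this technique, can be sketched as follows. This is a simplified illustration, not the paper's implementation, and the LSM refinement stage is omitted; note that the normalized coefficient is invariant to linear gain and offset changes between template and image.

```python
def ncc(a, b):
    # normalized cross-correlation coefficient of two equal-length vectors
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def match(image, template):
    # exhaustive search for the window most similar to the template
    th, tw = len(template), len(template[0])
    tvec = [v for row in template for v in row]
    best = (-2.0, 0, 0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            win = [image[r + i][c + j] for i in range(th) for j in range(tw)]
            best = max(best, (ncc(tvec, win), r, c))
    return best  # (coefficient, row, col) of the best top-left position

# a 3x3 template with a central peak, embedded with a different gain and
# offset at row 4, col 5 of an 8x8 image
template = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
image = [[5] * 8 for _ in range(8)]
image[4][5] = 50
coeff, r, c = match(image, template)
# best match: coeff = 1.0 at top-left (3, 4), i.e. peak centered on (4, 5)
```

In practice the located pixel position would then seed the LSM adjustment, which estimates the sub-pixel shift and shape parameters.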

2.3 Line intersection

This technique can be implemented in different ways, but the algorithm used in this work involves five main steps, some of which are well-known feature extraction algorithms: (1) Edge detection, which can be performed with several techniques, e.g., the Nevatia and Babu filters (Nevatia and Babu, 1980); (2) Smoothing: applying an edge-preserving median filter (5 × 5 convolution mask) to reduce noise while maintaining edges and corners; (3) Thresholding, which separates the edges from the background of the image: high gradient values are preserved, and homogeneous areas (background) with low gradient values are labeled zero; (4) Thinning: skeletonizing the extracted lines to a width of a single pixel while preserving the middle axis of the original line; (5) Sub-pixel refinement, as detailed below.

Figure 3:
(a) Sub-pixel positions calculated along the transverse profile. (b) Intersection of two lines in the test target.

After thinning, the one-pixel-wide lines are segmented by linking the individual extracted pixels (Artero and Tommaselli, 2002), but further refinement can be carried out to estimate the axes of these linear features with sub-pixel precision. The approach (Figure 3.a), adapted from Jain et al. (1995) and detailed in Bazan et al. (2008), involves fitting a line by least squares adjustment to sub-pixel edge positions, which are calculated from grayscale cross sections of the skeletonized line. First, a cross section is defined for each edge pixel, based on its local gradient orientation and with a predefined length that depends on the image blurring. Because this cross section has an arbitrary direction, the gradient at each point along it must be interpolated from the gradient image generated in the previous steps. Then, the weighted mean position is computed using the gradient magnitudes above the threshold as weights. This produces an edge point with subpixel precision. The line direction at each edge point, used to define the cross section, can be computed from the gradient direction or estimated by analyzing the two neighboring points. A set of sub-pixel points is thereby obtained, and a line can then be fitted by the least squares method. Lastly, the angular and linear coefficients are determined and used to calculate where the line intersects the perpendicular line (itself determined using the same technique) (Figure 3.b).
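The final two steps, line fitting and intersection, can be illustrated with a short sketch. This uses an orthogonal (total) least-squares fit rather than the authors' exact formulation, and assumes the sub-pixel edge points have already been extracted.

```python
import math

def fit_line(pts):
    # orthogonal least-squares fit of the line nx*x + ny*y = rho;
    # the direction angle satisfies tan(2*theta) = 2*Sxy / (Sxx - Syy)
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - mx) ** 2 for p in pts)
    syy = sum((p[1] - my) ** 2 for p in pts)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts)
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)   # line direction
    nx, ny = -math.sin(theta), math.cos(theta)     # unit normal
    return nx, ny, nx * mx + ny * my               # (nx, ny, rho)

def intersect(l1, l2):
    # solve a1*x + b1*y = c1, a2*x + b2*y = c2 by Cramer's rule
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    d = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / d, (a1 * c2 - a2 * c1) / d

# two bundles of (sub-pixel) edge points lying on y = x and y = 4 - x
pts1 = [(i, i) for i in range(5)]
pts2 = [(i, 4 - i) for i in range(5)]
x, y = intersect(fit_line(pts1), fit_line(pts2))
# the fitted lines cross at the target center (2.0, 2.0)
```

The normal-form parameterization keeps near-vertical lines well-conditioned, which a slope-intercept fit would not.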

3. Materials, experiments and results

This section describes the datasets and experiments used to assess the types of targets and the location techniques. The trials were conducted in a terrestrial camera calibration field, where two sets of images from two different cameras (RGB and hyperspectral) were acquired. The target measurement techniques were then applied to these two sets of images, and the results were compared. Additionally, an aerial survey with a UAV carrying the hyperspectral camera was performed to assess the selected target type and size in a practical application.

3.1 Preparation

Two cameras, detailed in Table 1, were previously calibrated to acquire RGB and hyperspectral images. The interior orientation parameters (IOPs) of each camera were determined by camera self-calibration (Kenefick et al. 1972) in a terrestrial calibration field composed of automatically detected and recognized Aruco coded markers (Garrido-Jurado et al. 2014); see Figures 4(a, b).

Table 1:
Technical specifications of both cameras.

Table 2 presents the IOPs estimated by self-calibrating bundle adjustment using the collinearity model with distortion parameters from the Conrady-Brown model (Brown 1971). The decentering distortion parameters resulted in insignificant values and were removed, and the calibration was then recomputed with this reduced set of parameters. Both camera calibrations were computed using the calibration with multi-cameras (CMC) software (Ruy et al. 2009). The hyperspectral camera calibration used only images of the 680 nm spectral band, near the center of the spectrum, because the other bands could be registered with respect to this reference band.

Figure 4
Aruco markers and test targets in the calibration field. (a) RGB and (b) hyperspectral images. Example of a target panel appearing in the (c) RGB image and (d) hyperspectral image.

Table 2:
Interior orientation parameters estimated by self-calibration for both cameras.

Parameter           RGB                           Hyperspectral
                    Estimated     Std. dev.       Estimated     Std. dev.
f (mm)              28.3246       0.0091          8.6990        0.0084
x0 (mm)             0.0809        0.0020          0.4144        0.0062
y0 (mm)             -0.0923       0.0023          0.3991        0.0045
K1 (mm^-2)          -1.42×10^-4   1.67×10^-6      -4.47×10^-3   4.45×10^-5
K2 (mm^-4)          1.65×10^-7    9.82×10^-9      -1.90×10^-5   3.66×10^-6
Sigma naught (σ0)   0.1723        -               0.0899        -
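For illustration, the radial component of the Conrady-Brown model maps an image point (x, y), in millimeters relative to the principal point, to displacements Δx = x(K1 r² + K2 r⁴) and Δy = y(K1 r² + K2 r⁴). A minimal sketch using the hyperspectral coefficients from Table 2 (the 2 mm sample radius is an arbitrary choice for the example):

```python
def radial_distortion(x, y, k1, k2):
    # Conrady-Brown radial terms: dx = x*(K1*r^2 + K2*r^4),
    # dy = y*(K1*r^2 + K2*r^4), with r^2 = x^2 + y^2 (all in mm)
    r2 = x * x + y * y
    s = k1 * r2 + k2 * r2 * r2
    return x * s, y * s

# hyperspectral camera coefficients from Table 2, point at (2, 0) mm
dx, dy = radial_distortion(2.0, 0.0, -4.47e-3, -1.90e-5)
# dx ≈ -0.036 mm: a noticeable inward shift for this wide-angle lens
```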

3.2 Experiments in the terrestrial calibration field

The experiments to compare the suitability of the targets and to determine the precision were first conducted in the terrestrial calibration field, where the 3D coordinates of the Aruco markers (their four corners) were automatically located using in-house software (Silva et al. 2014; Tommaselli et al. 2014).

First, the test targets (Figure 1) were plotted on sixteen panels and distributed on the wall, as displayed in Figure 4, which shows examples of the targets in the RGB and hyperspectral images. Then, three exposure stations were defined at appropriate distances from the wall (~7 m), such that the target size in the hyperspectral image would be similar to that in an aerial image collected with a UAV. This setup produced images in which the smaller targets span only a few pixels, for instance, circles with a diameter of 5-7 pixels. The same exposure stations were used to capture three RGB and three hyperspectral images with GSDs of 1.6 mm and 4.3 mm, respectively. A static platform was used to ensure an assessment focused on the geometric features of each target without introducing further external interference, for instance, motion or vibration blur. The automatic techniques were adopted to improve location accuracy even with some image blurring caused by the internal camera components.
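The quoted GSDs follow from the usual relation GSD = pixel size × object distance / focal length. A quick consistency check, assuming a pixel pitch that is not stated in this excerpt (the 5.5 µm value below is an illustrative assumption, combined with the calibrated hyperspectral focal length from Table 2):

```python
def gsd_mm(pixel_pitch_um, distance_m, focal_mm):
    # GSD = pixel size * object distance / focal length (result in mm)
    return (pixel_pitch_um * 1e-3) * (distance_m * 1000.0) / focal_mm

# hyperspectral camera: f = 8.699 mm (Table 2), ~7 m to the wall;
# an assumed 5.5 um pitch reproduces the quoted GSD of ~4.3 mm
hyp = gsd_mm(5.5, 7.0, 8.699)      # ≈ 4.4 mm
# the same camera at the 160 m flight height of Section 3.3 gives ~10 cm
air = gsd_mm(5.5, 160.0, 8.699)    # ≈ 101 mm
```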

The tests were based on the Bundle Block Adjustment (BBA) of an image triplet in a local reference system using the CMC software. Due to their higher spatial resolution, RGB images were used as the reference imagery to assess the targets and extraction techniques for the hyperspectral images. The application of the BBA to the RGB images provided more accurate 3D and image coordinates, which were then compared to those obtained from the application of the BBA to the hyperspectral images. The eight targets (in Figure 1) were analyzed by locating their centers with different automatic techniques depending on the target features:

• Template matching was used with all patterns (targets 1-8);

• Weighted centroid was applied to targets 1, 2, 5, and 6;

• Line intersection was only used for target 3, because the length of its straight lines can provide better precision for the center location.

The three techniques were implemented in the C/C++ language, and the three experiments were conducted as follows. The experiments with targets in the terrestrial calibration field were performed by indirect image orientation, with all EOPs considered as unknowns in the BBA. This ensures that any effect resulting from the BBA is reflected in the EOPs. In all experiments, the a priori sigma was set to 1.

Experiment I

The BBA of both RGB and hyperspectral triplets were performed using the following configuration:

• Calibrated IOPs were considered as fixed;

• The 3D control coordinates of three Aruco points (on the triplet corners) were used as absolute constraints. These coordinates were obtained from the self-calibration step (both for Rikola and Nikon cameras);

• The image coordinates of these three Aruco points (Garrido-Jurado et al. 2014) were automatically extracted;

• The image coordinates of the test targets were used as tie points in the BBA.

The BBA trials were run separately for each type of target, using image coordinates extracted by the three center location techniques. Using the target centers as tie points, their 3D coordinates were estimated by the BBA along with the exterior orientation parameters (EOPs) of each image. Next, an inverse process was carried out: the estimated 3D coordinates of each target were back-projected into image space using the collinearity equations, and these projected image coordinates were compared against the respective measured target centers. The aim was to verify the differences for each type of target with the three automatic techniques. These differences, expressed as the root mean square error (RMSE), and the a-posteriori sigma naught (σ0, the square root of the a posteriori variance σ0²) of each BBA trial are presented in Table 3.
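The back-projection step uses the collinearity equations. A minimal sketch, assuming the ω-φ-κ rotation convention; the EOP values and the object point are illustrative, with the hyperspectral focal length from Table 2:

```python
import math

def project(point, eop, f):
    # collinearity equations: project object point (X, Y, Z) into the image
    # plane given EOPs (X0, Y0, Z0, omega, phi, kappa); f in mm
    X, Y, Z = point
    X0, Y0, Z0, om, ph, kp = eop
    co, so = math.cos(om), math.sin(om)
    cp, sp = math.cos(ph), math.sin(ph)
    ck, sk = math.cos(kp), math.sin(kp)
    # rotation matrix for the omega-phi-kappa convention
    m = [[cp * ck, co * sk + so * sp * ck, so * sk - co * sp * ck],
         [-cp * sk, co * ck - so * sp * sk, so * ck + co * sp * sk],
         [sp, -so * cp, co * cp]]
    dx, dy, dz = X - X0, Y - Y0, Z - Z0
    u = m[0][0] * dx + m[0][1] * dy + m[0][2] * dz
    v = m[1][0] * dx + m[1][1] * dy + m[1][2] * dz
    w = m[2][0] * dx + m[2][1] * dy + m[2][2] * dz
    return -f * u / w, -f * v / w   # image coordinates (mm)

# vertical image 100 m above the point: x = f*X/Z0, y = f*Y/Z0
x, y = project((10.0, 20.0, 0.0), (0.0, 0.0, 100.0, 0.0, 0.0, 0.0), 8.699)
```

Comparing such projected coordinates with the measured centers over all targets and images gives the RMSE values reported in Table 3.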

As expected, the RMSEs of the hyperspectral images were higher than those of the RGB images. This can be explained both by the lower spatial resolution of the hyperspectral images (with fewer pixels forming the same target) and by the dynamic range of the image band. In general, all targets produced results with sub-pixel precision, but the smallest RMSEs were obtained when the weighted centroid technique was applied to target number 6, for both the RGB and the hyperspectral images. The results with template matching were intermediate, and the largest differences resulted from the line intersection technique. The σ0 values were less than one, given that the a priori sigma was set to 1.

Table 3:
Differences between estimated and projected target centers.

Another analysis was performed with the XYZ coordinates calculated by applying the BBA to the hyperspectral images. The XYZ coordinates of target type 6 (16 points), estimated by BBA using the RGB images, were used as the reference. This type was selected because its image coordinates in the RGB images, extracted with the centroid technique, achieved the highest precision, as shown in Table 3. The differences between the coordinates estimated from the hyperspectral and RGB images were then calculated.

Table 4 presents the RMSEs, indicating that the best results (RMSE ≤ 0.3 cm in X and Y and RMSE ≤ 0.4 cm in Z) were obtained for the centroid (C6) and template matching (TM6) techniques with target type 6. Their RMSEs were low, with little variation among the three coordinates (the other techniques produced larger differences among the XYZ coordinates). When using weighted centroids with target 6, the results were consistently suitable in both image space (Table 3) and object space (Table 4).

Table 4:
RMSE (in cm) in XYZ coordinates using the hyperspectral images.

Experiment II

A second experiment was performed based on the space resection technique (Wolf et al. 2014). The purpose was to verify the σ0 (dispersion of the residuals) in the computation of a single image orientation. As the GSDhyp of the hyperspectral camera is approximately three times larger than the GSDRGB, the 3D coordinates estimated by the BBA with RGB images were used as the control points. The image coordinates of each type of target were extracted from the hyperspectral images to be used as observations in the space resection. This procedure also used fixed values for the calibrated IOPs and the control point coordinates, with the EOPs as free unknowns.

Table 5 shows the σ0 calculated by space resection for the three techniques used to locate the target centers in the hyperspectral images. The centroid technique yielded the best results, mainly with target types 1 and 6 (σ0 = 0.05). The results using template matching ranged from σ0 = 0.17 to 1.22, and the largest value was obtained with the line intersection technique (σ0 = 1.51). It can be concluded that some of the algorithms do not compensate for certain systematic effects stemming from the acquisition process of the hyperspectral images, such as blurring caused by optical aberrations or saturation due to a high integration time.

Table 5:
A-posteriori sigma naught calculated after space resection in hyperspectral images using the three measurement techniques.
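The a-posteriori sigma naught reported in these experiments is computed from the adjustment residuals. A generic sketch (the residual values below are illustrative, not the paper's data):

```python
def sigma_naught(residuals, weights, n_unknowns):
    # a-posteriori sigma naught = sqrt( v' P v / (n_obs - n_unknowns) )
    vtpv = sum(w * v * v for v, w in zip(residuals, weights))
    dof = len(residuals) - n_unknowns   # redundancy of the adjustment
    return (vtpv / dof) ** 0.5

# e.g. 12 image-coordinate residuals (6 control points, x and y) against
# the 6 EOP unknowns of a space resection, with unit weights
res = [0.1, -0.2, 0.05, 0.1, -0.1, 0.2, -0.05, 0.1, 0.0, -0.1, 0.15, -0.1]
s0 = sigma_naught(res, [1.0] * len(res), 6)
```

With the a priori sigma set to 1, as in these experiments, an s0 well below 1 indicates residuals smaller than the assumed observation precision.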

Experiment III

The third assessment was conducted to calculate the discrepancies between the distances defined by tie points (whose 3D coordinates were estimated by BBA) and the corresponding distances measured directly with a caliper in the calibration field (i.e., the distances between the Aruco corners).

The same image triplets from Experiment I were used to perform a BBA in a different configuration. In this test, the 3D coordinates previously estimated for targets 1-8 with the RGB images were redefined as the ground control points (using absolute constraints), and the Aruco corners were considered as tie points. The IOPs remained fixed, and the EOPs were considered as free unknowns. Using each type of target separately as a ground control, the 3D coordinates of Aruco were estimated, and then, distances between these estimates were compared to the distances collected directly in the field. The distances between the Aruco corners were measured with a caliper in both vertical and horizontal alignments and ranged from 0.34 m to 1.15 m. This process allowed an indirect assessment of the quality of the ground control generated for the targets.
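The discrepancy measure of this experiment reduces to a Euclidean distance between adjusted points compared with the caliper value, summarized as an RMSE. A sketch with hypothetical coordinates (the corner positions and measured distance below are invented for illustration):

```python
def distance(p, q):
    # Euclidean distance between two 3D points
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def rmse(values):
    # root mean square of a list of discrepancies
    return (sum(v * v for v in values) / len(values)) ** 0.5

# hypothetical adjusted Aruco corners (m) and a caliper-measured distance
p1, p2 = (0.000, 0.000, 0.000), (0.340, 0.002, -0.001)
measured = 0.3400
disc = distance(p1, p2) - measured   # one discrepancy, in meters
```

In the experiment, such discrepancies were accumulated over all the measured vertical and horizontal alignments (0.34 m to 1.15 m) before computing the RMSE values of Table 6.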

Table 6 presents the RMSE of the control distances for both the RGB and hyperspectral triplets. Similar RMSEs (2.58-2.60 mm) were obtained with the RGB triplet for all types of targets and all three techniques. With the hyperspectral triplet, larger RMSEs were obtained (4.13-4.39 mm) due to the larger GSDhyp compared with that of the RGB camera. The largest discrepancies (> 4.25 mm) were obtained for the template matching technique with target types 5, 6, and 8 (TM5, TM6, and TM8). The line intersection technique (L3) gave an intermediate RMSE (4.20 mm), while the centroid technique provided the best results, which were similar across all four assessed targets (4.13-4.16 mm).

Table 6:
RMSE of the discrepancies between estimated and measured distances using the BBA with the RGB and hyperspectral images.

Over all three experiments, the weighted centroid technique presented the most accurate results. This can be explained by the number of pixels defining the target, as previously concluded by Trinder et al. (1995). When a target is formed by only a few image pixels, its contour is roughly defined, which can reduce the precision of the location techniques. Additionally, image saturation causes a loss of accuracy due to the flooding effect of the bright areas on the darker ones. Although saturation can also degrade the edges of circular targets, the center of the circle is preserved: because the degradation effect is radially symmetric with respect to the target center, weighting the central pixels can improve the precision of the target location. The results with the hyperspectral images also indicate that, due to the higher contrast, a white background is better suited for target extraction. Target 6 proved to be the most appropriate control target for aerial surveys using hyperspectral frame cameras.
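The weighted centroid idea described above can be sketched in a few lines: each pixel in a segmented target window contributes to the center estimate in proportion to its background-subtracted intensity, which yields a sub-pixel position. The window below is synthetic and the function name is ours, not from the paper.

```python
import numpy as np

def weighted_centroid(window):
    """Return the (row, col) intensity-weighted center of mass of a target window."""
    w = window.astype(float)
    w -= w.min()                       # remove the local background level
    rows, cols = np.indices(w.shape)   # pixel coordinate grids
    total = w.sum()
    return (rows * w).sum() / total, (cols * w).sum() / total

# Synthetic 5x5 window with a bright blob slightly below-right of the center
win = np.array([
    [0,  0,  0,  0, 0],
    [0, 10, 40, 10, 0],
    [0, 40, 90, 60, 0],
    [0, 10, 40, 20, 0],
    [0,  0,  0,  0, 0],
])
r, c = weighted_centroid(win)
print(f"center at row {r:.4f}, col {c:.4f}")  # slightly past (2, 2), at sub-pixel resolution
```

Because the weighting is symmetric about the true center, a radially symmetric degradation (such as saturation bleeding) shifts the estimate much less than it shifts the apparent target contour.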

3.3 Performing a UAV survey with hyperspectral images

This section describes an aerial survey performed to collect hyperspectral frame images over a vegetated area using target 6 as the ground control point (GCP) pattern. The survey was performed with a UAV platform equipped with a Rikola hyperspectral camera and a GNSS receiver, shown in Figure 5.a. The hyperspectral camera (Figure 5.b) was configured to acquire image cubes at a flight height of 160 m and a flying speed of 4 m/s, which produced images with a GSD of 10 cm. Two image strips were collected covering a range of 800 m over a vegetated area (forest and sugarcane) (Figure 5.c). Before the aerial survey, five type 6 targets, as shown in Figure 5.d, were positioned along the planned perimeter of the test area and surveyed with GNSS. Figure 5.e displays the block geometry, and Figure 5.f shows an example of target 6 appearing in one image.
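The relation between flight height and GSD can be checked with the usual scale formula, GSD = H · pixel pitch / f. The focal length and pixel pitch below are assumed nominal values chosen to be consistent with the reported 10 cm GSD; they are not taken from the paper.

```python
# Rough consistency check of the reported flight design.
focal_length_m = 0.009    # assumed nominal focal length, 9 mm (not from the paper)
pixel_pitch_m = 5.5e-6    # assumed nominal pixel pitch, 5.5 um (not from the paper)
flight_height_m = 160.0   # reported flight height

# Ground sample distance: GSD = H * pixel_pitch / f
gsd_m = flight_height_m * pixel_pitch_m / focal_length_m
print(f"GSD = {gsd_m * 100:.1f} cm")  # close to the reported 10 cm
```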

A BBA was performed in a photogrammetric project of ERDAS Imagine - LPS software (v. 2015) using the following configuration:

• Initial positions of the camera perspective centers were based on the raw data collected with GNSS receiver (weighted constraints of 20 cm) and attitude angles were considered unknowns;

• Calibrated IOPs were used as fixed constraints;

• Image coordinates with a standard deviation of 1/2 pixel;

• GCPs (target centroids) with standard deviation of 5 cm, based on the GNSS positioning accuracy. Each GCP was measured in one image and then transferred to homologue positions in the adjacent images using least squares matching via LPS;

• Tie points were automatically generated in the image block.

Figure 5:
(a) UAV equipped with (b) hyperspectral camera and GNSS receiver. (c) Trajectory flown by the UAV while acquiring images. (d) GNSS surveying of the markers. (e) Image block geometry with 93 images, five GCPs, and four checkpoints. (f) Marker in the hyperspectral image.

One of the advantages of using targets is that their centers can be located automatically with better precision. For comparison, the target centers were also measured manually (with point transfer by LSM), and both sets of measurements were used in the BBA. Table 7 presents the RMSE at the five GCPs resulting from the BBA with the manual and the automatic (centroid) measurements. Similar RMSEs were obtained for the object coordinates of the GCPs with both techniques, and the σ0 values were also similar.

Table 7:
RMSE of GCPs from the BBA with hyperspectral aerial images.

In the image space, the centroid technique generated smaller residuals, indicating a more accurate point measurement. This affected the resulting accuracy, which was assessed at four altimetric checkpoints available in the test area. The altimetric discrepancies gave an RMSEZ of 0.349 m with the manual measurement, reduced to 0.195 m with the centroid technique, i.e., the accuracy was improved by 44% in this case. These discrepancies show that using signalized targets with automatic measurement can produce more accurate results in hyperspectral block orientation.
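The 44% figure follows directly from the two reported RMSEZ values:

```python
# Relative improvement from the two reported vertical RMSE values
rmse_manual = 0.349    # m, manual target measurement
rmse_centroid = 0.195  # m, automatic centroid measurement
improvement = (rmse_manual - rmse_centroid) / rmse_manual * 100
print(f"improvement = {improvement:.0f}%")  # 44%
```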

4. Conclusion

This study presented an assessment of eight types of artificial targets for photogrammetric measurements and the results achieved with hyperspectral images using RGB images as reference. Characteristics of shape, background gray levels, and target size were experimentally assessed. Tests conducted in the outdoor camera calibration field assessed three target extraction techniques and the target location precision achieved with the BBA. In all the experiments with the hyperspectral images, the results were within the sub-pixel level. The line intersection generated the largest RMSE (~ 1/2 pixel) in the BBA, which can be explained by the small number of pixels defining the straight lines and by the dependence on the threshold to define edges. The template matching technique presented intermediate results (0.27-0.47 pixels), and the highest precision was achieved with the weighted centroid technique (0.20-0.40 pixels).

In conclusion, the experiments with hyperspectral images achieved the best results when using target 6 (double black circles with a diameter of 5-7 pixels on a white background) and the weighted centroid technique. This type of target can therefore be recommended for aerial surveys, especially with hyperspectral cameras similar to the one used in this study. The effectiveness of the selected target was also assessed with hyperspectral images collected by a UAV, and the results were encouraging. Future studies should assess the impact of automatic sub-pixel location in large aerial image blocks and the resulting improvement in overall accuracy.

ACKNOWLEDGEMENT

The authors would like to acknowledge the support of São Paulo Research Foundation (FAPESP - grants 2013/50426-4 and 2014/05033-7) and Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq - grant 307554/2014-7).

REFERENCES

  • Artero, A. O. and Tommaselli, A. M. G. 2002. Um método para a conexão e aproximação poligonal de elementos de bordas em imagens digitais. Boletim de Ciências Geodésicas, 8, pp.71-94.
  • Bazan, W. S.; Tommaselli, A. M. G.; Galo, M. and Telles, S. S. S. 2008. Métodos para extração de feições retas com precisão subpixel. Boletim de Ciências Geodésicas, 14, pp.128-148.
  • Brown, D. C. 1971. Close-range calibration. Photogrammetric Engineering, 37, pp.855-866.
  • Galo, M.; Tommaselli, A. M. G. and Hasegawa, J. K. 2012. The influence of subpixel measurement on camera calibration. Revue Française de Photogrammétrie et de Télédétection, 198, pp.62-70.
  • Garrido-Jurado, S.; Muñoz-Salinas, R.; Madrid-Cuevas, F. J. and Marín-Jiménez, M. J. 2014. Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognition, 47, pp.2280-92.
  • Govindaraju, V.; Leng, G. and Qian, Z. 2014. Multi-UAV surveillance over forested regions. Photogrammetric Engineering & Remote Sensing, 80, pp.1129-1137.
  • Gruen, A. 1996. Least squares matching: a fundamental measurement algorithm. In: K. B. Atkinson, ed. Close Range Photogrammetry and Machine Vision. Bristol: Whittles Publishing. Ch. 8.
  • Gruen, A. and Baltsavias, E. P. 1988. Geometrically constrained multiphoto matching. Photogrammetric Engineering & Remote Sensing, 54, pp.633-641.
  • Habib, A.; Lari, Z.; Kwak, E. and Al-Durgham, K. 2013. Automated detection, localization, and identification of signalized targets and their impact on digital camera calibration. Revista Brasileira de Cartografia, 65, pp.785-803.
  • Honkavaara, E.; Saari, H.; Kaivosoja, J.; Pölönen, I.; Hakala, T.; Litkey, P.; Mäkynen, J. and Pesonen, L. 2013. Processing and assessment of spectrometric, stereoscopic imagery collected using a lightweight UAV spectral camera for precision agriculture. Remote Sensing, 5, pp.5006-5039.
  • Jain, R.; Kasturi, R. and Schunck, B. G. 1995. Machine Vision. Computer Science Series. New York: McGraw-Hill.
  • Kenefick, J. F.; Gyer, M. S. and Harp, B. F. 1972. Analytical self-calibration. Photogrammetric Engineering, 38, pp.1117-1126.
  • Luhmann, T. 2014. Eccentricity in images of circular and spherical targets and its impact on spatial intersection. Photogrammetric Record, 29, pp.417-433.
  • Mäkeläinen, A.; Saari, H.; Hippi, I.; Sarkeala, J. and Soukkamäki, J. 2013. 2D hyperspectral frame imager camera data in photogrammetric mosaicking. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XL-1/W2, pp.263-267.
  • Nevatia, R. and Babu, K. R. 1980. Linear feature extraction and detection. Computer Vision, Graphics, and Image Processing, 13, pp.257-269.
  • Otsu, N. 1979. A threshold selection method from gray level histograms. IEEE Transactions on Systems, Man, and Cybernetics, 9, pp.62-66.
  • Pajares, G. 2015. Overview and current status of remote sensing applications based on unmanned aerial vehicles (UAVs). Photogrammetric Engineering & Remote Sensing, 81, pp.281-329.
  • Pal, M. K. and Porwal, A. 2015. A Local Brightness Normalization (LBN) algorithm for destriping Hyperion images. International Journal of Remote Sensing, 36, pp.2674-2696.
  • Rikola Ltd. 2015. Hyperspectral Camera Rikola. Available at: <http://vespadrones.com/product/hyperspectral-camera-rikola/> [accessed 25 May 2017].
  • Ruy, R.; Tommaselli, A. M. G.; Galo, M.; Hasegawa, J. K. and Reis, T. T. 2009. Evaluation of bundle block adjustment with additional parameters using images acquired by SAAPI system. In: Proceedings of 6th International Symposium on Mobile Mapping Technology. Presidente Prudente, Brazil, 21-24 July 2009.
  • Silva, S. L. A.; Tommaselli, A. M. G. and Artero, A. O. 2014. Utilização de alvos codificados do tipo aruco na automação do processo de calibração de câmaras. Boletim de Ciências Geodésicas, 20, pp.626-646.
  • Tommaselli, A. M. G.; Marcato Jr., J.; Moraes, M. V. A.; Silva, S. L. A. and Artero, A. O. 2014. Calibration of panoramic cameras with coded targets and a 3D calibration field. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XL-3/W1, pp.137-142.
  • Trinder, J. C.; Jansa, J. and Huang, Y. 1995. An assessment of the precision and accuracy of methods of digital target location. ISPRS Journal of Photogrammetry and Remote Sensing, 50, pp.12-20.
  • Wandresen, R.; Mitishita, E. A. and Andrade, J. B. 2003. Identificação de pontos de apoio pré-sinalizados com o uso de redes neurais artificiais e correlação. Boletim de Ciências Geodésicas, 9, pp.179-198.
  • Wolf, P. R.; Dewitt, B. A. and Wilkinson, B. E. 2014. Elements of Photogrammetry with Applications in GIS. 4th ed. New York: McGraw-Hill.

Publication Dates

  • Publication in this collection
    June 2018

History

  • Received
    03 June 2017
  • Accepted
    16 Jan 2018
Universidade Federal do Paraná, Centro Politécnico, Jardim das Américas, 81531-990, Curitiba - PR, Brazil. Tel./Fax: (55 41) 3361-3637
E-mail: bcg_editor@ufpr.br