
LINEAR REGRESSION AND LINES INTERSECTING AS A METHOD OF EXTRACTING PUNCTUAL ENTITIES IN A LIDAR POINT CLOUD

Estudo da utilização de Regressão Linear e Interseção de Retas como método de extração de entidade pontual em uma nuvem de pontos LiDAR

Abstract:

The characteristics of data points obtained by laser scanning (LiDAR) and images have been considered complementary in the field of photogrammetric applications, and research to improve their integrated use has recently intensified. This study aims to verify the performance of determining punctual entities in a LiDAR point cloud using linear regression and lines intersection applied to buildings with square rooftops containing four roof planes (hip roofs), as well as to compare the punctual entities' three-dimensional coordinates with those determined by planes intersection. Our results show that the proposed method was more accurate in determining three-dimensional coordinates than the planes intersection method. The obtained coordinates were evaluated and framed within the map accuracy standard for digital cartographic products (PEC-PCD), besides being analyzed for trend and precision. The accuracy analysis frames the punctual entities' three-dimensional coordinates within Class A of the PEC-PCD at the 1:2,000 or smaller scales.

Keywords:
LiDAR Point Cloud; Linear Regression; Lines Intersection; PEC-PCD

1. Introduction

Research to improve the efficiency and reliability of mapping techniques and procedures, and to reduce their costs, has long been a recurring topic. The characteristics of data obtained by LiDAR (Light Detection and Ranging) and photogrammetric technologies have been deemed complementary and are being harnessed in the field of photogrammetric mapping. Integrated LiDAR and photogrammetric data have been used in the three-dimensional reconstruction of buildings (Rottensteiner and Briese, 2003; Vosselman et al., 2005), monocular restitution (Mitishita et al., 2004), city modeling (Zhang et al., 2005; Kocaman et al., 2006), and true orthophoto generation (Habib et al., 2007). In recent decades, developing methods for integrating these data to extract geoinformation on the Earth's surface has been an important research topic. For example, Rönnholm (2011) states that such integration is expected to become increasingly simple and common in applications aimed at automatic object detection and autonomous vegetation classification. Indirect georeferencing has traditionally been used to integrate photogrammetric and LiDAR datasets. Applying this approach while using LiDAR data as a source of positional information requires all primitive geometries (points, lines, and areas) to be extracted, as data sourced from this technology does not directly display these primitives.

Numerous studies have been conducted on this subject. Delara et al. (2004) extracted LiDAR control points (LCPs) from LiDAR intensity images for application in aerotriangulation using low-cost digital cameras. Habib et al. (2004) and Habib et al. (2005) developed a registration methodology for photogrammetric and LiDAR data using linear features and 3D similarity transformations. Csanyi and Toth (2007) developed ground-control targets to improve LiDAR data accuracy in mapping projects. Mitishita et al. (2008) proposed an approach to extract building roof centroids from LiDAR point clouds for use as control points. Wildan et al. (2011) used terrain LiDAR control points in the aerotriangulation of photogrammetric blocks of analog aerial photography, whose mapping reached the accuracy standard for the 1:50,000 scale. Ju et al. (2012) developed a two-stage hybrid feature registration method using LiDAR and optical (aerial and satellite) data, exploiting the advantages of both methods based on the frequency and intensity responses of these features.

Balado et al. (2019) present a methodology for the direct use of point clouds for pathfinding in urban environments: the excessive number of points is reduced for transformation into nodes of the final graph; urban static elements acting as permanent obstacles, such as furniture and trees, are delimited and differentiated from dynamic elements such as pedestrians; occlusions on ground elements are corrected to enable complete graph modeling; and navigable space is delimited from free unobstructed space according to two motor skills (pedestrians without reduced mobility and wheelchairs). Hou and Ai (2020) propose a network-level sidewalk inventory method that efficiently segments mobile LiDAR data using a customized deep neural network, PointNet++, followed by a stripe-based sidewalk extraction algorithm. By extracting sidewalk locations from the mobile LiDAR point cloud, the corresponding geometric features (e.g., width, grade, and cross slope) can be derived for ADA compliance and overall condition assessment. Balado et al. (2020a) demonstrate a novel method for mapping traffic signs based on data acquired with a Mobile Mapping System (MMS): images and point clouds. Images are faster to process, and artificial intelligence techniques, specifically Convolutional Neural Networks, are better optimized for them than for point clouds, while point clouds allow more exact positioning than the exclusive use of images. Balado et al. (2020b) propose the basic operations of mathematical morphology applied directly to 3D point cloud data, without the need to transform point clouds into 2D or 3D images, avoiding the associated problems of resolution loss and orientation restrictions. Luaces et al. (2021) propose and validate the architecture of an information system that creates an accessibility data model for cities by ingesting data from different types of sources and provides an application that people with different abilities can use to compute accessible routes.

Gneeniss (2013) evaluated the amount and spatial distribution of LCPs required to aerotriangulate large photogrammetric blocks. Li et al. (2015) proposed using sand ridges as registration primitives to solve the registration problem between LiDAR and photogrammetric data in areas without ground-control points, such as desert regions.

Most photogrammetric surveys nowadays integrate GNSS/INS to calculate camera position and orientation at exposure time and are carried out simultaneously with LiDAR. Such a procedure can acquire both the LiDAR and the photogrammetric dataset within the same mapping system, whether geodesic or not. According to Kersting (2011), photogrammetric calibration is often performed independently of the LiDAR system, and direct georeferencing depends on local flight conditions, as variations in temperature and atmospheric pressure can change the camera position and orientation relative to the Inertial Measurement Unit. Geometric entities such as points, lines, and areas are essential for calibrating both photogrammetric and LiDAR systems. Punctual entities precisely defined in the image and object spaces are the most frequently used. Thus, this article presents a study on extracting the 3D coordinates of punctual entities from LiDAR point clouds using linear regression and lines intersection, and compares the results with those obtained by traditional procedures, such as planes intersection (Costa, Mitishita and Martins, 2018) and conventional topographic surveys.

2. Material and Methods

2.1 Research Area

Located in the municipality of Curitiba, Paraná State, Brazil, the 7,600-ha research area is covered by photogrammetry (20cm ground spatial resolution) and LiDAR (density of 4 pts/m²), obtained simultaneously in August 2012. Figure 1 illustrates the research area and the location of the studied roofs.

Figure 1:
Research area (25°26’48.76” S and 49°14’51.47” W, center of area).

Figure 2 illustrates the process flow chart of the proposed study. LiDAR, photogrammetric, and topographic data were collected by a two-stage field work, processed, and later discussed regarding the applied methods. The main methodological aspects will be addressed next.

Figure 2:
Stages of the proposed study.

In total, 28 building roofs with four slopes were selected for the proposed study. Six punctual entities were identified on each roof, four representing the roof edge vertices (B, C, D, E) and two the ridge vertices (A, F), as shown in Figure 3(b). Figure 3(a) exemplifies the roof types selected for the punctual entity survey, and Figure 3(b) represents the six punctual entities on the roof.

Figure 3:
Hip roofs and punctual entities.

2.2 Linear Regression

Linear regression is used to determine the coordinates of a punctual entity obtained from a laser scanning point cloud. In general, linear regression is a linear approach to modeling the relationship between a scalar response (dependent variable) and one or more explanatory variables (independent variables). Relationships between variables are modeled using linear prediction functions whose unknown model parameters are estimated directly from the data. Linear regression models are adjusted using the least squares method. The simple linear regression model is described by a straight-line equation, as shown in Equation 1, where α is the linear coefficient (intercept) and β is the slope or gradient.

yᵢ = α + β·xᵢ (1)
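As a minimal sketch of this step, the least-squares estimates of α and β, together with the coefficient of determination R², can be computed directly from the selected edge points. The sample points and function name below are illustrative, not the paper's data or implementation.

```python
# Least-squares fit of y = alpha + beta*x (Equation 1) and R^2,
# as used to adjust each roof-edge line to the LiDAR points
# selected as edge candidates. Sample points are illustrative.

def fit_line(points):
    """Return (alpha, beta, r2) for the simple linear regression y = alpha + beta*x."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    sxx = sum((x - mean_x) ** 2 for x, _ in points)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in points)
    beta = sxy / sxx                      # slope
    alpha = mean_y - beta * mean_x        # intercept
    ss_tot = sum((y - mean_y) ** 2 for _, y in points)
    ss_res = sum((y - (alpha + beta * x)) ** 2 for x, y in points)
    r2 = 1.0 - ss_res / ss_tot            # coefficient of determination
    return alpha, beta, r2

if __name__ == "__main__":
    pts = [(0.0, 1.1), (1.0, 2.9), (2.0, 5.2), (3.0, 6.8)]
    alpha, beta, r2 = fit_line(pts)
    print(alpha, beta, r2)
```

An R² close to 1 indicates that the selected points lie nearly on a straight line, which is the criterion the study uses to judge the quality of each edge adjustment.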

2.2.1 Punctual Entity Extraction Using Linear Regression

Punctual entities were extracted at the intersection point of lines determined by linear regression. Obtaining lines, planes, and segments is an important task in the context of automatic feature extraction, especially the automatic extraction of building facades (Santos, Galo and Tachibana, 2018). According to Bretar (2009), line segments in a three-dimensional space can be extracted in two ways: by the intersection of adjacent planes (Habib et al., 2005) or directly from LiDAR points (Gross and Thoennessen, 2006). After determining straight segments and their intersections (concurrent line segments), punctual entities may also be obtained (Santos, 2015). These entities may be considered edge points, such as those presented in Figure 4. In the point set obtained by the LiDAR system, line segments can be derived by mathematical procedures and usually represent object edges, such as ridges, roof edges, buildings, and viaducts (Santos, 2015). The three-dimensional relative position of two line segments or two lines falls into three groups: skew, parallel, and concurrent. Two lines are considered concurrent when they admit a common plane and intersect each other at a single point. Further information on line definition and concurrent line determination can be found in Santos (2015).
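The intersection of two concurrent lines estimated by regression reduces to solving α₁ + β₁x = α₂ + β₂x. A short sketch, with hypothetical coefficient values:

```python
# Lines-intersection step: given the (alpha, beta) coefficients of two
# concurrent edge lines estimated by regression, solve
# alpha1 + beta1*x = alpha2 + beta2*x for the corner point.
# Coefficient values below are illustrative, not the paper's results.

def intersect(alpha1, beta1, alpha2, beta2, tol=1e-12):
    """Return the (x, y) intersection of y = alpha1 + beta1*x and y = alpha2 + beta2*x."""
    if abs(beta1 - beta2) < tol:
        raise ValueError("lines are parallel (or coincident): no single intersection")
    x = (alpha2 - alpha1) / (beta1 - beta2)
    y = alpha1 + beta1 * x
    return x, y

if __name__ == "__main__":
    # Two roof edges meeting at a corner (hypothetical coefficients)
    x, y = intersect(1.0, 2.0, 7.0, -1.0)
    print(x, y)  # x = 2.0, y = 5.0
```

The parallel-slope guard matters in practice: opposite edges of a rectangular roof are nearly parallel, so only adjacent edges should be intersected.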

Figure 4:
Corners obtained by the intersection between line segments.

According to Baxes (1994), segmentation is an operation that partitions, isolates, or highlights objects with similar characteristics in an image. For Gonzalez and Woods (2010), this process is the subdivision of the objects or regions that make up an image based on certain conditions. The aim of segmenting 3D point data based on region growing (as performed in this study) is to group sub-regions with similar characteristics (Stringhini et al., 2011). The region-based segmentation process is explained in detail in Zucker (1976). Here, the points forming the boundary were obtained using the difference in the Z component as the similarity criterion. The region-growing algorithm seeks groups of points that share similarity properties, characterizing points with similar Z components as belonging to the same region. The method starts from a manually chosen point (the seed) and examines its neighbors, in sequence, to decide whether they have similar Z components. Neighboring points accepted as similar are grouped with the seed to form a region. Our work employed this semi-automatic segmentation (region-growing algorithm) to select the points representing the region of interest. LiDAR points were exported in the .LAS format and visualized in Microstation and TerraScan. These data were examined in TerraScan to evaluate the semi-automatic region-growing process, checking whether the points could represent the straight segments that make up the roof edges. The tool (Figure 5) allows visual interpretation of the regions of interest. After selection with the semi-automatic method, the coordinates of these points were read and tabulated. Once the laser scanning points that approximate a line segment on the building roof are extracted, linear regression is applied to obtain the adjusted line segment. Lines are then intersected to determine the punctual entity and its 3D coordinates.
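The Z-based region growing described above can be sketched as follows. Starting from a manually chosen seed, neighbors within a planimetric radius are accepted when their Z differs from that of an already-accepted point by less than a threshold. The point data, search radius, and Z threshold below are hypothetical, not the study's parameters.

```python
# Hedged sketch of region growing on the Z component (seed-based),
# with a brute-force neighbor search for clarity. Real tools
# (e.g. TerraScan) use spatial indexing and interactive seeds.
from collections import deque

def grow_region(points, seed_idx, radius=1.0, dz_max=0.15):
    """Return indices of points grouped with the seed by Z similarity."""
    region = {seed_idx}
    queue = deque([seed_idx])
    while queue:
        i = queue.popleft()
        xi, yi, zi = points[i]
        for j, (xj, yj, zj) in enumerate(points):
            if j in region:
                continue
            planar = ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5
            if planar <= radius and abs(zi - zj) <= dz_max:
                region.add(j)       # similar Z: join the region
                queue.append(j)     # and grow from it in turn
    return sorted(region)

if __name__ == "__main__":
    pts = [(0.0, 0.0, 10.00), (0.5, 0.0, 10.05), (1.0, 0.1, 10.10),
           (1.4, 0.1, 12.50),  # point off the roof edge (Z jump)
           (1.5, 0.2, 10.12)]
    print(grow_region(pts, 0))  # the Z-jump point is excluded
```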

Figure 5:
Semi-automatic selection of border points (white rectangle).

2.3 Punctual Entity Extraction with Planes Intersection

In this stage, the coordinates of the punctual entities considered roof "ridge points" were obtained by the roof plane intersection method presented in Costa, Mitishita and Martins (2018). The authors describe this method in four main steps: filtering LiDAR points on building roofs, extracting building roof planes, modeling building roof planes, and intersecting three planes (characterization of LiDAR control points).
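The final step of that method, intersecting three planes, amounts to solving a 3×3 linear system: three planes a·x + b·y + c·z = d meet at a single point when their normals are linearly independent. A sketch with hypothetical plane coefficients (here via Cramer's rule):

```python
# Illustrative three-plane intersection: each plane is (a, b, c, d)
# for a*x + b*y + c*z = d; the common point solves the 3x3 system.

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def intersect_planes(p1, p2, p3):
    a = [list(p[:3]) for p in (p1, p2, p3)]
    d = [p[3] for p in (p1, p2, p3)]
    det = det3(a)
    if abs(det) < 1e-12:
        raise ValueError("planes do not intersect at a single point")
    point = []
    for k in range(3):                    # Cramer's rule, column by column
        m = [row[:] for row in a]
        for i in range(3):
            m[i][k] = d[i]
        point.append(det3(m) / det)
    return tuple(point)

if __name__ == "__main__":
    # x = 1, y = 2, z = 3 is the common point of these three planes
    print(intersect_planes((1, 0, 0, 1), (0, 1, 0, 2), (0, 0, 1, 3)))
```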

2.4 Punctual Entity Acquisition with Topographic Survey

The previous sections described how punctual entities and their three-dimensional coordinates were obtained, either by lines intersection after linear regression or by planes intersection. A conventional topographic survey was conducted with a total station and a pair of GNSS receivers to verify the quality of these determinations. The primary goal of this survey was to acquire the punctual entities used as ground truth (checkpoints) for the three-dimensional coordinates obtained with the proposed method for extracting punctual entities from laser scanning point clouds. Figure 6 illustrates the scheme of the field topographic irradiation survey.

Figure 6:
Sketch of punctual entities by irradiation survey.

During the survey, elements such as trees and lighting cables impaired data acquisition, so roof edges were not always completely surveyed. Likewise, not all ridges were surveyed, owing to the difficulty of properly sighting and precisely locating a punctual detail with the topographic survey method. Figures 7 and 8 illustrate the field procedure for surveying points. Figure 7 shows the GNSS receiver and total station set up for the topographic irradiation technique, whereas Figure 8 shows the visualization of punctual entities on roof edges and ridges.

Figure 7:
Conventional topographical survey by irradiation.

Figure 8:
Punctual entities on the edge and ridge.

2.5 Trend and PEC Analyses for Edge Punctual Entities

Quality evaluation is a major step in geospatial data extraction. In Brazil, the cartographic accuracy standard (PEC) is the estimator most often used for this purpose. Besides PEC, we also employed the statistical trend analysis proposed by Dalmolin and Leal (2001).

2.5.1 Trend Analysis

Dalmolin and Leal (2001), Galo and Camargo (1994), and Tommaselli et al. (1988) proposed trend analysis as a complementary evaluation method for PEC analysis. This method verifies the accuracy of, and the presence of a trend in, measurements of geospatial data coordinates. The presence of a trend is verified through an accuracy analysis, which consists in observing whether the average discrepancy equals zero (De Carvalho and Da Silva, 2018). In our study, the three-dimensional discrepancies are the differences between the coordinates obtained by the proposed method and the field measures of the topographic survey, considered ground truth or checkpoints (see item 2.4). Trend analysis was applied using Student's t-test, which is appropriate for comparing means in hypothesis testing. The following hypotheses are verified:

H₀: μΔE,ΔN,ΔH = 0 (Null Hypothesis) (2)

H₁: μΔE,ΔN,ΔH ≠ 0 (Alternative Hypothesis) (3)

The value μΔE,ΔN,ΔH corresponds to the mean of the sampling discrepancies for the analyzed point set (ΔEᵢ, ΔNᵢ, ΔHᵢ), i = 1, …, n, where n is the number of checkpoints in the sample. Student's t-test is performed by calculating the random variable t (Equation 4), which has a t-distribution when μ is normally distributed. The null hypothesis (Equation 2) is accepted when the absolute value of the calculated t is less than the t-distribution value for significance level α and n − 1 degrees of freedom (Equation 5). Otherwise, the alternative hypothesis is accepted (Equation 3).

t = (μΔE,ΔN,ΔH / SΔE,ΔN,ΔH) · √n (4)

|t| < t(n−1, α/2) (5)
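Equations 4 and 5 can be sketched directly: compute t = (mean/SD)·√n for a set of discrepancies and compare it with the tabulated value (the paper uses 1.31 at 90% confidence). The discrepancy values below are illustrative, not the study's measurements.

```python
# Sketch of the trend test (Equations 4-5): a one-sample t statistic
# on the checkpoint discrepancies, compared with a tabulated value.
import math

def t_statistic(discrepancies):
    n = len(discrepancies)
    mean = sum(discrepancies) / n
    var = sum((d - mean) ** 2 for d in discrepancies) / (n - 1)  # sample variance
    return (mean / math.sqrt(var)) * math.sqrt(n)

def has_trend(discrepancies, t_critical):
    """True when |t| >= t_critical, i.e. the mean discrepancy is not zero."""
    return abs(t_statistic(discrepancies)) >= t_critical

if __name__ == "__main__":
    dn = [0.21, 0.18, 0.25, 0.19, 0.22, 0.20, 0.17, 0.23]  # hypothetical DN values (m)
    print(round(t_statistic(dn), 2), has_trend(dn, 1.31))
```

Because all the sample discrepancies here are positive, the statistic far exceeds the threshold and the test flags a trend, mirroring the behavior reported for the N component in Section 3.3.1.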

2.5.2 PEC Analysis

The second analysis concerns the accuracy of the punctual entity coordinates determined by linear regression and lines intersection. PEC analysis consists in comparing the sample standard deviation (SD) with the standard error (SE), according to the cartographic accuracy standard (PEC), later updated to the cartographic accuracy standard for digital cartographic products (PEC-PCD) by the Geographic Service Board (DSG, 2016). This study adopts the Class A PEC-PCD values (Table 1) for planimetry and altimetry.

Table 1:
Class A for PEC-PCD values.

Accuracy was evaluated by comparing variances using the chi-square distribution (Dalmolin and Leal, 2001). The following hypotheses were verified: the null hypothesis, in which the calculated variance (S²ΔE,ΔN,ΔH) equals that recommended in the ET-PCDG specification (σ²); and the alternative hypothesis, in which the calculated variance differs from the recommended one.

H₀: S²ΔE,ΔN,ΔH = σ² (Null Hypothesis) (6)

H₁: S²ΔE,ΔN,ΔH ≠ σ² (Alternative Hypothesis) (7)

The test of variances, performed by calculating the random variable in Equation 8, follows a chi-square distribution when the random variable (the coordinate discrepancy on each of the three axes) is normally distributed. The test variable value is then compared with the tabulated χ² (chi-square distribution) value with (v − 1) degrees of freedom and significance level α (Inequality 9). The null hypothesis (Equation 6) is accepted if the inequality holds; otherwise, the alternative hypothesis is accepted.

χ²ΔE,ΔN,ΔH = (n − 1) · (2 · S²ΔE,ΔN,ΔH / σ²ΔE,ΔN,ΔH) (8)

χ²₀ ≤ χ²(v−1, α) (9)
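Equations 8 and 9 can likewise be sketched as a small function: the statistic (n−1)·2·S²/σ² is compared with the tabulated chi-square value (37.90 in the paper, for 27 degrees of freedom at 90%). The discrepancies and σ below are illustrative, not the study's values.

```python
# Sketch of the variance test (Equations 8-9) used for the PEC-PCD
# classification; sigma is the recommended standard error per axis.

def chi2_statistic(discrepancies, sigma):
    n = len(discrepancies)
    mean = sum(discrepancies) / n
    s2 = sum((d - mean) ** 2 for d in discrepancies) / (n - 1)  # sample variance
    return (n - 1) * (2.0 * s2 / sigma ** 2)

def meets_pec(discrepancies, sigma, chi2_critical):
    """True when the computed statistic does not exceed the tabulated value."""
    return chi2_statistic(discrepancies, sigma) <= chi2_critical

if __name__ == "__main__":
    de = [0.10, -0.05, 0.20, -0.15, 0.08, 0.12, -0.02, 0.05]  # hypothetical DE values (m)
    print(meets_pec(de, sigma=0.30, chi2_critical=37.90))
```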

3. Results and Discussions

Only the results obtained by extracting punctual entities using linear regression and lines intersection are presented and discussed in detail. Those obtained by planes intersection were only compared with the ground truth acquired in the topographic survey, having been discussed at length by Costa, Mitishita and Martins (2018).

3.1 Results Obtained with Linear Regression and Lines Intersection

Punctual entities determined by linear regression and lines intersection are the roof ridge points A and F, and roof edges points B, C, D, and E, as shown in Figure 9.

Figure 9:
Hip roofs’ line segments.

Laser points near roof edges were obtained by the semi-automatic selection explained in item 2.2.1. From these points, we calculated the mean of the altimetric coordinates for the four line segments (BC, CD, DE, EB) using spreadsheet software. This allowed us to establish the same altitude for the roof edge points, along with its standard deviation, defining the edges; for example, the standard deviation for one roof was 0.06 m. We then obtained the lines defining the edges by linear regression and estimated the α and β coefficients and the coefficient of determination (R²) of each line. Table 2 shows the values estimated for the four lines of one roof. R² was estimated to evaluate the overall quality of the linear regression, i.e., of the line adjustment to the laser points considered as edge. As shown in Table 2, R² values ranged from 96% to 99%, suggesting a precise adjustment. To determine the punctual entities of the roof edges, the lines were intersected using the α and β coefficients estimated by the regressions of the four lines. Table 3 shows the coordinates of the edge punctual entities of the roof used as an example in Figure 9.

Table 2:
Calculated coefficients for the edges of the roof.

Table 3:
Coordinates of the punctual entities at the roof edges.

After determining the roof edge coordinates, we determined the coordinates of the roof ridge punctual entities (points A and F, Figure 9) using the same procedure: a semi-automatic selection of the laser points near the line connecting the two ridge points (line A-F, Figure 9). We then calculated the mean of these points' altimetric coordinates, enabling us to establish the same altitude for the roof ridge points. Linear regression was used to obtain the line crossing points A and F, its α and β coefficients, and its R². With this line and the coordinates of the roof edge points (B, C, D, and E), the three-dimensional coordinates of the two ridge points (A, F) were determined by intersection. Tables 4 and 5 show the α, β, and R² coefficients and the ridge point coordinates. Due to difficulties in the topographic survey, only one ridge point per building roof was effectively used in the study for most roofs.

Table 4:
Calculated coefficients for the ridges of the roof.

Table 5:
Coordinates of the punctual entities on the ridge.

3.2 Punctual entities of roof edges versus ridges

This section evaluates the performance of linear regression and lines intersection in extracting punctual entities of roof edges and ridges from an aerial laser scanning (LiDAR) point cloud. For that, we discuss the comparison of checkpoint discrepancies (DE, DN, DH) for the punctual entities. Figures 10, 11, and 12 show the three-dimensional discrepancies obtained at 28 punctual entities of roof edges and ridges.

Figure 10:
Discrepancies in the E component of the edge and ridge punctual entities.

Figure 11:
Discrepancies in the N component of the edge and ridge punctual entities.

Figure 12:
Discrepancies in the H component of the edge and ridge punctual entities.

Table 6:
Discrepancies of the edge and ridge punctual entities.

Analyzing the checkpoint discrepancy graphs, we found that the E component (Figure 10) discrepancies for edges and ridges lie approximately between -0.50 and 1.00 m. The greater discrepancies in this component (DE) are related to ridges, as shown by the root mean square error (RMSE) values presented in Table 6: 0.37 m for ridges and 0.25 m for edges. The discrepancies in the N component (Figure 11) vary approximately between -0.50 and 0.50 m, but the greatest discrepancies are again related to ridges, resulting in an RMSE of 0.27 m for ridges and 0.21 m for edges. As for the H component, the discrepancies (DH) are smaller than in the planimetric components, ranging approximately from -0.25 m to 0.16 m, as shown in Figure 12. However, the greater discrepancies are again related to ridges, with an RMSE of 0.10 m for ridges and 0.08 m for edges, as shown in Table 6. Based on the results displayed in Figures 10, 11, and 12 and in Table 6, we may conclude that punctual entities of roof edges have higher planimetric accuracy than those of ridges. The difficulty of measuring ridge punctual details by topographic irradiation (mentioned in item 2.4) can be deemed the main cause of the greater planimetric discrepancies for this entity.
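The summary statistic used above is straightforward: a discrepancy is the checkpoint coordinate minus the extracted coordinate, and the RMSE aggregates them per axis. A minimal sketch with illustrative coordinates (not the paper's measurements):

```python
# RMSE of per-axis checkpoint discrepancies, as used to compare
# edge and ridge punctual entities. Coordinates are illustrative.
import math

def rmse(discrepancies):
    return math.sqrt(sum(d ** 2 for d in discrepancies) / len(discrepancies))

if __name__ == "__main__":
    extracted = [(10.0, 20.0), (30.0, 40.0), (50.0, 60.0)]    # (E, N) from the method
    checkpoints = [(10.2, 19.9), (29.8, 40.1), (50.1, 60.2)]  # (E, N) from topography
    de = [c[0] - e[0] for c, e in zip(checkpoints, extracted)]
    dn = [c[1] - e[1] for c, e in zip(checkpoints, extracted)]
    print(round(rmse(de), 3), round(rmse(dn), 3))
```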

3.3 Trend and PEC analyses for edge punctual entities

3.3.1 Trend Analysis

The three-dimensional coordinates of the roof edge punctual entities obtained with linear regression and lines intersection underwent trend analysis, as described in item 2.5.1. Table 7 presents the obtained results, expressed as Boolean values (yes or no) indicating whether each coordinate component (axis) shows a trend. To verify the trend, the calculated values were compared with the threshold t value of 1.31, for 90% confidence. Thus, a calculated value greater than 1.31 indicates a trend in the component (axis), returning a "yes" in the respective result cell of Table 7; values lower than 1.31 indicate a lack of trend, returning a "no". The results presented in Table 7 indicate a trend in the N component. This trend can be observed in Figure 11, where most discrepancies (DN) are positive but acceptable when compared with the spatial resolution of the LiDAR aerial laser scanning survey, of approximately 0.20 m.

Table 7:
Trend analysis of discrepancies for edges and ridges.

3.3.2 PEC Analysis

The PEC analysis was performed according to the method presented in item 2.5.2. This analysis consists of comparing calculated statistics with critical values of the chi-square distribution. The chi-square threshold for a 90% confidence level and 27 degrees of freedom is 37.90 (Reis, 1996). Thus, calculated values (χ²DE,DN,DH) lower than 37.90 indicate that the three-dimensional accuracy of the coordinates of roof-edge punctual entities determined with linear regression and lines intersection meets the established PEC accuracy.
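The statistic of this test can be sketched as a chi-square ratio between the sample variance of the discrepancies and the standard error tolerated for the class and scale under test. A minimal sketch with hypothetical discrepancies; the tolerance value `sigma_allowed` is illustrative and would in practice be taken from the PEC-PCD table for the chosen class and scale:

```python
def pec_chi_square(discrepancies, sigma_allowed):
    """Chi-square statistic for the PEC precision test:
    chi2 = (n - 1) * s^2 / sigma_allowed^2, where s is the sample standard
    deviation of the discrepancies."""
    n = len(discrepancies)
    mean = sum(discrepancies) / n
    s2 = sum((d - mean) ** 2 for d in discrepancies) / (n - 1)
    return (n - 1) * s2 / sigma_allowed ** 2

CHI2_CRITICAL = 37.90  # paper's threshold: 90% confidence, 27 degrees of freedom

# Hypothetical E-component discrepancies (m) and an illustrative tolerance (m)
de = [0.30, -0.15, 0.20, -0.25, 0.10, 0.05]
print(pec_chi_square(de, 0.34) < CHI2_CRITICAL)  # True: accuracy meets the class
```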

Table 8:
Precision analysis (Class A) of roof-edge corner punctual entities.

The statistical test results may be observed in Table 8, which shows the accuracy analysis (Class A) of the coordinates of edge punctual entities determined by linear regression and lines intersection. This test employed the three-dimensional checkpoint discrepancies and yielded values lower than those expected for the 1/2,000 scale, classifying the results as Class A for this and smaller scales.

3.4 Comparison of survey methods for ridge points

To verify the performance of linear regression and lines intersection in obtaining roof-ridge punctual entities, we compared the three-dimensional coordinates obtained by this method with those obtained by planes intersection, used by Costa, Mitishita and Martins (2018). It is important to emphasize that our study used the data from the above-mentioned article for comparison. Figures 13, 14, 15, 16, and 17 show the obtained results.

Figure 13:
Average discrepancies of ridge coordinates.

Figure 14:
RMSE of ridge coordinate discrepancies.

Figure 15:
Discrepancies in the E component for ridge punctual entities.

Figure 16:
Discrepancies in the N component for ridge punctual entities.

Figure 17:
Discrepancies in the H component for ridge punctual entities.

Figure 13 allows us to verify that both methods reach mean discrepancy values close to zero in the three axes, indicating an acceptable trend when compared with the nominal accuracy of an aerial laser scanning survey. A more rigorous analysis, however, reveals that the planimetric component (N) is, in both methods, the furthest from the expected value of zero, evidencing a small systematic error in this component, possibly attributable to inaccuracy in the LiDAR topographic survey. Conversely, the results obtained with linear regression and lines intersection show a lower trend in the three-dimensional coordinates when comparing mean values. Figure 14 presents the RMSE of the three-dimensional discrepancies of the three components for the two methods. Although we faced difficulties in accurately measuring the ridge point detail in the topographic survey, comparing the RMSE values of the three coordinates enables us to conclude that linear regression and lines intersection has better accuracy in determining three-dimensional coordinates. Figures 15, 16, and 17 show, graphically and in detail, the three-dimensional discrepancies at the ridge points for the coordinates determined by linear regression and lines intersection and for those determined by planes intersection. Both methods present the highest discrepancy values in the E component (axis). In all three components (E, N, and H), the highest discrepancies are related to planes intersection. The main difference between the two methods concerns the selection procedure: whereas linear regression and lines intersection used a semi-automatic selection of the laser points near the lines defining the ridges, planes intersection used an autonomous procedure to define the planes. This condition may explain the higher accuracy of the punctual entity coordinates obtained with linear regression and lines intersection.
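The geometric core of the proposed method can be illustrated in two dimensions: fit a line by least squares to the laser points selected near each roof edge, then intersect the two fitted lines to obtain the corner or ridge endpoint. A minimal sketch with hypothetical points; a real implementation would use a general (parametric) line form, since the slope-intercept form used here fails for near-vertical edges:

```python
def fit_line(points):
    """Least-squares fit of y = a*x + b to (x, y) point pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def intersect(line1, line2):
    """Planimetric intersection of two lines given as (a, b) in y = a*x + b."""
    a1, b1 = line1
    a2, b2 = line2
    x = (b2 - b1) / (a1 - a2)
    return x, a1 * x + b1

# Hypothetical laser points selected near two roof edges meeting at a corner
edge1 = [(0.0, 0.1), (1.0, 1.0), (2.0, 2.1), (3.0, 2.9)]  # roughly y = x
edge2 = [(0.0, 6.0), (1.0, 5.1), (2.0, 3.9), (3.0, 3.1)]  # roughly y = 6 - x
corner = intersect(fit_line(edge1), fit_line(edge2))
print(corner)  # near the true corner (3, 3)
```

Averaging over many noisy laser points in the regression is what lets the intersected corner land closer to the true position than any individual point in the cloud.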

3.5 Trend and PEC analyses for ridge punctual entities

3.5.1 Trend Analysis

As discussed in item 3.3.1, trend analysis was also applied to the three-dimensional coordinates obtained by the two methods for extracting roof-ridge punctual entities from the LiDAR point cloud, under the same conditions presented and discussed in that item. To verify the trend, the calculated values were compared with the threshold value t = 1.31 for 90% confidence. Thus, a t-test result greater than 1.31 indicates a trend in the axis, returning a “yes” in the respective result cell of Table 9; a lower result indicates a lack of trend, returning a “no”.

Table 9:
Trend analysis of discrepancies for ridges.

The results in Table 9 show that both methods used for extracting ridge punctual entities yielded planimetric coordinates without trends. This result differs from the trend study of the planimetric coordinates of edge punctual entities obtained with linear regression and lines intersection, which presented a small planimetric trend in the N component. However, as shown in Table 9, the statistic value for the N component (1.24) approaches the value that would reject the hypothesis of no trend. As for altimetry, the H component obtained by planes intersection reached a statistic value indicating a trend close to 0.20 m (twice the altimetric accuracy of a LiDAR survey), as shown in Figures 14 and 17.

3.5.2 PEC Analysis

The procedures used for framing the coordinates of roof-edge punctual entities determined by linear regression and lines intersection (presented in item 3.3.2) were also applied to the PEC analysis of ridge punctual entities, for the two procedures studied. Likewise, the conditions for rejecting or accepting the null hypothesis presented in that item were adopted. The chi-square threshold for a 90% confidence level and 27 degrees of freedom is 37.90; thus, chi-square statistic (χ²DE,DN,DH) values lower than 37.90 indicate that the procedure used for extracting the ridge punctual entity meets the PEC established for the analyzed scale. Table 10 presents the analysis results, which show that the accuracy of the three-dimensional coordinates of ridge punctual entities obtained by linear regression and lines intersection meets Class A PEC at the 1/3,000 scale and smaller scales. When using planes intersection, the statistic values meet Class A PEC at the 1/5,000 scale and smaller scales.

Table 10:
Precision analysis (Class A) of ridge punctual entities.

4 Final Considerations

This work reported a study conducted to extract three-dimensional coordinates of punctual entities from a LiDAR point cloud using linear regression and lines intersection in an urban region. The selected punctual entities were roof edges and ridges of four-plane (hip roof) rectangular buildings identifiable on the ground. We compared the three-dimensional coordinates extracted with the proposed procedure with those obtained by a second method, planes intersection, presented by Costa, Mitishita and Martins (2018). Accuracy was verified with checkpoints whose coordinates were determined by topographic survey (ground truth). Based on our results, we issue the following conclusions and recommendations. Linear regression and lines intersection provided more accurate three-dimensional coordinates for roof-edge punctual entities than for ridge entities. We faced difficulties in ground-measuring and visualizing punctual details by topographic irradiation, which resulted in inaccurate coordinates for ridge checkpoints. Statistical tests verified a small trend in the N component of the roof-edge punctual coordinates extracted with the proposed procedure; however, this trend is acceptable when compared with the nominal planimetric accuracy of a LiDAR aerial laser scanning survey. The planimetric accuracy of the three-dimensional coordinates of roof-edge punctual entities met Class A PEC-PCD at the 1/2,000 scale and smaller scales. We also found that the three-dimensional coordinates of ridge punctual entities extracted with linear regression and lines intersection achieved higher planimetric and altimetric accuracy than those extracted with planes intersection.
The accuracy of the ridge entity coordinates obtained with the proposed procedure met Class A PEC-PCD at the 1/3,000 scale and smaller scales; those extracted with planes intersection met Class A PEC-PCD at the 1/5,000 scale and smaller scales. The lower accuracy of the three-dimensional coordinates obtained with planes intersection may be explained by inaccuracy in establishing the planes that define the roof ridges, since Costa, Mitishita and Martins (2018) adopted an autonomous method to select the points of the respective hip-roof planes. Our results demonstrate the applicability of linear regression and lines intersection for extracting punctual entities from a LiDAR point cloud for photogrammetric mapping processes of Class A PEC-PCD at the 1/2,000 scale and smaller scales in several fields of engineering. However, this method is limited by the semi-automatic selection of laser scanning points near roof edges and ridges. Further research will address methods for automatically selecting these points and for automating the proposed method for extracting punctual entities from roof edges and ridges.

REFERENCES

  • Balado, J.; Díaz-Vilariño, L.; Arias, P.; Lorenzo, H. Point clouds for direct pedestrian pathfinding in urban environments. ISPRS Journal of Photogrammetry and Remote Sensing, 148, 184-196, 2019.
  • Balado, J.; González, E.; Arias, P.; Castro, D. Novel approach to automatic traffic sign inventory based on mobile mapping system data and deep learning. Remote Sensing, 12(3), 442, 2020a.
  • Balado, J.; Van Oosterom, P.; Díaz-Vilariño, L.; Meijers, M. Mathematical morphology directly applied to point cloud data. ISPRS Journal of Photogrammetry and Remote Sensing, 168, 208-220, 2020b.
  • Baxes, G. A. Digital image processing: principles and applications 1ª Edição. ed. [S.l.]: [s.n.], 1994.
  • Bretar, F. Feature extraction from LiDAR data in urban areas. Topographic laser ranging and scanning: principles and processing CRC Press, Taylor & Francis Group, 2009, 590p.
  • Costa, F. A. L.; Mitishita, E. A.; Martins, M. The Influence of Sub-Block Position on Performing Integrated Sensor Orientation Using In Situ Camera Calibration and Lidar Control Points. Remote Sensing, v. 10, n. 2, p. 260, 2018.
  • Csanyi, N.; Toth, C. K. Improvement of lidar data accuracy using LiDAR-specific ground targets. Photogrammetric Engineering & Remote Sensing, v. 73, n. 4, p. 385-396, 2007.
  • Dalmolin, Q.; Leal, E. M. Análise da qualidade posicional em bases cartográficas geradas em CAD. Boletim de Ciências Geodésicas, v. 7, n. 1, 2001.
  • De Carvalho, J. A. B.; da Silva, D. C. Métodos para avaliação da acurácia posicional altimétrica no Brasil. Revista Brasileira de Cartografia, v. 70, n. 2, p. 725-744, 2018.
  • Delara, R.; Mitishita, E. A.; Habib, A. Bundle adjustment of images from non-metric CCD camera using LIDAR data as control points. In: International Archives of XXth ISPRS Congress 2004. p. 13-19.
  • Diretoria do Serviço Geográfico. ET-PCDG: Especificação Técnica para Produtos de Conjuntos de Dados Geoespaciais 2 ed. Brasília: DSG, 2016.
  • Galo, M.; Camargo, P. O. Utilização do GPS no Controle de Qualidade de Cartas. In: COBRAC, I Congresso Brasileiro de Cadastro Técnico Multifinalitário Florianópolis, SC, 1994. pp.41-48. DOI: 10.13140/RG.2.1.1790.1603.
    » https://doi.org/10.13140/RG.2.1.1790.1603
  • Gneeniss, A. S. Integration of LiDAR and photogrammetric data for enhanced aerial triangulation and camera calibration 2014. Tese de Doutorado. Newcastle University.
  • Gonzalez, R. C.; Woods, R. C. Processamento Digital de Imagens 3ª Edição. ed. São Paulo: Pearson Education, 2010. 624 p.
  • Gross, H.; Thoennessen, U. Extraction of lines from laser point clouds. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Istanbul - Turkey, v. 36, p. 86-91, part 3, 2006.
  • Habib, A.; Ghanma, M.; Mitishita, E. A. Co-registration of photogrammetric and LIDAR data: Methodology and case study. Revista Brasileira de Cartografia , v. 1, n. 56, 2004.
  • Habib, A.; Ghanma, M.; Kim, E. LIDAR data for photogrammetric georeferencing. Proc. FIG Working Week and GSDI-8, 2005.
  • Habib, A.; Kim, E.-M.; Kim, C.-J. New methodologies for true orthophoto generation. Photogrammetric Engineering & Remote Sensing , 73(1): 25-36, 2007.
  • Hou, Q.; Ai, C. A network-level sidewalk inventory method using mobile LiDAR and deep learning. Transportation research part C: emerging technologies, 119, 102772, 2020.
  • Kersting, A. B. Quality assurance of multi-sensor systems Doctoral thesis. University of Calgary, Calgary, AB, doi:10.11575/PRISM/4531, 2011.
    » https://doi.org/10.11575/PRISM/4531
  • Kocaman, S. et al. 3D city modeling from high-resolution satellite images. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, v. 36, n. 1, 2006.
  • Ju, H.; Toth, C.; Grejner-Brzezinska, D. A. A new approach to robust LiDAR/optical imagery registration. Photogrammetrie-Fernerkundung-Geoinformation, v. 2012, n. 5, p. 523-534, 2012.
  • Li, N. et al. Registration of aerial imagery and LiDAR data in desert areas using sand ridges. The Photogrammetric Record, v. 30, n. 151, p. 263-278, 2015.
  • Luaces, M.R.; Fisteus, J.A.; Sánchez-Fernández, L.; Munoz-Organero, M.; Balado, J.; Díaz-Vilariño, L.; Lorenzo, H. Accessible Routes Integrating Data from Multiple Sources. ISPRS Int. J. Geo-Inf 10, 7, 2021.
  • Mitishita, E. A. et al. 3D monocular restitution applied to small format digital airphoto and laser scanner data. In: Proceedings of Commission III, ISPRS Congress, Istanbul. 2004.
  • Mitishita, E. A. et al. Photogrammetric and lidar data integration using the centroid of a rectangular roof as a control point. The Photogrammetric Record , v. 23, n. 121, p. 19-35, 2008.
  • Reis, E. Estatística descritiva. 3ª ed. Lisboa: Edições Sílabo, 1996. 245 p. ISBN 972-618-142-9.
  • Rönnholm, P. Registration quality-towards integration of laser scanning and photogrammetry. Publishing company EuroSDR, 2011.
  • Rottensteiner, F., Briese, C. Automatic generation of building models from LIDAR data and the integration of aerial images. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences , 34(3/W13): 174-180, 2003.
  • Santos, R. C. Extração de feições retas e cálculo de entidades pontuais a partir de dados LASER para o ajustamento relativo de faixas Dissertação apresentada ao PPGCC - Programa de PósGraduação em Ciências Cartográficas da FCT/UNESP. Presidente Prudente/SP, 2015.
  • Santos, R. C.; Galo, M.; Tachibana, V. M. Classification of LiDAR data over building roofs using k-means and principal component analysis. Boletim de Ciências Geodésicas , v. 24, n. 1, p. 69-84, 2018.
  • Stringhini, D.; Souza, I. A.; Silva, L. A.; Marengoni, M. Visão Computacional Usando OpenCV. In: PITERI, M. A.; RODRIGUES, J. C. Fundamentos de visão computacional 1ª Edição. Presidente Prudente: FCT/UNESP, 2011. p. 113-164.
  • Tommaselli, A. M. G.; Monico, J. F. G.; Camargo, P. O. Análise da Exatidão Cartográfica da Carta Imagem ‘São Paulo’. In: Anais do V Simpósio Brasileiro de Sensoriamento Remoto Natal-RN, 1988. pp.253-257.
  • Vosselman, G., Kessels, P., Gorte, B. The utilization of airborne laser scanning for mapping. International Journal of Applied Earth Observation and Geoinformation, 6(3-4): 177-186, 2005.
  • Wildan, F.; Aldino, R.; Aji, P. P. Application of LIDAR technology for GCP determination in papua topographic mapping scale 1: 50.000. In:Proceedings of the 10th Annual Asian Conference & Exhibition on Geospatial Information, Technology & Applications, Jakarta, Indonesia. 2011.
  • Zhang, Y., Zhang, Z., Zhang, J., Wu, J., 3D building modelling with digital map, LiDAR data and video image sequences. Photogrammetric Record, 20(111): 285-302, 2005.
  • Zucker, S. W. Region growing: childhood and adolescence. In: Computer Graphics and Image Processing, 3. ed. Academic Press, 1976. v. 15. Cap. 5, p. 382-399.

Publication Dates

  • Publication in this collection
    13 Aug 2021
  • Date of issue
    2021

History

  • Received
    16 Dec 2020
  • Accepted
    25 June 2021
Universidade Federal do Paraná Centro Politécnico, Jardim das Américas, 81531-990 Curitiba - Paraná - Brasil, Tel./Fax: (55 41) 3361-3637 - Curitiba - PR - Brazil
E-mail: bcg_editor@ufpr.br