
Counting of shoots of Eucalyptus sp. clones with convolutional neural network

Abstract

The objective of this work was to investigate the use of the You Only Look Once (YOLO) convolutional neural network model for the detection and efficient counting of Eucalyptus sp. shoots in stands through aerial photographs captured by unmanned aerial vehicles. For this, the significance of data organization was evaluated during the system-training process. Two datasets were used to train the convolutional neural network: one consisting of images with a single shoot and another with at least ten shoots per image. The results showed high precision and recall rates for both datasets. The convolutional neural network trained with images containing ten shoots per image showed a superior performance when applied to data not used during training. Therefore, the YOLO convolutional neural network can be used for the detection and counting of shoots of Eucalyptus sp. clones from aerial images captured by unmanned aerial vehicles in forest stands. The use of images containing ten shoots is recommended to compose the training dataset for the object detector.

Index terms
artificial intelligence; forest management; machine learning; object detection; silviculture


Introduction

Plantations of eucalyptus species can be managed by coppicing, in which wood can be harvested for two or three rotations within the same cycle without the need for stand reform, owing to the high capacity of shoots to sprout from the existing root structure after harvest (Ferraz Filho et al., 2014; Silva et al., 2020). In this scenario, shoot management becomes advantageous, requiring no costs for seedling acquisition or planting after harvest (Rode et al., 2015), and results in a profitable investment even if productivity is lower than that of high-forest management (Ferraz Filho et al., 2014).

Several factors affect shoot quantity and quality, such as harvest time, occurrence of water deficit, soil moisture (Souza et al., 2012), stump damage, soil compaction (Silva et al., 2018), and stump height (Benedito & Freitas, 2022). Therefore, for the coppice regime to be economically viable, in addition to adequate management throughout the stand, there should be a high rate of regeneration in the managed area, with shoot survival above 80% (Alvares et al., 2013).

Shoot survival rate is typically evaluated through a sampling process that includes counting live shoots three months after harvest. Sampling intensity and the walking methodology in the stands may vary according to the characteristics of the population. Almado (2015), for example, evaluated samples of 100 stumps every 3.0 ha.

Currently, the use of aerial photographs has been suggested for assessing the survival of shoots (Medauar et al., 2018; Bonfatti Júnior et al., 2019) and seedlings (Oliveira Sobrinho et al., 2018; Oliveira et al., 2020) and for counting adult individuals in forest stands (Picos et al., 2020). This shift in methodology makes the evaluation process more efficient and accurate (Pearse et al., 2020), in addition to enabling other analyses, such as checking for the presence of weeds and the occurrence of pest attacks, estimating the canopy area of shoots, and evaluating the quality of the harvesting and wood-extraction processes.

Although there are studies on tree identification using aerial imagery and segmentation (Hentz et al., 2018) or classification algorithms (Ruza et al., 2017; Silva et al., 2018), the task of counting individuals faces challenges similar to those involved in the use of object detection algorithms in computer vision, mainly convolutional neural networks. These models have been used for various purposes within the forest sciences, such as studying the quality of forest roads, wood cracks (Ma et al., 2022), the presence of knots in wooden boards (Fang et al., 2021), recognition of leaf diseases (Chen et al., 2022), identification of forest pests (Zhang et al., 2021; Yun et al., 2022), fire detection (Hossain et al., 2020; Lu et al., 2022; Mahdi & Mahmood, 2022; Zhao et al., 2022), and identification of dead trees (Li et al., 2022). All of these studies tested versions of the You Only Look Once (YOLO) algorithm, originally proposed by Redmon et al. (2016) and considered one of the most powerful algorithms for object detection.

However, inadequate datasets for training the algorithm can lead to missed detections (Dong & Liu, 2021) or lower-quality outcomes (Fang et al., 2021). Therefore, studying how to organize the dataset for training a convolutional neural network is crucial, so that researchers do not need to test various datasets when developing new algorithms.

The objective of this work was to investigate the use of the YOLO convolutional neural network model for the detection and efficient counting of Eucalyptus sp. shoots in stands through aerial photographs captured by unmanned aerial vehicles.

Materials and Methods

The data used in the study were obtained from a Eucalyptus urophylla S.T.Blake x Eucalyptus grandis W.Hill ex Maiden (Eucalyptus urograndis) clonal plantation situated in the municipality of Josenópolis, in the northern region of the state of Minas Gerais, Brazil. According to Köppen-Geiger’s classification, the climate of the region is As, tropical with a dry winter (Arthur Junior et al., 2015). The predominant soil is classified as CXbd11, comprising Cambissolos Háplicos Tb distróficos (Cambisols), Latossolos Vermelhos distróficos (Ferralsols), and Neossolos Litólicos distróficos (Leptosols) (Santos et al., 2011).

The three following stands with E. urograndis clones under coppicing management, spaced at 3.50x2.60 m, were analyzed: stand 225, with 18.21 ha (16°27'57.08"S, 42°31'14.26"W); stand 228, with 28.67 ha (16°27'45.78"S, 42°31'30.15"W); and stand 252, with 15.17 ha (16°27'09.57"S, 42°32'55.87"W). The harvest of the first rotation was carried out at 7.5 years of age, in June 2021, and data were collected in September of the same year.

The imagery of stand 225 was obtained using the Mavic Air 2 drone equipped with the FC3170 RGB camera (DJI, São Paulo, SP, Brazil), positioned at an altitude of approximately 100 m relative to the take-off point. The flight plan included a 60% frontal overlap, a 55% side overlap, and a camera tilt of 90º. Each of the 103 images collected had dimensions of 4,000 by 3,000 pixels, a radiometric resolution of 8 bits, and an imaged area of approximately 1.92 ha. The survey took 20 min to be completed and covered a total distance of 4,500 m along eight flight lines from the first to the last photograph.

Stand 228 was imaged similarly to stand 225, but the flyover was performed at a height of 80 m from the take-off point. A total of 134 photographs were collected, each with dimensions of 4,000 by 3,000 pixels, radiometric resolution of 8 bits, and an imaged area of approximately 1.23 ha. The survey took 19 min to be completed from the first to the last photograph.

The imagery of stand 252 was captured using a Mini 2 drone equipped with the RGB FC7303 camera (DJI, São Paulo, SP, Brazil), positioned at about 100 m above the take-off point. The overflight was manually piloted and tracked through the DJI Fly app, without an automated flight plan. Automatic photo capture was set for every 2 s. A total of 475 photographs were collected, each with dimensions of 4,000 by 2,250 pixels, radiometric resolution of 8 bits, and an imaged area of approximately 1.92 ha. The time required to carry out the survey was approximately 16 min from the first to the last photograph.

Orthomosaics were generated using the Agisoft Metashape software (Agisoft, 2023), with spatial resolutions of 3.7 cm for stand 225, 2.75 cm for stand 228, and 4.05 cm for stand 252.

The orthomosaic of stand 225 was imported into the QGIS software (Qgis.org, 2023) and then cropped to the extent of the planting area, resulting in an image with dimensions of 16,983x12,896 pixels. The shoots were manually vectorized in the QGIS software, resulting in a vector dataset with a total of 14,668 individual points representing the shoots.

Initially, 1,000 points were randomly selected to create rectangular polygons using the buffer function with a 2.0 m distance. The polygons were used to clip the orthomosaic, a process that generated 1,000 png images with dimensions of approximately 98x98 pixels, representing examples of isolated shoots and composing the first dataset, termed DS01 (Figure 1).
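The square-buffer clipping described above can be sketched in a few lines; a minimal illustration, assuming metric coordinates, with a hypothetical point (the function name and coordinate values are not from the study):

```python
def square_buffer(x, y, d=2.0):
    """Return the bounding box (minx, miny, maxx, maxy) of a square
    buffer of distance d around a point, analogous to the 2.0 m
    buffer used to clip the orthomosaic around each shoot point."""
    return (x - d, y - d, x + d, y + d)

# Hypothetical metric coordinates of one vectorized shoot point
minx, miny, maxx, maxy = square_buffer(651000.0, 8180000.0)
print(maxx - minx, maxy - miny)  # 4.0 4.0 (a 4 m x 4 m clip window)
```

Each such window, clipped from the orthomosaic at the stand’s spatial resolution, yields one small image centered on a shoot.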

Figure 1
Examples of images from the DS01 (A) and DS10 (B) datasets used to train the convolutional neural network, displaying one Eucalyptus urograndis shoot per image and ten shoots per image, respectively. The red polygons indicate bounding boxes that highlight the presence of eucalyptus shoots in the images.

Next, another 100 irregular polygons were randomly created inside the orthomosaic, so that each of them included ten visible shoots. These polygons were used to clip the orthomosaic, and the resulting images composed the second dataset, termed DS10 (Figure 1). The height of the images varied from 220 to 378 pixels, while their width varied from 221 to 588 pixels.

The shoots in each image were marked and labeled using the labelImg application (Tzutalin, 2015). This process generated 1,100 files with an xml extension, containing the labels and coordinates of the shoots in each image. The image and xml files were uploaded to the Roboflow platform (Roboflow, 2023) and imported into each dataset, i.e., DS01 and DS10. The platform’s standard pre-processing was applied, including auto-orientation and image resizing to 416x416 pixels. Both datasets were separated into training and validation sets, with 70% of the images for training and 30% for validation.

Two customized object detection models, one for each dataset, were trained with YOLO, version 5 (Glenn, 2020), maintaining the standard structure of the model and changing only the number of training epochs to 100.

The training and test results were evaluated through the statistical parameters of precision (P), recall (R), and accuracy (Ac) according to following equations: P = (TP) / (TP + FP), R = (TP) / (TP + FN), and Ac = (TP + TN) / (TP + TN + FP + FN), where TP is the number of true positives, TN is the number of true negatives, FP is the number of false positives, and FN is the number of false negatives. The behavior of precision and recall metrics was also evaluated over the iterations, as well as for the values of the loss of objects and loss of the bounding box functions.
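The three equations above can be written directly as functions; a minimal sketch (the counts in the usage line are illustrative, not the study’s data):

```python
def precision(tp, fp):
    # P = TP / (TP + FP): proportion of detections that are real shoots
    return tp / (tp + fp)

def recall(tp, fn):
    # R = TP / (TP + FN): proportion of existing shoots that were detected
    return tp / (tp + fn)

def accuracy(tp, tn, fp, fn):
    # Ac = (TP + TN) / (TP + TN + FP + FN)
    return (tp + tn) / (tp + tn + fp + fn)

# Illustrative counts: 90 correct detections, 5 false alarms, 10 misses
print(precision(90, 5), recall(90, 10), accuracy(90, 0, 5, 10))
```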

In the three orthomosaics created from the imagery of the stands, 30 samples of varying sizes were created, containing from 9 to 52 shoots each for stand 225, 25 to 48 shoots for stand 228, and 121 to 161 shoots for stand 252. These samples were randomly selected from various locations within the orthomosaics using the QGIS software (Qgis.org, 2023). The number of shoots detected in each sample was compared with the actual number of shoots present in it. Only shoots that were completely visible in each image were considered valid, while the others were discarded. The results were evaluated using the aforementioned statistical parameters.

Results and Discussion

Both object detection models had 232 layers and 7,246,518 parameters, the same values found by Wang et al. (2022b) when using YOLO-v5s. The processing time required for the DS01 and DS10 datasets was 15.52 and 3.67 min, respectively. Although, according to Mahdi & Mahmood (2022), smaller images contribute to a reduced processing time, the results of the present study may be related to the fact that, when only one shoot per image was considered (DS01), the number of files required was ten times higher than for sampling with ten shoots per image (DS10).

The processing time for both datasets was shorter than that reported in the literature, which may be attributed to the number of training epochs. For more than 200 training epochs, Mahdi & Mahmood (2022) observed a processing time greater than 1 hour, whereas, for 1,000 epochs, Fang et al. (2021) reported more than 7 hours of training on a computer with 16 GB of RAM and a 6 GB graphics card. In the present study, 100 epochs were chosen based on preliminary tests, which showed stabilization of the recall and precision values at a lower number of iterations. The number of iterations or training epochs is often used as a stopping criterion in the training of computer vision models.
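The stabilization-based choice of epoch count can be sketched as a simple plateau check; a minimal illustration (the per-epoch metric series below is hypothetical, not the study’s training log):

```python
def stabilized(history, window=5, tol=1e-3):
    """Return True when the metric has varied by less than tol over
    the last `window` epochs: a simple plateau stopping criterion."""
    if len(history) < window:
        return False
    recent = history[-window:]
    return max(recent) - min(recent) < tol

# Hypothetical per-epoch recall values approaching a plateau
recall_history = [0.52, 0.74, 0.86, 0.93, 0.949, 0.9495, 0.9498, 0.9499, 0.9499]
print(stabilized(recall_history))  # True
```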

The relatively short training time was also attributed to the use of Google Colaboratory, which provided graphics processing units (GPUs) for free in its virtual machine in the cloud (Mirhaji et al., 2021), eliminating the need for a high-cost local machine and making a significant difference in artificial intelligence applications (Mahdi & Mahmood, 2022). For the present work, a Tesla T4 GPU with 15 GB of RAM and 40 processing cores was made available. Using the platform, Dong & Liu (2021) were able to train an object detection model in an average time of 40 min.

The DS01 dataset resulted in a rapid evolution of the precision and recall metrics to a level close to 1, requiring fewer than 20 iterations for convergence (Figure 2 A and B). However, the DS10 dataset required at least 70 iterations for the algorithm to stabilize the metrics close to 1 (Figure 2 A and B). Wang et al. (2022a) also observed a rapid convergence during the training of a modified version of YOLO v5 for invasive plant identification. Other studies showed an increasing trend of the precision and recall metrics before reaching 100 iterations (Yan et al., 2021; Starke & Geiger, 2022; Wang et al., 2022b).

Figure 2
Results of the You Only Look Once (YOLO) convolutional neural network trained with two different datasets (DS01 and DS10) for: precision, indicating the proportion of shoots correctly detected (A); recall, showing the number of existing shoots correctly detected (B); box loss metric, presenting the accuracy of the model in centering the shoot in the predicted bounding box (C); and objectness loss, measuring the probability that a shoot exists in a region of interest (D). Black lines represent training with one Eucalyptus urograndis shoot per image (DS01), and red lines represent training with ten shoots per image (DS10).

Since the box loss function indicates the loss of the object’s bounding-box coordinates, the lower its value, the better the object identification result (Wang et al., 2022b). The values tended to converge for both datasets, with a quicker stabilization in the case of DS01 (Figure 2 C).

Regarding the objectness loss, which represents the loss of confidence in the recognized object (Wang et al., 2022b), the lower the value of this function, the higher the accuracy of the model (Chen et al., 2022). Therefore, the degree of certainty in identification tends to increase with each training epoch. The loss function quickly stabilized for the model trained with the DS01 dataset, but only stabilized from the sixtieth training epoch onward, at a slightly higher level, for DS10 (Figure 2 D). This reinforces the need for more processing time when there are many objects per image in the training dataset.

Although the models presented similar final results, they differed in their application to the datasets used for testing (Figure 3). The model trained with DS10 showed better results for all considered metrics, except precision (Table 1). In stand 225, from which the training images were sampled, there was an increase of 0.42 and 0.46 in the mean values of accuracy and recall, respectively. These results are superior to those found in the literature. Chen et al. (2022), for instance, observed that YOLO was able to detect diseases in tree leaves with precision and recall values above 0.79 and 0.66, respectively. Yan et al. (2021) found precision and recall values above 0.82 and 0.89, respectively, when applying YOLO to apple fruit detection. Wang et al. (2022a) obtained precision and recall values of 0.90 using a modified version of YOLO v5. Therefore, the model trained with DS10 is recommended for shoot detection, as it showed good results not only for the stand used in training, but also for the stands used for model validation.

Table 1
Precision, recall, and accuracy of two training methods (datasets) applied to 30 samples from three stands of Eucalyptus urograndis clones for shoot detection, in 2023, in the municipality of Josenópolis, in the state of Minas Gerais, Brazil(1).

Figure 3
Shoot detection results using two training methods, i.e., the DS01 and DS10 datasets, applied to image samples from three stands of Eucalyptus urograndis clones, in 2023, in the municipality of Josenópolis, in the state of Minas Gerais, Brazil. DS01, one shoot per image; and DS10, ten shoots per image.

Regarding the numbers of true positives and false positives underlying the precision values, the model trained with DS01 showed a greater tendency to avoid false positives, which partly explains its lower chance of classifying an object as a shoot.
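The metrics in Table 1 follow the usual detection definitions built from these true-positive, false-positive, and false-negative counts. A minimal sketch (the counts below are illustrative, not the study's data):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and a detection 'accuracy'.

    In object detection there are no true negatives, so accuracy here is
    taken as tp / (tp + fp + fn), one common convention.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, accuracy
```

With, say, 8 correctly detected shoots, 2 spurious detections, and 2 missed shoots, precision and recall are both 0.80, while accuracy is lower because it penalizes both error types at once.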

For the successful management of forestry activities, optimizing silvicultural practices and using accurate sampling methods are essential. The average error in relation to the number of shoots per hectare was 54.58% for the model trained with DS01 and 3.33% for that trained with DS10 (Table 2). The lowest error is comparable to those found in other studies. Hentz et al. (2018), for example, observed an average error of 2.75% when detecting young trees using watershed-based segmentation, Picos et al. (2020) found an error of 3.7% for tree counting with a light detection and ranging sensor, and Pearse et al. (2020) obtained an underestimate of 2.4% for pine seedling counting. In image classification using artificial neural networks, the error was 1.6% after post-processing (Ruza et al., 2017).
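The per-hectare errors compared above reduce to a relative difference between the detected and observed counts. A sketch with illustrative numbers (not the study's data):

```python
def percent_error(detected_per_ha, observed_per_ha):
    """Absolute error of the detected count relative to the observed count, in %."""
    return abs(detected_per_ha - observed_per_ha) / observed_per_ha * 100.0

# e.g., detecting 290 shoots/ha where 300 were observed gives ~3.33% error,
# on the order of the DS10 result in Table 2.
```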

Table 2
Comparison between the number of shoots of Eucalyptus urograndis clones per hectare observed in each stand and number of shoots per hectare detected by the You Only Look Once (YOLO) convolutional neural network trained with two different datasets (DS01 and DS10), in 2023, in the municipality of Josenópolis, in the state of Minas Gerais, Brazil(1).

The model trained with DS01 showed a lower count of shoots than the values observed in almost all samples from all stands (Figure 4). The worst result was found for stand 252, where the samples had a greater number of identifiable objects, leading to noise in the detection process. A possible explanation for this is that the soil color in this stand differs from that of the stand used for training YOLO; the samples in dataset DS01 have a smaller area covered by soil pixels, which can be crucial in this case. In contrast, the model trained with DS10 tended to overestimate the number of shoots in the samples with a lower number of observed individuals, especially in stands 225 and 228, because it detected weeds and less vigorous shoots as if they matched the reference standard shoots.

Figure 4
Number of shoots per hectare detected by the You Only Look Once (YOLO) convolutional neural network (ordinate), trained with two datasets (DS01 and DS10), compared with the true number of shoots per hectare (abscissa) in three stands of Eucalyptus urograndis clones, in 2023, in the municipality of Josenópolis, in the state of Minas Gerais, Brazil. DS01, one shoot per image; and DS10, ten shoots per image.

The differences obtained with the two training datasets may be related to a possible overfitting of the model trained with DS01. Overfitting occurs when the network “learns too much” from the training data and fails to generalize (Hossain et al., 2020). Although the network trained with the DS01 dataset showed good results for the precision and accuracy metrics, its precision was low when applied to images of stands 228 and 252, indicating that overfitting occurred.

The use of images with more information can improve the generalization of the model, avoiding overfitting (Wang et al., 2022b). Fang et al. (2021) suggested that data augmentation techniques can be effective in preventing overfitting and proposed the use of training image mosaics to create new examples for the network. In fact, the DS01 dataset primarily consists of images with a single shoot covering most of the area, and creating mosaics could generate examples similar to those in DS10, which includes other objects, such as weeds, wood, and anthills, that are not mapped by bounding boxes.
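Mosaic augmentation of the kind Fang et al. (2021) propose can be sketched by tiling four training images into one. This is a simplified 2×2 version; YOLOv5's implementation also jitters the mosaic center and remaps each source image's bounding boxes into the new canvas:

```python
import numpy as np

def mosaic_2x2(imgs):
    """Tile four equally sized HxWxC images into one 2H x 2W mosaic.

    The bounding boxes of each source image would be offset by the
    top-left corner of its tile (box remapping omitted for brevity).
    """
    assert len(imgs) == 4, "a 2x2 mosaic needs exactly four images"
    top = np.concatenate([imgs[0], imgs[1]], axis=1)
    bottom = np.concatenate([imgs[2], imgs[3]], axis=1)
    return np.concatenate([top, bottom], axis=0)
```

Applied to single-shoot samples like those in DS01, each mosaic would contain several shoots plus more background, yielding training examples closer in composition to DS10.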

Conclusions

  1. The You Only Look Once (YOLO) convolutional neural network can be used for the detection and counting of shoots of Eucalyptus urograndis clones using aerial images captured by unmanned aerial vehicles in forest stands.

  2. The use of images containing ten shoots is recommended, in contrast to those with only one shoot per image, for composing the training dataset for the object detector.

Acknowledgments

To Universidade Federal de Minas Gerais (UFMG), for scholarship; and to Norflor Empreendimentos Florestais, for technical support.

References

  • AGISOFT. Agisoft Metashape. Available at: <https://www.agisoft.com>. Accessed on: Oct. 9 2023.
  • ALMADO, R. de P. Manejo de brotação em áreas da ArcelorMittal BioFlorestas. Série Técnica IPEF, v.21, p.34-38, 2015.
  • ALVARES, C.A.; STAPE, J.L.; SENTELHAS, P.C.; GONÇALVES, J.L. de M.; SPAROVEK, G. Köppen’s climate classification map for Brazil. Meteorologische Zeitschrift, v.22, p.711-728, 2013. DOI: https://doi.org/10.1127/0941-2948/2013/0507.
  • ARTHUR JUNIOR, J.C.; BAZANI, J.H.; HAKAMADA, R.E.; ROCHA, J.H.T.; MELO, E.A.S.C. de; GONÇALVES, J.L. de M. Avanços nas práticas silviculturais no manejo da brotação com enfoque no aumento da produtividade e na redução de custos. Série Técnica IPEF, v.21, p.75-79, 2015.
  • BENEDITO, D.C.D.; FREITAS, L.C. de. Influence of the stump diameter and height on the growth and vigor of eucalyptus sprouts. Pesquisa Agropecuária Tropical, v.52, e70048, 2022. DOI: https://doi.org/10.1590/1983-40632022v5270048.
  • BONFATTI JÚNIOR, E.A.; LENGOWSKI, E.C.; REESE, E. Monitoramento da sobrevivência de Eucalyptus spp. por imagens obtidas por VANTs. Revista Engenharia na Agricultura, v.27, p.220-226, 2019. DOI: https://doi.org/10.13083/reveng.v27i3.911.
  • CHEN, Z.; WU, R.; LIN, Y.; LI, C.; CHEN, S.; YUAN, Z.; CHEN, S.; ZOU, X. Plant disease recognition model based on improved YOLOv5. Agronomy, v.12, art.365, 2022. DOI: https://doi.org/10.3390/agronomy12020365.
  • DONG, A.; LIU, X. Automated detection of corona cavities from SDO images with YOLO. In: INTERNATIONAL CONFERENCE ON MULTIMEDIA BIG DATA, 7., Taichung, 2021. Proceedings. Taichung: IEEE, 2021. p.49-56. BIGMM 2021. DOI: https://doi.org/10.1109/BigMM52142.2021.00015.
  • FANG, Y.; GUO, X.; CHEN, K.; ZHOU, Z.; YE, Q. Accurate and automated detection of surface knots on sawn timbers using YOLO-V5 model. BioResources, v.16, p.5390-5406, 2021. DOI: https://doi.org/10.15376/biores.16.3.5390-5406.
  • FERRAZ FILHO, A.C.; SCOLFORO, J.R.S.; MOLA-YUDEGO, B. The coppice-with-standards silvicultural system as applied to Eucalyptus plantations - a review. Journal of Forestry Research, v.25, p.237-248, 2014. DOI: https://doi.org/10.1007/s11676-014-0455-0.
  • GLENN, J. YOLOv5 by Ultralytics. 2020. Available at: <https://github.com/ultralytics/yolov5>. Accessed on: Oct. 9 2023.
  • HENTZ, A.M.K.; DALLA CORTE, A.P.; PÉLLICO NETTO, S.; STRAGER, M.P.; SCHOENINGER, E.R. Treedetection: automatic tree detection using UAV-based data. Floresta, v.48, p.393-402, 2018. DOI: https://doi.org/10.5380/rf.v48i3.
  • HOSSAIN, F.M.A.; ZHANG, Y.M.; TONIMA, M.A. Forest fire flame and smoke detection from UAV-captured images using fire-specific color features and multi-color space local binary pattern. Journal of Unmanned Vehicle Systems, v.8, p.285-309, 2020. DOI: https://doi.org/10.1139/juvs-2020-0009.
  • LI, Z.; YANG, R.; CAI, W.; XUE, Y.; HU, Y.; LI, L. LLAM-MDCNet for detecting remote sensing images of dead tree clusters. Remote Sensing, v.14, art.3684, 2022. DOI: https://doi.org/10.3390/rs14153684.
  • LU, K.; HUANG, J.; LI, J.; ZHOU, J.; CHEN, X.; LIU, Y. MTL-FFDET: A multi-task learning-based model for forest fire detection. Forests, v.13, art.1448, 2022. DOI: https://doi.org/10.3390/f13091448.
  • MA, J.; YAN, W.; LIU, G.; XING, S.; NIU, S.; WEI, T. Complex texture contour feature extraction of cracks in timber structures of ancient architecture based on YOLO algorithm. Advances in Civil Engineering, v.1, art.7879302, 2022. DOI: https://doi.org/10.1155/2022/7879302.
  • MAHDI, A.S.; MAHMOOD, S.A. An edge computing environment for early wildfire detection. Annals of Emerging Technologies in Computing (AETiC), v.6, p.56-68, 2022. DOI: https://doi.org/10.33166/AETiC.2022.03.005.
  • MEDAUAR, C.C.; SILVA, S. de A.; CARVALHO, L.C.C.; TIBÚRCIO, R.A.S.; LIMA, J.S. de S.; MEDAUAR, P.A.S. Monitoring of eucalyptus sprouts control using digital images obtained by unmanned aerial vehicle. Journal of Sustainable Forestry, v.37, p.739-752, 2018. DOI: https://doi.org/10.1080/10549811.2018.1478309.
  • MIRHAJI, H.; SOLEYMANI, M.; ASAKEREH, A.; MEHDIZADEH, S.A. Fruit detection and load estimation of an orange orchard using the YOLO models through simple approaches in different imaging and illumination conditions. Computers and Electronics in Agriculture, v.191, art.106533, 2021. DOI: https://doi.org/10.1016/j.compag.2021.106533.
  • OLIVEIRA SOBRINHO, M.F. de; DALLA CORTE, A.P.; VASCONCELLOS, B.N. de; SANQUETTA, C.R.; REX, F.E. Uso de veículos aéreos não tripulados (VANT) para mensuração de processos florestais. Enciclopédia Biosfera, v.15, p.117-129, 2018. DOI: https://doi.org/10.18677/EnciBio_2018A80.
  • OLIVEIRA, W.F. de; VIEIRA, A.W.; SANTOS, S.R. dos. Quality of forest plantations using aerial images and computer vision techniques. Revista Ciência Agronômica, v.51, e20197070, 2020.
  • PEARSE, G.D.; TAN, A.Y.S.; WATT, M.S.; FRANZ, M.O.; DASH, J.P. Detecting and mapping tree seedlings in UAV imagery using convolutional neural networks and field-verified data. ISPRS Journal of Photogrammetry and Remote Sensing, v.168, p.156-169, 2020. DOI: https://doi.org/10.1016/j.isprsjprs.2020.08.005.
  • PICOS, J.; BASTOS, G.; MÍGUEZ, D.; ALONSO, L.; ARMESTO, J. Individual tree detection in a eucalyptus plantation using unmanned aerial vehicle (UAV)-LiDAR. Remote Sensing, v.12, art.885, 2020. DOI: https://doi.org/10.3390/rs12050885.
  • QGIS.ORG. QGIS Geographic Information System. Open Source Geospatial Foundation Project. Available at: <https://www.qgis.org>. Accessed on: Oct. 9 2023.
  • REDMON, J.; DIVVALA, S.; GIRSHICK, R.; FARHADI, A. You only look once: unified, real-time object detection. In: CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, 29., 2016, Las Vegas. Proceedings. Los Alamitos: IEEE, 2016. p.779-788. CVPR 2016. DOI: https://doi.org/10.1109/CVPR.2016.91.
  • ROBOFLOW. Available at: <https://app.roboflow.com/>. Accessed on: Oct. 9 2023.
  • RODE, R.; LEITE, H.G.; OLIVEIRA, M.L.R. de; BINOTI, D.H.B.; RIBEIRO, C.A.A.S.; SOUZA, A.L. de; SILVA, M.L. da; COSENZA, D.N. Comparação da regulação florestal de projetos de fomento com áreas próprias de empresas florestais. Pesquisa Florestal Brasileira, v.35, p.11-19, 2015. DOI: https://doi.org/10.4336/2015.pfb.35.81.760.
  • RUZA, M.S.; DALLA CORTE, A.P.; HENTZ, Â.M.K.; SANQUETTA, C.R.; SILVA, C.A.; SCHOENINGER, E.R. Inventário de sobrevivência de povoamento de Eucalyptus com uso de redes neurais artificiais em fotografias obtidas por VANTs. Advances in Forestry Science, v.4, p.83-88, 2017.
  • SANTOS, H.G. dos; CARVALHO JÚNIOR, W. de; DART, R. de O.; ÁGLIO, M.L.D.; SOUZA, J.S. de; PARES, J.G.; FONTANA, A.; MARTINS, A.L. da S.; OLIVEIRA, A.P. de. O novo mapa de solos do Brasil: legenda atualizada. Rio de Janeiro: Embrapa Solos, 2011. 67p. (Embrapa Solos. Documentos, 130).
  • SILVA, N.; DALLA CORTE, A.P.; PIVA, L.R. de O.; SANQUETTA, C.R. Interpretação de imagens de veículos aéreos não tripulados para avaliação da sobrevivência de mudas em plantios florestais. Enciclopédia Biosfera, v.15, p.608-619, 2018. DOI: https://doi.org/10.18677/EnciBio_2018A56.
  • SILVA, N.F. da; BARROS, N.F. de; NEVES, J.C.L.; SCHULTHAIS, F.; NOVAIS, R.F. de; MATTIELLO, E.M. Yield and nutrient demand and efficiency of eucalyptus under coppicing regime. Forests, v.11, art.852, 2020. DOI: https://doi.org/10.3390/f11080852.
  • SOUZA, F.C. de; REIS, G.G. dos; REIS, M. das G.F.; LEITE, H.G.; ALVES, F. de F.; FARIA, R.S. de; PEREIRA, M.M. Sobrevivência e diâmetro de plantas intactas e brotações de clones de eucalipto. Floresta e Ambiente, v.19, p.44-54, 2012. DOI: https://doi.org/10.4322/floram.2012.006.
  • STARKE, M.; GEIGER, C. Machine vision based waterlogged area detection for gravel road condition monitoring. International Journal of Forest Engineering, v.33, p.243-249, 2022. DOI: https://doi.org/10.1080/14942119.2022.2064654.
  • TZUTALIN. LabelImg: Git code. 2015. Available at: <https://github.com/tzutalin/labelImg>. Accessed on: Apr. 6 2023.
  • WANG, Q.; CHENG, M.; HUANG, S.; CAI, Z.; ZHANG, J.; YUAN, H. A deep learning approach incorporating YOLO v5 and attention mechanisms for field real-time detection of the invasive weed Solanum rostratum Dunal seedlings. Computers and Electronics in Agriculture, v.199, art.107194, 2022a. DOI: https://doi.org/10.1016/j.compag.2022.107194.
  • WANG, Z.; JIN, L.; WANG, S.; XU, H. Apple stem/calyx real-time recognition using YOLO-v5 algorithm for fruit automatic loading system. Postharvest Biology and Technology, v.185, art.111808, 2022b. DOI: https://doi.org/10.1016/j.postharvbio.2021.111808.
  • YAN, B.; FAN, P.; LEI, X.; LIU, Z.; YANG, F. A real-time apple targets detection method for picking robot based on improved YOLOv5. Remote Sensing, v.13, art.1619, 2021. DOI: https://doi.org/10.3390/rs13091619.
  • YUN, W.; KUMAR, J.P.; LEE, S.; KIM, D.-S.; CHO, B.-K. Deep learning based system development for black pine bast scale detection. Scientific Reports, v.12, art.606, 2022. DOI: https://doi.org/10.1038/s41598-021-04432-z.
  • ZHANG, L.; YIN, L.; LIU, L.; ZHUO, R.; ZHUO, Y. Forestry pests identification and classification based on improved YOLO v5s. In: INTERNATIONAL CONFERENCE ON ELECTRONIC INFORMATION ENGINEERING AND COMPUTER SCIENCE, 1., Changchun, 2021. Proceedings. Changchun: IEEE, 2021. p.670-673. EIECS 2021. DOI: https://doi.org/10.1109/EIECS53707.2021.9588027.
  • ZHAO, L.; ZHI, L.; ZHAO, C.; ZHENG, W. Fire-YOLO: a small target object detection method for fire inspection. Sustainability, v.14, art.4930, 2022. DOI: https://doi.org/10.3390/su14094930.

Publication Dates

  • Publication in this collection
    04 Dec 2023
  • Date of issue
    2023

History

  • Received
    06 May 2023
  • Accepted
    16 Oct 2023
Embrapa Secretaria de Pesquisa e Desenvolvimento; Pesquisa Agropecuária Brasileira Caixa Postal 040315, 70770-901 Brasília DF Brazil, Tel. +55 61 3448-1813, Fax +55 61 3340-5483 - Brasília - DF - Brazil
E-mail: pab@embrapa.br