
CLASSIFICATION OF BANANA RIPENING STAGES BY ARTIFICIAL NEURAL NETWORKS AS A FUNCTION OF PLANT PHYSICAL, PHYSICOCHEMICAL, AND BIOCHEMICAL PARAMETERS

ABSTRACT

Brazil is currently the world's fourth-largest banana producer, with a production of around 7 million tons. In this scenario, several studies handling large amounts of data, such as climatic, morphological, and nutritional data, have been developed in an attempt to improve these numbers even further. This study aims to classify banana ripening stages by artificial neural networks (ANN) as a function of plant physical, physicochemical, and biochemical parameters. The ANN used was a three-layer feedforward backpropagation network, with eight neurons in the input layer (physical, physicochemical, and biochemical parameters), ten neurons in the intermediate layer, and two neurons in the output layer (classification of banana ripening stages). Three sample-partition configurations were evaluated. The ANN presented an excellent result in the training phase, with 100% accuracy in sample classification for all three configurations. The validation and testing phases, that is, the classification of samples that were not part of the training, showed 91.6% and 94.4% accuracy in the first and second configurations, respectively, and 89.5% accuracy in the third configuration.

KEYWORDS
Artificial intelligence; estimation; mathematical modeling; banana stages

INTRODUCTION

Brazil has stood out on the world stage in banana production, with an average total annual production of almost 7 million tons (IEA, 2019). Banana is a climacteric fruit that modifies its organoleptic characteristics, such as color, flavor, aroma, and nutritional parameters, throughout the ripening period, and the stage at which it is harvested is decisive for its storage, marketing, and pricing. The fruits should reach the market still green, with a fresh appearance and good quality. Early detection of harvest time and management of problems associated with weather, pest attack, and disease occurrence can help increase performance and subsequent profit, thus assisting decisions about harvest, transport, storage, and pricing. Several tools have been developed to reduce or even solve these problems by handling data involving uncertainty, estimation, and classification, such as fuzzy logic, artificial neural networks (ANN), and multivariate analysis. Putti et al. (2017) proposed a fuzzy mathematical model to estimate the effects of global warming on the vitality of orchids; the model showed that an increase in temperature and the lack of adequate shading can reduce plant vitality. Vasconcelos et al. (2020) proposed a mathematical method based on multivariate analysis, which explained the variations caused by irrigation and by phosphorus sources and doses throughout the crop cycle.

Bonini Neto et al. (2019) used ANN to estimate soil recovery levels as a function of chemical and physical attributes over the years, with good behavior in the training phase and good overall results. Souza et al. (2019) proposed an artificial neural network to estimate the ideal day for banana harvesting as a function of climate data; the authors could estimate whether the number of days to harvest increased or decreased with variations in the input data (rainfall; minimum, maximum, and mean temperature; photoperiod; and relative humidity). Several other studies using ANN in agriculture have been published (Rocha Neto et al., 2015; Adebayo et al., 2017; Swietlicka et al., 2017; Pentos & Pieczarka, 2017).

One of the main advantages of neural networks is their ability to handle large amounts of data efficiently and to generalize. The main reason for their use in data classification, however, is that neural networks do not assume any type of data distribution, unlike traditional parametric statistics, which assume that the data follow a normal distribution (Atkinson & Tatnall, 1997).

According to Yool (1998), neural networks can be robust when spectral data are indistinct or sparse and are capable of producing accuracies that exceed those of most pattern recognition methods based on conventional statistics.

In this context, this study aims to classify banana ripening stages (underripe, barely ripe, ripe, and overripe) using ANN as a function of plant physical, physicochemical, and biochemical parameters.

MATERIAL AND METHODS

The experiment was conducted in Tupã, SP, Brazil, using a total of 120 samples. Fruits were selected from a commercial banana plantation of the cultivar Nanicão, a triploid of Musa acuminata (AAA) from the Cavendish subgroup. Further information on the conduction of the experiment and on the analyses of the physical, physicochemical, and biochemical parameters can be found in Souza et al. (2021). Each sample for the network training consisted of 10 data: the ANN input data (eight values), representing physical (apical, median, and basal texture), physicochemical (pH, soluble solids [°Brix], and titratable acidity [g citric acid 100 g−1]), and biochemical parameters (total sugar [g 100 g−1] and ascorbic acid [mg 100 g−1]) of the plant (Souza et al., 2021), and the desired binary output data (two values), representing the banana ripening stages: (0, 0) underripe; (0, 1) barely ripe; (1, 0) ripe; and (1, 1) overripe. The two desired binary outputs were used only in the network training phase, that is, each binary output pair corresponds to one eight-value input. The network must be able to learn and then classify data that were not part of the training. After training, in the validation and testing phases, the network is considered ready to classify input data that were not part of the training, that is, to present outputs (underripe, barely ripe, ripe, and overripe) that are then compared with the desired outputs. The ANN consisted of a three-layer feedforward backpropagation network, with eight neurons in the input layer, ten neurons in the intermediate layer, and two neurons in the output layer (ripening stages) (Figure 1). The software MATLAB® was used.

FIGURE 1
ANN used in this work.
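
A minimal sketch of this configuration in MATLAB's Neural Network (Deep Learning) Toolbox is shown below. The variable names X (an 8 × 120 input matrix) and T (a 2 × 120 binary target matrix) and the choice of the gradient-descent training function 'traingd' are illustrative assumptions based on the description above, not the authors' original script; MATLAB's default training function could be used instead.

```matlab
% Sketch of the 8-10-2 feedforward backpropagation network described above.
% X: 8 x 120 matrix of inputs; T: 2 x 120 matrix of binary ripening-stage targets (assumed names).
net = feedforwardnet(10, 'traingd');   % 10 hidden neurons, gradient-descent training (assumption)
net.divideParam.trainRatio = 0.80;     % 80% of samples for training
net.divideParam.valRatio   = 0.10;     % 10% for validation (early stopping)
net.divideParam.testRatio  = 0.10;     % 10% for testing
net.performFcn = 'mse';                % mean squared error, eq. (1)
[net, tr] = train(net, X, T);          % supervised training; tr is the training record
Yob  = net(X);                         % obtained outputs for all samples
Ydes = T;                              % desired outputs
```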

Non-recurrent or feedforward networks have no memory, and their output is determined exclusively by the input and weight values, that is, they have no feedback loops. Backpropagation is a type of network training that can be unsupervised (adjusting the weights of a neural network considering only the set of input patterns, i.e., self-organizing training) or supervised (adjusting the weights of a neural network to produce the desired outputs for the set of input patterns), the latter being the case in this study (Widrow & Lehr, 1990).

The ability to learn, and thereby improve its performance, is the most important property of an ANN. The training process corresponds to an iterative process of adjustments applied to the network weights. A well-defined set of rules for solving a training problem is called a training algorithm. There are many training algorithms specific to particular neural network models, and they differ mainly in the way the weights are modified.

This study used a weight adjustment procedure based on the squared error of the neurons in the network output. The error is propagated in the opposite direction (from the output to the input), and the weight variations are determined using the gradient descent algorithm (Widrow & Lehr, 1990).

Training via backpropagation starts by presenting a pattern X to the network, which produces an output Yob. The error of each output (the difference between the desired value Ydes and the output Yob) is then calculated. The next step consists of propagating this error in the reverse direction through the network, using the partial derivative of the squared error of each element with respect to the weights, and finally adjusting the weights of each element (Widrow & Lehr, 1990). A new pattern is then presented, and the process is repeated for all patterns until convergence (the error becomes lower than a pre-established tolerance). The initial weights are normally taken as random numbers (Widrow & Lehr, 1990). The backpropagation algorithm thus adapts the weights so that the mean squared error (MSE) of the network is minimized, according to eq. (1).

(1)  $\min \; \text{Error (MSE)} = \left( Y_{ob} - Y_{des} \right)^{2}$
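
For illustration, the weight adjustment described above can be sketched for the output layer alone, using the delta rule that follows from minimizing eq. (1) with a sigmoid activation. The variable names (W, b, H, eta) and the learning rate value are assumptions for illustration; a full implementation would also propagate the error through the hidden layer.

```matlab
% Illustrative delta-rule update for the output layer only (not the authors' code).
% W: 2 x 10 output weights, b: 2 x 1 biases, H: 10 x N hidden activations, Ydes: 2 x N targets.
eta  = 0.01;                       % learning rate (assumed value)
Yob  = logsig(W * H + b);          % forward pass with sigmoid activation
E    = Ydes - Yob;                 % output error (Ydes - Yob)
dY   = E .* Yob .* (1 - Yob);      % error scaled by the sigmoid derivative
W    = W + eta * dY * H';          % gradient-descent weight adjustment
b    = b + eta * sum(dY, 2);       % bias adjustment
mse_value = mean(E(:).^2);         % mean squared error monitored until tolerance is reached
```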

RESULTS AND DISCUSSION

For each input (eight data), there is a corresponding desired output (two data). In the training phase, these outputs are used as targets, which is essential for an ANN with supervised training, and they can later also be used to compare results. In the validation and testing phases, the desired outputs are not essential; they are used only to compare the results, as presented in this work.

Table 1 and Figures 2, 3, and 4 show the results of the 120 samples for training, validation, and testing in a configuration with 96 samples for training (80%), 12 samples for validation (10%), and 12 samples for testing (10%). Figure 2(a) shows the mean squared error (MSE) of training, validation, and testing in this configuration. The iterative process stops when one of the values specified in Table 1 is reached, in this case at the 7th iteration, with a training MSE of 0.0000632. However, the values provided by the network and shown in the graphs correspond to the best validation, which occurred at the 5th iteration; the validation samples are not used in the training phase. The training MSE at the 5th iteration (best validation) was 0.002327, showing that the network was well trained. Figure 2(b) shows the histogram of the error, that is, the obtained output (Yob) relative to the desired output (Ydes), with 20 intervals for the 120 samples in the training, validation, and testing phases at the 5th iteration. The training errors were closer to zero than the validation and testing errors, which explains the performance shown in Figure 2(a).
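
Assuming the variables from the sketch in the Material and Methods section (net, tr, X, T), the quantities reported in Table 1 and Figure 2 can be retrieved from MATLAB's training record, for example:

```matlab
% Sketch: inspecting the training record (assumed to come from the earlier call to train).
e    = T - net(X);                        % error, Ydes - Yob, for all 120 samples
best = tr.best_epoch;                     % iteration with the best validation performance
fprintf('MSE at best validation: train %.6f, val %.6f, test %.6f\n', ...
        tr.perf(best+1), tr.vperf(best+1), tr.tperf(best+1));
plotperform(tr);                          % performance curves, as in Figure 2(a)
figure; histogram(e(:), 20);              % error histogram with 20 intervals, as in Figure 2(b)
```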

TABLE 1
Values specified and achieved in the training, validation and test phases of the ANN.
FIGURE 2
Training, validation and test of the ANN, (a) performance (MSE), (b) error histogram (Ydes - Yob) with 20 intervals for the 120 output samples (2 x 120 data).
FIGURE 3
Regression analysis between variables: desired output (Ydes) and obtained output (Yob), (a) training with 80% of the samples, (b) validation with 10% of the samples, (c) test with 10% of the samples and (d) all samples (100%).
FIGURE 4
ANN performance, output obtained (Yob) vs desired output (Ydes), (a) in the training phase (80% of samples), 96 samples, (b) in the validation and test phase (20% of samples that were not part of the training), 24 samples, (c) in three phases (100% of samples), 120 samples.

Figure 3 shows the regression lines (fits) and R values for the three phases of the network. Figure 4 shows the desired and obtained outputs (0, 0 – underripe; 0, 1 – barely ripe; 1, 0 – ripe; and 1, 1 – overripe), also for the three phases of the network. In Figure 3(a), the fit line Yob ≅ 0.95 Ydes + 0.025 was very close to the expected line (Yob = Ydes), with an R value of 0.9961, showing that the network was well trained, with no error in the classification of the 96 training samples (100% accuracy), as shown in Figure 4(a). Figures 3(b) and (c) show the results obtained in the validation and testing phases for the remaining 24 samples, which were not part of the training (12 samples, or 10%, for each phase). Good fits were observed, with R values of 0.9281 and 0.92183, respectively, lower than in the training phase because of classification errors. Only two samples were misclassified, one in validation (error 1) and one in testing (error 2), that is, 91.7% accuracy: error 1 produced an output of overripe (1, 1) and error 2 an output of underripe (0, 0), whereas both should have been ripe (1, 0), as shown in Figure 4(b). Figures 3(d) and 4(c) show the regression line, the R value, and the classification for 100% of the samples in the three phases of the network.
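
The regression analysis of Figure 3 can be reproduced, under the same assumed variable names, with MATLAB's regression plotting and correlation functions:

```matlab
% Sketch: regression between desired (Ydes = T) and obtained (Yob) outputs, as in Figure 3.
Yob  = net(X);                             % obtained outputs
plotregression(T, Yob, 'All samples');     % fit line and R value, as in Figure 3(d)
Rmat = corrcoef(T(:), Yob(:));             % correlation coefficient R
R    = Rmat(1, 2);
```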

Results and discussion on configuration (70%, 15% and 15%)

Table 2 and Figure 5 show the results of training, validation, and testing for another network configuration, with 84 samples (70%) for training, 18 samples for validation (15%), and 18 samples for testing (15%). The validation and testing samples did not undergo training. Table 2 shows that the iterative process stopped at the 8th iteration, with an MSE of 0.0000375 and 3 seconds of training processing time. There were two validation checks, and the best one occurred at the 8th iteration, with MSE values for validation and testing of 0.059475 and 0.002833, respectively. The results were similar to those of the first configuration (80%, 10%, and 10%), with no errors in the training phase (Figure 5(a)) and errors 1 and 2 occurring in the validation phase (Figure 5(b)), whose R value of 0.88148 reflects these errors. No errors occurred in the testing phase (Figure 5(c)), which explains its R value close to 1.0 (0.99682). Figure 5(d) shows the classification results for 100% of the samples in the three network phases. In both configurations, the errors occurred in the ripe stage; here, errors 1 and 2 presented an obtained output of overripe (1, 1) instead of ripe (1, 0). The hit percentage was 88.8% in the validation phase and 100% in the testing phase.
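
Under the assumptions of the earlier sketch, this alternative partition corresponds only to changing the division ratios before training:

```matlab
% Assumed continuation of the earlier sketch: 70% / 15% / 15% partition.
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;
[net, tr] = train(net, X, T);
```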

TABLE 2
Values specified and achieved in the training, validation and test phases with different proportions 70%, 15% and 15%, respectively.
FIGURE 5
ANN performance, output obtained (Yob) vs desired output (Ydes), (a) in the training phase (70% of samples), 84 samples, (b) in the validation phase (15% of samples that were not part of the training), 18 samples, (c) in the test phase (15% of samples that were not part of the training), 18 samples, (d) in three phases (100% of samples), 120 samples.

Results and discussion on configuration (60%, 20% and 20%)

This configuration is riskier because only 60% of the samples, that is, 72 samples, are part of the training. In this case, the ANN may not correctly classify the samples that were not part of the training, namely the validation and testing samples; Table 3 and Figure 6 confirm this. The purpose of presenting these results is to show that, depending on the network configuration, even with excellent training the results worsen when classifying samples that were not part of the training, which corresponds to the operation (diagnosis) phase of the network.

TABLE 3
Values specified and achieved in the training, validation and test phases with different proportions 60%, 20% and 20%, respectively.
FIGURE 6
ANN performance, output obtained (Yob) vs desired output (Ydes), (a) in the training phase (60% of samples), 72 samples, (b) in the validation phase (20% of samples that were not part of the training), 24 samples, (c) in the test phase (20% of samples that were not part of the training), 24 samples, (d) in three phases (100% of samples), 120 samples.

Table 3 and Figure 6 show the results of training, validation, and testing for the 60%, 20%, and 20% configuration, with 72 samples for training (60%), 24 samples for validation (20%), and 24 samples for testing (20%). The validation and testing samples did not undergo training. Table 3 shows that the iterative process stopped at the 9th iteration, with an MSE of 0.0000486 and 4 seconds of training processing time. There was no validation check; in this case, the validation is taken at the best training result, that is, the 8th iteration, which had MSE values of 0.054704 and 0.024751 for validation and testing, respectively. No errors occurred in the training phase, indicating an excellent training process (Figure 6(a)). The errors occurred in the validation and testing phases: three errors (1, 2, and 3) in the validation phase (Figure 6(b)), with an R value of 0.8867, and two errors (4 and 5) in the testing phase (Figure 6(c)), with an R value of 0.9448. Figure 6(d) shows the classification results for 100% of the samples in the three phases of the network. The errors in this configuration occurred in the ripe, underripe, and barely ripe stages: errors 1, 2, and 3 presented obtained outputs of underripe (0, 0), overripe (1, 1), and overripe (1, 1), respectively, instead of ripe (1, 0); error 4 showed an obtained output of barely ripe (0, 1) instead of underripe (0, 0); and error 5 had an obtained output of underripe (0, 0) instead of barely ripe (0, 1). The hit percentage reached 87.5% in the validation phase and 91.6% in the testing phase.

Tables 4 and 5 show the hit percentages and classification errors for the 120 samples in the three configurations (80%, 10%, and 10%; 70%, 15%, and 15%; and 60%, 20%, and 20%). The ANN with the first and second configurations showed better results, with 98.3% accuracy over the 120 samples. A higher number of errors was expected for the third configuration, as only 60% of the samples (72 of 120) were part of the training phase; the remaining samples (48 of 120), used in the validation and testing phases, were not seen during training, which tends to increase the error. Even so, only 5 of these 48 samples (10.5%) were misclassified in this configuration, as shown in Table 5.
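
The hit percentages in Table 4 can be computed from the binary stage codes; a short sketch, with the same assumed variable names, is:

```matlab
% Sketch: hit percentage from the thresholded network outputs (assumed variable names).
pred     = round(net(X));                        % 2 x 120 binary stage codes
correct  = all(pred == T, 1);                    % a sample is correct only if both bits match
accuracy = 100 * sum(correct) / numel(correct);  % overall hit percentage (%)
```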

TABLE 4
Percentage of correct answers (CA) in the classification of three presented configurations.
TABLE 5
Errors in the classification of three configurations presented.

CONCLUSIONS

The network presented good performance in the three configurations for the training phase, with 100% accuracy. In the validation and testing phases (samples that were not part of the training), only two samples were misclassified in each of the first and second configurations, with 91.6% and 94.4% accuracy, respectively, while the third configuration presented five classification errors despite excellent training, with 89.5% accuracy in the validation and testing phases. In this context, the first and second configurations showed better results. Overall, the mean accuracy reached 97.5%.

ACKNOWLEDGMENTS

The authors gratefully acknowledge and thank the financial support from the Sao Paulo Research Foundation for the Regular Project (FAPESP 2020/14166-1).

REFERENCES

  • Adebayo SE, Hashim N, Abdan K, Hanafi M, Zude-Sasse M (2017) Prediction of banana quality attributes and ripeness classification using artificial neural network. Acta Horticulturae 1152: 335 - 344. DOI: https://doi.org/10.17660/ActaHortic.2017.1152.45
    » https://doi.org/10.17660/ActaHortic.2017.1152.45
  • Atkinson PM, Tatnall ARL (1997) Introduction: neural networks in remote sensing. International Journal of Remote Sensing 18(4): 699-709. DOI: https://doi.org/10.1080/014311697218700
    » https://doi.org/10.1080/014311697218700
  • Bonini Neto A, Bonini CSB, Reis AR, Piazentin JC, Coletta LFS, Putti FF, Heinrichs R, Moreira A (2019) Automatic Recovery Estimation of Degraded Soils by Artificial Neural Networks in Function of Chemical and Physical Attributes in Brazilian Savannah Soil. Communications in Soil Science and Plant Analysis 50: 1-14. DOI: https://doi.org/10.1080/00103624.2019.1635144
    » https://doi.org/10.1080/00103624.2019.1635144
  • IEA - Instituto de Economia Agrícola (2019) Características mercadológicas da banana: oferta e consumo na metrópole paulistana em 2019. Available: http://www.iea.agricultura.sp.gov.br/out/TerTexto.php?codTexto=14851 Accessed Mar 18, 2021.
    » http://www.iea.agricultura.sp.gov.br/out/TerTexto.php?codTexto=14851
  • Pentos K, Pieczarka K (2017) Applying an artificial neural network approach to the analysis of tractive properties in changing soil conditions. Soil & Tillage Research 165: 113 - 120. DOI: https://doi.org/10.1016/j.still.2016.08.005
    » https://doi.org/10.1016/j.still.2016.08.005
  • Putti FF, Gabriel Filho LRA, Gabriel CPC, Bonini Neto A, Bonini CSB, Reis AR (2017) A Fuzzy mathematical model to estimate the effects of global warming on the vitality of Laelia purpurata orchids. Mathematical Biosciences 288: 124-129. DOI: https://doi.org/10.1016/j.mbs.2017.03.005
    » https://doi.org/10.1016/j.mbs.2017.03.005
  • Rocha Neto OC, Teixeira AS, Braga APS, Santos CC, Leão RAO (2015) Application of artificial neural networks as an alternative to volumetric water balance in drip irrigation management in watermelon crop. Engenharia Agrícola 36(1): 1 - 12. DOI: https://doi.org/10.1590/1809-4430-Eng.Agric.v35n2p266-279/2015
    » https://doi.org/10.1590/1809-4430-Eng.Agric.v35n2p266-279/2015
  • Souza AV, Bonini Neto A, Piazentin JC, Dainese Junior BJ, Gomes EP, Bonini CSB, Putti FF (2019) Artificial neural network modelling in the prediction of Banana’s Harvest. Scientia Horticulturae 257: 108724. DOI: https://doi.org/10.1016/j.scienta.2019.108724
    » https://doi.org/10.1016/j.scienta.2019.108724
  • Souza AV, Mello JM, Favaro VFS, Santos TGF, Santos GP, Sartori DL, Putti FF (2021) Metabolism of Bioactive Compounds and Antioxidant Activity in Bananas During Ripening. Journal of Food Processing and Preservation 45: 15959. DOI: https://doi.org/10.1111/jfpp.15696
    » https://doi.org/10.1111/jfpp.15696
  • Swietlicka I, Sujak A, Muszynski S, Swietlicki M (2017) The application of artificial neural networks to the problem of reservoir classification and land use determination on the basis of water sediment composition. Ecological Indicators 72: 759 - 765. DOI: https://doi.org/10.1016/j.ecolind.2016.09.012
    » https://doi.org/10.1016/j.ecolind.2016.09.012
  • Vasconcelos R, Gabriel CPC, Almeida HJ, Garcia A, Bonini Neto A, Mauad M, Gabriel Filho LRA (2020) Multivariate Behavior of Irrigated Sugarcane with Phosphate Fertilizer and Filter Cake Management: Nutritional State, Biometry, and Agroindustrial Performance. Journal of Soil Science and Plant Nutrition 20: 1625 - 1636. DOI: https://doi.org/10.1007/s42729-020-00234-w
    » https://doi.org/10.1007/s42729-020-00234-w
  • Widrow B, Lehr MA (1990) 30 years of adaptive neural networks: perceptron, madaline, and backpropagation. Proceedings of the IEEE 78(9): 1415 - 1442. DOI: https://doi.org/10.1109/5.58323
    » https://doi.org/10.1109/5.58323
  • Yool SR (1998) Land cover classification in rugged areas using simulated moderate-resolution remote sensor data and an artificial neural network. International Journal of Remote Sensing 19(1): 85 - 96. DOI: https://doi.org/10.1080/014311698216440
    » https://doi.org/10.1080/014311698216440
