Journal of Aerospace Technology and Management

On-line version ISSN 2175-9146

J. Aerosp. Technol. Manag. vol.11  São José dos Campos  2019  Epub Aug 26, 2019

https://doi.org/10.5028/jatm.v11.1055 

ORIGINAL PAPER

Simulation and Prediction of Satellite Temperature Sensors Based on an Artificial Neural Network

Hamdy Soltan Abdelkhalek1  * 
http://orcid.org/0000-0003-4542-6673

Haitham Medhat2 

Ibrahim Ziedan1 

Mohamed Amal1 

1Zagazig University - Faculty of Engineering - Computer and System Engineering - Zagazig/Sharkia - Egypt

2National Authority for Remote Sensing and Space Sciences - El-Nozha/El-Gedida - Egypt


ABSTRACT:

Spacecraft in the space environment are exposed to several kinds of thermal sources, such as solar radiation, albedo, and IR emitted from the Earth. The thermal control subsystem of a spacecraft keeps all parts operating within allowable temperature ranges. A failure in one or more temperature sensors could lead to abnormal operation. Consequently, a prediction process must be performed to replace the missing data with estimated values and prevent abnormal behavior. The goal of the proposed model is to predict failed or missing sensor readings using artificial neural networks (ANN). It has been applied to the EgyptSat-1 satellite. A backpropagation algorithm called Levenberg-Marquardt is used to train the neural networks (NN). The proposed model has been tested with one and two hidden layers. Practical metrics such as the mean square error, the mean absolute error, and the maximum error are used to measure the performance of the proposed network. The results showed that the proposed model predicted the values of one failed sensor with adequate accuracy. When employed to predict the values of two failed sensors, it achieved acceptable mean square and mean absolute errors, whereas the maximum error for the two failed sensors exceeded the acceptable limits.

KEYWORDS: Artificial neural networks; Thermal control subsystem; Thermal control simulation; Sensor values prediction; Levenberg-Marquardt algorithm

INTRODUCTION

Spacecraft in the space environment are exposed to different thermal conditions, such as solar radiation, albedo (sunlight reflected from the Earth), and IR emitted from the Earth (Fig. 1). The amount of external heat absorbed directly from solar energy is a function of the satellite material properties and its orientation with respect to the Sun (Bulut et al. 2008b). The satellite must be protected from all thermal effects. The thermal control subsystem maintains the temperature of all satellite components within acceptable ranges during all mission phases.

Figure 1 Heat exchange between the satellite and the space environment (Pngtree.com and freepik.com). 

The temperature may exceed the acceptable values due to unexpected disturbances, which may result in serious damage to satellite parts or a shorter mission lifetime (Boato et al. 2017). Also, some temperature readings may be missed due to sensor breakdown, or they may take wrong values due to abnormal heat from one or more of the electronic components. Predicting the missing values is essential to keep the satellite working normally and can help deal with problems in some of the satellite parts.

Many algorithms for predicting missing data have been developed in the last decades (Schafer and Graham 2002). The first solution is to discard the samples with missing values, a method known as listwise deletion or complete case analysis (Van Buuren 2018). It is not suitable for high rates of missing data or for temperature sensor failure in a satellite, as it may result in mission failure. The second solution is to replace the missing data with estimated values, which is known as data imputation. A variety of imputation methods have been developed, including last value carried forward, mean imputation, spectral analysis, kernel methods, matrix factorization, the EM algorithm, matrix completion, and multiple imputation (Che et al. 2018).
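As a brief illustration (not from the paper, using hypothetical readings), two of the classical imputation baselines mentioned above, last value carried forward and mean imputation, can be sketched as follows:

```python
# Hedged sketch of two simple imputation baselines. Missing sensor
# readings are represented as None; values are hypothetical.

def last_value_carried_forward(series):
    """Replace each missing value with the most recent observed one."""
    filled, last = [], None
    for v in series:
        if v is None:
            filled.append(last)   # repeat the last observed reading
        else:
            filled.append(v)
            last = v
    return filled

def mean_imputation(series):
    """Replace each missing value with the mean of the observed values."""
    observed = [v for v in series if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in series]

temps = [7.6, 7.0, None, 7.5, None]   # hypothetical readings in degrees C
lvcf = last_value_carried_forward(temps)   # [7.6, 7.0, 7.0, 7.5, 7.5]
mean_filled = mean_imputation(temps)       # gaps become mean of 7.6, 7.0, 7.5
```

Both baselines ignore the correlation between sensors, which is the information the ANN-based approach below exploits.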

Since the early 2000s, a new paradigm of thinking has emerged where missing values are treated as unknown values to be learned through a machine-learning model. In this framework, complete data samples are used as training set for a machine-learning model, which is then applied to the data samples with missing values to impute them. Both clustering (unsupervised) and classification (supervised) algorithms can be adapted for imputation (Liu and Gopalakrishnan 2017).

Using popular imputation methods leads to a time-consuming prediction procedure and may impair the prediction performance. This paper proposes a model based on an ANN to predict the failed sensor readings. An ANN can represent any linear or nonlinear system, even when no physical model or mathematical equations are available. The ANN training establishes the correlation between output and input. The proposed networks have been trained using the Levenberg-Marquardt algorithm on EgyptSat-1 (Fig. 2) satellite temperature data to estimate the failed or missing sensor values.

Figure 2 EgyptSat-1 Satellite (NARSS Foundation http://www.narss.sci.eg/). 

One of the advantages of using a NN to predict missing values over other imputation methods is that the trained network can simulate the new scenarios of the satellite.

RELATED WORK

Schafer and Graham (2002) surveyed the methods for missing data treatment and described their strengths and limitations. Batista and Monard (2003) analyzed four missing data imputation methods for supervised learning. Graham (2009) presented a practical summary of the missing data literature, including a sketch of missing data theory and descriptions of normal-model multiple imputation (MI) and maximum likelihood methods. Fung (2006) reviewed many approaches for modeling time series data with missing values, such as time series decomposition, least squares approximation, and numerical interpolation methods. Avinash et al. (2015) satisfactorily predicted the next data of wireless sensor networks using a Kalman filter. Sutagundar et al. (2016) presented ANN modeling of a micro-electro-mechanical cantilever resonator using the Levenberg-Marquardt algorithm.

Neural networks have been used in many applications to treat the missing data problem, especially in medicine. Tresp and Briegel (1998) presented a solution using a recurrent NN to predict the glucose/insulin metabolism of a diabetic patient whose blood glucose measurements are only available a few times a day at irregular intervals. Pan et al. (2017) successfully predicted relapse in pediatric acute lymphoblastic leukemia using machine learning algorithms.

This paper focuses on predicting the values of a failed or missing sensor; techniques for detecting the failed sensor itself were proposed by Gilmore (2002) and Napolitano et al. (1995). Some works in the literature, such as Reis Junior et al. (2017), Girimonte and Izzo (2007), and López-Martínez et al. (2015), proposed the use of ANNs with thermal subsystems and other systems. Song and Zhang (2014) compared four algorithms for training the NN: the quasi-Newton algorithm, the gradient descent with adaptive learning rate backpropagation algorithm, the gradient descent with momentum and adaptive learning rate backpropagation algorithm, and the Levenberg-Marquardt algorithm. Their results demonstrated that the Levenberg-Marquardt algorithm is the best one. The performance of an ANN depends strongly on its weights, biases, learning rate, number of hidden layers, and number of neurons in each hidden layer. There is no specific technique or algorithm to find the optimum number of hidden layers or the number of neurons in each layer for a given problem. However, Kamiński (2016) and An et al. (2016) presented methods for optimizing the weights, biases, and learning rate. To control the temperature of the satellite, passive and active control systems are used. Passive control is applied to most nano-satellites because of its simplicity, cost, reliability, mass, and power. Passive control can be implemented with multi-layer insulation (MLI), optical solar reflectors (OSR), paints, heat sinks, heat pipes (HP), and thermal insulation spacers. Active temperature control can be designed with heaters and temperature sensors. Bulut and Sozbir (2015) and Liu et al. (2008) provide more information about active and passive control methods.

Further information about forecasting using Levenberg-Marquardt algorithms is provided by Saini and Soni (2002). Bulut et al. (2008a) presented spacecraft thermal control design using passive and active approaches and demonstrated that the thermal control of the satellite is affected by the thermal properties of the passive approaches. Gilmore (2002) includes valuable information about the space environment and thermal control system design, testing, and analysis. Li-Ping and Hongquan (2008) demonstrated a particle filtering (PF) algorithm based on a double lumped thermal model to identify the heat flux dynamically and predict the temperature more accurately.

MATHEMATICAL MODELING FOR SATELLITE HEAT BALANCE

The overall thermal control of a satellite is usually achieved by balancing the energy emitted by the spacecraft as IR radiation against the energy absorbed from the environment. Equation 1 represents the mathematical model for the satellite heat balance.

\left(mc_p\right)_i \frac{dT_i}{dt} = Q_a - Q_d + Q_l (1)

where i refers to the node number; m is the mass; cp is the specific heat at constant pressure; and Qa, Qd, and Ql are the heat absorbed from space, the heat dissipated to space by the radiator, and the heat generated by the thermal control subsystem for heat balance, respectively. This mathematical model cannot be used to simulate and predict the behavior of new scenarios because the calculations of Qa, Qd, and Ql require assumptions about their parameters according to the satellite attitude, orientation, and position with respect to the Sun. Therefore, an ANN may be used to predict and simulate the behavior without assumptions or prior information.
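To make Eq. 1 concrete, the following sketch advances a single thermal node by forward-Euler integration. All node parameters here (mass, specific heat, heat rates) are assumed for illustration and are not EgyptSat-1 data:

```python
# Hedged sketch: one forward-Euler step of Eq. 1,
# (m*cp)_i dT_i/dt = Qa - Qd + Ql, for a single thermal node.

def euler_step(T, m, cp, Qa, Qd, Ql, dt):
    """Advance the node temperature T [K] by one time step dt [s]."""
    dTdt = (Qa - Qd + Ql) / (m * cp)   # net heat rate over thermal mass
    return T + dTdt * dt

T = 290.0             # K, assumed initial node temperature
m, cp = 2.0, 900.0    # kg and J/(kg*K), assumed aluminium-like node
Qa, Qd, Ql = 50.0, 30.0, 10.0   # W, assumed absorbed/radiated/heater power
for _ in range(60):   # one minute of simulation with 1 s steps
    T = euler_step(T, m, cp, Qa, Qd, Ql, dt=1.0)
# net heating of 30 W over 1800 J/K raises T by 1 K per minute
```

The point of the ANN approach is precisely to avoid having to assume values such as Qa, Qd, and Ql for every scenario.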

THE NEURAL NETWORK

Artificial neural networks are mathematical models inspired by biological NNs. An ANN consists of several interconnected neurons. A neuron is a mathematical model for information processing; it comprises a summation unit that receives multiple inputs and a transfer function, also known as the activation function. ANNs have the advantage of performing nonlinear mapping of multidimensional functions and predicting system outputs from limited experimental data. The main advantage of utilizing an ANN in the present case is that it does not require any assumptions about the physical components of the satellite or the space parameters. It only requires the data received from the live satellite to predict and simulate the failed sensor values.

A backpropagation neural network (BPNN) with one hidden layer can approximate any system with nonlinear characteristics, such as the satellite thermal control subsystem. Both the BPNN and the feedforward NN consist of inputs, hidden layers, and outputs, but the BPNN propagates the error values back to modify the weights and bias values. This gives the BPNN the ability to minimize the error between the NN output and the target values (Fig. 3). So in this model the BPNN may be used to predict the failed sensor values. The estimated output may be calculated by Eq. 2.

\tilde{Y} = f\left(\sum_{j=1}^{k} w_{1,j}\, f\left(\sum_{i=1}^{n} w_{i,j} x_i + b_j\right) + b_2\right) (2)

Figure 3 The structure of the backpropagation neural network. 

where Ỹ is the estimated output; wi,j denotes the weights between the inputs and the hidden layer; w1,j denotes the weights between the hidden and output layers; bj refers to the biases of the hidden layer neurons; and b2 is the bias of the output layer neuron.
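A minimal sketch of the forward pass of Eq. 2 is shown below for a toy network with n = 3 inputs and k = 2 hidden neurons. The weights, biases, and the choice of tanh as the activation f are assumptions for illustration, not the trained EgyptSat-1 network:

```python
import math

# Hedged sketch of Eq. 2: one hidden layer, tanh activation.
# All weights and inputs are hypothetical.

def forward(x, W_hidden, b_hidden, w_out, b_out):
    # hidden neuron j: f( sum_i w_ij * x_i + b_j )
    hidden = [math.tanh(sum(w * xi for w, xi in zip(w_j, x)) + b_j)
              for w_j, b_j in zip(W_hidden, b_hidden)]
    # output: f( sum_j w_1j * h_j + b2 )
    return math.tanh(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

x = [0.5, -0.2, 0.1]                              # normalized readings (assumed)
W_hidden = [[0.3, -0.1, 0.2], [0.1, 0.4, -0.3]]   # one row per hidden neuron
b_hidden = [0.05, -0.05]
w_out, b_out = [0.7, -0.6], 0.1
y = forward(x, W_hidden, b_hidden, w_out, b_out)  # estimated output, in (-1, 1)
```

Training (described below) adjusts W_hidden, b_hidden, w_out, and b_out so that y tracks the failed sensor's reading.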

DATASET APPROACHES

There are two approaches for the satellite temperature data to be used for training the network. The first approach is to predict the failed temperature sensor values depending on all parameters in the telemetry (the overall data for the satellite) such as sun sensor, earth sensor, altitude values, and the temperature readings. In this approach, the NN output represents the failed temperature sensor values, and the other parameters represent the network input. The second approach uses only the temperature sensors readings. The network output is the failed sensor values, and the network input is the other temperature sensors readings. In this paper, the second approach is adopted.

DATASET FOR EGYPTSAT-1

EgyptSat-1 is the first Egyptian satellite for remote sensing purposes. It was launched in 2007. The satellite has twenty temperature sensors distributed over all plates. Each side of the satellite has three sensors, while the top and the base have four sensors each, as shown in Fig. 4. The sample values were gathered from the readings of two years, 2009 and 2010. The total number of samples is 9027. Table 1 presents part of the samples.

Figure 4 Location of the temperature sensors on the satellite plates. 

Table 1 Part of the EgyptSat-1 Satellite data. 

Date T1 T2 T3 T4 T5 T6 T7 ... T19 T20
24/06/2009-20:27:20 7.61 6.47 2.61 -18.6 -13.5 -24.1 -2.39 ..... -31.5 -37.0
24/06/2009-20:27:21 7.04 6.13 2.8 -17.9 -13.2 -23.4 -1.59 ..... -32.38 -37.38
24/06/2009-20:27:22 7.38 5.91 2.61 -18.07 -13.3 -24.2 -1.82 ..... -33.18 -37.38
24/06/2009-20:27:23 7.5 6.59 2.61 -18.18 -13.6 -25.2 -2.16 ..... -32.95 -37.27
20/04/2010-09:11:48 19.88 9.2 9.77 31.13 24.2 26.36 18.97 ..... 11.47 66.35
20/04/2010-09:11:49 18.18 8.97 10.11 30.9 24.54 26.47 18.97 ..... 11.81 66.46
20/04/2010-09:11:50 18.52 8.97 10 30.79 24.31 26.58 18.97 ..... 11.7 66.35
20/04/2010-09:11:51 19.09 9.09 9.77 30.67 24.31 26.58 18.97 ..... 11.7 66.12

Temperatures are given in °C.

TESTING THE PERFORMANCE

The performance is measured by: (I) the mean square error (MSE), which is frequently used to measure the difference between the values predicted by a model and the values actually observed; (II) the mean absolute error (MAE), which measures the average error over the whole dataset; and (III) the maximum error for each network. A better model provides the least maximum error value together with acceptable MSE and MAE, where (Eqs. 3 and 4)

MSE = \frac{1}{n}\sum_{i=1}^{n}\left(\tilde{Y}_i - Y_i\right)^2 (3)

MAE = \frac{1}{n}\sum_{i=1}^{n}\left|\tilde{Y}_i - Y_i\right| = \frac{1}{n}\sum_{i=1}^{n}\left|e_i\right| (4)

where Ỹi are the predicted values; Yi are the actual values; and n represents the number of samples.
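The three metrics of Eqs. 3 and 4 plus the maximum error can be computed directly; the temperature pairs below are hypothetical, chosen only to exercise the formulas:

```python
# Sketch of the performance metrics: MSE (Eq. 3), MAE (Eq. 4),
# and the per-network maximum error.

def mse(pred, actual):
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred)

def mae(pred, actual):
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(pred)

def max_error(pred, actual):
    return max(abs(p - a) for p, a in zip(pred, actual))

pred   = [7.6, 6.4, 2.7, -18.5]   # hypothetical ANN outputs, degrees C
actual = [7.5, 6.5, 2.6, -18.6]   # hypothetical sensor readings, degrees C
# every error is 0.1 C, so MSE = 0.01, MAE = 0.1, max error = 0.1
```

Note that MSE and MAE summarize average behavior, while the maximum error captures the worst single prediction, which is why the paper treats it as the binding criterion against the ±5 °C tolerance.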

THE PROPOSED NEURAL NETWORKS

Training the proposed networks was performed using the backpropagation Levenberg-Marquardt algorithm (ALM). It is widely accepted as one of the most efficient algorithms in terms of accuracy because it interpolates between the Gauss-Newton algorithm (GNA) and the method of gradient descent. It is also the fastest backpropagation algorithm in the Matlab toolbox and is highly recommended as a first choice among the supervised algorithms.
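The interpolation between Gauss-Newton and gradient descent can be illustrated with a minimal Levenberg-Marquardt loop. This is a hedged sketch on a toy linear model y = a·x + b, not the authors' Matlab implementation: the damping factor lam shrinks toward Gauss-Newton when a step reduces the residual and grows toward gradient descent when it does not:

```python
# Minimal Levenberg-Marquardt sketch (hypothetical example):
# fit y = a*x + b by solving the damped normal equations
# (J^T J + lam*I) delta = J^T r at each iteration.

def lm_fit(xs, ys, a=0.0, b=0.0, lam=1e-3, iters=50):
    for _ in range(iters):
        r = [y - (a * x + b) for x, y in zip(xs, ys)]   # residuals
        # Jacobian columns of the model: d/da = x, d/db = 1
        JtJ = [[sum(x * x for x in xs), sum(xs)],
               [sum(xs), float(len(xs))]]
        Jtr = [sum(x * ri for x, ri in zip(xs, r)), sum(r)]
        # damped 2x2 system solved by Cramer's rule
        A = [[JtJ[0][0] + lam, JtJ[0][1]],
             [JtJ[1][0], JtJ[1][1] + lam]]
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        da = (Jtr[0] * A[1][1] - A[0][1] * Jtr[1]) / det
        db = (A[0][0] * Jtr[1] - Jtr[0] * A[1][0]) / det
        new_a, new_b = a + da, b + db
        new_r = [y - (new_a * x + new_b) for x, y in zip(xs, ys)]
        if sum(e * e for e in new_r) < sum(e * e for e in r):
            a, b, lam = new_a, new_b, lam / 10   # accept: toward Gauss-Newton
        else:
            lam *= 10                            # reject: toward gradient descent
    return a, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]   # exactly y = 2x + 1
a, b = lm_fit(xs, ys)            # converges to a = 2, b = 1
```

In the actual model, the same damped update is applied to all network weights and biases, with the Jacobian taken with respect to every parameter.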

It is well known that many parameters affect the performance of neural networks, such as the number of hidden layers, the number of neurons in the hidden layers, and the initial weights and biases. However, no established approach or algorithm finds the optimum number of neurons for a given problem. Therefore, to choose the best-trained network for predicting one or two failed sensor values, many networks have been designed and tested. Considering constant weights and biases, Fig. 5 shows the procedure of the proposed model for the prediction operation.

Figure 5 The procedures performed for the prediction operation. 

Two procedures for designing the neural networks are proposed. The first procedure is used to predict the missing values of one failed sensor. The output layer of the designed network consists of one neuron for the estimated values of the failed sensor. The input layer consists of 19 neurons corresponding to the correct readings of the 19 sensors in addition to one hidden layer as in Fig. 6. In the second procedure, two failed sensors were considered. In this case, two neural networks were designed. Each network is used to predict one of the two missed sensors and consists of one hidden layer, 18 input neurons for the correct 18 sensor readings and one output for the generation of the estimated values of the failed sensor (Fig. 7).

Figure 6 The first neural network model, 19 inputs and one output. 

Figure 7 (a) Predicting values of the T3 sensor using 18 sensor readings. (b) Predicting the values of T5 using the other 18 sensors. 

Three processes for predicting the failed sensors have been performed. The first is the training process, in which the network is trained with seventy percent (70%) of the overall dataset. The second is the validation process, used to choose the best network among all designed networks by testing each of them with the second part of the data (15%). The last is the testing process, in which the third part of the dataset (15%) is used to measure the performance of the chosen network. The three parts of the dataset were chosen randomly.
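The random 70/15/15 split can be sketched on sample indices as follows (the seed is an assumption added for reproducibility; the sample count 9027 is from the dataset description above):

```python
import random

# Hedged sketch of the random 70/15/15 train/validation/test split.
random.seed(42)                 # assumed seed, for reproducibility
idx = list(range(9027))         # one index per telemetry sample
random.shuffle(idx)

n_train = int(0.70 * len(idx))  # 6318 samples
n_val = int(0.15 * len(idx))    # 1354 samples
train = idx[:n_train]
val = idx[n_train:n_train + n_val]
test = idx[n_train + n_val:]    # remaining 1355 samples
```

Shuffling before slicing ensures each partition mixes samples from both years (2009 and 2010) rather than splitting chronologically.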

SIMULATION AND RESULTS

PREDICTING THE VALUES OF ONE FAILED SENSOR

As shown in Fig. 5, two networks have been trained and tested to predict the missing values of one failed sensor. The first network has only one hidden layer and the second contains two hidden layers. Considering the third sensor (T3) to have failed, Fig. 8 shows the effect of the number of neurons on the performance of the one and two hidden layer networks regarding the maximum error, MSE, and MAE. The results showed that the MSE and MAE of the two hidden layer network are better than those of the one hidden layer network. The maximum error for one hidden layer is in most cases less than that for two hidden layers, but the two hidden layer network with ten neurons provides the minimum value of the maximum error, which is 2.1 °C. So the two hidden layer network with ten neurons has been chosen and tested with the third part of the dataset (the testing set) (Fig. 9). The testing results demonstrated that the maximum error is 2.75 °C, the MSE equals 0.08, and the MAE equals 0.17. The allowable tolerance is ±5 °C (Boato et al. 2017; Gilmore and Donabedian 2002).

Figure 8 Effect of the number of neurons on the performance of the first network: (a) on the maximum error; (b, c) on the MSE and MAE. 

Figure 9 The two hidden layers network with ten neurons for each. 

In Fig. 10 the estimated values (ANN output) and the actual values of the T3 sensor are compared for the first hundred samples. The figure shows that the estimated and target values are almost identical. The error, which is the difference between the estimated and target values, is plotted in Fig. 11. It shows that the ANN can predict the failed sensor values with high accuracy.

Figure 10 Estimated values (ANN output) and target for the first hundred samples for the T3 sensor. 

Figure 11 Errors for all testing samples for the T3 sensor. 

The previous results consider only the third sensor, T3, to have failed. The same procedure has been performed for other sensors, considering each of them to fail individually. The prediction results are shown in Table 2.

Table 2 Performance of the networks regarding failure for each sensor. 

Failure Number of neurons MSE MAE Maximum error (ºC) Tolerance
Regarding T3 failed 10 0.12 0.2 2.7 Accepted
Regarding T1 failed 50 0.16 0.23 2.4 Accepted
Regarding T2 failed 35 0.15 0.22 3.4 Accepted
Regarding T4 failed 10 0.26 0.33 3.4 Accepted
Regarding T5 failed 5 0.23 0.32 3.6 Accepted
Regarding T6 failed 30 0.16 0.25 2.9 Accepted
Regarding T7 failed 10 0.14 0.25 2.3 Accepted
Regarding T8 failed 25 0.17 0.26 3.5 Accepted
Regarding T9 failed 10 0.26 0.33 4.4 Accepted
Regarding T10 failed 45 0.27 0.25 4.2 Accepted

PREDICTING THE VALUES OF TWO FAILED SENSORS

Two networks performed the prediction of the values of two failed sensors. Each network is used to predict the values of one failed sensor. It consists of 18 neurons in the input layer, corresponding to the 18 correct sensors, and one neuron in the output layer to estimate one of the two failed sensors, as indicated in Fig. 7. The number of hidden layers and the number of neurons have been varied according to the procedure shown in Fig. 5.

As shown in Fig. 12, a one hidden layer network cannot be used to predict the values of the first sensor because the maximum error exceeds the acceptable limits (±5 °C) in all cases. The two hidden layer network with 30 neurons (Fig. 13) provides acceptable values for the maximum error, MSE, and MAE. This network has been tested with the third part of the dataset (the testing dataset), and the results demonstrated that the maximum error is 6.5 °C, the MSE is 0.44, and the MAE equals 0.34.

Figure 12 Performance of the first sensor: (a) effect of neurons on the maximum error; (b) effect of neurons on the mean square error; (c) effect of neurons on the mean absolute error. 

Figure 13 The Chosen network for predicting the first sensor values. 

The results for the prediction of the second sensor values are shown in Fig. 14. The maximum error values for one hidden layer are better than those for two hidden layers. The one hidden layer network with ten neurons has been chosen because it provides the least value of the maximum error, 3.85 °C.

Figure 14 Performance of the second sensor: (a) effect of the number of neurons on the maximum error; (b) effect of the number of neurons on the mean square error; (c) effect of the number of neurons on the mean absolute error. 

After testing the chosen network with the testing dataset, the maximum error value is 5.7 °C.

As shown in Table 3, the MSE and MAE values indicate that the system predicts the two failed sensor values with good accuracy. But the maximum error is 6.5 °C for the first sensor and 5.7 °C for the second, and these values exceed the acceptable limits.

Table 3 Performance of the two neural networks for two failed sensors. 

Neural network The first sensor The second sensor
MSE 0.44 0.34
MAE 0.34 0.27
Max error (ºC) 6.5 5.7

Many factors could impact the performance of the proposed model, such as the number of hidden layers and neurons as well as the data-division percentages. Therefore, k-fold cross-validation (CV) is utilized to evaluate the average performance. Cross-validation is a technique for evaluating the performance of predictive models by showing how a model would generalize to an independent dataset.

In the k-fold cross-validation mechanism, the dataset is randomly divided into k disjoint subsets (folds) of equal size. The procedure is as follows: one of the k subsets is used as the testing set, whereas the training set is constructed from the other k − 1 folds; this process is repeated using each of the other folds in turn as the testing set; then, the overall performance is evaluated by averaging the k accuracy estimates. Results of 10-fold CV for predicting one and two failed sensor values are listed in Tables 4 and 5.
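The k-fold mechanism just described can be sketched as follows. The `evaluate` callback stands in for training and testing the ANN; here it is only a placeholder metric, not the paper's model:

```python
# Hedged sketch of k-fold CV: k disjoint, (near-)equal folds; each fold
# serves once as the testing set; the k metric values are averaged.

def k_fold_indices(n, k):
    """Split indices 0..n-1 into k disjoint folds of near-equal size."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)   # spread any remainder
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(n, k, evaluate):
    folds = k_fold_indices(n, k)
    scores = []
    for i, test_fold in enumerate(folds):
        # training set = all samples not in the current testing fold
        train_set = [idx for j, f in enumerate(folds) if j != i for idx in f]
        scores.append(evaluate(train_set, test_fold))   # e.g. MSE of a net
    return sum(scores) / k   # average of the k estimates

# placeholder metric: fraction of the data held out in each round
avg = cross_validate(9027, 10, lambda train, held_out: len(held_out) / 9027)
```

In the paper, `evaluate` would train the chosen network on the k − 1 training folds and return its maximum error, MSE, or MAE on the held-out fold, yielding the averages reported in Tables 4 and 5.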

Table 4 Performance of the model using k-fold CV for one failed sensor. 

Number of iterations 1 2 3 4 5 6 7 8 9 10 Average
Max error ME (ºC) 1.9 3.5 2.8 3.0 2.4 2.7 2.0 3.2 2.3 2.1 2.59
MSE 0.07 0.25 0.13 0.11 0.1 0.1 0.09 0.15 0.09 0.12 0.12
MAE 0.16 0.16 0.22 0.19 0.18 0.19 0.18 0.23 0.17 0.22 0.19

Table 5 Performance of the model using k-fold CV for two failed sensors. 

Number of iterations 1 2 3 4 5 6 7 8 9 10 Average
First ME 5.1 5.2 5.6 6.4 3.9 5.4 4.6 4.1 6.7 4.7 5.17
MSE 0.25 0.24 0.26 0.41 0.25 0.3 0.27 0.28 0.3 0.29 0.285
MAE 0.28 0.28 0.31 0.36 0.29 0.3 0.31 0.32 0.29 0.3 0.3
Second ME 5.5 4.7 6.4 4.3 5.7 3.3 4.6 4.4 4.9 5.0 4.88
MSE 0.56 0.41 0.34 0.29 0.54 0.39 0.29 0.43 0.38 0.36 0.4
MAE 0.43 0.36 0.33 0.3 0.42 0.36 0.29 0.37 0.36 0.33 0.36

The average values of the maximum error, MSE, and MAE in Table 4 confirm the previous results that the model is able to predict the values of one failed sensor. It cannot be used to predict the values of two failed sensors, as the average maximum error exceeded the temperature limit for the first sensor; furthermore, it is close to the temperature limit for the second sensor.

CONCLUSION

Predicting the values of failed temperature sensors protects the satellite from abnormal behavior. In this paper, a prediction model for a spacecraft's temperature sensor values based on an ANN is proposed and applied to the EgyptSat-1 satellite, which contains twenty temperature sensors. The proposed model was used to predict the values of one and two failed sensors. The simulation results show the outstanding ability of the ANN model to predict the missing values of one failed sensor with good accuracy. The model has been applied to most of the sensors, considering each to fail individually, and predicted the values of each of them without exceeding the acceptable temperature limits.

Using the proposed model to predict the values of two failed sensors, it was observed that the model is able to estimate the two sensor values with acceptable MSE and MAE. But the maximum error exceeded the acceptable limit, which is not tolerable on the satellite.

The trained network provides real-time diagnostics by predicting the missing/faulty readings without delay, owing to the high-speed capability of the ANN. This is very important because real-time diagnostics in spacecraft, especially for fault detection and prediction, is an essential issue.

FUNDING

There are no funders to report.

REFERENCES

An R, Li WJ, Han HG, Qiao JF (2016) An improved Levenberg-Marquardt algorithm with adaptive learning rate for RBF neural network. Presented at: 35th Chinese Control Conference; Chengdu, China. https://doi.org/10.1109/ChiCC.2016.7553917

Avinash RA, Janardhan HR, Adiga S, Vijeth B, Manjunath S, Jayashree S, Shivashankarappa N (2015) Data prediction in wireless sensor networks using Kalman filter. Presented at: International Conference on Smart Sensors and Systems; Bangalore, India. https://doi.org/10.1109/SMARTSENS.2015.7873603

Batista GE, Monard MC (2003) An analysis of four missing data treatment methods for supervised learning. Applied Artificial Intelligence 17(5-6):519-533. https://doi.org/10.1080/713827181

Boato MG, Garcia EC, Santos MBD, Beloto AF (2017) Assembly and testing of a thermal control component developed in Brazil. Journal of Aerospace Technology and Management 9(2):249-256. https://doi.org/10.5028/jatm.v9i2.650

Bulut M, Demirel S, Gulgonul S, Sozbir N (2008a) Battery thermal design conception of Turkish satellite. Presented at: 6th International Energy Conversion Engineering Conference; Cleveland, USA. https://doi.org/10.2514/6.2008-5787

Bulut M, Sozbir N, Gulgonul S (2008b) Thermal control design of TUSAT. Presented at: 6th International Energy Conversion Engineering Conference; Cleveland, USA. https://doi.org/10.2514/6.2008-5751

Bulut M, Sozbir N (2015) Analytical investigation of a nanosatellite panel surface temperatures for different altitudes and panel combinations. Applied Thermal Engineering 75:1076-1083. https://doi.org/10.1016/j.applthermaleng.2014.10.059

Che Z, Purushotham S, Cho K, Sontag D, Liu Y (2018) Recurrent neural networks for multivariate time series with missing values. Scientific Reports 8(1):6085. https://doi.org/10.1038/s41598-018-24271-9

Fung DS (2006) Methods for the estimation of missing values in time series (Master's Thesis). Joondalup: Edith Cowan University.

Gilmore D (2002) Spacecraft thermal control handbook: fundamental technologies. v. 1. Reston: American Institute of Aeronautics and Astronautics. https://doi.org/10.2514/4.989117

Gilmore DG, Donabedian M (2002) Spacecraft thermal control handbook: cryogenics. v. 2. Reston: American Institute of Aeronautics and Astronautics. https://doi.org/10.2514/4.989148

Girimonte D, Izzo D (2007) Artificial intelligence for space applications. In: Schuster AJ, editor. Intelligent Computing Everywhere. London: Springer. p. 235-253. https://doi.org/10.1007/978-1-84628-943-9_12

Graham JW (2009) Missing data analysis: making it work in the real world. Annual Review of Psychology 60:549-576. https://doi.org/10.1146/annurev.psych.58.110405.085530

Reis Junior JD, Ambrosio AM, Sousa FL (2017) Real-time cubesat thermal simulation using artificial neural networks. Journal of Computational Interdisciplinary Sciences 8(2):99-108. https://doi.org/10.6062/jcis.2017.08.02.0126

Kamiński M (2016) Neural estimators of two-mass system optimized using the Levenberg-Marquardt training and genetic algorithm. Presented at: 21st International Conference on Methods and Models in Automation and Robotics; Miedzyzdroje, Poland. https://doi.org/10.1109/MMAR.2016.7575197

Li-Ping P, Hongquan Q (2008) Particle filtering approach to parameter estimate and temperature prediction of satellite. Presented at: 7th World Congress on Intelligent Control and Automation; Chongqing, China. https://doi.org/10.1109/WCICA.2008.4593401

Liu Y, Gopalakrishnan V (2017) An overview and evaluation of recent machine learning imputation methods using cardiac imaging data. Data 2(1):8. https://doi.org/10.3390/data2010008

Liu J, Li Y, Wang J (2008) Modeling and analysis of MEMS-based cooling system for nano-satellite active thermal control. Presented at: 2nd International Symposium on Systems and Control in Aerospace and Astronautics; Shenzhen, China. https://doi.org/10.1109/ISSCAA.2008.4776207

López-Martínez E, Vergara-Hernández HJ, Serna S, Campillo B (2015) Artificial neural networks to estimate the thermal properties of an experimental micro-alloyed steel and their application to the welding thermal analysis. Strojniški vestnik - Journal of Mechanical Engineering 61(12):741-750. https://doi.org/10.5545/sv-jme.2015.2610

Napolitano MR, Neppach C, Casdorph V, Naylor S, Innocenti M, Silvestri G (1995) Neural-network-based scheme for sensor failure detection, identification, and accommodation. Journal of Guidance, Control, and Dynamics 18(6):1280-1286. https://doi.org/10.2514/3.21542

Pan L, Liu G, Lin F, Zhong S, Xia H, Sun X, Liang H (2017) Machine learning applications for prediction of relapse in childhood acute lymphoblastic leukemia. Scientific Reports 7(1):7402. https://doi.org/10.1038/s41598-017-07408-0

Saini L, Soni M (2002) Artificial neural network based peak load forecasting using Levenberg-Marquardt and quasi-Newton methods. IEE Proceedings - Generation, Transmission and Distribution 149(5):578-584. https://doi.org/10.1049/ip-gtd:20020462

Schafer JL, Graham JW (2002) Missing data: our view of the state of the art. Psychological Methods 7(2):147-177.

Song L, Zhang W (2014) Temperature error compensation for open-loop fiber optical gyro using back-propagation neural networks with optimal structure. Presented at: 2014 IEEE Chinese Guidance, Navigation and Control Conference; Yantai, China. https://doi.org/10.1109/CGNCC.2014.7007249

Sutagundar M, Nirosha H, Sheeparamatti B (2016) Artificial neural network modeling of MEMS cantilever resonator using Levenberg Marquardt algorithm. Presented at: 2nd International Conference on Applied and Theoretical Computing and Communication Technology; Bangalore, India. https://doi.org/10.1109/ICATCCT.2016.7912110

Tresp V, Briegel T (1998) A solution for missing data in recurrent neural networks with an application to blood glucose prediction. In: Jordan MI, Kearns MJ, Solla SA, editors. Advances in Neural Information Processing Systems 10. Cambridge: MIT Press.

Van Buuren S (2018) Flexible imputation of missing data. 2nd ed. New York: Chapman and Hall/CRC. https://doi.org/10.1201/9780429492259

Received: April 30, 2018; Accepted: November 03, 2018

*Corresponding author: hamdy.engineer@yahoo.com

Section Editor: Antonio Mazzaracchio

AUTHORS’ CONTRIBUTION

Conceptualization, Abdelkhalek HS and Medhat H; Methodology, Abdelkhalek HS; Investigation, Abdelkhalek HS, Medhat H and Ziedan I; Writing - Original Draft, Abdelkhalek HS, Amal M and Ziedan I; Writing - Review and Editing, Abdelkhalek HS; Funding Acquisition, Abdelkhalek HS and Medhat H; Resources, Amal M; Supervision, Medhat H.

Creative Commons License This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.