
Novelty detection on a laboratory benchmark slender structure using an unsupervised deep learning algorithm

Abstract

The process that involves the continuous monitoring and analysis of a structure's behavior and performance is known as Structural Health Monitoring (SHM). SHM typically involves the use of sensors and other monitoring devices to collect data, such as displacements, strains, accelerations, among others. These data are analyzed using advanced algorithms and machine learning techniques to identify any signs of abnormal behavior or deterioration. This paper presents a numerical and experimental study of a slender frame subjected to five levels of structural changes under impact loading. The dynamic responses were recorded by four accelerometers and used to build models based on Sparse Auto-Encoders (SAE). Such models can identify each of the five structural states in an unsupervised way. A new strategy to define the hyperparameters of the SAE is presented, which proved to be adequate in the experiments conducted. Finally, the experimental data set is made available to the scientific community to serve as a benchmark for validating SHM methodologies to identify structural changes from dynamic measurements.

Keywords
Structural Health Monitoring; Damage Detection; Vibration Signals; Unsupervised Deep Learning; Sparse Auto-Encoder

Graphical Abstract

1 INTRODUCTION

The proper functioning of structural systems and the safety of users are among the main concerns of engineers throughout the life cycle of any civil engineering construction. To avoid catastrophic failures, it is important to continuously monitor the structure’s condition and detect any abnormal behavior at an early stage, especially when dealing with large structures, such as bridges, viaducts, tall buildings, towers, and geotechnical constructions. The process of identifying structural failures or malfunctions through continuous monitoring is commonly known as Structural Health Monitoring (SHM).

After an extensive literature review, Avci et al. (2021) state that the most critical component of SHM is damage detection, which is defined as a systematic and automatic process of identifying the existence of damage, followed by its localization and severity assessment. Recent articles specifically focused on damage detection reinforce this assertion (Nunes et al. 2021, Dan et al. 2021, Finotti et al. 2022a, Liu et al. 2022, Rosso et al. 2023). Such an idea seems logical, since locating the damage and quantifying its extent presuppose correctly identifying its occurrence beforehand.

At the same time, SHM strategies that are successful in locating or quantifying damage are often supervised (Avci et al. 2021, Eltouny et al. 2023). These strategies can be very useful when two or more baseline structural behaviors are known. However, this specific situation may hinder their use because, in real structures, only the current structural state is usually known. Hence, defining a robust and unsupervised model for detecting structural damage still presents many challenges.

Among the different methodologies available to identify the occurrence of structural anomalies, vibration-based techniques have been extensively investigated lately (Hou and Xia 2021, Avci et al. 2021, Eltouny et al. 2023). Such strategies are capable of handling significant structural complexity with relatively low computational cost, depending only on the acquired signals and their subsequent processing, which fundamentally consists of two steps: feature extraction/data fusion and feature classification to infer whether a structural change has occurred.

Over the past few years, the investigation of structural integrity has been performed by employing modal analysis/tracking-based methods, which consist of evaluating changes in the modal parameters over time (Doebling et al. 1998, Huang et al. 2020, Zhan et al. 2021). The basic assumption is that the degrading process alters the physical properties of the structure, such as mass and stiffness, which influence its natural frequencies, mode shapes, and damping ratios. However, environmental/operational factors (uncertainties about loadings, temperature variation, etc.) may also alter the structure's dynamic characteristics, thus adversely impacting the damage diagnosis made by these methods (Morales et al. 2019, Anastasopoulos et al. 2021, Wah et al. 2021, Corbally and Malekjafarian 2022, Wang et al. 2022). To circumvent this issue, more accurate measurements or higher levels of degradation are required, so that the changes caused by damage are not confounded with external effects and vice-versa, leading to false alarms.

Notwithstanding the many procedures that can reduce ambient effects, further efforts are still needed to improve damage assessment tools for practical applications (Wang et al. 2022). Therefore, several contributions have been proposed by researchers, including different signal processing techniques to analyze and extract relevant information from raw dynamic data (Cardoso et al. 2019a, Mousavi et al. 2022, Zhang et al. 2022, Alves and Cury 2023).

More recently, due to the evolution of computer and information technologies, increasing attention has been given to the application of Machine Learning (ML) to structural novelty identification (Finotti et al. 2017, Cardoso et al. 2019b, Sun et al. 2020, Umar et al. 2021, Lei et al. 2022). Among all possible ML-based algorithms, those based on deep learning appear as promising alternatives to traditional techniques. In this paper, special attention is given to the Sparse Auto-Encoder (SAE), a deep neural network algorithm that automatically extracts features from data. The SAE reconstructs its inputs through an internal coding - modeled by linear and nonlinear functions - that transforms them into a new group of variables (features) (Goodfellow et al. 2016). Auto-encoders are known not only for their ability to deal with large volumes of data, but also for their capability to provide compelling solutions, particularly for nonlinear problems such as structural anomaly detection. Therefore, the SAE may be a suitable method for handling vibration signals.

Considering that SAE deep learning algorithms are recent tools in the SHM area, studies focused on evaluating them to solve novelty detection problems are still welcome, as evidenced by the considerable number of recent works addressing this topic (Wang and Cha 2021, Finotti et al. 2022a, Abbas et al. 2023). However, most published studies focus on numerical models (Wang and Cha 2021, Abbas et al. 2023, Eltouny et al. 2023). Although some practical and unsupervised applications of SHM systems can be found, as reported by Avci et al. (2021), the SHM solution is not universal for different types of structures: a set of techniques that works for one structure might not work for another. It is also observed that most analyses already carried out with the SAE address the detection of failures in machine components, problems that are much better behaved than those involving civil engineering structures (Yang et al. 2022). For such reasons, the authors understand that further studies are still needed to consolidate the application of the SAE in SHM systems.

In this context, this paper investigates the performance of the SAE algorithm in identifying structural alterations in a slender frame tested in the laboratory and subjected to five distinct structural scenarios. The experimental program includes 1500 experimental tests, 300 for each scenario. Moreover, a new strategy is proposed to define the SAE's best hyperparameters based on the analysis of the well-known Shewhart T² control chart (T²-statistic) (Montgomery 2009), calculated with the SAE-extracted features, allowing a correct identification of the five structural scenarios in an unsupervised way. Also, for comparison purposes, the authors propose the use of PCA (Principal Component Analysis) for feature extraction.

An advantage of the proposed methodology is that data are processed directly in the time domain, avoiding modal parameter estimation and tracking (Cardoso et al. 2019b). Moreover, the influence of external factors (such as temperature) is implicitly modeled by the SAE. Differently from modal-based detection methods, the SAE-T² approach is less susceptible to uncertainties and errors related to data processing, since no additional algorithm is necessary to filter operational and/or environmental effects. Thus, such an approach represents a major advantage in terms of computational cost and time of analysis. Another interesting aspect of the proposed methodology is its unsupervised nature, meaning that it does not need previously labeled observations or desired output variables. This last aspect is fundamental for real SHM applications, since in practice there is hardly any prior knowledge of the structure's health condition in current SHM systems.

2 THEORETICAL BACKGROUND

The strategy used to identify structural novelties is based on two algorithms:

  1. The Sparse Auto-Encoder (SAE). In this paper, the SAE is used to extract features capable of characterizing the monitored dynamic signals, working as a parameter reducer.

  2. The Shewhart T² Control Chart. This chart plots the statistical metric T², calculated from the reduced parameters obtained with the SAE, and is used in this paper to objectively indicate the occurrence of structural changes.

For this reason, the authors refer to the computational model used to identify structural novelties as the "SAE-T² model". A brief discussion of these two methods is presented below.

2.1 Sparse Auto-Encoder

Due to its ability to represent data at a high level, concisely and satisfactorily, one of the best known and most used deep machine learning algorithms is the Sparse Auto-Encoder (SAE) (Goodfellow et al. 2016). Auto-Encoders consist of a neural network trained to produce an output that approximates its respective input. Internally, this network has a layer h that describes a code used to represent the input, which can be understood in two parts: an encoding function h = f(x), and a decoding function y = g(h) that reconstructs the input, as illustrated by the architecture presented in Figure 1.

Figure 1
General structure of an Auto-Encoder, mapping an input x to an output y (called the reconstruction).

Nevertheless, the main interest is not in the replicated output of the AE, but in exploring its ability to extract characteristics from raw data. If the AE is used mainly for data mapping, i.e., the focus is not on reconstructing the input, it can be designed to produce h with a smaller dimension than x, in which case it is known as an Undercomplete Auto-Encoder. The great advantage of reducing the data from the dimension of vector x to the dimension of vector h is the identification of relevant parameters at a high level of abstraction, helpful for recognizing patterns in datasets. In this context, the learning of the AE network is accomplished by minimizing a function Z(x, g(f(x))) that penalizes the differences between x and g(f(x)). In cases where the coding function is linear and Z is the mean squared error, the Undercomplete Auto-Encoder behaves like Principal Component Analysis (PCA) (Baldi and Hornik 1989). Conversely, when nonlinear functions are employed for f and g, the Undercomplete Auto-Encoder may be more powerful than PCA in reducing the dimensionality of a problem (Meng et al. 2017).

Despite their efficiency in characterizing data, the encoders and decoders may acquire an excessive ability to approximate y ≈ x, resulting in a vector h with high dimensionality, which often does not provide useful parameters for modeling the problem. To improve the performance of this deep machine learning technique, the Sparse Auto-Encoder (SAE) was proposed. The SAE is an undercomplete auto-encoder in which a sparsity penalty Γ(f(x)) is incorporated into the function Z during the training process, that is, Z(x, g(f(x))) + Γ(f(x)). In summary, this penalty allows the AE model to represent large datasets with a small number of h components, controlling the number of active neurons in the layers (most weights are equal to zero). Consequently, the addition of a sparsity-inducing term to auto-encoders usually leads to an increase in the model's performance and a reduction of processing time. According to Ng (2011), the penalty term can be obtained by calculating the Kullback-Leibler (KL) divergence (Kullback and Leibler 1951) as:

\Gamma = \sum_{m=1}^{M} KL\left(\rho \,\|\, \hat{\rho}_m\right) = \sum_{m=1}^{M} \left[ \rho \log\frac{\rho}{\hat{\rho}_m} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_m} \right] \quad (1)

where the || operator indicates the statistical divergence of ρ̂_m from ρ, M is the number of neurons in the intermediate layer, ρ is the desired average activation value, also called sparsity proportion (previously defined close to the lower limit of the considered activation function), and ρ̂_m represents the average activation value of the respective neuron m, which is given by:

\hat{\rho}_m = \frac{1}{I} \sum_{i=1}^{I} f\left(W_{m1}^{T} x_i + b_{m1}\right) \quad (2)

where I is the number of observations (available data). If ρ̂_m = ρ, the function Γ assumes KL = 0; otherwise, as ρ̂_m diverges from ρ, KL presents monotonically increasing values. This formulation avoids the problem of gradients not propagating to the furthest layers of the network. Apart from the sparsity term Γ, another strategy to control the "dispersion" of gradients is to insert a weight regularization term Γ₂. The term Γ₂ acts by penalizing large values of W, thus keeping the weights small, according to the equation below:

\Gamma_2 = \frac{1}{2} \sum_{l=1}^{L} \sum_{i=1}^{I} \sum_{j=1}^{J} \left( w_{ij}^{(l)} \right)^2 \quad (3)

in which L is the number of encoder layers and J the number of input parameters. Taking into consideration Eq. 1 and Eq. 3, and taking the function Z as the mean squared error, the cost function to be optimized in the training process of an SAE model may be defined by:

E = \frac{1}{I} \sum_{i=1}^{I} \sum_{j=1}^{J} \left( x_{ij} - y_{ij} \right)^2 + \beta\,\Gamma + \lambda\,\Gamma_2 \quad (4)

where β and λ are coefficients that control the influence of the sparsity and regularization terms, respectively. Obtaining the SAE parameters through the minimization of Equation 4 requires an optimization process. In the present work, the algorithm used to solve this problem is the Scaled Conjugate Gradient (SCG), an optimization technique used to find the minimum of a multivariable function, commonly applied in neural network training and nonlinear optimization problems. It belongs to the family of conjugate gradient methods but stands out due to its adaptive step-size adjustment along conjugate directions, leading to faster convergence and improved performance (Møller 1993). The SCG algorithm iteratively minimizes the objective function through the following steps: initializing the search direction, calculating the gradient, updating the search direction, determining the step size, updating the parameters, adjusting the search direction based on gradients and step sizes, and checking for convergence. Instead of an explicit line search, the algorithm scales the step sizes along conjugate directions based on the function's curvature, resulting in efficient updates and better convergence properties. Overall, the SCG has shown effectiveness in optimizing various functions, especially in neural network training, where it accelerates convergence and reduces the number of iterations required (Møller 1993).
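As an illustration, the following sketch (in Python/NumPy, not the authors' original implementation) assembles the cost of Eq. 4 for a single-hidden-layer SAE with the encoder of Eq. 6 and the decoder of Eq. 7; the weight matrices W1 and W2, the biases b1 and b2, and the clipping constant eps are illustrative assumptions.

    import numpy as np

    def sae_cost(X, W1, b1, W2, b2, rho=0.05, beta=4.0, lam=0.01, eps=1e-8):
        # X: (I, J) matrix with I observations of J input values each
        H = np.maximum(0.0, X @ W1 + b1)                 # encoder of Eq. 6: f(z) = max(0, z)
        Y = H @ W2 + b2                                  # linear decoder of Eq. 7
        mse = np.mean(np.sum((X - Y) ** 2, axis=1))      # reconstruction term of Eq. 4

        # average activation of each hidden neuron (Eq. 2), clipped to keep the KL term finite
        rho_hat = np.clip(H.mean(axis=0), eps, 1.0 - eps)
        kl = np.sum(rho * np.log(rho / rho_hat)
                    + (1.0 - rho) * np.log((1.0 - rho) / (1.0 - rho_hat)))   # Eq. 1

        l2 = 0.5 * (np.sum(W1 ** 2) + np.sum(W2 ** 2))   # weight regularization of Eq. 3
        return mse + beta * kl + lam * l2                # total cost of Eq. 4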

In summary, for the proposed approach, SAE models provide the parameters (vector h) that can characterize each respective monitored dynamic signal x.

2.2 T²-Control Chart

Shewhart T² control charts are graphical statistical tools used to monitor the variability of a problem's parameters over time. The charts usually depict several data points, formed by a specific statistical characteristic, and horizontal lines (control limits) responsible for indicating the extreme values of that characteristic when the problem is in an in-control state. Any point beyond these predetermined limits, on the other hand, reveals unusual sources of variability, suggesting an out-of-control situation (Montgomery 2009); in the present work, this means a novelty detection. Due to their relatively simple implementation, intuitive interpretation, and effective results, control charts are considered suitable tools for structural on-line monitoring and anomaly detection. The multivariate control technique used here is the Shewhart T² control chart, in which the plotted characteristic is Hotelling's T²-statistic. The T²-statistic represents the distance between a new data observation and the corresponding sample mean vector: the higher the T² value, the greater the distance of the new data from the mean. This metric is based on the relationship among the variables and on the scatter of the data (covariance matrix). Assuming that a matrix H_{N×M} represents a dataset collected during a certain period of time (which, in this paper, contains the SAE-extracted features), the T²-statistic may be calculated as follows:

T^2 = R\,\left(\bar{h} - \bar{\bar{h}}\right)^{T} S^{-1} \left(\bar{h} - \bar{\bar{h}}\right) \quad (5)

where h̄ is the sample mean vector of the M available features, obtained from a submatrix of H with R observations (H_{R×M}, R < N); h̿ and S are the vector of reference averages and the mean of the reference covariance matrices, respectively, both estimated using s preliminary submatrices collected during the in-control state of the problem. In this work, the Upper Control Limit (UCL) is defined as the 95th percentile of the T² values of the training data (values greater than the UCL may be observed only 5% of the time by chance), and the Lower Control Limit (LCL) is zero. It is noteworthy that the development of Hotelling's T²-statistic assumes that the data follow a normal distribution.

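A minimal sketch of how the T²-statistic of Eq. 5 and the UCL adopted in this work could be computed is given below (Python/NumPy). For simplicity, each new observation is treated individually (R = 1) and a pseudo-inverse is used for numerical stability; both are assumptions of this illustration rather than the authors' implementation.

    import numpy as np

    def t2_statistic(H_ref, H_new):
        # H_ref: (N, M) SAE features from the in-control (training) state
        # H_new: (K, M) features to be monitored; each row is one observation (R = 1)
        mean_ref = H_ref.mean(axis=0)                       # reference average vector
        S = np.cov(H_ref, rowvar=False)                     # reference covariance matrix
        S_inv = np.linalg.pinv(S)                           # pseudo-inverse for stability
        diff = H_new - mean_ref
        return np.einsum('ij,jk,ik->i', diff, S_inv, diff)  # Eq. 5 with R = 1

    # Control limits and percentage of outliers (hypothetical usage):
    # t2_train = t2_statistic(H_train, H_train)
    # UCL = np.percentile(t2_train, 95)                     # Upper Control Limit; LCL = 0
    # outliers_pct = 100.0 * np.mean(t2_statistic(H_train, H_mon) > UCL)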

3 EXPERIMENTAL PROGRAM

The tested structure, shown in Figure 2, is a two-story slender aluminum frame. All bars are connected by screws, washers, and nuts, and have the same dimensions: 300 mm in length, 15.875 mm in width, and 1.587 mm in thickness.

Figure 2
The two-story slender aluminum frame

Figure 3 shows a sketch of the experimental test. Four unidirectional Brüel piezoelectric IEPE (Integrated Electronics Piezoelectric) accelerometers (100 mV/g) were placed on the structure at the marked positions, measuring horizontal accelerations. A low-pass filter was set at 100 Hz. An impact load was applied using a pendulum with a mass of 14 g, fixed at the top. The structural loading consisted of releasing the pendulum mass (from rest) from the position indicated in Figure 3; the mass was then subjected only to the action of gravity until its collision with the frame at the indicated point. Five structural scenarios were analyzed:

Figure 3
The testing procedure
  • scenario #1 - No additional mass was added to the structure. In this case, masses m1 and m2 (indicated in Figure 3) are zero.

  • scenario #2 - One additional mass was added to the structure. In this case, masses m1=7.81g and m2=0.

  • scenario #3 - One additional mass was added to the structure. In this case, masses m1=15.62g and m2=0.

  • scenario #4 - Two additional masses were added to the structure. In this case, masses m1=15.62g and m2=7.81g.

  • scenario #5 - Two additional masses were added to the structure. In this case, masses m1=15.62g and m2=15.62g.

For each scenario, at least 300 tests were performed. Each test lasted 8.192 s with an acquisition frequency of 500 Hz, yielding 4096 sampled points per accelerometer. The recordings started 7.2 ms before the impact of the pendulum mass. A typical result of a test is presented in Figure 4. The data acquisition rate was set after a preliminary analysis involving the signal bandwidth, the structure's dynamic behavior, the accuracy requirements, and data storage limitations. Hence, to comply with these aspects, the acquisition rate was set to 500 Hz, according to both the temporal and spectral analyses depicted in Figure 5. A report with further details, as well as the files containing the dynamic responses of each test performed, is available for download at http://bit.ly/SHM-UFJF. For citations concerning these experimental data, please refer to this paper.

Figure 4
Typical result of an experimental test
Figure 5
Typical frequency response of an experimental test

4 METHODOLOGY FOR STRUCTURAL NOVELTY ASSESSMENT

The SAE-T² model depends on many hyperparameters that must be previously adjusted for each analysis. However, the present paper focuses on the following parameters:

  • sparsity proportion: ρ

  • sparsity regularization: β

  • weight regularization: λ

since they usually play a key role in terms of classification results (Finotti et al. 2019, Finotti et al. 2021). Other parameters, such as the number of hidden layers, the number of epochs, and the type of transfer function, may also be important. However, preliminary sensitivity analyses indicated that satisfactory results could be achieved with one hidden layer, 1000 epochs to minimize Equation 4, the positive saturating linear transfer function for the encoder (see Equation 6), and the linear transfer function for the decoder (see Equation 7). Thus, these parameters were fixed for all analyses.

f(z) = \begin{cases} 0, & \text{if } z \le 0 \\ z, & \text{if } z > 0 \end{cases} \quad (6)

f(z) = z \quad (7)

Therefore, a crucial question must be addressed: how to set the best values of the hyperparameters ρ, β, and λ for a specific structural novelty detection problem? In the present paper, this evaluation is performed by a grid search over the analyzed hyperparameters, involving both the SAE and the T² simultaneously, in four phases:

Training phase. In this step, a dataset extracted from a single structural state is used to train the SAE model. This dataset is called the training data. In this paper, the quality of the SAE model is measured by the Reconstruction Error Index computed with the training data (e^t), as defined in Equation 8: the smaller e^t, the higher the quality of the SAE model.

e^t = \frac{1}{JI} \sum_{j=1}^{J} \sum_{i=1}^{I} \left| x_{ij}^{t} - y_{ij}^{t} \right| \quad (8)

where I is the number of sampling points in each signal, J is the number of signals used in the training phase, the subscripts j and i refer to the jth signal and the ith discrete time, respectively, and the superscript t refers to the training phase.

Model definition phase. Frequently, low reconstruction errors indicate low generalization capacity (overfitting). Thus, a new dataset, also extracted from the structural state of the previous phase but not used during training, is applied to define the optimized SAE-T² model. This new dataset is labeled the definition data. It is expected that the SAE-T² model provides statistically similar T² values for the training and definition data, since they belong to the same data category. Hence, to obtain the best SAE-T² model, one defines h_q^t and h_q^d as the qth quartile (q = 1st, 2nd, or 3rd) of the distribution of T² values obtained for the training and definition data, respectively. The Generalization Index (GI) of a given SAE-T² model is defined in Equation 9: the smaller the GI, the higher the quality of the SAE-T² model.

GI = \sum_{q=1}^{3} \left| h_q^{t} - h_q^{d} \right| \quad (9)

The best SAE-T² model is defined as the one that minimizes both e^t and GI at the same time. The strategy presented in this paper first establishes e_p^t and GI_p as the sets containing the smallest p% of the e^t and GI values, respectively, generated by the models analyzed during the grid search process. The SAE-T² hyperparameters are then defined through the ith model that produces:

e_i^{t} \in e_p^{t} \quad (10)

and

GI_i \in GI_p \quad (11)

For a given value of p, if more than one model produces e^t and GI values fulfilling the conditions described in Equations 10 and 11, respectively, the model with the smallest e^t is adopted as the best SAE-T² model. If no model satisfies Equations 10 and 11, the analysis is restarted by increasing the percentage p considered for the subsets e_p^t and GI_p, until the best model is obtained. The initial p value adopted in this work is 1%, and it is incremented in steps of 1%. The Reconstruction Error Index for the definition data (e^d) is also calculated for further comparisons:

e^d = \sum_{j=1}^{J} \sum_{i=1}^{N} \left( x_{ij}^{d} - y_{ij}^{d} \right)^2 \quad (12)

where N is the number of sampling points in each signal, J is the number of signals used in the definition phase, j refers to the jth signal, and the superscript d refers to the model definition phase. The smaller the values of e^t and e^d, the better the SAE-T² model according to this criterion, which is used for comparison purposes and named here the comparative criterion.
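For clarity, the selection logic described above can be sketched as follows (Python/NumPy); the list models and its fields are hypothetical placeholders for the grid-search results, and the GI of Eq. 9 is interpreted here as a sum over the quartiles of the T² distributions.

    import numpy as np

    def generalization_index(t2_train, t2_def):
        # GI of Eq. 9: sum of absolute differences between the 1st, 2nd and 3rd
        # quartiles of the training and definition T² distributions
        return sum(abs(np.percentile(t2_train, q) - np.percentile(t2_def, q))
                   for q in (25, 50, 75))

    def select_best_model(models, p0=1.0, step=1.0):
        # models: hypothetical list of dicts, each with keys 'et' and 'gi'
        et = np.array([m['et'] for m in models])
        gi = np.array([m['gi'] for m in models])
        p = p0
        while p <= 100.0:
            in_et = et <= np.percentile(et, p)     # subset e_p^t, condition of Eq. 10
            in_gi = gi <= np.percentile(gi, p)     # subset GI_p, condition of Eq. 11
            candidates = np.where(in_et & in_gi)[0]
            if candidates.size > 0:                # tie-break: smallest e^t
                return models[candidates[np.argmin(et[candidates])]]
            p += step                              # enlarge both subsets and retry
        return None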

Validation phase. During this phase, another dataset (the validation data), also extracted from the same structural state as the training and definition phases, is presented to the best model obtained in the previous step. The goal is to check the model's ability to classify new data. It is expected that the optimized SAE-T² model leads to statistically similar T² values for the training, definition, and validation datasets, since they belong to the same structural state. For future comparisons in this paper, one defines the Reconstruction Error Index for the validation data (e^v):

e^v = \frac{1}{JI} \sum_{j=1}^{J} \sum_{i=1}^{I} \left| x_{ij}^{v} - y_{ij}^{v} \right| \quad (13)

Monitoring phase. At this point, datasets (monitoring data) extracted from structural states different from the ones used in the training, definition, and validation phases are presented to the best SAE-T² model. It is expected that the model leads to T² values different from those obtained in the previous phases. Figure 6 shows a hypothetical T² control chart evaluated for an SAE-T² model in which two structural states were analyzed. In Figure 6a, the model failed to identify the two data categories; in Figure 6b, the two structural states were correctly distinguished.

Figure 6
Typical control charts. (a) One single structural state was detected; (b) Two structural states were detected.

5 APPLICATION OF THE PROPOSED SHM MODEL TO EXPERIMENTAL DATA

A subset of the experimental data was used to validate the proposed methodology. From each tested scenario, 600 signals containing 1 second of monitoring (500 time steps) equally distributed among the accelerometers were used. The amplitudes of all signals were normalized between 0 and 1. Equation 14 summarizes the dataset organization:

X^{s}_{(500 \times 800)} = \begin{bmatrix}
Ac_1(1,1) & Ac_2(1,1) & Ac_3(1,1) & Ac_4(1,1) & \cdots & Ac_1(1,200) & Ac_2(1,200) & Ac_3(1,200) & Ac_4(1,200) \\
Ac_1(2,1) & Ac_2(2,1) & Ac_3(2,1) & Ac_4(2,1) & \cdots & Ac_1(2,200) & Ac_2(2,200) & Ac_3(2,200) & Ac_4(2,200) \\
Ac_1(3,1) & Ac_2(3,1) & Ac_3(3,1) & Ac_4(3,1) & \cdots & Ac_1(3,200) & Ac_2(3,200) & Ac_3(3,200) & Ac_4(3,200) \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots \\
Ac_1(500,1) & Ac_2(500,1) & Ac_3(500,1) & Ac_4(500,1) & \cdots & Ac_1(500,200) & Ac_2(500,200) & Ac_3(500,200) & Ac_4(500,200)
\end{bmatrix} \quad (14)

where the subscript s refers to each tested scenario; Ac_k(i,j) (k = 1, ..., 4; i = 1, ..., 500; j = 1, ..., 200) is the acceleration recorded by accelerometer Ac_k at discrete time i for the jth experimental test. These data were organized into five different analyses, defined as Cases A to E. For each case, one scenario was used for the training, definition, and validation phases, while the other scenarios were used as monitoring data. Table 1 summarizes each analyzed case.

Table 1
Definition of data applied in each Case.
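The organization of Eq. 14 can be illustrated with the following sketch (Python/NumPy), assuming a hypothetical array signals of shape (4, 200, 500) holding, for one scenario, the four accelerometer records of 200 tests cropped to 500 samples (1 s at 500 Hz).

    import numpy as np

    def build_scenario_matrix(signals):
        # signals: hypothetical array (accelerometers, tests, samples) = (4, 200, 500)
        n_acc, n_tests, n_samples = signals.shape
        X = np.empty((n_samples, n_acc * n_tests))                    # (500, 800), as in Eq. 14
        for j in range(n_tests):
            for k in range(n_acc):
                sig = signals[k, j, :]
                sig = (sig - sig.min()) / (sig.max() - sig.min())     # normalize amplitudes to [0, 1]
                X[:, n_acc * j + k] = sig                             # column order: Ac1..Ac4 for each test
        return X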

5.1 The SAE’s network architecture

In the present work, the SAE's network architecture was established by specifying a single hidden layer, characterized by the vector h (see Section 2.1). That layer is commonly referred to as the latent space. Figure 7 illustrates the configuration of the employed architecture, comprising 500*dim(h) (input to hidden layer) + 500*dim(h) (hidden to output layer) weights and respective biases (see Eq. 2 and Fig. 7), resulting in a total of 2000*dim(h) design parameters, where dim(h) represents the dimension of vector h. For the application under analysis, a single hidden layer was enough to obtain satisfactory results. Subsection 5.3 discusses the choice of the dimension of vector h. Other parameters related to the SAE model were defined as: maximum gradient value = 1×10⁻⁶; encoding activation function = satlin; decoding activation function = purelin; maximum number of epochs for training = 1000.

Figure 7
SAE’s network architecture (p indicates the pth signal and K = dim(h)).

5.2 The grid search process

The grid search is a technique often used in machine learning to find the best combination of hyperparameters for a given model. It involves creating a grid of possible values for each hyperparameter and then systematically evaluating the model performance for each combination of hyperparameters on a validation set. The model performance is typically measured using a performance metric such as accuracy, precision, recall, among others. By exhaustively searching through all possible combinations, grid search helps identify the set of hyperparameters that yields the best performance for a given machine learning algorithm (Raschka 2018).

Equations 15 to 17 present the discrete values used in the grid search process for the example analyzed here.

ρ = [ 0.0063 ; 0.0125 ; 0.0250 ; 0.0500 ; 0.1000 ; 0.2000 ; 0.4000 ; 0.8000 ] ( 15 )
β = [ 0.5000 ; 1.0000 ; 2.0000 ; 4.0000 ; 8.0000 ] ( 16 )
λ = [ 0.0001 ; 0.0010 ; 0.0100 ; 0.1000 ] ( 17 )

These values were adopted based on the work of Touati et al. (2020), which affirms that they ensure good coverage of the search space for the considered parameters, and were corroborated in the studies of Finotti et al. (2021) and Finotti et al. (2022a). Considering that the results obtained were quite satisfactory, the application of an optimization procedure to determine the optimal hyperparameters of the SAE is left for future work, with the potential to further improve the results achieved.
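A sketch of the resulting grid (8 × 5 × 4 = 160 combinations) is shown below; train_and_evaluate is a hypothetical routine standing in for the training and model definition phases of Section 4.

    from itertools import product

    rho_grid  = [0.0063, 0.0125, 0.025, 0.05, 0.1, 0.2, 0.4, 0.8]   # Eq. 15
    beta_grid = [0.5, 1.0, 2.0, 4.0, 8.0]                            # Eq. 16
    lam_grid  = [0.0001, 0.001, 0.01, 0.1]                           # Eq. 17

    models = []
    for rho, beta, lam in product(rho_grid, beta_grid, lam_grid):
        # et, gi = train_and_evaluate(rho=rho, beta=beta, lam=lam)   # hypothetical routine
        et, gi = float('nan'), float('nan')                          # placeholders so the sketch runs stand-alone
        models.append({'rho': rho, 'beta': beta, 'lam': lam, 'et': et, 'gi': gi})
    # the best model is then selected with the e^t/GI criterion of Section 4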

5.3 The best SAE-T² model

Before defining the best SAE-T² model, it is necessary to set the dimension of the vector h, dim(h). Vectors h with a low dimension may not be able to adequately characterize the dynamic signals; on the other hand, a high dimension for h can hinder the generalizability of the model. In previous works (Finotti et al. 2021, Finotti et al. 2022a), the dimension of h was adopted as 10% of the signal length. Therefore, starting from this reference, dim(h) values equal to 30, 50, and 70 were evaluated. Figure 8 shows the results related to the models with the smallest e^t according to the proposed selection criterion.

Figure 8
Evaluation of e^t as a function of dim(h)

Figure 8 shows the e^t values obtained for each case and the respective average values. One observes that the decay rate of the mean value of e^t as dim(h) increases is very low (around -0.0000175 e^t/dim(h)). Thus, to also reduce the computational effort while keeping a low Reconstruction Error Index e^t, the dimension of h was set to 30. The best SAE-T² models obtained for each case analyzed with the proposed strategy, and their respective values of ρ, β, and λ, are shown in Table 2. These models are labeled "Proposed" and are presented with their respective Reconstruction Error Indexes and Generalization Index (GI).

Table 2
Best model for each case of analysis.

To compare the performance of the proposed criterion, the smallest Reconstruction Error Index for the validation data (ev) was also applied to select the best model. This criterion is intuitively adopted to evaluate the performance of the SAE. In this case, the best models would be the ones presented as “Comparison” in Table 2.

To complete the comparisons shown in Table 2, a PCA (Principal Component Analysis) model was defined using the training data. After the definition of the PCA model, the first 30 principal components were used to reconstruct the monitoring, definition, and validation signals, leading to the respective e^t, e^d, and e^v, and allowing the calculation of the respective GI. In this case, the first 30 principal components explain 99.99% of the total variance, playing a parameter-reduction role similar to that of the SAE with dim(h) = 30. These results are also presented in Table 2, labeled as PCA.
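The PCA baseline can be sketched as below (Python/NumPy); the column orientation follows Eq. 14 (each column is a 500-sample signal), and the function names are illustrative rather than the authors' implementation.

    import numpy as np

    def pca_fit(X_train, n_comp=30):
        # columns of X_train are the training signals (500 samples each)
        mean = X_train.mean(axis=1, keepdims=True)            # mean training signal
        U, s, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
        return mean, U[:, :n_comp]                            # first 30 principal directions

    def pca_reconstruct(X, mean, basis):
        scores = basis.T @ (X - mean)                         # project the signals on the basis
        return mean + basis @ scores                          # reconstructed signals

    # reconstruction error of the definition data, analogous to Eq. 12 (hypothetical usage):
    # mean, basis = pca_fit(X_train)
    # e_d = np.sum((X_def - pca_reconstruct(X_def, mean, basis)) ** 2)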

Table 2 shows that the Reconstruction Error Indexes appear in descending order from the proposed methodology ("Proposed") to the results of the PCA. On the other hand, the GI calculated with the proposed strategy is the smallest, reaching higher values for the "Comparison" methodology and even higher values for the PCA.

However, lower Reconstruction Error Indexes are not necessarily associated with a better ability to identify structural novelties, as will be discussed further. Figures 9 to 13 present the Shewhart T² control charts for Cases A to E, respectively. The following discussions are also supported by the results in Table 3, where the number of outliers detected by each strategy for each type of data is presented.

Figure 9
Shewhart T Control Charts for Case A. (a) Proposed methodology; (b) Comparison methodology; (c) PCA
Figure 13
Shewhart T Control Charts for Case E. (a) Proposed methodology; (b) Comparison methodology; (c) PCA
Table 3
Percentage of outliers detected by each methodology.

For Case A (Figure 9), all the analyzed methodologies were able to show that the monitoring data differ from those obtained in scenario #0. However, only the proposed strategy clearly confirms that the training, definition, and validation data belong to the same category (Figure 9a). Both the "Comparison" and PCA methodologies (Figures 9b and 9c) have T² values predominantly below the UCL (by definition, 95% of the training data lie below the UCL) for the definition data, even though their distribution is slightly shifted upwards when compared to the one for the training data. As for the validation data, the "Comparison" methodology provided T² values predominantly higher than the UCL, which may wrongly indicate that these data do not belong to scenario #0. The PCA clearly failed, as it did not identify the validation data as belonging to scenario #0. The GI values presented in Table 2 for both methodologies confirm these observations. In addition, the PCA apparently mistakenly identified two groups of data for scenarios #4 and #5.

Regarding Case B (Figure 10), the evaluated methodologies successfully identified all the monitoring data (T² values greater than the UCL). However, once again, only the approach proposed in this work identified the training, definition, and validation data as belonging to scenario #2 (Figure 10a), despite a few outliers with more significant magnitudes. The strategy defined by "Comparison" correctly classified the definition data, but most of the validation data had T² values greater than the UCL. In addition, for scenario #5, two sets of data were erroneously detected. The PCA, on the other hand, presented an erratic behavior identical to that of the earlier case (Figure 10c).

Figure 10
Shewhart T Control Charts for Case B. (a) Proposed methodology; (b) Comparison methodology; (c) PCA

The results achieved by the proposed strategy for Case C (Figure 11) can also be considered satisfactory (see Figure 11a), as only one outlier was detected for the monitoring data. Also, the T² value distributions for the training, definition, and validation data are reasonably similar, although the validation values are slightly shifted upwards. The other strategies behaved similarly to each other, clearly classifying the training, definition, and validation data as three distinct classes, which is obviously false (Figures 11b and 11c).

Figure 11
Shewhart T Control Charts for Case C. (a) Proposed methodology; (b) Comparison methodology; (c) PCA

For Case D (Figure 12), as in the earlier cases, the proposed methodology correctly identified the monitoring data and calculated T² values that provide similar distributions for all data in scenario #4 (Figure 12a). On the other hand, the “Comparison” methodology, despite having also achieved a good performance in the identification of monitoring data, provided a distribution of definition data with practically 50% of outliers (Figure 12b). The performance of PCA can be considered inferior to the others as it provided 100% of outliers for the definition data (Figure 12c).

Figure 12
Shewhart T Control Charts for Case D. (a) Proposed methodology; (b) Comparison methodology; (c) PCA

Finally, for Case E, behaviors similar to the preceding ones were observed (Figure 13): all methodologies were successful in detecting the monitoring data, but only the proposed methodology correctly classified the training, definition, and validation data.

To assess the variability of the responses obtained for each SAE-T²-based methodology, 20 models were randomly generated with the same respective hyperparameters (β, λ, ρ) presented in Table 2. Since the PCA, for a given dataset, provides unique results, it was not included in these analyses. Table 4 summarizes the results obtained. Figures 14 to 18 show the respective Shewhart T² control charts for Cases A to E. The UCLs plotted in these figures are the respective mean values for the 20 analyzed models. In view of the graphs shown in Figures 14 to 18, as well as the data presented in Table 4, one observes that the results obtained for the 20 random models are similar to those previously shown for a single SAE-T² model (Figures 9 to 13 and Table 3), showing that the influence of randomness on the results is not sufficient to invalidate the analyses performed.

Table 4
Percentage of outliers detected by each methodology considering 20 models per methodology
Figure 14
Shewhart T Control Charts for 20 random models - Case A
Figure 18
Shewhart T Control Charts for 20 random models - Case E
Figure 15
Shewhart T Control Charts for 20 random models - Case B
Figure 16
Shewhart T Control Charts for 20 random models - Case C
Figure 17
Shewhart T Control Charts for 20 random models - Case D

6 FINAL REMARKS

This paper presented numerical and experimental analyses applied to the detection of structural novelties in a slender aluminum frame. An experimental program applied to the tested structure, including five structural states, was extensively explored. The dynamic responses of the tested structure were recorded using accelerometers and are available to the scientific community for the validation of SHM strategies. In the present work, part of these dynamic responses was used as input data to evaluate the performance of an SHM model based on the SAE and the T²-statistic. The proposed SAE-T² model presented a performance in identifying the tested structural states significantly superior to that of classical models in the literature. More specifically, the strategies of adopting either the smallest validation error or the PCA showed inferior performance and greater generalization difficulties, with the best results being achieved by the proposed SAE-T² approach. Finally, the experimental datasets presented are of fundamental importance for the validation of the proposed methodology and may help in the validation of other strategies, as they are available to the academic community. With further advances in the application of the technique, knowledge of the structural behavior mechanisms can be increased, making it possible to predict the performance of the structure over time and to support decision-making aimed at reducing risk.

ACKNOWLEDGMENTS

This study was financed by: UFJF (Universidade Federal de Juiz de Fora - Programa de Pós-Graduação em Modelagem Computacional e Programa de Pós-Graduação em Engenharia Civil), Conselho Nacional de Desenvolvimento Científico e Tecnológico - CNPq (Brazil) - grants CNPq/ FNDCT/ MCTI 407256/2022-9, 303982/2022-5 and 308008/2021-9; Fundação de Amparo à Pesquisa do Estado de Minas Gerais - FAPEMIG - grants TEC PPM 00106-17, TEC PPM-00001-18 and BPD 00080-22.

Data availability statement:

The dataset for the two-story slender aluminum frame is available at the SHM-UFJF repository (http://bit.ly/SHM-UFJF).

References

  • Abbas, N., Umar, T., Salih, R., Akbar, M., Hussain, Z., Haibei, X. (2023). Structural Health Monitoring of Underground Metro Tunnel by Identifying Damage Using ANN Deep Learning Auto-Encoder. Applied Sciences 13(3):1332.
  • Alves, V., Cury, A. (2023). An automated vibration-based structural damage localization strategy using filter-type feature selection. Mechanical Systems and Signal Processing 190:110145.
  • Anastasopoulos, D., De Roeck, G., Reynders, E. P. (2021). One-year operational modal analysis of a steel bridge from high-resolution macrostrain monitoring: Influence of temperature vs. retrofitting. Mechanical Systems and Signal Processing 161:107951.
  • Avci, O., Abdeljaber, O., Kiranyaz, S., Hussein, M., Gabbouj, M., Inman, D. J. (2021). A review of vibration-based damage detection in civil structures: From traditional methods to Machine Learning and Deep Learning applications. Mechanical systems and signal processing 147: 107077.
  • Baldi, P., Hornik, K. (1989) Neural networks and principal component analysis: learning from examples without local minima. Neural Networks 2(1):53-58.
  • Cardoso, R. A., Cury, A., Barbosa, F. (2019a). Automated real-time damage detection strategy using raw dynamic measurements. Engineering Structures 196:109364.
  • Cardoso, R.A., Cury, A., Barbosa, F., Gentile, C. (2019b). Unsupervised real-time SHM technique based on novelty indexes. Structural Control and Health Monitoring 26:e2364.
  • Corbally, R., Malekjafarian, A. (2022). A data-driven approach for drive-by damage detection in bridges considering the influence of temperature change. Engineering Structures 253:113783.
  • Dan, J., Feng, W., Huang, X., Wang, Y. (2021). Global bridge damage detection using multi-sensor data based on optimized functional echo state networks. Structural Health Monitoring 20(4):1924-1937.
  • Doebling, S. W., Farrar, C. R., Prime, M. B. (1998). A summary review of vibration-based damage identification methods. Shock and Vibration Digest 30(2):91-105.
  • Eltouny, K., Gomaa, M., Liang, X. (2023). Unsupervised learning methods for data-driven vibration-based structural health monitoring: A review. Sensors 23(6):3290.
  • Finotti, R. P., Gentile, C., Barbosa, F. S., Cury, A. A. (2017). A novel natural frequency-based technique to detect structural changes using computational intelligence, Procedia engineering, 199, 3314-3319.
  • Finotti, R. P., Cury, A. A., Barbosa, F. S. (2019). An SHM approach using machine learning and statistical indicators extracted from raw dynamic measurements. Latin American Journal of Solids and Structures 16(2):e165.
  • Finotti, R. P., Barbosa, F. S., Cury, A. A., Pimentel, R. L. (2021). Numerical and Experimental Evaluation of Structural Changes Using Sparse Auto-Encoders and SVM Applied to Dynamic Responses. Applied Sciences 11(24): 11965.
  • Finotti, R. P., Barbosa, F. S., Cury, A. A., Pimentel, R. L. (2022a). Novelty Detection Using Sparse Auto-Encoders to Characterize Structural Vibration Responses. Arabian Journal for Science and Engineering 47(10):13049-13062.
  • Goodfellow, I., Bengio, Y., Courville, A. (2016). Deep Learning, MIT press.
  • Hou, R., Xia, Y. (2021). Review on the new development of vibration-based damage identification for civil engineering structures: 2010–2019. Journal of Sound and Vibration 491:115741.
  • Huang, M., Li, X., Lei, Y., Gu, J. (2020). Structural damage identification based on modal frequency strain energy assurance criterion and flexibility using enhanced Moth-Flame optimization. Structures 28:1119-1136.
  • Kullback, S., Leibler, R. A. (1951). On information and sufficiency. The Annals of Mathematical Statistics 22(1):79-86.
  • Lei, J., Cui, Y., Shi, W. (2022). Structural damage identification method based on vibration statistical indicators and support vector machine. Advances in Structural Engineering 25(6):1310-1322.
  • Liu, G., Niu, Y., Zhao, W., Duan, Y., Shu, J. (2022). Data anomaly detection for structural health monitoring using a combination network of GANomaly and CNN. Smart Structures and Systems 29(1):53-62.
  • Meng, Q., Catchpoole, D., Skillicom, D., Kennedy, P. J. (2017). Relational autoencoder for feature extraction. in Proceedings of IJCNN 2017, International Joint Conference on Neural Networks. Anchorage.
  • Montgomery, D. (2009). Introduction to Statistical Quality Control. John Wiley & Sons.
  • Morales, F. A. O., Cury, A., Peixoto, R.A.F. (2019). Analysis of thermal and damage effects over structural modal parameters, Structural Engineering and Mechanics 65(1):43-51.
  • Mousavi, A. A., Zhang, C., Masri, S. F., Gholipour, G. (2022). Structural damage detection method based on the complete ensemble empirical mode decomposition with adaptive noise: A model steel truss bridge case study. Structural Health Monitoring 21(3):887-912.
  • Møller, M. F. (1993). A scaled conjugate gradient algorithm for fast supervised learning. Neural Networks 6(4):525-533.
  • Ng, A.S (2011). Sparse autoencoder. CS294A Lecture Notes. Stanford University.
  • Nunes, L. A., Amaral, R. P. F., Barbosa, F. S., Cury, A. C. (2021). A hybrid learning strategy for structural damage detection. Structural Health Monitoring 20(4):2143-2160.
  • Raschka, S. (2018). Model evaluation, model selection, and algorithm selection in machine learning. arXiv preprint arXiv:1811.12808.
  • Rosso, M. M., Aloisio, A., Cirrincione, G., Marano, G. C. (2023). Subspace features and statistical indicators for neural network-based damage detection. Structures 56:104792.
  • Sun, L., Shang, Z., Xia, Y., Bhowmick, S., Nagarajaiah, S. (2020). Review of bridge structural health monitoring aided by big data and artificial intelligence: From condition assessment to damage detection. Journal of Structural Engineering 146(5):04020073.
  • Touati, R., Mignotte, M., Dahmane, M. (2020). Anomaly feature learning for unsupervised change detection in heterogeneous images: A deep sparse residual model. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 13:588-600.
  • Umar, S., Vafaei, M., Alih, S. C. (2021). Sensor clustering-based approach for structural damage identification under ambient vibration. Automation in Construction 121:103433.
  • Wah, W. S. L., Chen, Y. T., Owen, J. S. (2021). A regression-based damage detection method for structures subjected to changing environmental and operational conditions. Engineering Structures 228:111462.
  • Wang, Z., Cha, Y. J. (2021). Unsupervised deep learning approach using a deep auto-encoder with a one-class support vector machine to detect structural damage. Structural Health Monitoring 20(1):406-425.
  • Wang, Z., Yang, D. H., Yi, T. H., Zhang, G. H., Han, J. G. (2022). Eliminating environmental and operational effects on structural modal frequency: A comprehensive review. Structural Control and Health Monitoring 29(11):e3073.
  • Yang, Z., Xu, B., Luo, W., Chen, F. (2022). Autoencoder-based representation learning and its application in intelligent fault diagnosis: A review. Measurement 189:110460.
  • Zhan, J., Zhang, F., Siahkouhi, M. (2021). A Step-by-Step Damage Identification Method Based on Frequency Response Function and Cross Signature Assurance Criterion. Sensors 21(4):1029.
  • Zhang, C., Mousavi, A. A., Masri, S. F., Gholipour, G., Yan, K., Li, X. (2022). Vibration feature extraction using signal processing techniques for structural health monitoring: A review. Mechanical Systems and Signal Processing 177:109175.

Edited by

Editor: Pablo Andrés Muñoz Rojas

Publication Dates

  • Publication in this collection
    17 Nov 2023
  • Date of issue
    2023

History

  • Received
    22 Mar 2023
  • Reviewed
    18 Sept 2023
  • Accepted
    03 Oct 2023
  • Published
    04 Oct 2023