1 INTRODUCTION

Laminated composite materials are made from layers which usually consist of two distinct phases: matrix and fibers. The fibers (e.g. carbon, glass, aramid) are embedded in a polymeric matrix. In a single layer (or ply) the fibers are oriented in a certain direction and, when many layers are stacked together to form a laminate, each one can be tailored with a different orientation. Specific properties and mechanical behavior of laminated composites can be found in ^{Jones (1999)}, ^{Staab (1999)} or ^{Mendonça (2005)}. The definition of the best orientation angle for each ply usually demands a great amount of computational time for complex structures, even when optimization algorithms are used. One option to reduce the computational cost of the optimization process is to use metamodels. A metamodel (also called surrogate model) is a mathematical model trained to represent or simulate a phenomenon. The training process is performed based on input-desired output training pairs, that is, the metamodel receives inputs and desired outputs and its parameters are adjusted in order to reduce the error between the metamodel output and the desired output. The selection of the metamodel training inputs can be performed randomly or planned so as to explore a representative set of values of the design variables. The latter approach is called design of experiments (DOE).

This work aims to analyze the buckling of laminated composites using lamination parameters, neural networks and support vector regression. Knowledge of the critical buckling load is important in the design of thin structures such as laminated composites. The first step of our approach is to apply the Latin hypercube DOE technique (^{Meyers and Montgomery, 2002}) to obtain the stacking sequence samples for the training input laminates. The second step is to convert the laminate stacking sequences (i.e., the sets of ply angle orientations) into lamination parameters. This is done in order to have a constant number of inputs for the metamodel. Then, the buckling load for each laminate configuration is computed with the commercial finite element code Abaqus. This is the third step and provides the outputs for training the metamodel. Once the training pairs are obtained, the final step is to train the metamodels and evaluate their performances in a test case. Hence, the objective of this study is to compare the ability of neural network and support vector regression metamodels in estimating the buckling load. The buckling load response computed by finite elements and its approximations by the neural network and by support vector regression are presented in order to verify which of these metamodels is the most suitable for the present application. A statistical analysis using the correlation factor is adopted to compare the performance of the different metamodels.

2 BUCKLING OF LAMINATED COMPOSITES

As the structures of laminated composites are usually thin and subject to compressive loads, they are susceptible to buckling. Buckling analyses are, in general, conducted to maximize the buckling load in order to obtain a reliable, safe and lightweight structure. General buckling theory for laminated composites can be found in ^{Jones (1999)}. More specifically, ^{Leissa (1983)} presented classical buckling studies with mathematical and physical approaches, treating the subject with classical bifurcation buckling analysis, plate equations and their solutions. Analytical prediction of buckling and postbuckling behavior is presented by ^{Arnold and Mayers (1984)}. ^{Sundaresan et al. (1996)} studied the buckling of thick laminated rectangular plates adopting Mindlin's first-order shear deformation theory and von Karman's theory. Critical buckling loads of rectangular laminated composite plates were also studied by ^{Shukla et al. (2005)}, using Chebyshev series for spatial discretization and reporting results for different boundary conditions. ^{Geier et al. (2002)} presented the influence of the stacking sequence on buckling. ^{Baba (2007)} studied the buckling behavior of laminated composite plates focusing on the influence of the boundary conditions, based on experimental and numerical analyses. Since finding the best stacking sequence is a complex, large-scale problem with a high computational cost, optimization techniques are usually implemented. For example, ^{Erdal and Sonmez (2005)} used the simulated annealing algorithm to find the optimum design of composite laminates for maximum buckling load. The ant colony algorithm was applied to an optimum buckling design by ^{Wang et al. (2010)}. ^{Kalnins et al. (2009)} used the metamodel approach for optimization of postbuckling response. Artificial neural networks have been used by ^{Bezerra et al. (2007)} to approximate the shear mechanical properties of reinforced composites. ^{Reddy et al. (2011)} optimized the stacking sequence of laminated composite plates applying neural networks. Their work included experimental and finite element analyses for minimizing deflections and stresses. In the validation step, they evaluated the quality of the predicted outputs against the experimentally measured outputs using a regression coefficient. The high correlation between the values predicted by the neural network and the finite element analysis validated the metamodel. ^{Reddy et al. (2012)} studied the D-optimal design sampling plan and artificial neural networks for the natural frequency prediction of laminated composite plates. ^{Todoroki et al. (2011)} proposed a Kriging surrogate model in order to predict and maximize the fracture load of laminated composites using lamination parameters. Lamination parameters were also adopted by ^{Liu et al. (2010)} in the optimization of blended composite wing panels using the smeared stiffness technique, and by ^{Herencia et al. (2007)} in the optimization of long anisotropic fiber-reinforced laminated composite panels with T-shaped stiffeners.

This work presents artificial neural network (ANN or simply NN) and support vector machine (SVM) metamodels of a laminated composite plate. The metamodels are trained with lamination parameters as inputs. The desired outputs are the composite plate buckling loads, which are computed by a finite element model. The goal is to investigate which metamodel performs better in this application. The methodology of design of experiments and metamodeling is applied in a numerical case to obtain the buckling response, and the results are presented and discussed. A summary of the learning formulation, lamination parameters and the Latin hypercube sampling technique is also presented.

2.1 Lamination parameters

Lamination parameters are the extension of the invariant concepts for a lamina to a laminate proposed by ^{Tsai and Pagano (1968)} and are described here based on ^{Jones (1999)} and ^{Foldager et al. (1998)}.

The transformed stiffnesses of an orthotropic composite ply are written as

where

Looking at Eq. (1), it is difficult to understand what happens to a laminate when it is rotated. Motivated by that fact, ^{Tsai and Pagano (1968)} recast the transformed stiffnesses and obtained what are called invariants, which are written as

The invariants can be used to rewrite the transformed stiffnesses of an orthotropic lamina as

Looking at Eq. (4), it is possible to note that the transformed stiffness elements Q̄_{11}, Q̄_{22}, Q̄_{12} and Q̄_{66} computed with the invariants have as first terms in their equations the invariants *U*_{1}, *U*_{4} and *U*_{5}, which depend only on the material properties. The use of invariants makes it easier to understand how the lamina stiffness is composed. For example, Q̄_{11} is determined by *U*_{1}, plus a second term of low-frequency variation with θ and another term of higher frequency. Hence, *U*_{1} represents an effective measure of lamina stiffness because it is not influenced by orientation (^{Jones, 1999}).

The laminate stiffness matrices [*A*], [*B*] and [*D*], in terms of a matrix of invariants [*U*] and the lamination parameters {*ξ*}^{A,B,D}, can be written in a vector form as

where *t* is the total thickness of the laminate and [*U*] is the matrix of the invariant set, given by

The lamination parameters are written as
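In the notation of ^{Tsai and Pagano (1968)}, these parameters take the following commonly used form (a sketch of the standard definitions, with *z* the through-thickness coordinate measured from the midplane; normalization factors may differ between authors):

```latex
\begin{aligned}
\{\xi\}^{A}_{1,2,3,4} &= \frac{1}{t}\int_{-t/2}^{t/2}
  \left[\cos 2\theta,\ \sin 2\theta,\ \cos 4\theta,\ \sin 4\theta\right]\mathrm{d}z \\
\{\xi\}^{B}_{1,2,3,4} &= \frac{4}{t^{2}}\int_{-t/2}^{t/2}
  z\left[\cos 2\theta,\ \sin 2\theta,\ \cos 4\theta,\ \sin 4\theta\right]\mathrm{d}z \\
\{\xi\}^{D}_{1,2,3,4} &= \frac{12}{t^{3}}\int_{-t/2}^{t/2}
  z^{2}\left[\cos 2\theta,\ \sin 2\theta,\ \cos 4\theta,\ \sin 4\theta\right]\mathrm{d}z
\end{aligned}
```

where θ = θ(z) is the ply orientation at height *z*. With these normalizations every lamination parameter lies in [-1, 1].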

The matrix [*A*] and the vector {*A*} have the following correspondence:

and the same is done for matrices [*B*] and [*D*].

In the context of metamodels, the main advantage of using lamination parameters is that an arbitrary number of layers with different orientations can be converted into just twelve lamination parameters. That is, the number of inputs of the metamodel becomes constant and it is not necessary to train a different metamodel when the number of layers is changed. Looking at Eq. (7), it is possible to note that the lamination parameters depend on the total thickness of the laminate, which means that for metamodeling purposes the number of layers can vary, but the total thickness must remain constant.
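The conversion can be sketched in a few lines of Python (a minimal illustration assuming equal ply thicknesses; the function name is ours, and the 1/t, 4/t² and 12/t³ normalizations follow the standard Tsai-Pagano definitions):

```python
import numpy as np

def lamination_parameters(angles_deg, ply_thickness=1.0):
    """Convert a stacking sequence (ply angles, bottom to top) into the
    twelve lamination parameters (xi_A, xi_B, xi_D), four per matrix."""
    th = np.radians(np.asarray(angles_deg, dtype=float))
    n = len(th)
    t = n * ply_thickness
    # ply interface coordinates measured from the laminate midplane
    z = np.linspace(-t / 2.0, t / 2.0, n + 1)
    # trigonometric functions of the ply angles, shape (4, n)
    f = np.array([np.cos(2 * th), np.sin(2 * th),
                  np.cos(4 * th), np.sin(4 * th)])
    dz1 = z[1:] - z[:-1]                  # per-ply integral of dz
    dz2 = (z[1:]**2 - z[:-1]**2) / 2.0    # per-ply integral of z dz
    dz3 = (z[1:]**3 - z[:-1]**3) / 3.0    # per-ply integral of z^2 dz
    xi_A = (f @ dz1) / t
    xi_B = 4.0 * (f @ dz2) / t**2
    xi_D = 12.0 * (f @ dz3) / t**3
    return xi_A, xi_B, xi_D
```

For a unidirectional [0]_{4} laminate this returns ξ^{A} = ξ^{D} = (1, 0, 1, 0) and ξ^{B} = 0, as expected for a symmetric lay-up.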

3 DESIGN OF EXPERIMENTS AND METAMODELING

As already mentioned, the technique to plan the number and location of the sampling points in the design space is called design of experiments (DOE). A metamodel or surrogate model is generated from experimental tests or computer simulations. It uses mathematical functions to approximate highly complex objective functions in design problems (^{Liao et al., 2008}). The metamodeling approach (sometimes also called response surface methodology) has been used for the development and improvement of designs and processes. ^{Meyers and Montgomery (2002)} defined the response surface as a collection of statistical and mathematical techniques for developing, improving and optimizing processes. ^{Simpson et al. (2008)} and ^{Wang and Shan (2007)} reviewed this subject, focusing on sampling, model fitting, model validation, design space exploration and optimization methods. An approximate function or response is sought based on sequential exploration of the region of interest. The basic metamodeling technique has the following steps: sample the design space, build a metamodel and validate the metamodel (^{Wang and Shan, 2007}). There are many experimental design plans to sample the space, such as central composite, Box-Behnken, Latin hypercube and Monte Carlo (^{Meyers and Montgomery, 2002}). A review and comparison of several sampling criteria are presented by ^{Janouchova and Kucerova (2013)}. There are also many metamodeling techniques, such as Kriging, radial basis functions, artificial neural networks, decision trees and support vector machines (^{Wang and Shan, 2007} and ^{Vapnik, 2000}). ^{Suttorp and Igel (2006)} defined the support vector machine as a learning machine strategy based on a learning algorithm and on a specific kernel that computes the inner product of an input set of points in a feature space. In this work, the Latin hypercube design is applied, and two metamodels based on learning machines, the neural network and support vector regression (SVR), are used.
In this case, learning is a problem of function estimation based on empirical data, as explained by ^{Vapnik (2000)}, who proposed this approach in the 1970s. In the analysis of learning processes, an inductive principle is applied and the machine learning algorithm is generated from this principle. The learning machine model is usually represented by three components (^{Vapnik, 2000}): first, a generator of random vectors G(**x**), the input data; second, a supervisor (S) which returns an output value **y**; and third, a learning machine (LM) that implements a set of functions approximating the supervisor's response **ỹ**. Figure 1 shows a schematic representation of a learning model.

3.1 Latin Hypercube Design

Latin hypercube (LH) design is a DOE strategy to choose the sampling points. ^{Pan et al. (2010)} stated that LH is one of the "space-filling" methods, in which all regions of the design space are considered equal and the sampled points fill the entire design space uniformly. ^{Forrester et al. (2008)} explained that LH sampling generates points by stratification of the sampling plan on all of its dimensions. Random points are generated by projections onto the variable axes in a uniform distribution. A Latin square, or square matrix *n* x *n*, is built by filling every column and every line with a permutation of {1, 2, ... , *n*}, as stated by ^{Forrester et al. (2008)}. In this way, every number is selected only once along each axis. A simple example of a Latin hypercube with *n* = 4 is presented in Table 1, where each line represents one sample that constructs the input sampling data with uniform distribution.

The variables are uniformly distributed in the range [0,1]. The normalization is used for the multidimensional hypercube, where the samples of size *n* x *m* variables are randomly selected considering *m* permutations of the sequence of integers 1, 2, ... , *n* and assigning them to each of the *m* columns of the table (^{Cheng and Druzdzel, 2000}). In the multidimensional Latin hypercube the design space is divided into equal-sized hypercubes and a point is placed in each one (^{Forrester et al., 2008}). As an example, a ten-point Latin hypercube sampling plan for a laminate with two layers is shown in Figure 2. The design, based on two variables (the lamination angles, ranging from 0º to 90º), is selected randomly with a uniform distribution. If the problem has numerous design variables, the computational demand also increases. However, using a DOE technique such as the Latin hypercube, data sampling can be generated that better represents the design space; as a consequence, more accurate models can be obtained with fewer points, decreasing the computational time.
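The stratified sampling just described can be sketched as follows (a minimal sketch, assuming a normalized [0, 1] range for every variable; the function name is illustrative):

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng=None):
    """Latin hypercube sample of n_samples points in [0, 1]^n_vars.
    Each axis is split into n_samples equal strata; a random permutation
    assigns exactly one stratum per sample on each axis, and the point is
    placed at a random position inside its stratum."""
    rng = np.random.default_rng(rng)
    samples = np.empty((n_samples, n_vars))
    for j in range(n_vars):
        perm = rng.permutation(n_samples)   # one stratum index per sample
        samples[:, j] = (perm + rng.random(n_samples)) / n_samples
    return samples
```

Scaling each column to a variable's physical range (e.g. ply angles from 0º to 90º) then yields a sampling plan like the one in Figure 2.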

In this work, the ply angle orientations are converted into lamination parameters in order to keep the number of inputs constant for the metamodels, independently of the number of layers of the laminate.

Figure 3 shows the design space of a four-layer symmetric laminate, which has only two design variables (θ_{1} and θ_{2}). This design space considers the ply angular orientations varying in the range [0 90] with increments of one degree (blue points). The red points are the laminates selected by the LHS. Figure 4 shows the design space of the same laminate, but considering lamination parameters. That is, the lamination parameters are computed for each laminate of Fig. 3 and shown as the blue dots in Fig. 4. The laminates selected by the LHS are also highlighted in the lamination parameters design space as the red dots. It is possible to see that they are uniformly distributed in both design spaces.

These figures also show that the angular orientation design space is completely filled and each laminate has a corresponding lamination parameter. On the other hand, the lamination parameters design space ranges from -1 to 1, but there are some empty regions in the space. This means that some lamination parameters do not have a corresponding laminate. It can thus be concluded that, for optimization purposes, it is not possible to consider only the lamination parameters design space, because the result can be a nonexistent laminate. ^{Foldager et al. (1998)} presented an approach to deal with this fact.

3.2 Neural network

Neural network studies started at the end of the 19th and the beginning of the 20th century. By this time researchers had started to think about general learning theories, vision, conditioning, etc. The mathematical model of a neuron was created in the 1940s and the first applications of neural networks came about ten years later with the perceptron and the ADALINE (ADAptive Linear Neuron). The ADALINE and the perceptron are similar single-layer neural networks. The first uses a linear transfer function and the second a hard limiting function. The perceptron is trained with the perceptron learning rule and the ADALINE with the Least Mean Squared (LMS) algorithm. They both have the same limitation: they can only solve linearly separable problems. This limitation was overcome in the 1980s with the development of the backpropagation algorithm, a generalization of the LMS algorithm used to train multilayer networks. Like the LMS, backpropagation is an approximate steepest descent algorithm. The difference is that in the ADALINE the error is a linear explicit function of the network weights and its derivatives with respect to the weights can be easily computed. Backpropagation, in turn, applies to multilayer networks with nonlinear transfer functions, and the computation of the derivatives with respect to the weights requires the chain rule. This means that the error must be backpropagated through the multilayer neural network in the process of updating the weights (^{Hagan et al., 1996}).

Figure 5 shows the model of a neuron used in a multilayer neural network.

It receives inputs (x_{i}) that are multiplied by synaptic weights (w_{ki}). The bias (b_{k}) is a weight with a constant input equal to one. The sum of these weighted inputs (v_{k}) has its amplitude limited by the activation function (φ(.)). The neuron output (y_{k}) is given by
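In code, the neuron model above reads as follows (a minimal sketch; the logistic sigmoid is one possible choice for the activation function φ):

```python
import math

def neuron_output(x, w, b):
    """Single-neuron model: weighted sum of the inputs plus bias,
    passed through a sigmoid activation function."""
    v = sum(wi * xi for wi, xi in zip(w, x)) + b  # v_k = sum_i w_ki * x_i + b_k
    return 1.0 / (1.0 + math.exp(-v))             # y_k = phi(v_k)
```

With zero weights and bias the weighted sum v_{k} vanishes and the sigmoid returns 0.5, its midpoint.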

A multilayer network is represented in Figure 6. It has one input layer, one layer of hidden neurons and one layer of output neurons. By adding one or more hidden layers, the network is able to extract higher-order statistics, which is important when the size of the input layer is large (^{Haykin, 1999}).

The neural network can be trained by supervised or unsupervised learning. Unsupervised learning uses a competitive learning rule and is not the subject of this work. Supervised learning needs a desired output to be compared with the neural network output, returning an error. The learning process is based on error correction. The error measure is a nonlinear function of the network parameters (weights and biases), and numerical methods for the minimization of functions may be applied. The error minimization is performed by adjusting the weights and biases using the backpropagation algorithm or one of its variations, described here based on ^{Haykin (1999)}.
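This supervised error-correction cycle can be sketched in a toy example (an illustration only: the target function, network size, iteration count and learning rate are assumptions of ours, and plain full-batch gradient descent is used rather than any of the faster variants):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training pairs: approximate y = sin(x) on [0, pi]
X = np.linspace(0.0, np.pi, 20).reshape(-1, 1)
Y = np.sin(X)

# One hidden layer with tanh activation, linear output layer
n_hidden = 10
W1 = rng.normal(scale=0.5, size=(1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
b2 = np.zeros(1)

def forward(X):
    H = np.tanh(X @ W1 + b1)   # hidden-layer outputs
    return H, H @ W2 + b2      # network output

lr = 0.1
for _ in range(10000):
    H, out = forward(X)
    e = out - Y  # error to be backpropagated
    # output-layer gradient direction (constant factors absorbed into lr)
    gW2 = H.T @ e / len(X)
    gb2 = e.mean(axis=0)
    # hidden-layer gradients: error propagated back through W2 and tanh'
    dH = (e @ W2.T) * (1.0 - H ** 2)
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    # weight and bias update (steepest descent)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((forward(X)[1] - Y) ** 2).mean())
```

The `dH` line is the heart of backpropagation: the output error is multiplied by the transposed output weights and by the derivative of the hidden activation, exactly the chain-rule step the text describes.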

The function to minimize is the mean squared error, which is written as

where {*W*_{k}} is a vector with the weights and bias of neuron *k* of the neural network and {*e*_{k}} is the error vector at the *n*-th iteration.

The weights and bias are updated according to the following rules

where [*W*] is a matrix with the weights of the *m*-th neural network layer, *γ* is the learning rate and {*S*}^{m} is the vector of sensitivities of layer *m*, which is given by

*R*_{m} is the number of neurons at layer *m*. The sensitivity of layer *m* is computed from the sensitivity of layer *m*+1. This defines the recurrence relationship through which the neurons in the hidden layers are charged with their share of the neural network error.

In order to explain the recurrence relationship, consider the following Jacobian matrix,

Considering, for example, the element *ij* of this Jacobian matrix,

In matrix form:

where,

The recurrence relationship is written as

in which it is possible to see that the sensitivity at layer *m* is computed using the sensitivity at layer *m*+1.
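Written out in the notation of ^{Hagan et al. (1996)}, this recurrence takes the following standard form (a sketch, where [Ḟ^{m}] denotes the diagonal matrix of activation-function derivatives evaluated at the net inputs {n}^{m} of layer *m*):

```latex
\{S\}^{m} = \left[\dot{F}^{m}(\{n\}^{m})\right][W^{m+1}]^{T}\{S\}^{m+1},
\qquad
\{S\}^{M} = -2\left[\dot{F}^{M}(\{n\}^{M})\right]\{e\},
```

where *M* is the output layer, at which the recursion is initialized directly from the error vector {*e*}.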

This work uses one of the variations of the backpropagation algorithm known as the Levenberg-Marquardt method, proposed by ^{Levenberg (1944)}. It is a variation of Newton's method that does not require the calculation of second derivatives and, in general, provides a good rate of convergence at a low computational effort.
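In its standard form, the Levenberg-Marquardt weight update can be sketched as (with [J] the Jacobian of the errors with respect to the weights and μ an adaptive damping parameter; both symbols are introduced here only for illustration):

```latex
\Delta\{W\} = -\left([J]^{T}[J] + \mu[I]\right)^{-1}[J]^{T}\{e\}
```

For μ → 0 this approaches the Gauss-Newton step, while for large μ it reduces to a small steepest-descent step, which is why the method avoids explicit second derivatives yet converges quickly near a minimum.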

3.3 Support Vector Regression

The support vector machine theory is described in ^{Vapnik (2000)}. An overview of statistical learning theory, including the function estimation model, problems of risk minimization, the learning problem and the empirical risk principle that inspired the support vector machine, is reported in ^{Vapnik (1999)}, and the learning machine capacities were stated by ^{Vapnik (1993)}. A background on SVM and SVR can be found in ^{Vapnik and Vashist (2009)}, a tutorial in ^{Smola and Schölkopf (2004)}, and additional explanations in ^{Sánchez A. (2003)}, ^{Pan et al. (2010)}, ^{Üstün et al. (2007)}, ^{Suttorp and Igel (2006)}, ^{Che (2013)}, ^{Basak et al. (2007)} and ^{Ben-Hur et al. (2001)}.

The support vector machine is based on a learning method using training procedures (^{Sánchez A., 2003}). The learning machine is employed for solving classification or regression problems, since it can model nonlinear data in a high dimensional feature space by applying kernel functions, as stated by ^{Üstün et al. (2007)}. Kernel methods have the capacity to transform the original input space into a high dimensional feature space. The decision function of support vector classification or support vector regression is determined by the support vectors, as reported by ^{Guo and Zhang (2007)} and ^{Boser et al. (1992)}. The difference between classification and regression is that in classification the support vectors generate the hyperplane, i.e., a function that classifies a set of samples (for example, in pattern recognition), while in regression the support vectors determine the approximation function. A short review of SVR, which is a variation of SVM, is given below. The SVR technique searches for the multivariate regression function f(x) based on the input data set, i.e., the training set X, to predict the output data (^{Vapnik, 1993}, ^{Vapnik, 1999}, ^{Vapnik, 2000}, ^{Smola and Schölkopf, 2004}, ^{Üstün et al., 2007}, ^{Guo and Zhang, 2007}). The equation of the regression function is given by
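In the standard form of ^{Smola and Schölkopf (2004)}, this regression function reads:

```latex
f(x) = \sum_{i=1}^{n}\left(\alpha_{i}-\alpha_{i}^{*}\right)K(x_{i},x) + b
```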

where *K*(*x*_{i}, *x*) is a kernel, *n* is the number of training data, *b* is an offset parameter of the model and *α*_{i}, *α*_{i}^{*} are Lagrange multipliers of the primal-dual formulation of the problem (^{Smola and Schölkopf, 2004}). The vector of the training data set is written as

where *x*_{i} is the *i*-th input vector for the *i*-th training sample and *y*_{i} is the target value or output vector for the *i*-th training sample. The fitness function or the approximation model is considered good if the output of the SVR regression *f*(*x*) is quite similar to the required output vector *y*_{i}. The kernel *K* represents an inner product of the kernel function *ϕ*. Polynomials, splines and radial basis functions are examples of kernel functions. The kernel is, in general, a nonlinear mapping from an input space onto a characteristic space formulated as

Kernel transformation handles non-linear relationships in the data in an easier way (^{Üstün et al., 2007}). As described by ^{Che (2013)}, for nonlinear regression problems the kernel function represents the extension of the linear support vector regression to a higher dimensional space. The SVR for nonlinear functions is based on the dual formulation utilizing Lagrange multipliers. The parameter b can be obtained through the so-called Karush-Kuhn-Tucker conditions (^{Smola and Schölkopf, 2004}, ^{Vapnik, 2000}) from the theory of constrained optimization, which must be satisfied at the optimal point, considering the constraints 0 ≤ α_{i}, α_{i}^{*} ≤ C. In order to find a model or, in other words, an approximated function, the objective function to be minimized is

subject to

where α_{i}, α_{i}^{*} are the weights found by minimizing the function, and e and C are the optimization parameters. The constants C and e determine the accuracy of the SVR models. The best combination of these parameters means that the surrogate model achieves a good fit. ^{Smola and Schölkopf (2004)} explained that the regularization constant C determines the trade-off between the training error and model complexity. The parameter e is associated with the precision of the feasible convex optimization problem, i.e., the "soft margin", or the amount of deviation tolerated by the loss function.
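For reference, the dual problem described here is commonly stated as follows (a sketch of the standard ε-SVR dual, writing ε for the parameter denoted e in the text):

```latex
\begin{aligned}
\min_{\alpha,\,\alpha^{*}}\quad
& \frac{1}{2}\sum_{i,j=1}^{n}(\alpha_{i}-\alpha_{i}^{*})(\alpha_{j}-\alpha_{j}^{*})K(x_{i},x_{j})
  + \varepsilon\sum_{i=1}^{n}(\alpha_{i}+\alpha_{i}^{*})
  - \sum_{i=1}^{n} y_{i}(\alpha_{i}-\alpha_{i}^{*})\\
\text{subject to}\quad
& \sum_{i=1}^{n}(\alpha_{i}-\alpha_{i}^{*}) = 0,
  \qquad \alpha_{i},\ \alpha_{i}^{*} \in [0, C].
\end{aligned}
```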

The transformed regression function of SVR, based on support vector, can be reformulated as (^{Guo and Zhang, 2007})

where SV is the support vector set. The transformed regression problem may be solved, for example, by quadratic programming, and only the input data corresponding to the non-zero α_{i} and α_{i}^{*} contribute to the final regression model (^{Vapnik, 2000}, ^{Üstün et al., 2007}, ^{Smola and Schölkopf, 2004}). The corresponding inputs are called support vectors.
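Evaluating this support-vector expansion is straightforward once the multipliers are known (a minimal sketch with a Gaussian RBF kernel; the multiplier values, kernel width and function names below are illustrative assumptions, not a trained model):

```python
import numpy as np

def rbf_kernel(xi, x, gamma=1.0):
    """Gaussian RBF kernel K(xi, x) = exp(-gamma * ||xi - x||^2)."""
    return np.exp(-gamma * np.sum((xi - x) ** 2))

def svr_predict(x, support_vectors, alpha_diff, b, gamma=1.0):
    """f(x) = sum over SV of (alpha_i - alpha_i*) K(x_i, x) + b,
    with alpha_diff holding the differences (alpha_i - alpha_i*)."""
    return sum(a * rbf_kernel(xi, x, gamma)
               for a, xi in zip(alpha_diff, support_vectors)) + b
```

Far from every support vector the kernel terms vanish and the prediction reduces to the offset b, which is one way to see the local nature of the RBF expansion.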

The architecture of a regression machine is depicted graphically in Figure 7, with the different steps of the support vector algorithm. The input of the support vector for the training process is mapped into a feature space by a map ϕ. The kernel evaluation step is processed as a dot product of the training data under the map ϕ (^{Smola and Schölkopf, 2004}). The feature space for the nonlinear transformation based on support vectors is achieved with an appropriate kernel function, such as a polynomial of degree d, a Gaussian RBF or splines (^{Sánchez A., 2003}). The output prediction in the feature space is obtained with the weights (α_{i} - α_{i}^{*}) and the term b, as in Eq. (22). The support vector machine constructs the decision function in the learning machine process. After the training steps are concluded, a test vector is applied in order to verify the results and validate the metamodel. To check the results, the correlation factor R2 is computed (^{Meyers and Montgomery, 2002}, ^{Reddy et al., 2011}) as

where t_{j} are the targets or experimental values and o_{j} are the outputs or predicted values from SVR. This regression coefficient estimates the correlation between SVR predicted values and target values.
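This check is easy to reproduce (a sketch assuming the common coefficient-of-determination form R² = 1 − SS_res/SS_tot; the function name is ours):

```python
def r_squared(targets, outputs):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot,
    comparing target values t_j with predicted values o_j."""
    t_mean = sum(targets) / len(targets)
    ss_res = sum((t - o) ** 2 for t, o in zip(targets, outputs))  # residual sum of squares
    ss_tot = sum((t - t_mean) ** 2 for t in targets)              # total sum of squares
    return 1.0 - ss_res / ss_tot
```

A perfect prediction gives R² = 1, while predicting the target mean everywhere gives R² = 0, which is the baseline the metamodels must beat.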

4 NUMERICAL RESULTS

The laminated structure analyzed here is taken from the work of ^{Varelis and Saravanos (2004)}.

Figure 8 shows its geometry and engineering properties.

As already mentioned, the Latin hypercube scheme is used to define the samples. Data samples are different stacking sequence laminates for which the buckling load is computed in order to provide the input-output training pairs for the metamodels. The buckling load is obtained using the commercial finite element package Abaqus. A neural network metamodel and a support vector machine metamodel are trained, and their performances are compared.

4.1 Latin hypercube sampling data

The LH plan is applied to select representative stacking sequences of symmetric laminates with 24 layers and discrete angles [0º, ±45º, 90º]. The following stacking sequence is assumed: [±*θ*_{1} ±*θ*_{2} ±*θ*_{3} ±*θ*_{4} ±*θ*_{5} *θ*_{6} *θ*_{7}]_{S}. As can be observed, the laminate is not balanced and there are 7 design variables. 120 laminates are generated by LHS, and their buckling loads are computed by a numerical model built in the finite element package Abaqus. Then, as a first attempt, 35 sampling points (laminates) were selected for the training step of both metamodels. This quantity is based on the studies of ^{Pan et al. (2010)} and ^{Yang and Gu (2004)}, who used 5N_{dv}, where N_{dv} is the total number of design variables. Table 2 presents the 35 laminates selected, considering N_{dv} = 7, to train the metamodels and the corresponding buckling loads obtained via finite element simulation (Abaqus).

Laminate | Stacking sequence | Buckling load (Abaqus model) |
---|---|---|

1 | [±45_{4} 0_{2} 90 0]_{S} | 52413.17 |
2 | [±45 0_{2} ±45 0_{2} ±45 45 90]_{S} | 46059.99 |
3 | [0_{2} 90_{2} 0_{2} ±45_{2} 90 90]_{S} | 38352.41 |
4 | [90_{2} ±45 90_{2} ±45 0 0]_{S} | 32987.67 |
5 | [0_{2} 90_{2} 0_{2} ±45 0_{2} 0 45]_{S} | 36067.43 |
6 | [90_{2} 0_{2} 90_{2} ±45_{2} 90 45]_{S} | 35698.26 |
7 | [±45_{3} 0_{2} 90_{2} 45 0]_{S} | 51492.20 |
8 | [90_{2} 0_{2} 90_{4} ±45 0 0]_{S} | 33306.83 |
9 | [0_{2} ±45_{4} 0 90]_{S} | 44831.61 |
10 | [0_{4} 90_{2} 0_{4} 45 45]_{S} | 34240.61 |
11 | [±45 90_{2} 0_{4} ±45 45 0]_{S} | 44256.14 |
12 | [±45 90_{2} 0_{2} ±45 90_{2} 0 45]_{S} | 44033.28 |
13 | [±45 90_{2} 0_{2} ±45 90_{2} 0 45]_{S} | 44033.28 |
14 | [±45 0_{2} ±45 90_{2} ±45 0 90]_{S} | 47174.09 |
15 | [90_{2} ±45_{3} 0_{2} 90 90]_{S} | 41610.35 |
16 | [0_{2} ±45 0_{4} ±45 0 45]_{S} | 38821.80 |
17 | [±45_{3} 90_{2} ±45 90 0]_{S} | 50675.00 |
18 | [±45_{4} 0_{2} 45 45]_{S} | 51962.65 |
19 | [±45 90_{4} 0_{2} 90_{2} 45 0]_{S} | 39785.67 |
20 | [90_{4} 0_{2} ±45_{2} 0 90]_{S} | 31729.92 |
21 | [0_{2} 90_{2} ±45 0_{4} 45 90]_{S} | 39949.64 |
22 | [90_{2} ±45_{2} 0_{4} 45 45]_{S} | 41260.40 |
23 | [±45 0_{4} ±45_{2} 45 0]_{S} | 43927.37 |
24 | [±45_{2} 0_{2} ±45_{2} 90 45]_{S} | 50418.32 |
25 | [±45_{3} 0_{2} ±45 90 45]_{S} | 51798.71 |
26 | [±45 90_{2} 0_{4} 90_{2} 0 0]_{S} | 43572.26 |
27 | [0_{2} ±45 90_{2} ±45 0_{2} 45 90]_{S} | 42277.67 |
28 | [90_{2} ±45 90_{4} ±45 45 45]_{S} | 33522.56 |
29 | [90_{2} 0_{4} ±45_{2} 45 45]_{S} | 39165.87 |
30 | [90_{2} 0_{2} 90_{2} 0_{2} 90_{2} 0 45]_{S} | 35190.60 |
31 | [±45 90_{4} ±45_{2} 90 45]_{S} | 40925.76 |
32 | [±45_{3} 90_{2} ±45 0 45]_{S} | 50589.38 |
33 | [0_{2} ±45 0_{2} ±45 0_{2} 45 0]_{S} | 40169.62 |
34 | [±45 0_{2} ±45 0_{4} 45 90]_{S} | 45359.92 |
35 | [0_{2} 90_{2} ±45_{3} 90 90]_{S} | 42074.75 |

The pairs of laminate angular orientation and corresponding buckling loads may be converted in pairs of lamination parameters and corresponding buckling loads, as exposed in section 2.1. The advantage of doing this is that the number of lamination parameters is constant. This means that, for metamodels training purposes, the number of inputs became constant. The lamination parameters are computed with Eq. (7) and the results are shown in Table 3.

4.2 Support Vector Regression Metamodel

A support vector regression script was developed in the Python language. The input data generated from the Latin hypercube is the computational experiment matrix with 35 stacking sequences converted to lamination parameters, as shown in Table 3. A Gaussian RBF was adopted as the kernel function. The training data sampled with Latin hypercube sampling and the corresponding buckling loads are presented in Figure 9.

The statistical learning approach described above, applied for metamodel approximation with C = 1e10 and e = 0.155, resulted in the best-fitting support vector regression. To estimate the quality of the regression, the correlation factor was computed as in Eq. 24, and the corresponding correlation plot was generated. The correlation factor calculated for the sampling set was R2 = 0.999994. This value means that the SVR applied in this case produced a good approximate response.

In order to validate the model, 15 new samples were used to test the metamodel. Table 4 presents these validation samples for the laminated composites. The parameters C = 1e10 and e = 0.155, obtained from the training process, were considered in this step. Figure 10 shows the validation results for the buckling load. The output vectors were also quite close, with a correlation factor R2 = 0.99999870. Based on these results, it is possible to conclude that support vector regression is a good supervised learning method for modeling the critical buckling load of laminated composite plates.

Latin hypercube training samples and the corresponding responses

| Laminate | Stacking sequence | Buckling load |
|---|---|---|
| 1 | [±45 90_{2} ±45_{3} 45 45]_{S} | 45545.04 |
| 2 | [90_{4} ±45 0_{2} ±45 45 90]_{S} | 32924.13 |
| 3 | [0_{4} ±45_{2} 0_{2} 90 90]_{S} | 37879.82 |
| 4 | [90_{2} ±45_{2} 90_{2} 0_{2} 0 0]_{S} | 53369.56 |
| 5 | [±45 0_{4} 90_{4} 0 0]_{S} | 43153.43 |
| 6 | [0_{2} ±45_{3} 90_{2} 45 45]_{S} | 44933.25 |
| 7 | [±45 90_{2} 0_{4} ±45 0 45]_{S} | 44237.63 |
| 8 | [±45 90_{2} 0_{4} ±45 0 45]_{S} | 44237.63 |
| 9 | [±45_{2} 90_{2} ±45_{2} 45 45]_{S} | 48430.77 |
| 10 | [90_{4} 0_{2} 90_{2} 0_{2} 45 90]_{S} | 30251.10 |
| 11 | [±45 90_{2} ±45_{2} 0_{2} 0 45]_{S} | 45477.40 |
| 12 | [90_{2} ±45 90_{2} 0_{2} ±45 90 45]_{S} | 45850.84 |
| 13 | [0_{4} ±45_{3} ±45 0]_{S} | 38248.11 |
| 14 | [90_{4} ±45 0_{2} ±45 0 45]_{S} | 32989.45 |
| 15 | [±45_{5} 45 90]_{S} | 52232.50 |
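The conversion of stacking sequences like those in the table above into metamodel inputs can be sketched as follows. This assumes the standard definition of the flexural lamination parameters, ξ_D[k] = (12/h³) ∫ z² f_k(θ(z)) dz with f = (cos 2θ, sin 2θ, cos 4θ, sin 4θ); the ply ordering and uniform ply thickness are illustrative assumptions, not the paper's exact script.

```python
# Hedged sketch: stacking sequence -> flexural lamination parameters.
import numpy as np

def flexural_lamination_parameters(angles_deg, ply_thickness=1.0):
    """angles_deg: ply angles listed from bottom to top of the laminate."""
    n = len(angles_deg)
    h = n * ply_thickness
    # z-coordinates of ply interfaces, measured from the mid-plane.
    z = np.linspace(-h / 2.0, h / 2.0, n + 1)
    theta = np.radians(np.asarray(angles_deg, float))
    funcs = np.stack([np.cos(2 * theta), np.sin(2 * theta),
                      np.cos(4 * theta), np.sin(4 * theta)])
    # Exact per-ply integral of z^2 over [z_k, z_{k+1}].
    w = (z[1:] ** 3 - z[:-1] ** 3) / 3.0
    return (12.0 / h ** 3) * funcs @ w

# Example: a symmetric [45/-45/-45/45] laminate (bottom to top).
xi = flexural_lamination_parameters([45, -45, -45, 45])
print(xi)  # -> approximately [0.0, 0.75, -1.0, 0.0]
```

Because the parameters are integrals through the thickness, laminates with different ply counts but the same total thickness map onto the same four-dimensional input space, which is what allows a single metamodel to cover them.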

4.3 Neural Network Metamodel

In this work, the neural network is trained using the Matlab Neural Network Toolbox (^{MATLAB, 2010}). To allow a fair comparison between the neural network and the SVR, the training data are the same for both metamodels. The network has one hidden layer with 10 neurons. The neural network training results are presented in Figure 11, which compares the neural network outputs with the target outputs. The neural network is well trained, which is confirmed by the linear regression with a correlation factor equal to 1.
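For readers without access to the MATLAB toolbox, a rough Python analogue of the network described above (one hidden layer, 10 neurons) can be sketched with scikit-learn; this is an illustrative substitute with toy data, not the authors' setup.

```python
# Rough analogue of the one-hidden-layer, 10-neuron network (illustrative).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(35, 4))           # 4 lamination parameters
y = np.sin(X @ np.array([1.0, 0.5, -0.3, 0.2]))    # placeholder response

nn = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                  solver="lbfgs", max_iter=5000, random_state=0)
nn.fit(X, y)
print(nn.score(X, y))  # training R^2
```

As the text discusses next, a near-perfect training score with few samples should be checked against a held-out validation set, since it may indicate memorization rather than learning.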

As was done for the SVR, 15 new samples are presented to the neural network in order to validate the model. The neural network and finite element outputs are shown in Figure 12. The neural network is not capable of predicting all outputs correctly, which is confirmed by the linear regression with a correlation factor equal to 0.800753 in the validation process. This result may be caused by an overfitted neural network, meaning that the network merely memorized the samples but did not learn the system behavior; the training correlation factor equal to 1 corroborates that hypothesis. Another possibility is that the number of samples used to train the neural network is not enough for it to learn the system behavior. To verify this possibility, more samples were provided for the neural network training. Results for NN training with 80 samples and NN validation with 15 samples are shown in Figure 13. The neural network improved with more samples: in this case the correlation factor was R2 = 0.951890. Even so, under the same conditions, the SVR performed better in approximating the system in question.

These results show that the neural network needs more training pairs to learn the behavior of the system. One reason why the SVR outperformed the NN may lie in the radial basis function used by the SVR as kernel function, which yielded a better nonlinear regression than the neural network approximation. The neural network is based on the Empirical Risk Minimization inductive principle, which approximates the response well only for large sample sets. The SVR, on the other hand, applies the Structural Risk Minimization inductive principle, which is based on a subset of the samples. ^{Vapnik (2000)} presented this approach and applied statistical learning theory. Furthermore, the fact that SVR training is rooted in convex optimization, while the neural network minimizes its error by the backpropagation algorithm, can also explain the results.

Besides the application of the SVR and NN metamodels, another contribution of this work is the use of lamination parameters, instead of ply orientation angles, as inputs for the metamodels in a composite structure analysis. This makes it possible to use the same metamodel to represent laminates with different numbers of layers: metamodels trained with input-output pairs representing a 24-layer laminate can be used for laminates with a different number of layers, as long as the total thickness remains the same.

5 CONCLUSIONS

The computational time can be burdensome in the optimization of laminated composite designs. Motivated by this fact, this paper investigated the performance of two metamodels employed to approximate the buckling load of composite plates. The best metamodel will be used in future studies of buckling load optimization in order to speed up the optimization process. The neural network and support vector regression metamodels are obtained by supervised learning, whose training is based on input-desired output pairs. In order to select representative samples to construct the metamodels, the Latin hypercube design was used, and, to keep a constant number of inputs, the ply orientations were converted into lamination parameters. This step makes it possible for a metamodel trained with a given number of layers to represent a laminate with a different number of layers, as long as the total thickness of the laminate remains constant. The desired outputs are the laminate buckling loads computed by a finite element model. The neural network and support vector regression were trained and tested with the same input-desired output pairs. The SVR presented better results than the neural network: the radial basis function applied in the SVR as kernel function constructed a better nonlinear regression than the neural network approximation for the training data, and the kernel function outperformed the hidden layer for the nonlinear response in this study. The fact that SVR training is rooted in convex optimization, while the neural network minimizes its error by the backpropagation algorithm, also explains the results. The two metamodels, NN and SVR, have different error minimization techniques, and SVR proved the better one for the test cases presented here.