Journal of the Brazilian Society of Mechanical Sciences and Engineering

Print version ISSN 1678-5878

J. Braz. Soc. Mech. Sci. & Eng. vol.32 no.1 Rio de Janeiro Jan./Mar. 2010

http://dx.doi.org/10.1590/S1678-58782010000100012 

TECHNICAL PAPERS

 

Multiobjective optimization techniques applied to engineering problems

 

 

Lidiane Sartini de Oliveira (lid_sartini@yahoo.com.br), Laboratório de Projetos Mecânicos, Federal University of Uberlândia - UFU
Sezimária F. P. Saramago (saramago@ufu.br), College of Mathematics, Federal University of Uberlândia - UFU, 38408-100, Uberlândia, MG, Brazil

 

 


ABSTRACT

Optimization problems often involve situations in which the user's goal is to minimize and/or maximize not a single objective function, but several, usually conflicting, functions simultaneously. Such situations are formulated as multiobjective optimization problems, also known as multicriteria, multiperformance or vector optimization problems. Because multiobjective optimization problems arise in different scientific applications, many researchers have focused on developing methods for their solution, and several criteria can be considered to solve such complex optimizations. This paper contributes to the study of optimization problems by comparing some of these methods. The classical approach, based on function scalarization, in which a vector function is transformed into a scalar function, is represented here by the weighted objectives and global criterion methods. A different approach, comprising the hierarchical, trade-off and goal programming methods, treats the objective functions as additional constraints. Some multicriteria optimization problems are given to illustrate each methodology studied here. The techniques are initially applied to an environmentally friendly and economically feasible electric power distribution problem. The second application involves a dynamics optimization problem aimed at optimizing the first three natural frequencies.

Keywords: multiobjective optimization, weighted objectives, hierarchy, trade-off, global criteria


 

 

Introduction

Many real-world engineering design or decision-making problems involve the simultaneous optimization of multiple conflicting objectives. A problem is a multiobjective optimization problem when the goal is to simultaneously minimize or maximize several objective functions, which often conflict with one another (Eschenauer et al., 1990). In such cases, one must look for an optimal vector, defined in the n-dimensional Euclidean space En of design variables, that offers the best solution with respect to a vector of objective functions defined in the k-dimensional Euclidean space Ek (Osyczka, 1981).

Such problems may be subject to constraints, and all the related functions may be nonlinear. Several criteria can be considered to solve these complex optimization problems. The same values of the design variables are unlikely to yield the best values of all the objectives simultaneously; hence, some trade-offs between the objectives are needed to ensure a satisfactory design. Because system efficiency indices may differ (and be mutually contradictory), it is reasonable to use the multiobjective approach to optimize the overall efficiency. This can be done in a mathematically correct way only if a principle of optimality is used. We have used the Pareto-optimality principle, whose essence is as follows: a solution to the multiobjective optimization problem is Pareto-optimal if no other solution satisfies all the objectives at least as well. In other words, other solutions may better satisfy one or several objectives, but they are necessarily less satisfactory than the Pareto-optimal solution in satisfying the remaining objectives. The result of a multiobjective optimization problem is therefore a full set of Pareto-optimal solutions. As a rule, it is impossible to find the full, infinite set of Pareto-optimal solutions of a particular real-life problem. Therefore, the aim of a multiobjective engineering problem is to determine a finite subset of criteria-distinguishable Pareto-optimal solutions.

The scope of this research involves investigations into multiobjective optimization techniques. Some classical methods are based on scalarizing the functions, with the vector objective function transformed into a scalar one, while others treat the objective functions as additional constraints.

Multiobjective optimization problems appear in a variety of scientific applications, and several researchers have focused on developing methods to solve them. Yoshimura et al. (2005) utilize the formulation of multiobjective optimization problems, particularly the hierarchical optimization structure, to reformulate design problems and gain a deeper understanding of their solutions; the problems they study involve a variety of performance characteristics such as accuracy, operating efficiency, manufacturing cost, and energy consumed during use. Knowles (2005) uses multiobjective optimization in scenarios where the evaluation of each solution is financially and/or temporally expensive; that paper evaluates several algorithms and proposes a hybrid optimization algorithm. Parsons and Scott (2004) applied multiobjective optimization to maritime design problems; their article discusses a methodology to help design teams select the best solution from a set of Pareto-optimal solutions at a minor additional computational cost. Andersson (2003) applied a multiobjective genetic algorithm to the design of a hydraulic pump, formulating the problem with dynamic simulation models, response surfaces, and static equations. Vankan and Maas (2002) presented approximate models and a genetic optimization algorithm for detailed aeronautical design based on the use of multiobjective functions. Ambrósio (2002), in turn, used multiobjective programming as an instrument for agro-environmental planning. Saramago and Steffen (2002) presented the solution of a multiobjective optimization problem to identify the best trajectory of a handling robot in the presence of moving obstacles, minimizing both the total time required to traverse its predefined route and the mechanical energy of its actuators.

This paper contributes to the study of various methods which are potentially applicable for the solution of multiobjective optimization problems. To illustrate the methodologies in question, we discuss the solutions to two problems. In the first application, the techniques are applied to an environmentally friendly and economically feasible electric power distribution problem, where the aim is to select generating unit outputs that meet the demand at minimum operating cost and minimal pollution and atmospheric emissions.

The second example involves the dynamics optimization proposed by Faria (1991) and studied by Oliveira and Saramago (2004), which considers a cantilevered beam whose free extremity contains a mass-spring system. The objective is to maximize the first natural frequency and to distance the first three natural frequencies from each other.

 

Nomenclature

ai, bi, ci = cost coefficients of the i-th generator
E = modulus of elasticity, N/m2
Ek = k-dimensional Euclidean space
En = n-dimensional Euclidean space
ƒ(x) = vector of objective functions
Fc = total fuel cost of the generator problem, $/h
Fe = total emission of atmospheric pollutants, ton/h
ƒi° = minimum value of each i-th function
ƒ° = ideal vector for a multiobjective optimization problem
gj(x) = inequality constraint functions
hr(x) = equality constraint functions
K = spring stiffness
L = length of each element of the cantilever beam, m
Lp = metrics in the global criterion method
ms = mass of the mass-spring system, kg
Mviga = mass of the cantilever beam, kg
n; p = deviation variables in the goal programming method
NOx = nitrogen oxides
PD = total power demand, MW
Pi = real power output of the i-th generator, MW
Pinf = minimum power of each i-th generator, MW
Ploss = real power loss in the transmission lines, MW
Psup = maximum power of each i-th generator, MW
Rn = n-dimensional real space
ri = constant multipliers in the weighting objectives method
s = constant of the metrics in the global criterion method
SOx = sulfur oxides
t = set of design goals
T = transpose operator
tinf = lower limit of the desired goal
tsup = upper limit of the desired goal
wi = weighting coefficients
x = decision variable vector
x* = optimum vector
xo(i) = optimum vector for each i-th objective function
xiinf, xisup = side constraints

Greek Symbols

αi, φi, γi, ξi, λi = emission characteristic coefficients
βj = weighting coefficients
ξt = limit values of the objective functions
ξh = coefficients of the function increments or decrements
Δfi = function increments of the pay-off table
ρ = density, kg/m3
Ω = feasible region

 

Optimization Problem

Multiobjective optimization involves the minimization of a vector of objectives F(x) that can be subject to a number of constraints or bounds:
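In the symbols of the nomenclature, such a problem can be stated as:

```latex
\begin{aligned}
\min_{x \in E^{n}} \quad & f(x) = \left[ f_1(x), f_2(x), \ldots, f_k(x) \right]^{T} \\
\text{subject to} \quad & g_j(x) \leq 0, \quad j = 1, \ldots, m, \\
& h_r(x) = 0, \quad r = 1, \ldots, p, \\
& x_i^{\inf} \leq x_i \leq x_i^{\sup}, \quad i = 1, \ldots, n.
\end{aligned}
```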

where x is the vector of decision variables, ƒ(x) is the vector of objective functions, and gj(x) and hr(x) are the inequality and equality constraint functions. Two Euclidean spaces are considered in this problem: the n-dimensional space of the decision variables and the k-dimensional space of the objective functions. The constraints given by (2) define the feasible region Ω:

Note that, because ƒ(x) is a vector, if any of the components of ƒ(x) compete with each other, the problem has no single solution. This poses a dilemma, i.e., what solution should be adopted? The answer to this question requires the definition of two important terms.

First, let us consider the so-called ideal solution. Let xo(i) be a vector of variables which optimizes (either minimizes or maximizes) the i-th objective function ƒi(x). This solution is called the "ideal solution" for that objective. To determine it, one must find the minimum attainable value of each objective function separately. Assuming that this minimum can be found, the vector xo(i) is:

where ƒi° indicates the minimum value of each i-th function.

The vector ƒ° = [ƒ1°, ƒ2°, ..., ƒk°]T is the ideal vector for a multiobjective optimization problem.

The second term is Pareto-optimal. In the presence of several objective functions, the notion of "optimum" changes because the aim is to find good compromises. The notion of optimum was generalized by Pareto (1896) and continues to be very important in multiobjective analyses. A common way of stating this optimum is as follows:

A point x* ∈ Ω is Pareto-optimal if, for every x ∈ Ω, either

or there is at least one i ∈ I such that

This definition is based upon the intuitive conviction that the point x* is defined as optimal if no objective can be improved without worsening at least one other objective. Unfortunately, Pareto's optimum usually gives not one single solution, but a set of solutions called non-inferior or non-dominated solutions.
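As a minimal illustration of this definition (a sketch, not taken from the paper), a dominance test and a non-dominated filter for a minimization problem can be written as:

```python
import numpy as np

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization):
    fa is no worse in every objective and strictly better in at least one."""
    fa, fb = np.asarray(fa, float), np.asarray(fb, float)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def pareto_filter(points):
    """Keep only the non-dominated (non-inferior) objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (3, 3) is dominated by (2, 2); the other three points are mutually
# non-dominated and form the Pareto set of this small example.
front = pareto_filter([(1, 5), (2, 2), (5, 1), (3, 3)])
```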

 

Some Classic Methods of Multiobjective Optimization

Weighting Objectives Method

The weighted sum strategy converts the multiobjective problem of optimizing the vector ƒ(x) into a scalar problem by building a weighted sum of all the objectives:

where

ri are constant multipliers and wi > 0 are the weighting coefficients representing the relative importance of each criterion. Objective weighting is the most common scalar substitute model for vector optimization problems. The difficulty lies in attaching weighting coefficients to each of the objectives: the weighting coefficients do not necessarily correspond directly to the relative importance of the objectives, nor do they allow trade-offs between the objectives to be expressed. For the weights wi to reflect the importance of the objectives closely when numerical methods seek the optimum of Eq. (7), all the functions should be expressed in units yielding numerical values of approximately the same magnitude.

The best results are usually obtained with ri = 1/ƒi°, where ƒi° represents the ideal solution of the i-th objective. Another usual way of writing these coefficients is to use the initial values of the objective functions, ri = 1/ƒi(x°).
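The weighted objectives method can be sketched as follows (an illustrative Python/SciPy example with two hypothetical one-variable objectives, not the functions of the paper):

```python
import numpy as np
from scipy.optimize import minimize

# Two conflicting objectives of one design variable (hypothetical example):
# f1 pulls the optimum toward x = 0, f2 toward x = 2.
f1 = lambda x: float(x[0] ** 2)
f2 = lambda x: float((x[0] - 2.0) ** 2)

def weighted_sum(w1, w2, x0=(1.0,)):
    """Minimize the scalarized objective w1*f1 + w2*f2."""
    res = minimize(lambda x: w1 * f1(x) + w2 * f2(x), x0)
    return res.x[0]

# Sweeping the weights traces points of the Pareto set; for this convex
# pair the analytic optimum is x = 2*w2 / (w1 + w2).
pareto_points = [weighted_sum(w, 1.0 - w) for w in (0.2, 0.5, 0.8)]
```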

Hierarchical Optimization Method

Consider the situation where a user wants to organize the objective functions in terms of importance. Let the numbering 1 to k reflect this order in the sense that the first criterion is the most important one and the k-th criterion is the least important (Osyckza, 1981). In the hierarchical optimization method this order is obeyed, with each objective function optimized separately and a new constraint - which depends on the other objective functions - added in each step. This method can be described as follows:

(1) Find the optimum of the first (most important) objective function.

(2) Find the optimum of the i-th objective function, i = 2, ..., k, i.e.,

where ξh are the assumed coefficients of the function increments or decrements, given as percentages. The sign '+' refers to the functions that are to be minimized, whereas '-' refers to the functions that are to be maximized. Thus, the optimal solution given by the hierarchical method is the point x* obtained in the last step.
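The stepwise procedure above might be sketched as follows (an illustrative Python/SciPy example with hypothetical objectives; the relaxation bound uses the '+' sign, i.e., both functions are minimized):

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def hierarchical(objectives, xi_percent, x0):
    """Optimize the objectives in order of importance; after each step, keep
    the previous optimum within a xi% increment as an added constraint."""
    cons, x = [], np.asarray(x0, dtype=float)
    for f, xi in zip(objectives, xi_percent):
        res = minimize(f, x, constraints=cons, method="SLSQP")
        x = res.x
        bound = res.fun * (1.0 + xi / 100.0)  # '+' sign: minimization
        cons = cons + [NonlinearConstraint(f, -np.inf, bound)]
    return x

# Most important objective first; a 100% increment of its optimum (value 1
# at x = 1) is allowed while the second objective is minimized.
f1 = lambda x: float((x[0] - 1.0) ** 2 + 1.0)
f2 = lambda x: float((x[0] - 3.0) ** 2)
x_opt = hierarchical([f1, f2], [100.0, 0.0], [0.0])
```

With the 100% increment, step 2 may move x anywhere in [0, 2] (where f1 ≤ 2), so the second objective pushes the solution to x = 2.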

 

Trade-off Method

This method involves optimizing a primary objective, ƒr(x), and expressing the other objectives in the form of inequality constraints. Thus, ƒr is called the main objective and ƒi(x), for i = 1, ..., k with i ≠ r, are called secondary or side objectives. A method is classified in the trade-off category if the concept of trading a value of one objective function for a value of another is used to determine the next step in the search for the optimal solution. This concept is realized by optimizing one of the criteria and treating the others as flexible constraints. Hence, this method is also called the constraint method or ξ-constraint method (Osyczka, 1981), and it can be executed as shown below:

(1) Find the optimum of the r-th objective function, i. e., find x* such that

subject to additional constraints:

For a minimization problem:

For a maximization problem:

where ξt are the assumed values that cannot be exceeded by the objective functions.

(2) Repeat the process (1) for values different from ξt. A good choice of the set ξ t can be useful in the decision. The search is interrupted when the decision maker finds a satisfactory solution.

It may be necessary to repeat the above procedure for different r indices.
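The basic loop can be sketched as follows (an illustrative Python/SciPy example with hypothetical objectives; ƒ1 is the main objective and ƒ2 is moved into a flexible constraint):

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

f1 = lambda x: float(x[0] ** 2)            # main objective
f2 = lambda x: float((x[0] - 2.0) ** 2)    # side objective

def trade_off(eps, x0=(1.5,)):
    """Minimize f1 subject to f2(x) <= eps (constraint method);
    eps is the assumed value that f2 may not exceed."""
    con = NonlinearConstraint(f2, -np.inf, eps)
    return minimize(f1, x0, constraints=[con], method="SLSQP").x[0]

# Relaxing eps trades f2 quality for f1 quality: with eps = 1 the feasible
# set is 1 <= x <= 3 and the constrained optimum of f1 is x = 1.
solutions = [trade_off(eps) for eps in (0.25, 1.0, 4.0)]
```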

A problem with this method, however, is making a suitable selection of ξt to ensure a feasible solution. A further disadvantage of this approach is that the use of hard constraints is rarely adequate for expressing true design objectives, and such information is difficult to express at early stages of the optimization cycle. In order to make a reasonable choice of ξt, it is often useful to optimize each objective function separately, i.e., to find ƒi° for i = 1, ..., k. Knowing these values, the additional constraints (11) can be written as follows:

For a minimization problem:

For a maximization problem:

where Δƒji are the assumed values of function increments given by:

A pay-off table can be constructed, as shown in Tab. 1. In this table, row i corresponds to the optimal solution xo(i), which optimizes the i-th objective function, and ƒji is the value taken by the j-th function ƒj(x) when the i-th function ƒi(x) reaches its optimum ƒi°.

 

 

To obtain the function increments, it may also be convenient to build the pay-off table using Δƒji, given by Eq. (13), as indicated in Tab. 2.

 

 

Similarly, the pay-off table can be built using relative increments of the functions:

Global Criterion Method

In this technique, the multicriteria optimization problem is transformed into a scalar optimization problem by using a global criterion. The function that describes this global criterion must be defined, so that a possible solution close to the ideal solution is found. This function is usually written as follows:

Boychuk and Ovchinnikov (1973) used Eq. (15) adopting s = 1, and Salukvadze (1974) considered s = 2. Other values of s can also be used, but the solution obtained after minimizing Eq. (15) differs greatly according to the chosen value of s.

The global criterion method can also be applied using a family of Lp-metrics defined as:

where, for example, for s = 1 and s = 2:

Note that the minimization of L2(ƒ) is equivalent to the minimization of the Euclidean distance between the value of the function and the ideal solution.

Instead of working with distance in an absolute sense, it is recommended to use relative distances, so Eq. (16) can be rewritten as:

In this case, the values of every normalized function are limited to the interval [0,1].
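A minimal sketch of the relative Lp-metric minimization (illustrative Python/SciPy; hypothetical objectives with nonzero ideal values so that the relative metric is well defined):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical objectives with ideal values f1° = f2° = 1 (attained at
# x = 0 and x = 2, respectively).
objectives = [lambda x: float(x[0] ** 2 + 1.0),
              lambda x: float((x[0] - 2.0) ** 2 + 1.0)]
f_ideal = [1.0, 1.0]

def global_criterion(s, x0=(0.5,)):
    """Minimize the relative L_s distance to the ideal solution."""
    def Ls(x):
        return sum(abs((f(x) - fo) / fo) ** s
                   for f, fo in zip(objectives, f_ideal)) ** (1.0 / s)
    return minimize(Ls, x0, method="Nelder-Mead").x[0]

# By symmetry, the compromise solution sits midway between the two
# individual optima for this pair of objectives.
x2 = global_criterion(2)
```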

Goal Programming Method

This method requires that the researcher specify goals for each objective he/she desires to reach. The main idea in goal programming is to find solutions that attain a predefined target for one or more objective functions. If no solution reaches predefined goals in all the objective functions, the task is to find solutions that minimize deviations from those goals (Deb, 2001). Thus, the method described here involves expressing a set of design goals, which is associated with a set of objectives,

The quantitative values of the goals are treated as additional constraints. Thus, new variables are added to represent deviations from the predefined goals. In goal programming, the user chooses a target value t for every objective function, and the task is then to find a solution whose objective value is equal to t, subject to the condition that the resulting solution is feasible (x ∈ Ω).

The optimization problem can be formulated as

where Ω is the feasible search region.

The functions in Eq. (20) are written using 'equal-to' type goals. However, these functions can be written using four different types, as shown below (Steuer, 1986):

1. Less-than-or-equal-to t (ƒ(x) ≤ t)

2. Greater-than-or-equal-to t (ƒ(x) ≥ t)

3. Equal-to t (ƒ(x) = t)

4. Within a range (ƒ(x) ∈ [tinf, tsup])

In order to meet the above goals, two non-negative deviation variables (n and p) are usually introduced. For the less-than-or-equal-to type goal (ƒ(x) ≤ t), the positive deviation p is subtracted from the objective function. The deviation p quantifies the amount by which the objective function has exceeded the target t, and the objective of goal programming is to minimize p.

For the greater-than-or-equal-to type goal (ƒ(x) ≥ t), a negative deviation n is added to the objective function. Here, the deviation n quantifies the amount by which the objective function has fallen short of the target t, and the objective of goal programming is to minimize n.

For the equal-to type goal (ƒ(x) = t), the objective function must have the target value t, and, therefore, both positive and negative deviations are used, so ƒ(x) - p + n = t. Here, the objective of the method is to minimize the summation (p + n), so the optimal solution is minimally distant from the goal in either direction.

Thus, to solve a goal programming problem, each goal is converted into at least one equality constraint, and the objective is to minimize all the p and n deviations. This formulation allows the objectives to be under- or overachieved, enabling the designer to be relatively imprecise about the initial design goals. The relative degree of under- or overachievement of the goals is controlled by the vectors of weighting coefficients, w and β, and the problem is expressed as a standard optimization problem using the following formulation:

Minimize

Subject to

Here, the parameters wj and βj are weighting factors that penalize the deviations of the j-th objective from the j-th goal. These terms introduce an element of slackness into the problem, which would otherwise require the goals to be met strictly. The weighting vectors, w and β, enable the designer to express a measure of the relative trade-offs between the objectives. Usually, the weighting factors wj and βj are defined by the decision-maker, so the solution obtained by goal programming depends on their choice. This provides a conveniently intuitive interpretation of the design problem, which is solvable using standard optimization procedures.
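Putting the pieces together, a minimal sketch of the weighted goal-programming formulation with 'equal-to' goals (illustrative Python/SciPy; hypothetical objectives and targets, with the decision vector augmented by the deviation variables p and n):

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def goal_programming(objectives, targets, w, beta, x0):
    """Minimize sum w_j p_j + beta_j n_j subject to the 'equal-to' goals
    f_j(x) - p_j + n_j = t_j, with deviation variables p, n >= 0."""
    nx, k = len(x0), len(objectives)
    # decision vector z = [x..., p_1..p_k, n_1..n_k]
    cost = lambda z: float(np.dot(w, z[nx:nx + k]) + np.dot(beta, z[nx + k:]))
    cons = [NonlinearConstraint(
                lambda z, j=j: objectives[j](z[:nx]) - z[nx + j] + z[nx + k + j],
                targets[j], targets[j])
            for j in range(k)]
    bounds = [(None, None)] * nx + [(0.0, None)] * (2 * k)
    z0 = np.concatenate([np.asarray(x0, float), np.zeros(2 * k)])
    res = minimize(cost, z0, constraints=cons, bounds=bounds, method="SLSQP")
    return res.x[:nx], res.x[nx:nx + k], res.x[nx + k:]

# Both targets are the (jointly unattainable) individual ideal values 0;
# with equal weights the best compromise is x = 1 with deviations p = (1, 1).
f1 = lambda x: float(x[0] ** 2)
f2 = lambda x: float((x[0] - 2.0) ** 2)
x, p, n = goal_programming([f1, f2], [0.0, 0.0], [1.0, 1.0], [1.0, 1.0], [0.5])
```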

 

Numerical Simulations

Application 1: Environmentally Friendly and Economically Feasible Distribution Problem

The aim of environmentally friendly, economically feasible distribution of electric power is to select generating unit outputs that meet the demand at a minimum operating cost and that cause minimal pollution and atmospheric emissions, while satisfying all the unit and system constraints. Thus, the objective is to minimize two competing objective functions, fuel cost and emissions, while satisfying equality and side constraints. The vector of real power outputs of the ith-generators is represented by P = [P1, P2, ..., Pn]T. The generator cost curves Fc(P) are represented by quadratic functions. Thus, the total $/h fuel cost can be expressed as:
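With quadratic cost curves, the total fuel cost takes the form:

```latex
F_c(P) = \sum_{i=1}^{n} \left( a_i + b_i P_i + c_i P_i^{2} \right) \quad [\$/\mathrm{h}]
```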

where n is the number of generators, ai, bi, and ci are the cost coefficients of the ith-generator, and Pi is the real power output of the ith-generator.

The total ton/h emission Fe(P) of atmospheric pollutants such as sulfur oxides, SOx, and nitrogen oxides, NOx, caused by fossil-fueled thermal units can be expressed as:
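A commonly used form of this emission function, combining a quadratic term and an exponential term as in Abido (2003), is sketched below (a reconstruction; the exact expression used in the paper may differ):

```latex
F_e(P) = \sum_{i=1}^{n} \left[ 10^{-2} \left( \alpha_i + \varphi_i P_i + \gamma_i P_i^{2} \right) + \xi_i \exp\!\left( \lambda_i P_i \right) \right] \quad [\mathrm{ton/h}]
```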

where αi, φi, γi, ξi and λi are emission characteristic coefficients of the ith-generator.

For stable operations, the real power output of each generator is restricted by lower and upper limits (side constraints), as follows:
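In symbols:

```latex
P^{\inf} \leq P_i \leq P^{\sup}, \quad i = 1, \ldots, n.
```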

The total power generation must cover the total demand PD and the real power loss in transmission lines Ploss. Hence,
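That is:

```latex
\sum_{i=1}^{n} P_i = P_D + P_{loss}.
```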

The n = 6 generators are considered; their cost coefficients and emission characteristic coefficients, according to Abido (2003), are given in Tab. 3, with Pinf = 10 MW, Psup = 120 MW and PD + Ploss = 283 MW. Thus, the formulation of the optimization problem is given by:

 

 

In this example, the optimization problem was investigated using a sequential quadratic programming technique (Grace, 1992) of the Matlab Toolbox.

The ideal solution was calculated by applying Eq. (2); the results obtained were Fc = 599.22 $/h and Fe = 0.19 ton/h.

a) Weighting Objectives Method

According to Eq. (7), the optimization problem is formulated as:

Table 4 indicates that the optimal values depend on the weighting coefficients wi, as shown in Fig. 1, which represents the curve of the optimal Pareto set.

 

 

 

 

b) Hierarchical Optimization Method

Let us consider the case where the priority is to minimize the fuel cost function.

Step 1: min Fc(P)
subject to constraints (29)

Step 2: min Fe(P)
subject to constraints (29) and the additional constraint:

The optimal solutions are sensitive to the variation of the coefficient ξh, as can be noted in Tab. 5. Moreover, as this coefficient increases, the optimal solution approaches the ideal solution of the emission function.

 

 

c) Trade-off Method

Let us consider the example where the main objective is the fuel cost function.

Min Fc(P)
subject to constraints (29)
and the additional constraint:

Using the pay-off table of Oliveira (2005), one finds that the relative increment limit is ξt2 = 0.14. Problem (33) was solved adopting increment values around this limit, and the results are shown in Tab. 6.

 

 

 

 

As can be seen, the ideal solution for Fc(P) is obtained at high values of ξt2. For values below ξt2, greater importance is given to the minimization of Fe(P).

d) Global Criterion Method

The multiobjective problem (28) subject to constraints (29) was solved considering the following metrics:

The results obtained with the relative L2-Metric and relative L3-Metric are similar and offer a good compromise between the two objective functions, as indicated in Fig. 2.

 

 

On the other hand, the optimal solution for the other metrics is close to the ideal solution for Fc(P), since the absolute value of Fc(P) predominates over that of Fe(P).

e) Goal Programming Method

Let us now consider the solution for the environmentally and economically feasible energy distribution problem, Eqs. (28) and (29), based on the goal established for each objective function as its ideal value. Thus, the optimization problem can be rewritten as:

Minimize ƒ(P) = w1p1 + w2p2

The results obtained are presented in Tab. 8, which indicates that this method provided satisfactory results in relation to the solutions obtained through the previous methods. When the priority is w1, the values obtained approach the ideal value of Fc, whereas, when the priority is w2, they tend toward the ideal solution of Fe.

 

 

Application 2: Frequency Optimization of a Mass-spring System.

In this application, the simulation is done by means of a Sequential Quadratic Programming technique, using the DOT (Design Optimization Tools) program developed by Vanderplaats (1995). In this program, to execute the sequential optimization method, a pseudo-objective function is written using the Augmented Lagrange Multiplier method. The unconstrained minimization is done by the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, and the one-dimensional search uses polynomial interpolation techniques.

Let us consider the problem of dynamics optimization proposed by Faria (1991) and studied by Oliveira and Saramago (2004), which considers a cantilevered beam, such as that shown in Fig. 3, whose free extremity contains a mass-spring system.

 

 

The objective is to maximize the first natural frequency and to distance the first three natural frequencies from each other.

The beam was divided into three elements of equal lengths. There are seven design variables: the width (bi) and height (hi) of each of the three segments and the stiffness spring (K).

The Finite Element Method was used to calculate the first three natural frequencies (ω1,ω2 and ω3) applying the code developed by Faria (1991). This code was put into the DOT optimization program to solve the problem:

Subject to

The following data were considered: L = 0.1 m; ρ = 7.8 × 10³ kg/m³; E = 2.1 × 10¹¹ N/m²; ms = 0.1 kg and Mviga = 0.14625 kg. The ideal solution was calculated using Eq. (2), which led to the following results:
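Collecting the stated objectives and design variables, the vector problem can be written as (a reconstruction from the description above; the full constraint set of Eq. (41) is given in the cited works):

```latex
\max_{x} \; f(x) = \left[ \omega_1(x), \; \omega_2(x) - \omega_1(x), \; \omega_3(x) - \omega_2(x) \right]^{T},
\qquad x = \left( b_1, h_1, b_2, h_2, b_3, h_3, K \right).
```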

a) Weighting Objectives Method

The solution for the problem given by Eqs. (40) and (41) was formulated using Eq. (7) as:

where the ideal solution is given in Eq. (42).

Table 9 indicates that the optimal result is strongly dependent on the weighting coefficients; hence, when w1 = 0.8 (greater priority to ƒ1), the result is closer to the ideal solution (ƒ1 = 8.11). The same behavior holds true when ƒ2 or ƒ3 is prioritized.

b) Hierarchical Optimization Method

Case 1: The priority is to maximize the function ƒ1(x) = ω1

and

Case 2: The priority is to maximize the function ƒ2(x) = (ω2 - ω1)

and

Case 3: The priority is to maximize the function ƒ3(x) = (ω3 - ω2)

and

In these three cases, whose results are presented in Tab. 10, note that the values of the prioritized function are maximized and that the results approach the ideal values. Moreover, the results are not significantly influenced by the adopted ξh factors.

c) Trade-Off Method

The main objective is to maximize the distance between the first two natural frequencies ƒ2(x) = (ω2 - ω1)

Using the same pay-off table once more (Oliveira, 2005), one finds that the relative increment limits are ξt1 = 0.82 and ξt3 = 0.96. The results are shown in Tab. 11.

Similarly to the previous case, values less than or equal to the limits ξt1 and ξt3 approach the ideal solution. As these values are increased, the results shift away from the ideal value of ƒ2, prioritizing the functions ƒ1(x) and ƒ3(x).

d) Global Criterion Method

To solve the multiobjective optimization problem (40) subject to constraints (41), the following metrics are adopted:

The relative L2-Metric, L2-Metric and L1-Metric provide good results and represent a compromise between the three objective functions. On the other hand, in the case of the L3-Metric and the relative L3-Metric, the results remained unchanged from the initial value.

e) Goal Programming Method

Let the solution of the dynamic problem be given by Eqs. (40) and (41), considering the goals established for each objective function to be equal to the ideal values of:

ƒ1(x) = 8.11, ƒ2(x) = 10.12 and ƒ3(x) = 46.58 Hz.

The optimization problem can thus be formulated as:

Table 13 shows the optimum solutions. The result obtained by using this method is similar to the result provided by the Weighting Objectives Method, i.e., when the priority is w1, the values obtained approach the ideal value. The same behavior occurs when ƒ2 and ƒ3 are prioritized.

Observing the constraints of the problem given by Eq. (41) and the results obtained for bi, hi and K in Tabs. 9 to 13, one can see that all the optimal solution points obey the imposed constraints.

 

 

 

 

 

 

 

 

 

 

Conclusions

Five different numerical techniques were used to solve the multiobjective problem, and two examples were presented to illustrate the methodology studied here. The results indicated that the hierarchical and trade-off methods are suitable when some of the objective functions need to be prioritized; in this case, the optimal solution is strongly influenced by the choice of the most important criterion. The weighting objectives method can also be used in this situation, with the value of the weighting factor representing the priority; however, the weighting coefficients do not necessarily correspond directly to the relative importance of the objectives. The global criterion method is recommended for applications in which every objective function must be given the same level of importance. Lastly, the goal programming method requires that the researcher specify goals for each objective s/he desires to reach; an in-depth knowledge of the problem in question is therefore crucial. Successful numerical applications have demonstrated the efficiency of these techniques when they are applied to solve problems involving electrical and mechanical systems. The user thus has several tools available, and it is extremely useful to subject the same problem to different optimization techniques so that the results can be compared and the best method for each case can be chosen.

 

References

Eschenauer, H., Koski, J. and Osyczka, A., 1990, "Multicriteria Design Optimization", Springer-Verlag, Berlin.

Osyczka, A., 1981, "An approach to multicriterion optimization for structural design", Proceedings of the International Symposium on Optimum Structural Design, University of Arizona.

Yoshimura, M., Sasaki, K., Izui, K. and Nishiwaki, S., 2005, "Hierarchical multi-objective optimization methods for deeper understanding of design solutions and breakthrough for optimum design solutions", 6th World Congress on Structural and Multidisciplinary Optimization, Rio de Janeiro, Brazil.

Knowles, J., 2005, "ParEGO: a hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems", IEEE Transactions on Evolutionary Computation, Vol. 10, No. 1, pp. 50-66.

Parsons, M.G. and Scott, R.L., 2004, "Formulation of multicriterion design optimization problems for solution with scalar numerical optimization methods", Journal of Ship Research, Vol. 48, No. 1, pp. 61-76.

Andersson, J., 2003, "Applications of a multi-objective genetic algorithm to engineering design problems", EMO 2003 - 2nd International Conference on Evolutionary Multi-Criterion Optimization, Faro, Portugal, Vol. 2632, pp. 737-751.

Vankan, W.J. and Maas, R., 2002, "Approximate modelling and multi objective optimisation in aeronautic design", CMMSE, National Aerospace Laboratory NLR, Alicante, Spain.

Ambrósio, L.A., 2002, "Programação Multicritério: Um instrumento para planejamento agroambiental", Course of Specialization in Agro-environmental Management, Quantitative Laboratory of Analyses and Methods, FAEF. (In Portuguese)

Saramago, S.F.P. and Steffen Jr., V., 2002, "Trajectory modeling of robot manipulators in the presence of obstacles", Journal of Optimization Theory and Applications, Vol. 110, No. 1, pp. 17-34.

Faria, M.L.L., 1991, "Uma Contribuição aos Procedimentos de Otimização Aplicados a Sistemas Mecânicos", Master Dissertation, Federal University of Uberlândia, Uberlândia, MG, Brazil. (In Portuguese)

Oliveira, L.S. and Saramago, S.F.P., 2004, "A comparative study about some methods of the multi-objective optimization", CILAMCE - 25th Iberian Latin American Congress on Computational Methods in Engineering, Recife, Vol. 1, pp. 1-17.

Boychuk, L.M. and Ovchinnikov, V.O., 1973, "Principal methods of solution of multicriterial optimization problems (survey)", Soviet Automatic Control, Vol. 6, pp. 1-4.

Salukvadze, M.E., 1974, "On the existence of solutions in problems of optimization under vector-valued criteria", Journal of Optimization Theory and Applications, Vol. 12, No. 2, pp. 203-217.

Deb, K., 2001, "Multi-Objective Optimization Using Evolutionary Algorithms", John Wiley & Sons, pp. 77-80 and 129-135.

Steuer, R.E., 1986, "Multiple Criteria Optimization: Theory, Computation and Application", Wiley, New York.

Abido, M.A., 2003, "A niched Pareto genetic algorithm for multiobjective environmental/economic dispatch", Electrical Power and Energy Systems, Vol. 25, pp. 97-105.

Grace, A., 1992, "Optimization Toolbox - For Use with Matlab", The MathWorks Inc., Natick.

Oliveira, L.S., 2005, "A Contribution to the Study about Multicriterion Optimization Methods", Master Dissertation, Federal University of Uberlândia, Uberlândia, MG, Brazil. (In Portuguese)

Vanderplaats, G., 1995, "DOT - Design Optimization Tools Program - Users Manual", Vanderplaats Research & Development, Inc., Colorado Springs.

 

 

Paper accepted October, 2009.

 

 

Technical Editor: Fernando Antonio Forcellini

Creative Commons License: All the content of this journal, except where otherwise noted, is licensed under a Creative Commons License.