

An SLP algorithm and its application to topology optimization*

*This work was supported by CAPES and FAPESP (grant 2006/53768-0).

Francisco A. M. Gomes; Thadeu A. Senne

Departamento de Matemática Aplicada, IMECC, Universidade Estadual de Campinas, 13083-859, Campinas, SP, Brazil. E-mails: chico@ime.unicamp.br / senne@ime.unicamp.br

ABSTRACT

We introduce a globally convergent sequential linear programming method for nonlinear programming. The algorithm is applied to the solution of classic topology optimization problems, as well as to the design of compliant mechanisms. The numerical results suggest that the new algorithm is faster than the globally convergent version of the method of moving asymptotes, a popular method for mechanical engineering applications proposed by Svanberg.

Mathematical subject classification: Primary: 65K05; Secondary: 90C55.

Key words: topology optimization, compliant mechanisms, sequential linear programming, global convergence theory.

1 Introduction

Topology optimization is a computational method originally developed with the aim of finding the stiffest structure that satisfies certain conditions, such as an upper limit for the amount of material. The structure under consideration is subject to the action of external forces, and must be contained in a design domain Ω. Once the domain Ω is discretized, we associate to each one of its elements a variable χ that is set to 1 if the element belongs to the structure, or to 0 if the element is void. Since it is difficult to solve a large nonlinear problem with discrete variables, χ is replaced by a continuous variable ρ ∈ [0, 1], called the element's "density".

However, in the final structure, ρ is expected to assume only the values 0 or 1. In order to eliminate intermediate values of ρ, Bendsøe [1] introduced the Solid Isotropic Material with Penalization method (SIMP for short), which replaces ρ by the function ρ^p that controls the distribution of material. The role of the penalty parameter p > 1 is to reduce the occurrence of intermediate densities.

One of the most successful applications of topology optimization is the design of compliant mechanisms. A compliant mechanism is a structure that is flexible enough to produce a maximum deflection at a certain point and direction, but is also sufficiently stiff to support a set of external forces. Such mechanisms are used, for example, to build micro-electro-mechanical systems (MEMS).

Topology optimization problems are usually converted into nonlinear programming problems. Since these problems are huge, the iterations of the mathematical method used in their solution must be cheap. Therefore, methods that require the computation of second derivatives must be avoided. In this paper, we propose a new sequential linear programming algorithm for solving constrained nonlinear programming problems, and apply this method to the solution of topology optimization problems, including compliant mechanism design.

In the next section, we present the formulation adopted for the basic topology optimization problem, as well as for the compliant mechanism design problem. In Section 3, we introduce a globally convergent sequential linear programming algorithm for nonlinear programming. We devote Section 4 to our numerical experiments. Finally, Section 5 contains the conclusions and suggestions for future work.

2 Problem formulation

The simplest topology optimization problem is the compliance minimization of a structure (e.g. Bendsøe and Kikuchi [2]). The objective is to find the stiffest structure that fits into the domain, satisfies the boundary conditions and has a prescribed volume. After domain discretization, this problem becomes

$$\min_{\rho,\,u} \; f^T u \quad \text{s.t.} \quad K(\rho)\,u = f, \quad \sum_{i=1}^{nel} v_i \rho_i \le V, \quad 0 \le \rho_i \le 1, \; i = 1, \dots, nel, \qquad (1)$$

where nel is the number of elements of the domain, ρi is the density and vi is the volume of the i-th element, V is the upper limit for the volume of the structure, f is the vector of nodal forces associated to the external loads and K(ρ) is the stiffness matrix of the structure.

When the SIMP model is used to avoid intermediate densities, the global stiffness matrix is given by

$$K(\rho) = \sum_{i=1}^{nel} \rho_i^{\,p}\, K_i,$$

where K_i is the stiffness matrix of the i-th element.

The parameter ρmin > 0 is used to avoid zero density elements, which would imply the singularity of the stiffness matrix. Thus, for ρ ≥ ρmin, the matrix K(ρ) is invertible, and it is possible to eliminate the u variables by replacing u = K(ρ)^{-1} f in the objective function of problem (1). In this case, the problem reduces to

$$\min_{\rho} \; f^T K(\rho)^{-1} f \quad \text{s.t.} \quad \sum_{i=1}^{nel} v_i \rho_i \le V, \quad \rho_{min} \le \rho_i \le 1, \; i = 1, \dots, nel. \qquad (2)$$

This problem has only one linear inequality constraint, besides the box constraints. However, the objective function is nonlinear, and its computation requires the solution of a linear system of equations.
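
To make the cost per function evaluation concrete, the sketch below illustrates how the objective of (2) and its gradient are typically evaluated under SIMP, using the adjoint expression d(fᵀu)/dρi = −p ρi^{p−1} uiᵀ Ki ui. This is a minimal illustration in Python, not the authors' code; the data layout and function name are our own, and the fixed degrees of freedom are assumed to have been eliminated beforehand.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import spsolve

def compliance_and_gradient(rho, p, Ke, dofs, f):
    """Evaluate f^T K(rho)^{-1} f and its gradient under the SIMP model.

    rho  : (nel,) element densities, rho_min <= rho_i <= 1
    Ke   : (8, 8) stiffness matrix of a single square element
    dofs : (nel, 8) global indices of each element's degrees of freedom,
           with supports (fixed dofs) already removed
    f    : (ndof,) nodal force vector
    """
    ndof = f.size
    # Assemble K(rho) = sum_i rho_i^p K_i in sparse triplet form
    rows = np.repeat(dofs, 8, axis=1).ravel()
    cols = np.tile(dofs, (1, 8)).ravel()
    vals = (rho[:, None, None] ** p * Ke).ravel()
    K = coo_matrix((vals, (rows, cols)), shape=(ndof, ndof)).tocsc()

    u = spsolve(K, f)          # the single linear solve mentioned above
    compliance = f @ u         # f^T u = f^T K(rho)^{-1} f

    # Adjoint gradient: d(f^T u)/d(rho_i) = -p rho_i^(p-1) u_i^T K_e u_i
    ue = u[dofs]               # (nel, 8) element displacement vectors
    grad = -p * rho ** (p - 1) * np.einsum('ij,jk,ik->i', ue, Ke, ue)
    return compliance, grad
```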

2.1 Compliant mechanisms

A more complex optimization problem is the design of a compliant mechanism. Some interesting formulations for this problem were introduced by Nishiwaki et al. [14], Kikuchi et al. [10], Lima [11], Sigmund [16], Pedersen et al. [15], Min and Kim [13], and Luo et al. [12], to cite just a few.

No matter the author, each formulation eventually represents the physical structural problem by means of a nonlinear programming problem. The degree of nonlinearity of the objective function and of the problem constraints varies from one formulation to another. In this work, we adopt the formulation proposed by Nishiwaki et al. [14], although similar preliminary results were also obtained for the formulations of Sigmund [16] and Lima [11].

Considering that the mechanism belongs to a domain Ω, fixed at a region Γd of its boundary ∂Ω, Nishiwaki et al. [14] suggest decoupling the problem into two distinct load cases. In the first case, shown in Figure 1(a), a load t1 is applied to the region Γt1 ⊂ ∂Ω, and a fictitious load t2 is applied to a different region Γt2 ⊂ ∂Ω, where the deflection is supposed to occur. The optimal structure for this problem is obtained by maximizing the mutual energy of the mechanism, subject to the equilibrium and volume constraints. This problem represents the kinematic behavior of the compliant mechanism.


After the mechanism deformation, the region Γt2 eventually contacts a workpiece. In this case, the mechanism must be sufficiently rigid to resist the reaction force exerted by the workpiece and to keep its shape. This structural behavior of the mechanism is given by the second load case, shown in Figure 1(b). The objective is to minimize the mean compliance, supposing that a load is applied to Γt2, and that there is no deflection at the region Γt1.

These load cases are combined into a single optimization problem. After discretization and variable elimination, this problem becomes

$$\max_{\rho} \; \frac{f_b^T K_1(\rho)^{-1} f_a}{f_c^T K_2(\rho)^{-1} f_c} \quad \text{s.t.} \quad \sum_{i=1}^{nel} v_i \rho_i \le V, \quad \rho_{min} \le \rho_i \le 1, \; i = 1, \dots, nel,$$

where fa and fb are the vectors of nodal forces related to the loads t1 and t2 shown in Figure 1(a), fc is the vector of nodal forces related to the load —t2 shown in Figure 1(b), and K1(ρ) and K2(ρ) are the stiffness matrices associated to the first and the second load cases, respectively.

This problem has the same constraints as (2). However, the objective function is highly nonlinear, and its computation requires the solution of two linear systems of equations. Other formulations, such as the one proposed by Sigmund [16], also include constraints on the displacements at certain points of the domain, so the optimization problem becomes larger and more nonlinear.

3 Sequential linear programming

Sequential linear programming (SLP) algorithms have been used successfully in structural design (e.g. Kikuchi et al. [10]; Nishiwaki et al. [14]; Lima [11]; Sigmund [16]). This class of methods is well suited for solving large nonlinear problems because it does not require the computation of second derivatives, so the iterations are cheap.

However, for most algorithms actually used to solve topology optimization problems, global convergence results are not fully established. On the other hand, SLP algorithms that are provably globally convergent are seldom adopted in practice. In part, this is due to the fact that classical SLP algorithms, such as those presented in [21] and [8], have practical drawbacks. Besides, recent algorithms that rely on linear programming also include some sort of tangent step that uses second order information (e.g. [5] and [6]).

In this section, we describe a new SLP algorithm for the solution of constrained nonlinear programming problems. As will become clear, our algorithm is not only globally convergent, but can also be easily adapted for solving topology optimization problems.

3.1 Description of the method

Consider the nonlinear programming problem

$$\min_{x} \; f(x) \quad \text{s.t.} \quad c(x) = 0, \quad x \in X = \{x \in \mathbb{R}^n \mid x_l \le x \le x_u\}, \qquad (3)$$

where the functions f: ℝ^n → ℝ and c: ℝ^n → ℝ^m have Lipschitz continuous first derivatives, and the vectors xl, xu define the lower and upper bounds for the components of x. One should notice that, using slack variables, any nonlinear programming problem may be written in the form (3).

The linear approximations of the objective function and of the equality constraints of (3) in the neighborhood of a point x are given by

$$f(x + s) \approx f(x) + \nabla f(x)^T s, \qquad c(x + s) \approx c(x) + A(x)\, s,$$

where A(x) = [∇c1(x) ... ∇cm(x)]^T is the Jacobian matrix of the constraints. Therefore, given a point x, we can approximate (3) by the linear programming problem

$$\min_{s} \; \nabla f(x)^T s \quad \text{s.t.} \quad A(x)\, s + c(x) = 0, \quad x_l \le x + s \le x_u. \qquad (4)$$

A sequential linear programming (SLP) algorithm is an iterative method that generates and solves a sequence of linear problems in the form (4). At each iteration k of the algorithm, a previously computed point x(k) ∈ X is used to generate the linear programming problem. After finding sc, an approximate solution of (4), the variables of the original problem (3) are updated according to x(k+1) = x(k) + sc.

Unfortunately, this scheme has some pitfalls. First, problem (4) may be unbounded even if (3) has an optimal solution. Besides, the linear functions used to define (4) may be poor approximations of the actual functions f and c at a point x + s that is too far from x. To avoid these difficulties, it is usual practice to require the step s to satisfy a trust region constraint such as ||s||∞ ≤ δ, where δ > 0, the trust region radius, is updated at each iteration of the algorithm to reflect the size of the neighborhood of x where the linear programming problem is a good approximation of (3). Including the trust region in (4), we get the problem

$$\min_{s} \; \nabla f(x^{(k)})^T s \quad \text{s.t.} \quad A(x^{(k)})\, s + c(x^{(k)}) = 0, \quad s_l \le s \le s_u, \qquad (5)$$

where sl = max{−δk, xl − x(k)} and su = min{δk, xu − x(k)}, componentwise.
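
For illustration, subproblem (5) maps directly onto an off-the-shelf LP solver. Below is a minimal sketch using scipy.optimize.linprog; the solver and the function name are our choices, not part of the method itself.

```python
import numpy as np
from scipy.optimize import linprog

def lp_subproblem(grad_f, A, c, x, xl, xu, delta):
    """Solve (5): min grad_f^T s  s.t.  A s + c = 0,  sl <= s <= su."""
    sl = np.maximum(-delta, xl - x)
    su = np.minimum(delta, xu - x)
    res = linprog(grad_f, A_eq=A, b_eq=-c,
                  bounds=list(zip(sl, su)), method='highs')
    # res.x is s_c; None signals an infeasible model (restoration is needed)
    return res.x if res.success else None
```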

However, unless x(k) satisfies the constraints of (3), it is still possible that problem (5) has no feasible solution. In this case, we need not only to improve f(x(k) + s), but also to find a point that reduces the infeasibility. This can be done, for example, by solving the problem

$$\min_{s} \; M(x^{(k)}, s) = \big\| A(x^{(k)})\, s + c(x^{(k)}) \big\|_1 \quad \text{s.t.} \quad s_{ln} \le s \le s_{un}, \qquad (6)$$

where sln = max{−0.8δk, xl − x(k)} and sun = min{0.8δk, xu − x(k)}. Clearly, M(x, s) is an approximation of the true measure of the infeasibility given by the function

$$\varphi(x) = \| c(x) \|_1.$$

Although the square of the Euclidean norm is the usual choice for defining φ (see [9]), due to its differentiability, the one-norm is more appropriate when dealing with an SLP algorithm. Besides avoiding the use of a quadratic function, the one-norm allows the replacement of (6) by the equivalent linear programming problem

$$\min_{s,\,z} \; e^T z \quad \text{s.t.} \quad A(x^{(k)})\, s + E(x^{(k)})\, z = -c(x^{(k)}), \quad s_{ln} \le s \le s_{un}, \quad z \ge 0, \qquad (7)$$

where z ∈ ℝ^{mI} is a vector of slack variables corresponding to the infeasible constraints, and e^T = [1 1 ... 1]. To see how the matrix E(x(k)) is constructed, let Ii represent the i-th column of the identity matrix and suppose that {i1, i2, ..., imI} are the indices of the nonzero components of c(x(k)). In this case, the j-th column of E(x(k)) is given by

$$E_j(x^{(k)}) = -\,\mathrm{sign}\big(c_{i_j}(x^{(k)})\big)\, I_{i_j}.$$

A basic feasible point for (7) can be obtained taking s = 0 and zj = |cij(x(k))|, j = 1, ..., mI.
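
As a concrete sketch (our own, consistent with the construction just described), E(x(k)) and the basic feasible point can be assembled from the signs of the infeasible constraints:

```python
import numpy as np

def restoration_data(c_k):
    """Build E(x^(k)) and the basic point (s, z) = (0, |c_{i_j}|) for (7).

    c_k : (m,) constraint values c(x^(k)); the nonzero entries are the
          infeasible constraints, with indices i_1 < ... < i_mI
    """
    m = c_k.size
    infeas = np.flatnonzero(c_k)                    # i_1, ..., i_mI
    E = np.zeros((m, infeas.size))
    # j-th column: -sign(c_{i_j}) times the i_j-th column of the identity,
    # so that E z = -c(x^(k)) holds at the basic point below
    E[infeas, np.arange(infeas.size)] = -np.sign(c_k[infeas])
    z0 = np.abs(c_k[infeas])                        # z_j = |c_{i_j}(x^(k))|
    return E, z0
```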

One should notice that the trust region used in (6) and (7) is slightly smaller than the region adopted in (5). This trick is used to give (5) a sufficiently large feasible region, so the objective function can be improved. As will become clear in the next sections, the choice of 0.8 is quite arbitrary. However, we prefer to explicitly define a value for this and other parameters of the algorithm in order to simplify the notation.

Problems (5) and (6) reveal the two objectives we need to deal with at each iteration of the algorithm: the reduction of f (x) and the reduction of φ(x).

If f(x(k) + sc) < f(x(k)) and φ(x(k) + sc) < φ(x(k)), it is clear that x(k) + sc is a better approximation than x(k) to the optimal solution of problem (3). However, no straightforward conclusion can be drawn if one of these functions is reduced while the other is increased.

In such situations, we use a merit function to decide if x(k) can be replaced by x(k) + sc. In this work, the merit function is defined as

$$\Psi(x, \theta) = \theta f(x) + (1 - \theta)\, \varphi(x), \qquad (8)$$

where θ ∈ (0, 1) is a penalty parameter used to balance the roles of f and φ. The step acceptance is based on the comparison of the actual reduction of Ψ with the reduction predicted by the model used to compute sc. The actual reduction of Ψ between x(k) and x(k) + sc is given by

$$A_{red} = \theta_k\, A_{red}^{f} + (1 - \theta_k)\, A_{red}^{fsb},$$

where

$$A_{red}^{f} = f(x^{(k)}) - f(x^{(k)} + s_c)$$

is the actual reduction of the objective function, and

$$A_{red}^{fsb} = \varphi(x^{(k)}) - \varphi(x^{(k)} + s_c)$$

is the reduction of the infeasibility.

The predicted reduction of the merit function is defined as

$$P_{red} = \theta_k\, P_{red}^{f} + (1 - \theta_k)\, P_{red}^{fsb},$$

where

$$P_{red}^{f} = -\nabla f(x^{(k)})^T s_c$$

is the predicted reduction of f, and

$$P_{red}^{fsb} = M(x^{(k)}, 0) - M(x^{(k)}, s_c) = \big\| c(x^{(k)}) \big\|_1 - \big\| A(x^{(k)})\, s_c + c(x^{(k)}) \big\|_1$$

is the predicted reduction of the infeasibility.

At the k-th iteration of the algorithm, the step sc is accepted if Ared > 0.1 Pred. If this condition is not satisfied, δk is reduced and the step is recomputed. On the other hand, the trust region radius may also be increased if the ratio Ared/Pred is sufficiently large.
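
In code, the acceptance test and the radius update take roughly the following form (a sketch; the 0.25 shrink factor is consistent with Lemma 3.2 below, while the enlargement rule shown is an illustrative choice the paper leaves open):

```python
def trust_region_update(ared, pred, delta, delta_min):
    """Accept the step if Ared > 0.1 Pred; otherwise shrink delta and retry."""
    if ared > 0.1 * pred:
        if ared > 0.7 * pred:          # model is good: allow a larger region
            delta = 2.0 * delta        # enlargement factor is illustrative
        return True, max(delta, delta_min)
    return False, 0.25 * delta         # shrink factor as in Lemma 3.2
```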

The role of the penalty parameter is crucial for the acceptance of the step. Following a suggestion given by Gomes et al. [9], at the beginning of the k-th iteration, we define

$$\theta_k = \min\{\theta_k^{sup},\, \theta_k^{large}\}, \qquad (9)$$

where

$$\theta_k^{min} = \min\{1,\, \theta_0,\, \ldots,\, \theta_{k-1}\}, \qquad (10)$$

$$\theta_k^{large} = \Big(1 + \frac{N}{(k+1)^{1.1}}\Big)\, \theta_k^{min}, \qquad (11)$$

$$\theta_k^{sup} = \sup\big\{\theta \in [0, 1] \;\big|\; \theta\, P_{red}^{f} + (1 - \theta)\, P_{red}^{fsb} \ge 0.5\, P_{red}^{fsb}\big\}. \qquad (12)$$

Whenever the step is rejected, θk is recomputed. However, this parameter is not allowed to increase within the same iteration. The constant N > 0, used to compute θk^large, can be adjusted to allow a nonmonotone decrease of θ.

3.2 An SLP algorithm for nonlinear programming

Let us define θ0 = θmax = 1 and k = 0, and suppose that a starting point x(0) ∈ X and an initial trust region radius δ0 ≥ δmin > 0 are available. A new SLP method for solving problem (3) is given by Algorithm 1.
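
Since the displayed listing is not reproduced here, the structure of Algorithm 1, as it can be reconstructed from the steps referenced in Sections 3.1 and 3.3, is roughly the following: at iteration k, (Step 1) stop if x(k) is stationary for (3), or φ-stationary but infeasible, or feasible but not regular; (Step 2) solve the restoration problem (7), obtaining sn; (Step 3) test whether the linearized constraints can be satisfied within the trust region, i.e. whether M(x(k), sn, z) = 0, and take sc = sn if the test fails; (Step 4) otherwise, solve problem (5), obtaining sc; (Step 5) compute θk by (9)-(12), together with Ared and Pred; (Step 6) if Ared > 0.1 Pred, set x(k+1) = x(k) + sc, possibly enlarge the trust region and increment k; otherwise, reduce δk and return to Step 2.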

In the next subsections we prove that this algorithm is well defined and converges to the solution of (3) under mild conditions. In Section 4, we describe a particular implementation of this SLP method for solving the topology optimization problem.

3.3 The algorithm is well defined

We say that a point x ∈ ℝn is φ-stationary if it satisfies the Karush-Kuhn-Tucker (KKT) conditions of the problem of minimizing φ(x) subject to x ∈ X. Besides, a point x ∈ X that satisfies φ(x) = 0 is said to be regular if the gradient vectors of the active constraints at x are linearly independent (i.e. the linear independence constraint qualification holds at x).

In this section, we show that, after repeating the steps of Algorithm 1 a finite number of times, a new iterate x(k+1) is obtained. In order to prove this well-definedness property, we consider three cases. In Lemma 3.1, we suppose that x(k) is not φ-stationary and (6) is infeasible. Lemma 3.2 deals with the case in which x(k) is not φ-stationary, but (6) is feasible. Finally, in Lemma 3.3, we suppose that x(k) is feasible and regular for (3), but does not satisfy the KKT conditions of this problem.

Lemma 3.1. Suppose that x(k) is not φ-stationary and that the condition stated in step 3 of Algorithm 1 is not satisfied. Then, after a finite number of step rejections, x(k) + sc is accepted.

Proof. Define (s0, z0) = (0, −E(x(k))^T c(x(k))) as the feasible (yet not basic) initial point for the restoration problem (7), solved at step 2 of Algorithm 1. Define also

$$d_n(x^{(k)}) = P_{N(x^{(k)})}\big(-\nabla M(x^{(k)})\big), \qquad (13)$$

where PN(x) denotes the orthogonal projection onto the set

For a fixed x, M(x, s, z) is a linear function of s and z. In this case, ∇M(x, s, z) does not depend on these variables, and we can write ∇M(x) for simplicity.

If x(k) is not φ-stationary and M(x(k), sn, z) > 0, the reduction of the infeasibility generated by sc = sn satisfies

where

After rejecting the step and reducing δk a finite number of times, we eventually get ||αdn||∞ = 0.8δk. In this case, defining η = −∇M(x(k))^T dn/||dn||∞, we have

Now, doing a Taylor expansion, we get

Analogously, we have

Therefore, for δk sufficiently small, Ared(δk) = Pred(δk) + O(δk²), so

Our choice of θk ensures that Pred > 0.5 Pred^fsb. Thus, from (16), we get

Finally, from (17) and (18), we obtain

Therefore, Ared > 0.1 Pred for δk sufficiently small, and the step is accepted.

Lemma 3.2. Suppose that x(k) is not φ-stationary and that the condition stated in step 3 of Algorithm 1 is satisfied. Then, after a finite number of step rejections, x(k) + sc is accepted.

Proof. Let δk(0) be the trust region radius at the beginning of the k-th iteration, and sa be the solution of

Since x(k) is not φ-stationary, ||sa||∞ > 0. Now, supposing that the step is rejected j times, we get δk(j) ≤ 0.25^j δk(0). Thus, after

attempts to reduce δk, sn is rejected and Lemma 3.1 applies.

Lemma 3.3. Suppose that x(k) is feasible and regular for (3), but does not satisfy the KKT conditions of this problem. Then, after a finite number of iterations, x(k) + sc is accepted.

Proof. If x(k) is regular but not stationary for problem (3), then we have dt = PΓ(−∇f(x(k))) ≠ 0, where PΓ denotes the orthogonal projection onto the set

Let α be the solution of the auxiliary problem

Since this is a linear programming problem, αdt belongs to the boundary of the feasible set. Therefore, after reducing δk a finite number of times, we get ||αdt||∞ = δk, implying that α = δk/||dt||∞. Moreover,

so we have

Combining (20) with the fact that sc is the solution of (5), we get

On the other hand, since x(k) is feasible,

Thus, θk = min{1, θk^large} is not reduced along with δk, and

Since (17) also applies in this case, we can combine it with (21) to obtain (19). Therefore, for δk sufficiently small, Ared > 0.1 Pred and the step is accepted.

3.4 Every limit point of {x(k)} is φ-stationary

As we have seen, Algorithm 1 stops when x(k) is a stationary point for problem (3); or when x(k) is φ-stationary, but infeasible; or even when x(k) is feasible but not regular.

We will now investigate what happens when Algorithm 1 generates an infinite sequence of iterates. Our aim is to prove that the limit points of this sequence are φ-stationary. The results shown below follow the line adopted in [9].

Lemma 3.4. If x* ∈ X is not φ-stationary, then there exist ε1, α1, α2 > 0 such that, if Algorithm 1 is applied to x ∈ X and ||x − x*|| < ε1, then

Proof. Let (s0*, z0*) = (0, —E(x*)Tc(x* )) be a feasible initial point and (sn*, z*) be the optimal solution of (7) for x = x*.

If x* is not φ-stationary, there exists ε > 0 such that, for all x ∈ X, ||x − x*|| < ε, the constraints that are infeasible at x* are also infeasible at x. Thus, for a fixed vector x near x*, we can consider the auxiliary linear programming problem

where c̄i(x) = ci(x) if ci(x*) ≠ 0, and c̄i(x) = 0 if ci(x*) = 0. We denote (s̄n, z̄) the optimal solution of this problem and (s̄0, z̄0) = (0, −E(x*)^T c̄(x)) a feasible initial point. Following (13), let us define

where Ñ(x) is defined as in (14), using E(x*) and c̄(x). One should notice that d̄n(x*) = dn(x*) = PN(x*)(−∇M(x*)).

Due to the continuity of d̄n, there must exist ε1 ∈ (0, ε] such that, for all x ∈ X, ||x − x*|| < ε1,

and

Now, let us consider two situations. Firstly, suppose that, after solving (22), we get M(x, s̄n, z̄) > 0. In this case, if ||d̄n(x)||∞ > 0.8δ, then from (18) we have

On the other hand, if ||d̄n(x)||∞ < 0.8δ, then from (15) and our choice of θ,

Finally, let us suppose that, after solving (22), we get M(x, s̄n, z̄) = 0. In this case, Pred^fsb = M(x, s0, z0), i.e. Pred^fsb is maximal, so (24) also holds. The desired result follows from (23) and (24), for an appropriate choice of α1 and α2.

Lemma 3.5. Suppose that x* is not φ-stationary and that K1 is an infinite set of indices such that lim_{k∈K1} x(k) = x*. Then {δk | k ∈ K1} is bounded away from zero. Moreover, there exist α3 > 0 and k̄ > 0 such that, for k ∈ K1, k > k̄, we have Ared > α3.

Proof. For k ∈ K1 large enough, we have ||x(k) − x*|| < ε1, where ε1 is defined in Lemma 3.4. In this case, from Lemma 3.1 we deduce that the step is never rejected whenever its norm is smaller than some δ1 > 0. Thus, δk is bounded away from zero. Moreover, from our step acceptance criterion and Lemma 3.4, we obtain

The desired result is achieved choosing α3 = 0.1 min{α1δ1, α2}.

In order to prove the main theorem of this section, we need an additional compactness hypothesis, trivially verified when dealing with bound constrained problems such as (3).

Hypothesis H1. The sequence {x(k)} generated by Algorithm 1 is bounded.

Theorem 3.1. Suppose that H1 holds. If {x(k)} is an infinite sequence generated by Algorithm 1, then every limit point of {x(k)} is φ-stationary.

Proof. To simplify the notation, let us write fk = f(x(k)), φk = φ(x(k)), Ψk = Ψ(x(k), θk), and Ared(k) = Ared(x(k), sc(k), θk). From (8), we have that

Besides, from (9)-(11), we also have that

Hypothesis H1 implies that there exists an upper bound c > 0 such that | fk — φk | < c for all k ∈ N, so

Noting that θk∈ [0, 1] for all k, and applying (25) recursively, we get

Since the series is convergent, the inequality above may be written as

Let us now suppose that x* ∈ X is a limit point of {x(k)} that is not φ-stationary. Then, from Lemma 3.5, there exists α3 > 0 such that Ared(k) > α3 for an infinite set of indices. Besides, Ared(k) > 0 for all k. Thus, Ψk is unbounded below, which contradicts Hypothesis H1, proving the theorem.

3.5 The algorithm finds a critical point

In this section, we show that there exists a limit point of the sequence of iterates generated by Algorithm 1 that is a stationary point of (3).

Lemma 3.6. For each feasible and regular point x* there exist ε0, σ > 0 such that, whenever Algorithm 1 is applied to x ∈ X satisfying ||x − x*|| < ε0, we have

and

Proof. Since A(x) is Lipschitz continuous, for each x* that is feasible and regular, there exists ε0 > 0 such that, for all x ∈ X satisfying ||x − x*|| < ε0, A(x) has full row rank and the auxiliary problem

has an optimal solution (s̃, z̃) = (s̃, 0). In this case, A(x)s̃ = −c(x), so ||s̃||2 < ||c(x)||2/σ, where σ > 0 is the smallest singular value of A(x). Since problem (26) is just (7) without the trust region constraint ||s||∞ < 0.8δ, we have

proving the first part of the lemma. If (s̃, 0) is also feasible for (7), then sn = s̃, and we have

On the other hand, if ||s̃||∞ > 0.8δ, then we can define sn = δs̃/||s̃||∞ and zn = (1 − δ/||s̃||∞)z0 (where z0 is the z vector corresponding to s = 0), so that (sn, zn) is now feasible for (7). Moreover, since M(x, 0, z0) = ||c(x)||1, M(x, s̃, 0) = 0, and M is a linear function of s and z, we have M(x, sn(x, δ)) = M(x, sn, zn) = (1 − δ/||s̃||∞)||c(x)||1. Thus,

The second part of the lemma follows from (27) and (28).

Lemma 3.7. Let {x(k)} be an infinite sequence generated by Algorithm 1. Suppose that {x(k)}k∈K1 is a subsequence that converges to the feasible and regular point x* that is not stationary for problem (3). Then there exist c1, k1, δ' > 0 such that, for x ∈ {x(k) | k ∈ K1, k > k1}, whenever M(x, sn, z) = 0 at step 3 of Algorithm 1, we have

Proof. As in Lemma 3.3, let us define dt = PΓ(—∇ f (x)), where

Let us also denote s̄d the solution of

After some algebra, we note that s̄d = t̄ dt is also the solution of

where

Since (29) is a linear programming problem and ∇f(x)^T dt < 0, we conclude that t = t̄. Besides, t̄ = 1 satisfies xl ≤ x + sn + s̄d ≤ xu, so

Remembering that sc is the solution of (5), we obtain

Since PΓ(−∇f(x)) is a continuous function of x, and x* is regular and feasible but not stationary, there exist c'1, c'2 > 0 and k1 > 0 such that, for all x ∈ {x(k) | k ∈ K1, k > k1},

From (30) and the fact that || sn||∞ < 0.8δk, we have that

Thus, from (32) we obtain

Combining (31), (33) and (34), we get, for all x ∈ {x(k) | k ∈ K1, k > k1},

The desired result is obtained taking

Lemma 3.8. Let {x(k)} be an infinite sequence generated by Algorithm 1. Suppose that {x(k)}k∈K1 is a subsequence that converges to the feasible and regular point x* that is not stationary for problem (3). Then there exist β, c2, k2 > 0 such that, whenever x ∈ {x(k) | k ∈ K1, k > k2} and ||c(x)||1 < βδk,

and θsup(x, δ) = 1, where θsup is given by (12) and δ' is defined in Lemma 3.7.

Proof. From Lemma 3.6, we obtain

Therefore, defining β = 0.8σ, we get ||s̃||∞ < 0.8δk, so M(x, sn, z) = 0 at step 3 of Algorithm 1.

From Lemma 3.7 and the Lipschitz continuity of ∇ f (x), we can define k2> 0 such that

for all x ∈ {x(k) | k ∈ K1, k > k2}. Thus, choosing c2 conveniently, we prove the first statement of the lemma.

To prove the second part of the lemma, we note that

so

Thus, for an appropriate choice of β, we obtain Pred ≥ 0.5 Pred^fsb for θ = 1, and we get the desired result.

Lemma 3.9. Let {x(k)} be an infinite sequence generated by Algorithm 1. Suppose that H1 holds, and that {x(k)}k∈K1 is a subsequence that converges to the feasible and regular point x* that is not stationary for problem (3). Then lim_{k→∞} θk = 0.

Proof. The sequences {θk^min} and {θk^large} are bounded below and nonincreasing, so both are convergent. Moreover, they converge to the same limit, as lim_{k→∞}(θk^large − θk^min) = 0. Besides, θ_{k+1}^min ≤ θk ≤ θk^large. Therefore, {θk} is convergent.

Suppose, for the purpose of obtaining a contradiction, that the infinite sequence {θk} does not converge to 0. Thus, there must exist k3 > k2 and θU > θL > 0 such that θL < θk < θU for k > k3.

Now, suppose that x ∈ {x(k) | k ∈ K1, k > k3} and M(x, sn) = 0. In this case, from Lemma 3.7, we obtain

Since θ is not increased if the step is rejected, for each θ tried at the iteration that corresponds to x, we have that

Using a Taylor expansion and the fact that ∇ f and A are Lipschitz continuous, we obtain

Thus, there exists δ̄ ∈ (0, δ') ⊂ (0, δmin) such that, if δ ∈ (0, δ̄) and x ∈ {x(k) | k ∈ K1, k > k3},

Let k'3 > k3 be an iteration index such that, for all x ∈ {x(k) | k ∈ K1, k > k'3}, and for all θ tried at the iteration that corresponds to x, we have

If, in addition, δ ∈ [δ̄/10, δ̄), then

Therefore, for all δ ∈ [δ̄/10, δ̄) and all x ∈ {x(k) | k ∈ K1, k > k'3}, we have

On the other hand, if M(x, sn) > 0, then

In this case, from (28) and the fact that θ is not increased if the step is rejected, we get

Using (35) again, there exists δ̄ ∈ (0, δmin) such that, if δ ∈ (0, δ̄) and x ∈ {x(k) | k ∈ K1, k > k3},

so (36) also applies.

Thus, for some δ ∈ [δ̄/10, δ̄), the step is accepted, which means that δk is bounded away from zero for k ∈ K1, k > k'3, so Pred is also bounded away from zero.

Since Ared > 0.1 Pred, the sequence {x(k)} is infinite and the sequence {θk} is convergent, we conclude that Ψ(x, θ) is unbounded, which contradicts Hypothesis H1, proving the lemma.

Lemma 3.10. Let {x(k)} be an infinite sequence generated by Algorithm 1. Suppose that H1 holds, and that {x(k)}k∈K1 is a subsequence that converges to the feasible and regular point x* that is not stationary for problem (3). If x ∈ {x(k) | k ∈ K1, k > k2} and ||c(x)||1 > βδ, then

Proof. Observe that, when θsup≠ 1,

From Lemma 3.6, if x ∈ {x(k) | k ∈ K1, k > k2}, we have that

Therefore, since ||c(x)||1> βδ, we have δ/θsup = O(||c(x)||1). □

Lemma 3.11. Let {x(k)} be an infinite sequence generated by Algorithm 1. Suppose that {x(k)}k∈K1 is a subsequence that converges to the feasible and regular point x* that is not stationary for problem (3). Then there exist k4 > 0 and θ̄ ∈ (0, 1] such that, if

satisfies (9)-(12), then Ared > 0.1 Pred.

Proof. From the fact that ∇f(x) and A(x) are Lipschitz continuous, we may write Ared = Pred + O(δ²). Now, supposing that ||c(x)||1 > βδ, we have

Since our choice of θ ensures that Pred > 0.5[M(0) − M(sc)], Lemma 3.6 implies that, for k ∈ K1 sufficiently large,

Thus, dividing both sides of (37) by Pred, we get

which yields the desired result.

Lemma 3.12. Let {x(k)} be an infinite sequence generated by Algorithm 1. Suppose that all of the limit points of {x(k)} are feasible and regular and that Hypothesis H1 holds. Then there exists a limit point of {x(k)} that is a stationary point of problem (3).

Proof. Following the guidelines of Lemma 13 of [9], we note that, by Hypothesis H1, there exists a convergent subsequence {x(k)}k∈K1. Suppose, for the purpose of obtaining a contradiction, that the limit point of this subsequence is not a stationary point of (3). Then, from Lemma 3.9, we have that lim_{k→∞} θk = 0.

Since (10)-(11) imply that θk^large > min{1, θ0, θ1, ..., θk−1}, there must exist an infinite subset K2 ⊂ K1 such that

where δ̄k is one of the trust region radii tested at iteration k. Therefore, there also exist θ̄, k5 > 0 such that, for all k ∈ K2, k > k5, we have θk^large < 2θk^min,

Lemma 3.8 assures that θk^sup(δ) = 1 for all k ∈ K2 whenever ||c(x(k))||1 < βδ. So, by (38) and (39),

for all k ∈ K2. Therefore, since ||c(x(k))||1 → 0, we conclude that

Assume, without loss of generality, that

for all k ∈ K2, where δ' is defined in Lemma 3.7. Thus, δ̄k cannot be the first trust region radius tried at iteration k. Let us call δ̂k the trust region radius tried immediately before δ̄k, and θ̂k the penalty parameter associated to this rejected step. By (39) and the choice of the penalty parameter, we get θ̂k < θ̄ for all k ∈ K2, k > k5. Therefore, Lemma 3.11 applies, so ||c(x(k))||1 < βδ̂k for all k ∈ K2, k > k5. Moreover, since δ̂k > 0.1δ̄k, inequality (41) implies that

Let us define θ'(δk) = θk^large if δk was the first trust region radius tested at iteration k, and θ'(δk) = θ(δ'k) otherwise, where δ'k is the trust region radius tried immediately before δk at iteration k.

From (9)-(12), the fact that θ is not allowed to increase within an iteration, equation (38) and Lemma 3.8, we have

for all k ∈ K2, k > k5. Since ∇f(x) and A(x) are Lipschitz continuous, we may write

for all k ∈ K2, k > k5. Besides, by Lemma 3.8, (42) and the definition of Pred, we have

From Lemma 3.10 and (40), we obtain

So, by (42) and (43), we also have δ̂k/θk^sup(δ̄k) = O(||c(x(k))||1). Therefore, from the feasibility of x*, the right-hand side of (44) tends to zero for k ∈ K2, k > k5. This implies that, for k large enough, Ared > 0.1 Pred for δ̂k, contradicting the assumption that δ̂k was rejected. □

Theorem 3.2. Let {x(k)} be an infinite sequence generated by Algorithm 1, and suppose that Hypothesis H1 holds. Then all of the limit points of {x(k)} are φ-stationary. Moreover, if all of these limit points are feasible and regular, there exists a limit point x* that is a stationary point of problem (3). In particular, if all of the φ-stationary points are feasible and regular, there exists a subsequence of {x(k)} that converges to a feasible and regular stationary point of (3).

Proof. This result follows from Theorem 3.1 and Lemma 3.12. □

4 Computational results

In this section, we present one possible implementation for our SLP algorithm, and discuss its numerical behavior when applied to the solution of some standard topology optimization problems.

4.1 Algorithm details

Steps 2 and 4 constitute the core of Algorithm 1. The implementation of the remaining steps is straightforward.

Step 2 corresponds to the standard phase 1 of the two-phase method for linear programming. If a simplex based linear programming function is available, then sn may be defined as the feasible solution obtained at the end of phase 1, supposing that the algorithm succeeds in finding such a feasible solution. In this case, we can proceed to the second phase of the simplex method and solve the linear programming problem stated at Step 4. One should note, however, that the bounds on the variables defined at Steps 2 and 4 are not the same. Thus, some control over the simplex routine is necessary to ensure that not only the objective function, but also the upper and lower bounds on the variables are changed between phases.

On the other hand, when the constraints given in Step 2 are incompatible, the step sc is just the solution obtained by the simplex algorithm at the end of phase 1. Therefore, if the two-phase simplex method is used, the computational effort spent at each iteration corresponds to the solution of a single linear programming problem.

If an interior point method is used as the linear programming solver instead, then some care must be taken to avoid solving two linear problems per iteration.

A good alternative is to try to compute Step 4 directly. In case the algorithm fails to obtain a feasible solution, Step 2 needs to be performed. Fortunately, in the solution of topology optimization problems, the feasible region of (5) is usually not empty, so this scheme performs well in practice.
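
In pseudocode, the strategy just described amounts to the following (function names are illustrative):

```python
def compute_step(solve_lp5, solve_lp7):
    """Try the optimality LP (5) first; fall back to restoration (7).

    solve_lp5() returns the step s_c, or None when (5) is infeasible;
    solve_lp7() returns the restoration step s_n.
    """
    s = solve_lp5()        # usually feasible in topology optimization
    if s is not None:
        return s           # only one LP was solved at this iteration
    return solve_lp7()     # infeasible model: perform Step 2 after all
```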

4.2 Filtering

It is well known that the direct application of the SIMP method for solving a topology optimization problem may result in a structure containing a checkerboard-like material distribution (e.g. Diaz and Sigmund [7]). To circumvent this problem, several regularization schemes were proposed in the literature.

In our experiments, three different schemes were considered, in order to see how they affect the performance of the algorithms. The first one was the density filter proposed by Bruns and Tortorelli [3]. For each element i, this filter replaces ρi by a weighted mean of the densities of the elements belonging to a ball Bi with radius rmax.

Another filter we tested was the Sinh method of Bruns [4], which combines the density filter with a new scheme for avoiding intermediate densities, replacing the power function of the SIMP model by the hyperbolic sine function. This filter was chosen because it turns the volume constraint into a nonlinear inequality constraint, so the problem is more difficult to solve.

Finally, we also tried the dilation filter introduced by Sigmund [17]. This filter replaces the density of an element i by the maximum of the densities of the elements that belong to the neighborhood Bi. It also turns the volume constraint into a nonlinear constraint, so the related problems are more challenging.
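
On a regular mesh of square elements, both filters reduce to simple neighborhood operations on the 2-D density array. A minimal sketch follows (our own; it assumes the common linear "hat" weighting for the density filter, uses a square window in place of the ball Bi for the dilation filter, and omits the smooth approximation of the maximum that is used in practice to preserve differentiability):

```python
import numpy as np
from scipy.ndimage import convolve, maximum_filter

def density_filter(rho, radius):
    """Bruns-Tortorelli-style density filter: replace each rho_i by a
    weighted mean of the densities within distance `radius` (hat weights)."""
    n = int(np.ceil(radius))
    y, x = np.mgrid[-n:n + 1, -n:n + 1]
    w = np.maximum(0.0, radius - np.hypot(x, y))   # linear 'hat' weights
    num = convolve(rho, w, mode='constant')
    den = convolve(np.ones_like(rho), w, mode='constant')
    return num / den

def dilation_filter(rho, radius):
    """Sigmund's dilation filter, in its non-smooth form: each density is
    replaced by the maximum over its neighborhood (square window here)."""
    size = 2 * int(np.ceil(radius)) + 1
    return maximum_filter(rho, size=size, mode='nearest')
```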

4.3 Description of the tests

In order to confirm the efficiency and robustness of the new algorithm, we compare it to the globally convergent version of the Method of Moving Asymptotes [19], the so-called Conservative Convex Separable Approximations algorithm (CCSA for short), proposed by Svanberg [20].

At every iteration, the CCSA method approximates the objective and constraint functions by convex separable functions and solves the resulting subproblem. This inner iteration is repeated until the approximate objective and constraint functions become greater than or equal to the original functions at the optimal solution of the subproblem. A parameter vector, σ, is used to define upper and lower limits for the variables, as well as to convexify the objective and constraint functions.

We solve four topology optimization problems. The first two are compliance minimization problems easily found in the literature: the cantilever and the MBB beams. The last two are compliant mechanism design problems: the gripper and the force inverter. All of them were discretized into 4-node rectangular finite elements, using bilinear interpolating functions to approximate the displacements.

We also analyze the effect of the application of the filters presented in Section 4.2 to reduce the formation of checkerboard patterns in the structures. For the density, the dilation and the erosion filters, the penalty parameter p of the SIMP method is set to 1, 2 and 3, successively. The Sinh method uses a similar parameter p, which is set from 1 to 6, successively.

When the SIMP method is used and p = 1 or 2, the SLP and the CCSA algorithms stop whenever Δf, the difference between the objective function values of two consecutive iterations, falls below 10^-3. For p = 3, both algorithms are halted only when Δf < 10^-3 for three successive iterations. For the Sinh method, if p = 1, 2 or 3, we stop the algorithms whenever Δf is smaller than 10^-3. If p = 4, 5 or 6, we require that Δf < 10^-3 for three successive iterations. Besides, for the density filter, we also define a limit of 1000 iterations for each value of the penalty parameter p used by the SIMP method. When the dilation filter is used, this limit is increased to 2800 iterations. For the Sinh filter, a limit of 500 iterations for each p was adopted. Although not technically sound, this stopping criterion based on the function improvement is quite common in topology optimization.

The initial trust region radius used by the SLP algorithm was set to 0.1. For the CCSA method, the stopping tolerance for the subproblems was set to 10^-5, and a limit of 20 iterations was defined for each subproblem. The components of the initial vector σ0 were set to 0.1.

The tests were performed on a personal computer, with an Intel Pentium T4200 processor (2.0GHz, 4GB RAM), under the Windows Vista operating system. The algorithms were implemented in Matlab.

4.4 Cantilever beam design

The first problem we consider is the cantilever beam presented in Figure 2. We suppose that the structure's thickness is e = 1 cm, the Poisson's coefficient is ν = 0.3 and the Young's modulus of the material is E = 1 N/cm2. The volume of the optimal structure is limited to 40% of the design domain. A force f = 1 N is applied downwards at the center of the right edge of the beam. The domain was discretized into 1800 square elements with 1 mm2 each. The radius of the filters, rmax, was set to 2.5.


The optimal structures for all of the combinations of methods and filters are shown in Figure 3. Table 1 contains the numerical results obtained, including the optimal value of the objective function, the total number of iterations and the execution time. In this table, the rows labeled Ratio contain the ratio between the values obtained by the SLP and the CCSA algorithms. A ratio marked in bold indicates the superiority of SLP over CCSA.


The cantilever beams shown in Figure 3 are quite similar, suggesting that all of the filters efficiently reduced the formation of checkerboard patterns, as expected. On the other hand, Table 1 shows a clear superiority of the SLP algorithm. Although both methods succeeded in obtaining the optimal structure with all of the filters (with minor differences in the objective function values), the CCSA algorithm spent much more time and took more iterations to converge.

4.5 MBB beam design

The second problem we consider is the MBB beam presented in Figure 4. The structure's thickness, the Young's modulus of the material and the Poisson's coefficient are the same used for the cantilever beam. The volume of the optimal structure is limited to 50% of the design domain. A force f = 1 N is applied downwards at the center of the top edge of the beam. The radius of the filters, rmax, was set to 2.5.


The domain was discretized into 3750 square elements with 1 mm2 each. The optimal structures are shown in Figure 5. Due to symmetry, only the right half of the domain is shown. Table 2 contains the numerical results obtained for this problem.


Again, the structures obtained by both methods are similar. The same happens to the values of the objective function, as expected. However, the structure obtained using the density filter has some extra bars. Table 2 shows that the SLP algorithm performs much better than the CCSA method for the MBB beam.

4.6 Gripper mechanism design

Our third problem is the design of a gripper, whose domain is presented in Figure 6. In this compliant mechanism, a force fa is applied to the center of the left side of the domain, and the objective is to generate a pair of forces with magnitude fb at the right side. We consider that the structure's thickness is e = 1 mm, the Young's modulus of the material is E = 210000 N/mm2 and the Poisson's coefficient is ν = 0.3. The volume of the optimal structure is limited to 20% of the design domain. The input and output forces are fa = fb = 1 N. The domain was discretized into 3300 square elements with 1 mm2 each. The filter radius was set to 1.5.


Table 3 summarizes the numerical results. Figure 7 shows the grippers obtained. Due to symmetry, only the upper half of the domain is shown.


Once again, the results presented in Table 3 suggest that the SLP method is better than CCSA, since the SLP algorithm spent much less time to obtain the optimal solution in all cases.

4.7 Force inverter design

Our last problem is the design of a compliant mechanism known as force inverter. The domain is shown in Figure 8. In this example, an input force fa is applied to the center of the left side of the domain, and the mechanism should generate an output force fb on the right side of the structure. Note that both fa and fb are horizontal, but have opposite directions. For this problem, we also use e = 1 mm, ν = 0.3 and E = 210000 N/mm2. The volume is limited to 20% of the design domain, and the input and output forces are given by fa = fb = 1 N. The domain was discretized into 3600 square elements with 1 mm2 each. The filter radius was set to 2.5.


Figure 9 shows the mechanisms obtained. Again, only the upper half of the structure is shown, due to its symmetry. Table 4 contains the numerical results.


The structures obtained by the algorithms are fairly similar, with the exception of those produced with the Sinh filter. According to Table 4, the SLP algorithm found the best solution for all of the filters. The CCSA method attained the best solution only when no filter was used. As in the previous examples, the SLP method took much less time to converge than the CCSA algorithm.

5 Conclusions and future work

In this paper, we have presented a new globally convergent SLP method. Our algorithm was used to solve some standard topology optimization problems. The results obtained show that it is fast and reliable, and that it can be used in combination with several filters for removing checkerboards. The new algorithm seems to be faster than the globally convergent version of the MMA method, while the structures obtained by both methods are comparable. All of the filters work well in combination with the SLP algorithm to avoid the occurrence of checkerboards, although the dilation filter seems to be more expensive than the others. The nonlinearity introduced by the dilation and the Sinh filters poses no difficulty for the SLP method. For the combination of filter and trust region radii tested, the Sinh filter presented the best-formed structures.

Some of the filters allowed the formation of one-node hinges. The implementation of hinge elimination strategies, following the suggestions of Silva [18], is one possible extension of this work. We also plan to analyze the behavior of the SLP algorithm in combination with other compliant mechanism formulations, such as those proposed by Pedersen et al. [15], Min and Kim [13], and Luo et al. [12].

Acknowledgements. We would like to thank Prof. Svanberg for supplying the source code of his algorithm and Talita for revising the manuscript.

Received: 15/08/10.

Accepted: 05/01/11.

#CAM-316/10.

  • [1] M.P. Bendsøe, Optimal shape design as a material distribution problem. Struct. Optim., 1 (1989), 193-202.
  • [2] M.P. Bendsøe and N. Kikuchi, Generating optimal topologies in structural design using a homogenization method. Comput. Methods Appl. Mech. Eng., 71 (1988), 197-224.
  • [3] T.E. Bruns and D.A. Tortorelli, An element removal and reintroduction strategy for the topology optimization of structures and compliant mechanisms. Int. J. Numer. Methods Eng., 57 (2003), 1413-1430.
  • [4] T.E. Bruns, A reevaluation of the SIMP method with filtering and an alternative formulation for solid-void topology optimization. Struct. Multidiscipl. Optim., 30 (2005), 428-436.
  • [5] R.H. Byrd, N.I.M. Gould and J. Nocedal, An active-set algorithm for nonlinear programming using linear programming and equality constrained subproblems. Report OTC 2002/4, Optimization Technology Center, Northwestern University, Evanston, IL, USA (2002).
  • [6] C.M. Chin and R. Fletcher, On the global convergence of an SLP-filter algorithm that takes EQP steps. Math. Program., 96 (2003), 161-177.
  • [7] A.R. Diaz and O. Sigmund, Checkerboard patterns in layout optimization. Struct. Multidiscipl. Optim., 10 (1995), 40-45.
  • [8] R. Fletcher and E. Sainz de la Maza, Nonlinear programming and nonsmooth optimization by successive linear programming. Math. Program., 43 (1989), 235-256.
  • [9] F.A.M. Gomes, M.C. Maciel and J.M. Martínez, Nonlinear programming algorithms using trust regions and augmented Lagrangians with nonmonotone penalty parameters. Math. Program., 84 (1999), 161-200.
  • [10] N. Kikuchi, S. Nishiwaki, J.S.F. Ono and E.C.S. Silva, Design optimization method for compliant mechanisms and material microstructure. Comput. Methods Appl. Mech. Eng., 151 (1998), 401-417.
  • [11] C.R. Lima, Design of compliant mechanisms using the topology optimization method (in Portuguese). Dissertation, University of São Paulo, SP, Brazil (2002).
  • [12] Z. Luo, L. Chen, J. Yang, Y. Zhang and K. Abdel-Malek, Compliant mechanism design using multi-objective topology optimization scheme of continuum structures. Struct. Multidiscipl. Optim., 30 (2005), 142-154.
  • [13] S. Min and Y. Kim, Topology optimization of compliant mechanism with geometrical advantage. JSME Int. J. Ser. C, 47(2) (2004), 610-615.
  • [14] S. Nishiwaki, M.I. Frecker, S. Min and N. Kikuchi, Topology optimization of compliant mechanisms using the homogenization method. Int. J. Numer. Methods Eng., 42 (1998), 535-559.
  • [15] C.B.W. Pedersen, T. Buhl and O. Sigmund, Topology synthesis of large-displacement compliant mechanisms. Int. J. Numer. Methods Eng., 50 (2001), 2683-2705.
  • [16] O. Sigmund, On the design of compliant mechanisms using topology optimization. Mech. Struct. Mach., 25 (1997), 493-524.
  • [17] O. Sigmund, Morphology-based black and white filters for topology optimization. Struct. Multidiscipl. Optim., 33(4-5) (2007), 401-424.
  • [18] M.C. Silva, Topology optimization to design hinge-free compliant mechanisms (in Portuguese). Dissertation, University of São Paulo, SP, Brazil (2007).
  • [19] K. Svanberg, The Method of Moving Asymptotes - a new method for structural optimization. Int. J. Numer. Methods Eng., 24 (1987), 359-373.
  • [20] K. Svanberg, A class of globally convergent optimization methods based on conservative convex separable approximations. SIAM J. Optim., 12(2) (2002), 555-573.
  • [21] Z. Zhang, N. Kim and L. Lasdon, An improved successive linear programming algorithm. Manag. Science, 31 (1985), 1312-1351.