
TECHNICAL PAPERS

Complexity control in the topology optimization of continuum structures

E. L. Cardoso; J. S. O. Fonseca

Grupo de Mecânica Aplicada, Departamento de Engenharia Mecânica – UFRGS, Rua Sarmento Leite, 425, 90050-170 Porto Alegre, RS, Brazil, eduardo_lenz@yahoo.com.br, jun@ufrgs.br

ABSTRACT

A general mesh-independent filter as a means to control the complexity of structures designed by topology optimization is discussed. A new mesh-independent filter, applied over the move limits of the sequential linear programming, is proposed, and it is shown that its use alleviates common problems in continuum topology optimization, such as checkerboarding and mesh dependency, as well as effects associated with non-structured meshes, like numerical anisotropy. The structural optimization formulation adopted in this work is the minimization of a penalized function of the volume, with constraints on the compliance of each load case. Aspects of this penalized objective function are discussed, and several numerical examples are shown.

Keywords: Topology optimization, filtering, gradient control, complexity control

Introduction

The main objective of this work is to present a methodology to control the complexity of structures designed by topology optimization. This control is attractive because the cost of a mechanical part depends on its complexity; therefore, the material savings can easily be offset by an increase in the complexity of the part. Addressing this issue, this work proposes the use of a general gradient control technique (filtering) that can be used in arbitrary meshes. Also, it is known that restricting the spatial variation (gradient) of the density makes the problem well-posed (Bendsøe, 1995) and prevents the appearance of checkerboard patterns; and, as the complexity of the topology is related to the number of holes (transitions from void to solid), the gradient control also controls the complexity.

Gradient control (restriction of the spatial variation of the design variable) has been studied as a way to avoid the checkerboard and to assure the existence of solutions (Niordson, 1983; Swan and Kosaka, 1997; Petersson and Sigmund, 1998; Fonseca and Kikuchi, 1998; Cardoso and Fonseca, 1999 and Bourdin, 2001). Some references (Díaz and Sigmund, 1995 and Jog and Haber, 1996) suggested that the checkerboard is associated with the interpolation orders of the displacement and density fields in the finite element solution, as in the Stokes problem. For this reason, some researchers use high order finite elements with the usual constant interpolation for the density. Other approaches rely on gradient controls, like digital image filtering or the stricter slope control proposed by Niordson, 1983, and implemented for the continuum problem by Petersson and Sigmund, 1998, and Cardoso and Fonseca, 1999. The main advantages of gradient controls over high order finite elements are the complexity control, the fact that the set of admissible designs is closed (Bendsøe, 1995), and computational efficiency.

The main objective of the topology optimization problem is to find a material distribution that extremizes a given functional (objective function) subject to a set of constraints. The evolution of engineering has led to the need for efficient methodologies to design mechanical parts and structures, saving material and time. This is the reason why topology optimization is becoming a very important research field.

The basic goal of topology optimization, the material distribution, is achieved by a consistent parameterization of the material properties in each part of the design domain. When dealing with isotropic materials, a natural question is whether or not material exists at a given point, which leads to a discrete problem. It is well known that this integer parameterization leads to numerical problems, associated with the non-uniqueness of solutions (Ambrosio and Buttazzo, 1993 and Bendsøe, 1995). To relax the integer parameterization of the material distribution, Bendsøe and Kikuchi, 1988, introduced the parameterization of the material distribution by means of the homogenization method (Hassani and Hinton, 1998a and 1998b), which enlarges the space of admissible solutions, making the problem well-posed. The homogenization method is simple and useful; however, it increases the number of design variables used in the optimization problem, as it requires deriving a model for the dependence of the material properties on the geometrical parameters of the cell.

A different approach to the non-uniqueness of the solution of the integer program is to reduce the admissible set by introducing perimeter constraints, proposed by Ambrosio and Buttazzo, 1993, and further developed by Beckers, 1997.

The power-law approach, or SIMP, is a simpler way to relax the space of admissible solutions without increasing the number of design variables (Bendsøe and Sigmund, 1999). It has been used as an alternative to the full homogenization, with the material properties parameterized by

$$\mathbf{E} = \rho^{p}\,\mathbf{E}^{0} \qquad (1)$$

where E is the interpolated fourth order elastic constitutive tensor, ρ is the material density (or material fraction), E0 is the fourth order tensor of the base material properties and p is an exponent used to adjust the degree of nonlinearity of this equation. This approach is a continuous interpolation of the material properties with respect to the density (amount of material) at each point of the design domain. This continuous parameterization relaxes the original integer problem, enlarging the design space. Recently, Bendsøe and Sigmund, 1999, associated this parameterization with the use of isotropic composite microstructures. With this result, it is clear that the SIMP approach enlarges the design domain as the homogenization does, but it is also clear that the spaces obtained are different, because the space spanned by non-isotropic microstructures is larger than the space spanned by their isotropic counterparts. For this reason, one can achieve better (extreme) results using homogenization instead of SIMP (Park, 1995 and Bendsøe, 1995).

When the goal is to obtain a mechanical part made of isotropic material, the use of a continuous parameterization, like homogenization or SIMP, leads to intermediate (composite) material in some regions of the design domain. These regions are important and sometimes cannot be discarded by a simple threshold post-processing procedure. Therefore, after relaxing the original discrete material parameterization, one should still recover a pure integer design. Using the SIMP approach, one can start with a unitary exponent in Eq. (1) and increase the exponent as the optimization proceeds. This process, a continuation approach, enlarges the design space by using a continuous parameterization and, after that, reduces this space using penalization. This follows a well-known result of mixture theory, in which a linear relation between effective properties and material fractions is an unattainable upper bound: in fact, a mixture of two materials has a nonlinear relation between its effective properties and the amount of each material in the mixture. Starting with a unitary exponent in the SIMP relation provides this upper bound, and thus the search takes place in a larger design space, where extreme designs, like the ones obtained with homogenization, can be found. This solution is extreme, but contains large areas of intermediate material. By penalizing the constitutive relation, the stiffness of the intermediate material is no longer attractive and, for some value of the exponent, it is possible to achieve an almost black and white design.
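As an illustration, the sketch below shows the skeleton of such a continuation loop over the SIMP exponent (a minimal sketch, not the authors' implementation: the inner optimizer and the exponent schedule are placeholder assumptions):

```python
import numpy as np

def simp_modulus(rho, E0=1.0, p=1.0):
    """SIMP interpolation of Eq. (1): stiffness of a point with density rho."""
    return rho**p * E0

def simp_continuation(rho, optimize_to_convergence, p_levels=(1.0, 2.0, 3.0, 4.0, 5.0)):
    """Continuation over the SIMP exponent: converge the relaxed p = 1 problem
    first, then re-converge while p is gradually increased, making intermediate
    densities progressively less attractive."""
    for p in p_levels:
        rho = optimize_to_convergence(rho, p)  # placeholder inner optimizer
    return np.asarray(rho)
```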

In this work, a linear relation between the properties of the base material and the amount of material at each point of the design domain is used. The objective function is a penalized function of the amount of material (volume) of the structure, since the concern here is material reduction for a fixed compliance. The objective function is not tied to the filtering technique proposed, and can be used with any other kind of gradient control or constraint.

Gradient Control

Gradient controls are used to constrain the spatial variation (gradient) of the design variable. As shown by Bendsøe, 1995, constraining the spatial gradient of the density makes the H1 norm

$$\|\rho\|_{H^1(\Omega)} = \left( \int_\Omega \rho^2 \, d\Omega + \int_\Omega \nabla\rho \cdot \nabla\rho \, d\Omega \right)^{1/2} \qquad (2)$$

bounded ($\|\rho\|_{H^1} < +\infty$), making it possible to prove that the problem is well-posed and has (at least) one solution. This control can be achieved using digital image filtering techniques or an additional set of constraints in the optimization formulation (strict gradient control). Although the strict gradient control allows a better control over the gradient of the densities, it is too costly, due to the large number of additional constraints imposed on the optimization algorithm. Filtering means that an approximate gradient constraint is imposed, using some procedure related to image manipulation, which is faster but less precise. Filtering has been used in different forms, and the results obtained show that it can control the checkerboard and, in some cases, the complexity of the topology. In a general form, filtering means applying a mathematical operator to some field, like the density, the gradient of the objective function or any other variable. Therefore, if one wants to smooth the spatial variation of the design variable, an operator that prevents fast variations of the filtered variable should be applied. The simplest operator is the arithmetic mean over a given patch of elements. The expression for a general filter is

$$g(\mathbf{x}) = (h * f)(\mathbf{x}) = \int_\Omega h(\mathbf{x} - \mathbf{y})\, f(\mathbf{y}) \, d\Omega_\mathbf{y} \qquad (3)$$

where g is the filtered value of f, due to the convolution operator h. Filters are classified by the way the neighboring elements are selected: fixed grid filters, where only neighbors that share nodes and/or sides with the central element are taken into account, and spatial filters, where all neighbors inside a given spatial neighborhood are considered. As the main objective of the filters is to constrain the spatial variation (frequency) of the density, from now on only low-pass filters are considered. Suppressing frequencies higher than a void-to-solid transition over a distance of two elements makes it possible to avoid the checkerboard and, by changing the cutoff frequency of the filter, it is possible to control the complexity of the topology. For smooth (low cutoff frequency) filters, there should be neither checkerboards nor tiny reinforcements; for very weak (high cutoff frequency) filters, tiny reinforcements and perhaps checkerboards will appear. Therefore, the filter can be used as a complexity control.
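To make the low-pass idea concrete, the fragment below smooths a checkerboard density field on a structured grid with a discrete convolution (an illustrative sketch only; the 3 x 3 mean kernel is an assumption, not one of the filters of this paper):

```python
import numpy as np
from scipy.signal import convolve2d

# Density field on a structured grid: a worst-case checkerboard pattern.
rho = np.indices((8, 8)).sum(axis=0) % 2  # alternating 0/1 element densities

# A 3x3 mean kernel: a discrete low-pass convolution operator h in g = h * f.
h = np.ones((3, 3)) / 9.0

g = convolve2d(rho, h, mode="same", boundary="symm")
print(g.round(2))  # the two-element-period oscillation is flattened toward 0.5
```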

Recently, Bourdin, 2001, studied a modified version of the traditional minimum compliance formulation, where the density field is regularized by the application of a convolution operator. The existence of solutions and the convergence of the finite element approximations are established.

Fixed Grid Filters

Fixed grid filters are the simplest ones. Basically, the only difference among these filters is the way the average value is computed. Another advantage of this type of filter is its close relation to image manipulation filtering techniques, where each element corresponds to a pixel (or a voxel). The main disadvantage of fixed grid filters is their dependency on the finite element discretization, which limits their best performance to structured meshes. The first attempt to avoid the checkerboard was made by Bendsøe and Kikuchi (Bendsøe, 1995), using a four-element patch previously used for incompressible problems. It is a fixed grid filter, applied over the density field.

Another very simple fixed grid filter is the 3 x 3 neighborhood filter, defined by a discrete convolution operator H. Its weighting factor b can be selected within the interval [1, +∞): the smaller b, the weaker the filter. This filter is usually applied over the density field or over some sensitivity field, with weights that depend on C = (c+1)/2, where c = 3 for a 3 x 3 filter, and on m and n, the (relative) centroidal positions of each neighbor element.

The filter proposed by Fonseca and Kikuchi, 1998, can be seen as the first attempt to implement a strict gradient control in the topology optimization of continuum structures. This filter is a fixed grid filter, where the weight factors are evaluated in advance to impose, approximately, a gradient constraint. The filter is applied over the upper and the lower move limits of the SLP, and using this filter, one can control the complexity of the result.

All of these filters are restricted to structured meshes made of rectangular finite elements. One very interesting fixed grid filter is the one proposed by Swan and Kosaka, 1997, where both node and side neighbors are selected. In this filter, the filtered value â of a variable a is a weighted average in which each class of neighbor has a fixed weighting factor (W1 and W2) and the volume Vi of each neighboring element is taken into account in the averaging process. Using this filter, one can mix different finite elements in the same mesh, which is a great advance when compared to other fixed grid filters. The Swan and Kosaka filter is inadequate to impose strict bounds on the spatial variation of the density field; however, it is effective in preventing checkerboards and keeps some control over the complexity of the design.
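A neighbor average in this spirit can be sketched as follows (illustrative only: the exact expression of the Swan and Kosaka filter is not reproduced here, and the treatment of the two neighbor classes with assumed weights W1 and W2 is just one plausible reading of the description above):

```python
import numpy as np

def fixed_grid_filter(a, vol, side_nb, node_nb, W1=2.0, W2=1.0):
    """Volume-weighted fixed grid filter over node and side neighbors.

    a       : element values being filtered (e.g. densities)
    vol     : element volumes Vi
    side_nb : per-element lists of side-neighbor indices (assumed weight W1)
    node_nb : per-element lists of node-only-neighbor indices (assumed weight W2)
    """
    a_hat = np.empty_like(a, dtype=float)
    for i in range(len(a)):
        idx = np.array([i] + list(side_nb[i]) + list(node_nb[i]))
        w = np.array([W1] + [W1] * len(side_nb[i]) + [W2] * len(node_nb[i]))
        a_hat[i] = np.sum(w * vol[idx] * a[idx]) / np.sum(w * vol[idx])
    return a_hat
```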

Spatial Filters

A serious drawback of fixed grid filters, such as the one proposed by Swan and Kosaka, is that their area of influence is spanned by the finite element neighborhood. Thus, refining the mesh reduces the spatial area of influence of the filter and leads to more complex designs.

Therefore, it is necessary to introduce spatial filters to avoid mesh dependency. Another advantage of spatial filters over most fixed grid filters is that, by controlling the area of influence of the filter, it is possible to control the complexity of the topology, although this control is not very strict. Mesh dependency is alleviated with the use of these filters because, for a constant area of influence, refining the mesh increases the number of elements considered, avoiding reinforcements smaller than a given scale.

Figure 1 illustrates the concept of spatial filter. In this figure, the element being filtered has a neighborhood, selected by the radius of the filter. Elements with centroids inside the area of influence of the filter (circle) are considered in the averaging process.


The simplest spatial filter is the linear filter, with weights given by

$$w_{ij} = \frac{R_{max} - R_{ij}}{R_{max}}, \qquad j \in CV_i \qquad (7)$$

where Rmax is the radius of the filter, Rij is the distance between the centroids of elements i and j, and CVi is the set containing the neighbors of element i. The average value is given by

$$\hat{a}_i = \frac{\displaystyle\sum_{j=1}^{nv} w_{ij}\, a_j}{\displaystyle\sum_{j=1}^{nv} w_{ij}} \qquad (8)$$

where nv is the number of neighbors. As can be seen in the previous equations, this filter does not take into account the size of the neighbors, so it is adequate only for regular meshes.

Sigmund, 1997, uses a spatial filter on the gradients of the optimization problem. The filter proposed by Sigmund has the following form

$$\widehat{\frac{\partial f}{\partial \rho_i}} = \frac{1}{\rho_i \displaystyle\sum_{j=1}^{nv} \hat{H}_j} \sum_{j=1}^{nv} \hat{H}_j \, \rho_j \, \frac{\partial f}{\partial \rho_j}, \qquad \hat{H}_j = R_{max} - R_{ij} \qquad (9)$$

As discussed before, this filter is mesh independent, but does not take into account the size of the neighbors.
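A minimal sketch of such a radius-based spatial filter is given below (illustrative: a linear-weight average over the elements whose centroids fall inside the filter radius, as in Eqs. (7)-(8), with scipy's k-d tree used for the neighborhood search):

```python
import numpy as np
from scipy.spatial import cKDTree

def spatial_linear_filter(a, centroids, r_max):
    """Mesh-independent linear spatial filter: weighted average of a over
    all elements whose centroids lie within r_max of element i.

    a         : element values being filtered
    centroids : (ne, dim) array of element centroid coordinates
    """
    tree = cKDTree(centroids)
    a_hat = np.empty_like(a, dtype=float)
    for i, x in enumerate(centroids):
        nb = tree.query_ball_point(x, r_max)           # indices inside the circle
        d = np.linalg.norm(centroids[nb] - x, axis=1)  # centroid distances R_ij
        w = (r_max - d) / r_max                        # linear (cone) weights
        a_hat[i] = np.sum(w * a[nb]) / np.sum(w)
    return a_hat
```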

In order to use a spatial filter with any kind of mesh, a spatial version of the filter proposed by Swan and Kosaka is proposed here. In this filter, the weights are evaluated from the relative position of each neighbor with respect to the central element. There are two possibilities, using an average weight or using individual weights for each neighbor, and from them two filter definitions are proposed: the first will be referenced as AWSF (average weight spatial filter) and the second as IWSF (individual weight spatial filter). In these definitions, a is the value of the variable being filtered, â is the filtered value, Vi is the volume of element i, W is a weighting factor, W̄ is an averaged value of the weighting factors and nv is the number of elements in CVi. This filter can be applied to any variable, like the density field or the gradient field. In this work, the filter is applied over the upper and lower move limit fields of the SLP. In this way, the density and gradient fields are not disturbed in an artificial way; the only function of the filter is to guide the possible values obtained by the SLP at each iteration.
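The sketch below shows how such a filter slots into the optimization loop by acting on the move limits rather than on the densities or gradients themselves (illustrative: spatial_linear_filter from the previous sketch stands in for the AWSF/IWSF weights):

```python
import numpy as np

def filtered_move_limits(rho, move, centroids, r_max, rho_min=1e-3):
    """Build SLP box constraints whose upper and lower bounds are spatially
    filtered, so the LP cannot move an element density far from its
    neighborhood. Reuses spatial_linear_filter from the previous sketch."""
    upper = np.minimum(1.0, rho + move)       # raw upper move limits
    lower = np.maximum(rho_min, rho - move)   # raw lower move limits
    upper_f = spatial_linear_filter(upper, centroids, r_max)
    lower_f = spatial_linear_filter(lower, centroids, r_max)
    # guard against bounds crossing after the smoothing
    return np.minimum(lower_f, upper_f), np.maximum(lower_f, upper_f)
```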

Strict Gradient Controls

Strict gradient controls have been used since Niordson, 1983. The approach proposed by Niordson went in the opposite direction of the solution proposed by Cheng and others: while Cheng, 1981, relaxed the set of possible solutions using microstructures, Niordson restricted this set by limiting the possible variation of the thickness, thereby also solving the ill-posed problem of the thickness optimization of a plate. Later, Bendsøe, 1995, studied this restriction, associating it with bounds on the H1 norm. Therefore, even if the goal of using a filter is only to control the checkerboard, it also controls the spatial variation of the density, bounding its norm. Using this information, Fonseca and Kikuchi, 1998, proposed a fixed grid filter whose weights are evaluated in order to restrict the spatial variation of the density. In this way, one can restrict the set of admissible solutions (by bounding the norm) and also control the complexity of the solution. Petersson and Sigmund, 1998, proposed a strict gradient control, where the variations of the densities are constrained directly in the SLP problem, as a new set of constraints. This gradient control is very expensive, but allows a very strict constraint on the H1 norm, controlling the checkerboard and the complexity of the topology; it was implemented for regular meshes. Cardoso and Fonseca, 1999, proposed a more general implementation of this gradient control, allowing its use with non-regular meshes. The large number of constraints introduced to bound the variation of the density field from each element to its neighbors makes this approach too expensive for practical applications. Moreover, the use of filtering techniques is much cheaper and leads to similar results.

Volume Minimization with Compliance Constraint

Clearly, the effect of a strict gradient control and the effect of filtering are the same: to constrain the spatial variation of the design variable. Controlling the variation of the density field is actually a control on the spatial variation of the material properties. Therefore, it makes little sense to adopt a nonlinear relation of the material properties with respect to the density. For example, using a strict gradient control does not allow 0-1 designs, so "gray" areas (areas with intermediate densities) are always created. To further penalize the appearance of gray areas, Petersson and Sigmund, 1998, increased the penalty exponent of the constitutive relation up to 5. Doing this, the result is no longer related to the spatial variation of the base material, especially for a very smooth spatial variation of the density. It is practically impossible to build a part with large areas of intermediate material, even though it is possible to associate this SIMP "gray" material with a particular isotropic microstructure. Consequently, one can use a slightly different approach: instead of penalizing the material properties, one can penalize the cost (objective) function. Therefore, if one wants to minimize the amount of material used in a particular design while constraining the compliance of the part, the following formulation can be used

$$\begin{aligned} \min_{\rho} \quad & V(\rho) \\ \text{s.t.} \quad & F_k(\rho) \le F_{lim_k}, \quad k = 1, \ldots, nlc \\ & \rho_{min} \le \rho_i \le 1, \quad i = 1, \ldots, ne \end{aligned} \qquad (14)$$

where Flimk is the maximum compliance for a particular load case k and nlc is the number of load cases. Considering the following relation between the volume and the design variable,

$$V = \sum_{i=1}^{ne} \rho_i V_i \qquad (15)$$

the objective function and the set of constraints are convex, so the solution may be unique, depending on the material interpolation. If a linear relation (upper bound) between the effective properties and the base material is used, this problem is convex and has a unique solution. As this parameterization represents an upper bound for the parameterized material properties, the result usually has large extents of gray areas, due to the artificially high stiffness of the intermediate material. For this reason, some researchers penalize the stiffness of intermediate densities using a nonlinear relation (SIMP).

If, instead of penalizing the material properties, the cost of the intermediate densities is penalized, large areas of intermediate densities are also avoided in the result. The relation

$$V = \sum_{i=1}^{ne} \rho_i^{\,n} V_i, \qquad n \le 1 \qquad (16)$$

also penalizes the intermediate densities: for n < 1 it makes the cost (volume) of intermediate densities larger, so the optimizer tries to avoid them. As can be seen, using n different from one makes the problem non-convex, as when penalizing the constitutive relation. Therefore, a continuation approach can be used. The original convex problem (no penalization) is taken as the starting point. Once in the global minimum of the design set (which depends on the optimizer and on the gradient control), the problem is penalized by changing the objective function. This new objective function is non-convex, so the problem will converge to some local minimum, which contains fewer intermediate densities, as the penalization prevents them.

If the convex formulation is changed directly into a highly non-convex problem, the change in the result can be dramatic, so it is a good idea to increase the non-convexity of the objective function gradually. Doing so, it is also possible to investigate the influence of the non-convexity of the objective function on the complexity of the result. This is the same behavior encountered in the modification of the constitutive relation, where the exponent of the constitutive relation can be related to some special kind of microstructure and the penalization of the objective function can be related to economic features, like the price of the intermediate densities (so that constructing the isotropic microstructure can be related to an increase in the cost of the process). An additional penalization term can be considered in Eq. (16), like the one used by Fernandes, Guedes and Rodrigues, 1999, resulting in

$$V = \sum_{i=1}^{ne} \rho_i^{\,n} V_i + b \sum_{i=1}^{ne} \rho_i \left( 1 - \rho_i \right) V_i \qquad (17)$$

which can be used to further penalize the design. The algorithm starts with a convex objective function (n = 1 and b = 0), changes to a non-convex objective function (n < 1 and b = 0) and, if needed, makes it even more non-convex (b > 0). This process can be automatic, changing the parameters after the convergence of each level. The need for additional penalization arises when the gradient of the design variable is constrained, so that the large gray areas corresponding to the transitions from solid to void can be reduced.
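A quick numeric check of this penalization is sketched below (the exponent values follow the continuation levels used in this paper; the value of b is an arbitrary illustration):

```python
import numpy as np

def penalized_volume(rho, vol, n=1.0, b=0.0):
    """Penalized volume objective of Eq. (17); n = 1, b = 0 recovers Eq. (15)."""
    return np.sum(rho**n * vol + b * rho * (1.0 - rho) * vol)

rho = np.array([0.5])   # a single element with intermediate density
vol = np.array([1.0])
for n, b in [(1.0, 0.0), (0.5, 0.0), (0.125, 0.0), (0.125, 0.5)]:
    print(f"n={n}, b={b}: cost = {penalized_volume(rho, vol, n, b):.3f}")
# the cost of the intermediate density rises from 0.500 towards 1.042,
# so the optimizer is driven towards 0-1 designs
```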

Mathematical Programming

Among several optimizers, sequential linear programming (SLP) was chosen, due to its simplicity and because the reliable public domain SLATEC library is freely available (Hanson and Hiebert, 1981). The linear approximation of the proposed problem is

$$\begin{aligned} \min_{\rho} \quad & \sum_{i=1}^{ne} \frac{\partial V}{\partial \rho_i}\bigg|_{\rho_i^0} \left( \rho_i - \rho_i^0 \right) \\ \text{s.t.} \quad & F_k(\rho^0) + \sum_{i=1}^{ne} \frac{\partial F_k}{\partial \rho_i}\bigg|_{\rho_i^0} \left( \rho_i - \rho_i^0 \right) \le F_{lim_k}, \quad k = 1, \ldots, nlc \\ & \max\left( a_{min},\, (1 - a_i)\,\rho_i^0 \right) \le \rho_i \le \min\left( a_{max},\, (1 + a_i)\,\rho_i^0 \right) \end{aligned} \qquad (18)$$

where ρ0i is the current density value of element i, Fk is the compliance of load case k, amin and amax are the minimum and maximum admissible values for the move limits and ai is the move limit offset factor. This local approximation of the objective function and constraints requires first order sensitivities with respect to the design variables. The objective function in Eq. (17) is an explicit function of the design variable, so its sensitivity is very easy to obtain:

$$\frac{\partial V}{\partial \rho_i} = \left[ n\, \rho_i^{\,n-1} + b\, (1 - 2\rho_i) \right] V_i \qquad (19)$$

The flexibility constraint is an implicit function of the design variable, so the sensitivity with respect to the density is more involved, but has a closed form. Differentiating the definition of flexibility,

$$F_k = \mathbf{f}_k^{T} \mathbf{u}_k, \qquad \frac{\partial F_k}{\partial \rho_i} = \mathbf{f}_k^{T} \frac{\partial \mathbf{u}_k}{\partial \rho_i} \qquad (20)$$

where the sensitivity of the displacements can be obtained directly from the equilibrium equation $\mathbf{K} \mathbf{u}_k = \mathbf{f}_k$, gives

$$\frac{\partial F_k}{\partial \rho_i} = -\mathbf{u}_k^{T} \frac{\partial \mathbf{K}}{\partial \rho_i} \mathbf{u}_k = -\mathbf{u}_{k,i}^{T} \mathbf{K}_i^0 \mathbf{u}_{k,i} \qquad (21)$$
The last set of constraints in Eq. (18) is known as move limits. They are added to keep the linear approximation valid around the current point. The move limit strategy is crucial to the success of an SLP implementation, and many researchers have proposed different strategies (Wujek and Renaud, 1998a and 1998b; Chen, 1993). One of the simplest strategies is avoiding the zigzag (Bazaraa and Shetty, 1979): if in the last two iterations the sign of the change in one design variable alternates, the move limit of this design variable is decreased by a factor of (1-a); if there is no change in sign, the move limit of this variable is increased by a factor of (1+a). However, this scheme tends to be unstable in this application, and it was modified to consider the last three iterations. After the evaluation of the move limits, one of the proposed filters is applied separately on the upper and on the lower move limits. This assures that the possible range of densities for each design variable is related to the surrounding elements: after the LP, element densities cannot differ much from those of their neighbors, avoiding the checkerboard. As discussed before, the larger the number of elements used in the averaging process, the simpler the topology. One drawback of the LP method, as pointed out by Bruns and Tortorelli, 2001, is the loss of symmetry, due to the nature of the LP approximation. It was observed that the use of the filters proposed in this work also alleviates this effect.
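The fragment below sketches one SLP iteration under these rules (an illustrative sketch: scipy's linprog stands in for the SLATEC subprogram, and the three-iteration anti-zigzag test with update factor alpha is a plausible reading of the strategy described above, not the authors' exact scheme). dV and dF are the sensitivities of Eqs. (19) and (21):

```python
import numpy as np
from scipy.optimize import linprog

def slp_step(rho, dV, F, dF, F_lim, a, history, alpha=0.1, a_min=1e-3, a_max=1.0):
    """One SLP iteration: linearized volume objective, linearized compliance
    constraints and (to-be-filtered) move limits as simple bounds.

    history : list of previous density vectors, for the anti-zigzag test
    a       : per-element move limit offset factors, adapted in place
    """
    # Anti-zigzag over the last three iterations: shrink the move limit of a
    # variable whose change alternated sign, grow it otherwise.
    if len(history) >= 3:
        d1 = np.sign(history[-1] - history[-2])
        d2 = np.sign(history[-2] - history[-3])
        a *= np.where(d1 * d2 < 0, 1.0 - alpha, 1.0 + alpha)

    lower = np.maximum(a_min, (1.0 - a) * rho)   # lower move limits
    upper = np.minimum(a_max, (1.0 + a) * rho)   # upper move limits
    # (here the proposed filter is applied separately to lower and upper)

    # LP in the increments d = rho_new - rho:  min dV.d  s.t.  dF.d <= F_lim - F
    res = linprog(c=dV, A_ub=dF, b_ub=F_lim - F,
                  bounds=list(zip(lower - rho, upper - rho)), method="highs")
    return rho + res.x
```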

Solution of the Equilibrium Equations

Using the strong form of the equilibrium equations of elasticity,

$$\nabla \cdot \boldsymbol{\sigma} + \mathbf{b} = \mathbf{0} \qquad (22)$$

where σ is the Cauchy stress tensor and b is the body force vector, and considering a linear relation between the stress tensor and the strain tensor,

$$\boldsymbol{\sigma} = \mathbf{E}\, \boldsymbol{\varepsilon} \qquad (23)$$

it is possible to express Eq. (22) using the material properties and the continuum material parameterization of Eq. (1), with p = 1, which gives

$$\nabla \cdot \left( \rho \, \mathbf{E}^0 \boldsymbol{\varepsilon} \right) + \mathbf{b} = \mathbf{0} \qquad (24)$$

Here the small displacement strain tensor is used,

$$\boldsymbol{\varepsilon} = \frac{1}{2} \left( \nabla \mathbf{u} + \nabla \mathbf{u}^{T} \right) \qquad (25)$$

so the results are valid only for small displacements (it is important to emphasize that both the filtering technique and the objective function can be used in nonlinear problems). The weak form of Eq. (22) is

$$\int_\Omega \boldsymbol{\varepsilon}(\mathbf{v}) : \rho\, \mathbf{E}^0 : \boldsymbol{\varepsilon}(\mathbf{u}) \, d\Omega = \int_\Omega \mathbf{v} \cdot \mathbf{b} \, d\Omega \qquad (26)$$

where v is an arbitrary test function.

In this work, the equilibrium problem is solved using the finite element method. The design domain is divided into ne finite elements, and a local interpolation of the displacements and densities is assumed. Using the usual finite element procedures (Bathe, 1996), the well known element stiffness matrix is obtained:

$$\mathbf{K}_i = \rho_i \int_{\Omega_i} \mathbf{B}^{T} \mathbf{E}^0 \mathbf{B} \, d\Omega = \rho_i \, \mathbf{K}_i^0 \qquad (27)$$

where a constant interpolation of the design variable is used within each element of the mesh. This result is very important, because the density becomes a simple scaling factor for the usual stiffness matrix. The procedure to obtain the global stiffness matrix is the subject of any finite element book (Bathe, 1996).
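Since the density only scales each element matrix, reanalysis inside the optimization loop can reuse precomputed base matrices, as in the sketch below (illustrative dense assembly over a generic connectivity; K0 holds the base matrices of Eq. (27)):

```python
import numpy as np

def assemble_stiffness(rho, K0, dofs, ndof):
    """Assemble K = sum_i rho_i * K0_i, reusing the precomputed base element
    matrices K0_i of Eq. (27); dofs[i] lists the global DOFs of element i."""
    K = np.zeros((ndof, ndof))
    for i, Ke0 in enumerate(K0):
        idx = np.asarray(dofs[i])
        K[np.ix_(idx, idx)] += rho[i] * Ke0
    return K
```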

The non-conforming four-node element (Kasper and Taylor, 2000a and 2000b), the 3D non-conforming eight-node element, and the GT9 triangular element with drilling degrees of freedom (Yuquin and Yin, 1994) are used in this work. These are low order elements, so the checkerboard instability is not avoided by the elements themselves. Their use, however, results in a better displacement solution of the equilibrium problem, with a small increase in computational time. The nonconforming formulation alleviates the parasitic shear, so the final result is not corrupted by this kind of finite element error, especially in the thin reinforcements in bending that are so usual in topology optimization. The GT9 behaves between the six-node triangular element and the constant strain triangle, with a small increase in computational time when compared to the latter. Another advantage of the incompatible elements is the stress recovery, allowing a better description of the stress field when the stress is of interest (especially in nonlinear problems).

Results

In the following, some results obtained with the proposed formulation are shown. The non-conforming two-dimensional four-node element and the non-conforming three-dimensional eight-node element are used; the triangular element considered is the GT9. For all examples, the material is assumed isotropic and elastic, with E = 1×10⁷ Pa and Poisson's ratio equal to 0.3. All the applied loads have a value of 1 N. In all figures, a gray scale is used to represent the density field, where white means void (ρ = 1×10⁻³) and black means base material.

Influence of the Penalization

As explained before, instead of penalizing the constitutive relation, one can penalize the objective function. To show the influence of the penalty exponent in Eq. (16), the traditional 8 x 5 short cantilever beam is used (see Bendsøe, 1995, for this and other standard problems), discretized with 2160 (60 x 36) finite elements. The limit compliance is equivalent to 120% of the compliance obtained for 100% of volume. Four different values of the penalization exponent are used: 1, 1/2, 1/6 and 1/8, and the volume fractions obtained for these exponents are 62.3%, 62.95%, 64.53% and 64.16%, respectively. When the exponent is equal to one, there is no penalization, so the result contains large areas with intermediate densities. As the exponent is decreased, the relation becomes non-convex, penalizing intermediate densities. Figure 2 shows the topologies obtained with these exponents: the smaller the exponent, the fewer the gray areas. This is a good feature, but the non-convexity also makes the discrete solution non-unique.


To show the effects of the non-convexity of the penalized objective function, the previous example is considered again, but with a mesh of 3525 (75 x 47) finite elements. The limit compliance is set to 125% of the compliance obtained for 100% of volume fraction and the penalization exponent is set to 1/8. The two proposed filters are used, with a radius of 1.66×10⁻¹ m (equivalent to selecting eight neighbors for each element). Two different initial density distributions are used, as shown in Fig. 3 (first column): a homogeneous distribution with ρ = 0.5 and a random distribution. The second column in Fig. 3 shows the results obtained with the IWSF filter and the third with the AWSF filter.


The final volume fractions are shown in Tab. 1.

To avoid this dependency on the initial density distribution, a continuation approach is used. First, the linear problem is solved until convergence; after that, the exponent n is decreased and the problem continues until convergence is reached again. This approach needs more iterations, but it is effective, and it was also observed to be more stable than starting directly with a non-convex problem. For the same problem of Fig. 3, the same solution is now obtained for both the random and the homogeneous initial density distributions, as shown in Fig. 4. No assumption on the initial density distribution is made, so it can even be an infeasible point. Table 2 shows the volume fractions obtained with this procedure. Two volume fractions are shown: the first corresponds to the converged convex solution and the second to the converged penalized solution. As in the previous example, the second column in Fig. 4 is obtained with the IWSF filter and the third with the AWSF filter.


Figure 5 shows the solution of another common problem in the literature, obtained with the continuation method. The domain is a unit square and the limit compliance is set to 130% of the compliance obtained with a volume fraction of 100%. In this example, 1800 (30 x 60) finite elements are used and the radius of the filters is 2.6×10⁻² m (equivalent to 8 neighbors). The third row in Fig. 5 is obtained with the AWSF filter and the fourth with the IWSF filter. Table 3 shows the volume fractions obtained for this example.


Influence of the Filter

The results shown above were obtained with filtering; otherwise, checkerboard patterns would have been present. For the same problem and discretization, it is possible to vary the radius of the filter to obtain a simpler or a more complex topology. If one chooses a very small radius, checkerboard can appear, so there is a lower limit on the value of the radius. For a coarse mesh, the filter can blur the result and many levels of penalization would be required, so there is a minimum resolution needed to apply the filter, as observed before by Fonseca and Kikuchi, 1998. This is not a problem, because the finite element solution already requires a mesh fine enough to represent the problem, and both the finite element solution and the optimization problem can be considered when building the mesh. Figure 6 shows the influence of the size of the filter for a fixed mesh. As the radius of the filter is increased, thin reinforcements are no longer present in the topology. This is very important, because the number of holes usually increases the cost of the part. Moreover, the finite element solution for these thin reinforcements can be inaccurate, because they are usually only one or two elements wide, especially when low order finite elements are used in bending. Therefore, with the use of the filter, one can assure a minimum size for the members of the topology. This is not a strict control but, as shown, it works, and with some experience the designer can estimate the radius of the filter to fit the characteristics of the design.


For the short clamped beam, with a mesh of 6300 finite elements, the radius of the filter was varied to show its influence on the complexity of the topology. Figure 6 shows the topologies obtained for increasing values of the radius of the filter; the values correspond to a maximum number of 4, 8 and 12 neighbors. It is clear that the larger the number of elements inside the radius of the filter, the simpler the topology.

The proposed filters can also minimize the influence of the mesh pattern on the topology. It is well known that the way the finite elements are arranged in the mesh can influence the result; this is known as numerical anisotropy. Most papers dealing with topology optimization use a regular quadrilateral mesh, so the mesh quality is not considered. To show the influence of the proposed filtering technique, the short clamped beam is used. The limit compliance is set to 120% of the compliance obtained for 100% volume fraction, and different mesh patterns are used. The mesh patterns are shown in the first column of Fig. 7. The second column shows the solutions obtained without filtering. It is clear that the mesh pattern changes the checkerboard, showing how strong the effects of mesh quality on the topology optimization are. When the filter is used, the same result is obtained regardless of the mesh pattern (third column). The filter used in this example is the IWSF and the radius is fixed for the three meshes (2×10⁻¹ m). It must be observed that the meshes in Fig. 7 have different numbers of finite elements. The volume fractions obtained for this example are 63.36% for the first pattern, 63.03% for the second and 65.57% for the last.


Multiple load cases

The proposed formulation can be used for multiple load cases. As shown in Eq. (14), each load case contributes one constraint. The computational cost associated with each constraint is very low, and it is possible to use as many load cases as needed. To exemplify the solution of multiple load case problems, the two-load-case short beam (Bendsøe et al., 1995) is considered, discretized with 6400 finite elements. The continuation method is used, with a final exponent equal to 1/8. Figure 8 shows the results obtained using the IWSF filter and Fig. 9 shows the results obtained using the AWSF filter for the same value of the radius (8.8×10⁻² m). Table 4 shows the volume fractions obtained for this example.



Three-dimensional Results

The proposed formulation can be applied to three-dimensional problems with no modifications. To illustrate three-dimensional results, a unit cube, clamped on one face, is used as the design domain. In the first example, a point load is applied at the bottom of the opposite face. The result is shown in Fig. 10. Due to the symmetry of the problem, only half of the domain is discretized, with 4000 elements. The AWSF filter is used, with a radius equivalent to taking into account each edge neighbor of each element. The limit compliance is set to 2.06×10⁻⁵ Nm.


Figure 11 illustrates the result obtained for the same geometry, but with the load applied at the center of the opposite face. The same mesh is used and the limit compliance is 5.5×10⁻⁶ Nm (140% of the compliance for 100% volume fraction). Both results were obtained with the continuation method, with a penalization exponent of 1/8. The results were exported to the ANSYS format, using a threshold value of 0.9 (no element with density less than 0.9 is considered). The compliance evaluated by ANSYS differs by less than 1% in all examples.


The results obtained are very close to those reported in references where a perimeter constraint is used (Fernandes, Guedes and Rodrigues, 1999). It is very interesting to verify that the simpler topologies obtained with filtering have a smaller perimeter than the complex ones, as predicted by the perimeter control theory. Although different, these approaches lead to similar results in terms of complexity.

Conclusions

A new spatial filter was proposed, together with a penalized version of the traditional formulation for volume minimization. The spatial filter (in two different formulations) can prevent the checkerboard instability and also control the complexity (number of holes) of the topology. It is shown that, using this filter, mesh influence can be alleviated. This is important, due to the minimum mesh quality requirements needed to ensure a good finite element approximation of the structural response. Low order elements behave poorly in bending, so it is advisable to use improved formulations, such as nonconforming and subintegrated finite elements, in structural optimization, especially when designing low volume fraction or very flexible structures. The objective function used in this work reduces gray areas by increasing the cost of the intermediate densities, which are expensive to build. A continuation method was used to further penalize the intermediate densities when convenient. Some results with multiple load cases and some 3D results were shown, indicating that this formulation is very general and can be used with any kind of mesh or finite element. As in other formulations based on mathematical programming methods, other types of constraints, like displacement constraints, can be added with little effort. As shown in some examples, the use of non-regular meshes can induce a very strong influence of the finite element solution on the final topology. In these cases, the use of mesh independent filters can alleviate this influence and also control the complexity of the topology.

It is important to notice that the introduction of the filter restricts the possible solutions, because the set of admissible solutions is modified by the filter. Therefore, it is pointless to compare results obtained with different filtering techniques, because they may not belong to the same set of admissible solutions. The features to be compared are mesh independence, checkerboard control and complexity control.

Acknowledgements

The authors gratefully acknowledge the support of this research by CAPES and the computational facilities of CESUP-UFRGS.

  • Ambrosio, L., Buttazzo, G., 1993, "An Optimal Design Problem with Perimeter Penalization", Calc. Var., Vol. 1, pp. 55-69.
  • Bathe, K., 1996, "Finite Element Procedures", Prentice-Hall.
  • Bazaraa, M. S., Shetty, C. M., 1979, "Nonlinear Programming - Theory and Algorithms", John Wiley & Sons, New York.
  • Beckers, M., 1997, "Optimisation de Structures en Variables Discrètes", Université de Liège, Collection des Publications de la Faculté des Sciences Appliquées, No. 181.
  • Bendsøe, M. P., 1995, "Optimization of Structural Topology, Shape and Material", Springer-Verlag, New York.
  • Bendsøe, M. P., Díaz, A. R., Lipton, R., Taylor, J. E., 1995, "Optimal Design of Material Properties and Material Distribution for Multiple Loading Conditions", International Journal of Numerical Methods in Engineering, Vol. 38, pp. 1149-1170.
  • Bendsøe, M. P., Kikuchi N., 1988, "Generating Optimal Topologies in Structural Design Using Homogenization Method", Computer Methods in Applied Mechanics and Engineering, Vol. 71, No. 2, pp.197-224.
  • Bendsøe, M. P., Sigmund, O., 1999, "Material Interpolation Schemes in Topology Optimization", Archive of Applied Mechanics, Vol. 69, pp. 635-654.
  • Bourdin, B., 2001, "Filters in Topology Optimization", International Journal of Numerical Methods in Engineering, Vol. 50, pp. 2143-2158.
  • Bruns, T. E., Tortorelli, D. A., 2001, "Topology Optimization of Non-Linear Elastic Structures and Compliant Mechanisms", Computer Methods in Applied Mechanics and Engineering, Vol. 190, pp. 3443-3459.
  • Cardoso, E. L., Fonseca, J. S. O., 1999, "Spatial Gradient Control in the Structural Topology Optimization", First ASMO UK/ISSMO Conference on Engineering Design Optimization, Ilkley, West Yorkshire, UK.
  • Chen, T., 1993, "Calculation of the Move Limits for the Sequential Linear Programming Method", International Journal of Numerical Methods in Engineering, Vol. 36, pp. 2661-2679.
  • Cheng, K. T., 1981, "On Non-Smoothness in Optimal Design of Solid, Elastic Plates", Int. J. Solids Structures, Vol. 17, pp. 795-810.
  • Díaz, A., Sigmund, O., 1995, "Checkerboard Patterns in Layout Optimization", Structural Optimization, Vol. 10, No. 1, pp. 40-45.
  • Fernandes, P., Guedes, J., M., Rodrigues, H., 1999, "Topology Optimization of Three-Dimensional Linear Elastic Structures with a Constraint on Perimeter", Computers & Structures, Vol. 73, No. 6, pp. 583-594.
  • Fonseca, J. S. O., Kikuchi, N., 1998, "Digital Imaging Filtering in Topology Optimization", Computational Mechanics New Trends and Applications, CIMNE, Spain.
  • Hanson, R. J., Hiebert, K. L., 1981, "A Sparse Linear Programming Subprogram", Report SAND81-0297, Sandia National Laboratories.
  • Hassani, B., Hinton, E., 1998a, "A Review of Homogenization and Topology I - Homogenization Theory for Media with Periodic Structure", Computers and Structures, Vol. 69, pp. 707-717.
  • Hassani, B., Hinton, E., 1998b, "A Review of Homogenization and Topology II - Analytical and Numerical Solution of Homogenization Equations", Computers and Structures, Vol. 69, pp. 719-738.
  • Jog, C., Haber, R. B., 1996, "Stability of Finite Element Models for Distributed-Parameter Optimization and Topology Design", Computer Methods in Applied Mechanics and Engineering, Vol. 130, pp. 203-226.
  • Kasper, E. P., Taylor R. L., 2000a, "A mixed-enhanced strain method Part I: Geometrically linear problems", Computers & Structures, 75, pp. 237-250.
  • Kasper, E. P., Taylor R. L., 2000b, "A mixed-enhanced strain method Part II: Geometrically nonlinear problems", Computers & Structures, 75, pp. 251-260.
  • Niordson, F. I., 1983, "Optimal Design of Plates with a Constraint on the Slope of the Thickness Function", Int. J. Solids Structures, Vol. 19, pp. 141-151.
  • Park, Y. K., 1995, "Extensions of Optimal Layout Design Using the Homogenization Method", Ph.D. Thesis, University of Michigan.
  • Petersson, J., Sigmund, O., 1998, "Slope constrained Topology Optimization", International Journal of Numerical Methods in Engineering, Vol. 41, pp. 1171-1194.
  • Sigmund, O. 1997, "On the design of compliant mechanisms using topology optimization", Mechanics of Structures and Machines, 25(4), pp. 495-526, 1997.
  • Swan, C. C., Kosaka, I., 1997, "Voigt-Reuss Topology Optimization for Structures with Linear Elastic Material Behavior", International Journal of Numerical Methods in Engineering, Vol. 40, No. 1, pp. 3033-3057.
  • Wujek, B. A., Renaud, J. E., 1998a, "New Adaptive Move-Limit Management Strategy for Approximate Optimization, Part 1", AIAA Journal, Vol. 36, No. 10, pp. 1911-1921.
  • Wujek, B. A., Renaud, J. E., 1998b, "New Adaptive Move-Limit Management Strategy for Approximate Optimization, Part 2", AIAA Journal, Vol. 36, No. 10, pp. 1922-1934.
  • Yuquin, L., Yin, X., 1994, "Generalized Conforming Triangular Membrane Element with Rigid Rotational Freedoms", Finite Elements in Analysis and Design, Vol. 17, pp. 259-271.
