
Nonlinear cutting stock problem model to minimize the number of different patterns and objects

Antonio Carlos MorettiI; Luiz Leduíno de Salles NetoII

IInstituto de Matemática, Estatística e Computação Científica - UNICAMP, Cidade Universitária Zeferino Vaz s/n, Barão Geraldo, 13084-790 Campinas, SP, Brazil. E-mail: moretti@ime.unicamp.br

IIEscola de Engenharia Industrial Metalúrgica - UFF, Av. dos Trabalhadores, 420, Vila, 27255-970 Volta Redonda, RJ, Brazil. E-mail: leduino@metal.eeimvr.uff.br

ABSTRACT

In this article we solve a nonlinear cutting stock problem that minimizes both the number of objects used and the number of setups. We use a linearization of the nonlinear objective function to make it possible to generate good columns with the Gilmore and Gomory procedure. Each time a new column is added to the problem, we solve the original nonlinear problem by an Augmented Lagrangian method. This process is repeated until no more profitable columns are generated by the Gilmore and Gomory technique. Finally, we apply a simple heuristic to obtain an integer solution for the original nonlinear integer problem.

Mathematical subject classification: 65K05.

Key words: cutting problem, nonlinear programming, column generation.

1 Introduction

The Unidimensional Cutting Stock Problem (1/V/I/R according to Dyckhoff [5]) is characterized by cutting stock in just one dimension. More specifically, we have m items of different widths wi and we must cut, along its length, a minimum number of master rolls (with width W ≥ wi for all i) to meet the demand di for each item i. Each combination of items cut from a master roll is called a cutting pattern. The problem is to determine the frequency of each cutting pattern so as to meet the demand and (for instance) minimize the number of objects cut.

A reasonable goal in industry is to minimize the number of master rolls used to produce the demanded items. If we consider that a sufficient number of objects of the same width W is available, then the formulation below, denoted (P1), describes the mathematical model that minimizes the total number of objects (i.e., master rolls) used in a cutting plan:

where

  • c1 is the cost for each master roll used;

  • aij is the number of items i in cutting pattern j;

  • xj is the number of objects cut according to cutting pattern j.
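In terms of these quantities, (P1) is the standard model written out below for reference; the equality form of the demand constraints follows the set Ω = {x : Ax = d, x ≥ 0} used in Section 2 and, to that extent, is an assumption.

\[
\begin{array}{lll}
(P1) & \min & c_1 \displaystyle\sum_{j=1}^{n} x_j \\[6pt]
     & \text{s.t.} & \displaystyle\sum_{j=1}^{n} a_{ij}\, x_j = d_i, \quad i = 1,\dots,m, \\[6pt]
     &             & x_j \ge 0 \ \text{and integer}, \quad j = 1,\dots,n.
\end{array}
\]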

In some cases, minimizing the number of objects used is not the only goal of the manager. In fact, when there is a large demand to meet in a short period of time, the number of machine setups needed to cut the items from the master rolls becomes increasingly important, since each time we process a cutting pattern the knives of the cutting machine must be adjusted, and this adjustment takes time. Adding this setup cost to the previous problem (P1), we obtain a new formulation that minimizes both the number of objects and the number of setups:

where c2 is the setup cost and δ(xj) = 1 if xj > 0, δ(xj) = 0 if xj = 0.
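With δ defined this way, the new formulation keeps the constraints of (P1) and combines the two costs in the objective, in line with the function f used in Step 3 of Section 4:

\[
\min\; c_1 \sum_{j=1}^{n} x_j \;+\; c_2 \sum_{j=1}^{n} \delta(x_j)
\quad\text{s.t.}\quad \sum_{j=1}^{n} a_{ij}\,x_j = d_i,\ i=1,\dots,m,\qquad x_j \ge 0\ \text{integer}.
\]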

Combinatorial problems involving setup costs are known to be very hard to solve. In particular, this formulation has two conflicting objectives: (1) minimize the total number of processed objects and (2) minimize the total number of setups.

Solving problem (P1) is already a hard task, since it is an NP-hard problem. The setup formulation is harder still: besides being a nonlinear integer problem, the nonlinear part of its objective function is discontinuous. This fact does not allow us to solve the problem directly with the Gilmore and Gomory strategy [9, 10]. Vanderbeck [22] investigates the problem of minimizing the number of different cutting patterns as an integer nonlinear program in which the number of objects is fixed. In his approach, Vanderbeck uses Dantzig-Wolfe decomposition [20, 21]. Since the model works with a huge number of variables, the method solves only problems with a small number of items. For this reason, several papers on this problem rely on heuristic procedures. Below, we describe two of these methods, which will be used for comparison with our approach.

  • Sequential Heuristic Procedure - SHP: It was proposed by Haessler [11] and is based on an exhaustive technique of cutting pattern repetition. In each iteration, aspiration criteria are computed and a search is carried out for cutting patterns that satisfy these parameters, until all demands are met. SHP gives a good initial solution and is used by other methods to assess the quality of their solutions. It generates an inexpensive, good initial solution for the setup problem.

  • Kombi: This method was developed by Foester and Wascher [7] and is based on combining cutting patterns in order to reduce the number of setups of a given cutting plan. The idea of reducing the number of cutting patterns with a post-optimization procedure was initially mentioned by Hardley [13]. Other methods based on this idea were published by Johnston [15], Allwood and Goulimis [1] and Diegel et al. [3]. All of them combine pairs or triples of cutting patterns, but they differ in the way the combinations are carried out. The Kombi method can be seen as a generalization of Diegel's method, in which the ideas of combining cutting patterns are extended into a consistent system, independently of the number of cutting patterns combined. It makes use of the fact that the sum of the frequencies of the resulting cutting patterns has to be identical to the sum of the frequencies of the original cutting patterns, in order to keep the material input constant. The results presented show that the setup number was reduced by up to 60% in relation to the original cutting plan. Kombi has also been shown to be superior to SHP.

2 Smoothing a discontinuous cost function

Many practical problems require the minimization of functions that involve discontinuous costs. Martinez [16] proposes a smoothing method for discontinuous cost functions and establishes sufficient conditions on the approximation that ensure that the smoothed problem really approximates the original problem.

Consider the problem

where f : R^n → R is continuous, gi : R^n → R is continuous for all i = 1, …, m, and Ω ⊆ R^n. Also, Hi : R → R, i = 1, …, m, are nondecreasing functions such that Hi is continuous except at breakpoints aij, j ∈ Ii. The set Ii can be finite or infinite, but the set of breakpoints is discrete, in the sense that:

The side limits lim(t→aij−) Hi(t) and lim(t→aij+) Hi(t) exist for all j ∈ Ii, i = 1, …, m.

The cost functions Hi will be approximated by a family of continuous nondecreasing functions Hi^k : R → R. We assume that the approximating functions are such that, for all μ > 0,

Note that (2.3) implies that

lim(k→∞) Hi^k(t) = Hi(t) for all t ∈ R.

For each k, we define the approximate problem as

Since (2.4) has a continuous objective function, we can use continuous optimization algorithms to solve it. The following theorem proves that solutions of (2.1) can be approximated by solutions of (2.4).

Theorem 2.1 (Martinez [16]). Assume that, for all k = 0, 1, 2, …, xk is a solution of (2.4) and that x* ∈ Ω is a cluster point of {xk}. Then x* is a solution of (2.1).

We adapt these ideas to our problem. Also, we relax (P1) by eliminating the integrality constraints. We will denote the resulting problem as (P2),

where:

  • Ω = {x ∈ R^n such that Ax = d, x ≥ 0};

  • f(x) = c1 ∑i xi;

  • Hi(t) = c2 δ(t), i = 1, …, n.

Note that Ii = {0} for all i = 1, …, n, since the only point of discontinuity of Hi(t), i = 1, …, n, is t = 0. Also, we have that

We approximate each one of the functions Hi by the following continuous functions:

It is easy to see that

lim(k→∞) Hi^k(t) = Hi(t)

for all t ∈ R and for all i = 1, …, n; moreover, Hi^k(t) converges uniformly to Hi(t) for t ≠ 0.

Therefore, the conditions of Theorem 2.1 can be applied in this case. Let Pk denote the approximate nonlinear programming problem:

So, Theorem 2.1 says that if, for all k = 0, 1, 2, …, the point xk is the best solution found for the approximate problem Pk and x* is a cluster point of the sequence {xk}, then x* is a solution of (P2).
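As a concrete illustration of the smoothing, the sketch below assumes one simple choice of Hi^k with the required properties (continuous, nondecreasing, converging to Hi pointwise and uniformly away from t = 0), namely Hi^k(t) = c2 · min(1, k·t) for t ≥ 0; this particular formula is an assumption made for the example, not necessarily the one used in the paper.

    import numpy as np

    C1, C2 = 1.0, 100.0   # object cost and setup cost used in the experiments

    def H(t):
        """Discontinuous setup cost: c2 if the pattern is used, 0 otherwise."""
        return C2 if t > 0 else 0.0

    def H_k(t, k):
        """Assumed smooth approximation: continuous, nondecreasing,
        and equal to H away from t = 0 as k grows."""
        return C2 * min(1.0, k * t) if t > 0 else 0.0

    def smoothed_objective(x, k):
        """Objective of the approximate problem Pk: c1*sum(x) + sum_i Hi^k(x_i)."""
        return C1 * np.sum(x) + sum(H_k(xi, k) for xi in x)

    x = np.array([0.0, 0.003, 2.7])         # frequencies of three cutting patterns
    for k in (10, 100, 1000, 10000, 100000):
        print(k, smoothed_objective(x, k))  # approaches c1*sum(x) + 2*c2 as k grows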

Once we obtain a solution for (P2), we can use a rounding procedure to obtain an integer solution for the original nonlinear integer problem. However, there is a question to be answered before that: how do we generate good columns (i.e., cutting patterns) for problem (P2)? The next section answers this question.

3 Column generation in a nonlinear problem

The column generation procedure developed by Gilmore and Gomory [9, 10] for linear programming problems made it possible to solve large scale cutting stock problems. Problems encountered in real life may involve a very large number of variables; the trick is to work with only a few cutting patterns (variables) at a time and to generate new profitable cutting patterns only when they are really needed. In [17] we applied the column generation procedure to a nonlinear problem by making use of an auxiliary linear programming problem "closer" to our nonlinear problem. By "closer" we mean that the solution of (P2) satisfies the optimality conditions of problem (P3).

Consider the following linear programming problem (P3):

where S is the number of different cutting patterns in the solution obtained for (P2).

Belov and Scheithauer [2] propose a linearization of the bi-criterion cutting stock objective function in the following way: after solving a sequence of problems Pk we get a (cluster) solution x̄j, j = 1, …, n, and we call (P4) the following auxiliary linear programming problem:

where the cost coefficient uj is defined in terms of the current solution value x̄j for each j with x̄j > 0.

We use (P4) to generate a new column for the original problem (P2). In the column generation procedure, we need to solve a Knapsack Problem. To generate profitable columns in the sense of reducing both trim loss and setup number, Haessler [12] suggests solving a bounded knapsack problem in which upper bounds are imposed on the variables.

In our work we accept Haessler's suggestion, but we compute the upper limits in a different way. This modification is described in Section 4.
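To make the pricing step concrete, the sketch below solves such a bounded knapsack problem by a plain dynamic program: it looks for a pattern a with ∑ wi·ai ≤ W and ai ≤ bi that maximizes the total dual value ∑ πi·ai, and the column is worth adding when this value exceeds 1 (Steps 6-7 of the next section). The function name, the dynamic-programming formulation and the example data are illustrative assumptions; the authors' implementation is not described at this level of detail.

    def bounded_knapsack_pattern(widths, duals, bounds, W):
        """Pricing sketch: choose a_i copies of item i to maximize
        sum(duals[i]*a[i]) subject to sum(widths[i]*a[i]) <= W and
        0 <= a[i] <= bounds[i]."""
        m = len(widths)
        best = [0.0] * (W + 1)                  # best dual value per used width
        choice = [[0] * m for _ in range(W + 1)]
        for i in range(m):
            new_best = best[:]
            new_choice = [row[:] for row in choice]
            for c in range(W + 1):
                for copies in range(1, bounds[i] + 1):
                    used = copies * widths[i]
                    if used > c:
                        break
                    val = best[c - used] + copies * duals[i]
                    if val > new_best[c]:
                        new_best[c] = val
                        new_choice[c] = choice[c - used][:]
                        new_choice[c][i] = copies
            best, choice = new_best, new_choice
        c_star = max(range(W + 1), key=lambda c: best[c])
        return best[c_star], choice[c_star]     # (value, pattern)

    # Illustrative data: a new column is profitable when value > 1.
    value, pattern = bounded_knapsack_pattern([450, 320, 180], [0.45, 0.33, 0.20],
                                              [2, 2, 3], 1000)
    print(value, pattern)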

4 Solving the Nonlinear Cutting Problem

Below, we describe the main steps of the new algorithm for solving the nonlinear cutting stock problem.

New Algorithm – NANLCP

Step 1: Compute an initial solution x* for the original problem using SHP;

Step 2: Obtain a solution for the current (P2) by solving Pk for k = 10^i, i = 1, 2, …, 5;

Step 3: If the solution obtained in Step 2 is better than x* with respect to f(x) = c1 ∑i xi + c2 ∑i δ(xi), update x*;

Step 4: Get the simplex multipliers πi, i = 1, …, m, by solving (P4);

Step 5: Solve a Bounded Knapsack Problem with the objective function coefficients given by the simplex multipliers of (P4); let Z be its optimal value;

Step 6: If Z < 1, solve the Knapsack Problem with no limits on the variables; let Z̄ be its optimal objective function value;

Step 7: If Z̄ < 1, go to Step 8. Otherwise, add the new column to problem (P2) and go back to Step 2;

Step 8: Use a rounding procedure to obtain an integer solution. Stop.
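As a compact overview of how Steps 1-8 fit together, the skeleton below expresses the loop in Python with the problem-specific solvers passed in as callables; all function and parameter names are illustrative assumptions, not the authors' Fortran implementation.

    C1, C2 = 1.0, 100.0

    def cost(x):
        """Objective used in Step 3: c1 * (objects) + c2 * (setups)."""
        return C1 * sum(x) + C2 * sum(1 for xi in x if xi > 0)

    def nanlcp(initial_patterns, solve_smoothed, lp_duals,
               price_bounded, price_unbounded, round_down):
        """Skeleton of the NANLCP loop.  Each argument is a user-supplied routine:
        solve_smoothed solves the sequence Pk (Step 2), lp_duals returns the
        simplex multipliers of (P4) (Step 4), price_bounded / price_unbounded
        solve the knapsack pricing problems (Steps 5-6), and round_down applies
        the BRURED-style rounding (Step 8)."""
        patterns = list(initial_patterns)           # Step 1: e.g. patterns from SHP
        best_x = None
        while True:
            x = solve_smoothed(patterns)            # Step 2: solve Pk, k = 10^1..10^5
            if best_x is None or cost(x) < cost(best_x):
                best_x = x                          # Step 3: keep the best solution
            duals = lp_duals(patterns, x)           # Step 4
            value, column = price_bounded(duals)    # Step 5
            if value < 1:
                value, column = price_unbounded(duals)  # Step 6
            if value < 1:
                break                               # Step 7: no profitable column
            patterns.append(column)                 # Step 7: add column and repeat
        return round_down(best_x, patterns)         # Step 8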

To obtain an initial solution, we implemented the SHP method with some modifications, as described in [17].

The software BOX9903 [14] was used to solve the sequence of nonlinear problems. BOX9903 solves the optimization problem

Minimize f(x)
subject to Ax = d, h(x) = 0, l ≤ x ≤ u,

where f : R^n → R and h : R^n → R^m are differentiable functions and A is a real m × n matrix. The format of our problem does not include the function h(x), and the bounds are defined by l = 0 and u = ∞. So, in each iteration, BOX9903 solves the problem

Minimize L(x)
subject to l ≤ x ≤ u,

where L(x) is the Augmented Lagrangian function. In iteration j we set ρ = k = 10^j, where j assumes the values 1, 2, 3, 4 and 5. The Lagrange multiplier estimate is updated in each iteration j. Since this is a local method, each time a column is added to the problem, we solve the sequence of problems Pk with 20 different starting points: the current solution, the null solution and 18 randomly generated points.
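For the equality-constrained form above, the Augmented Lagrangian has the standard form recalled below; the precise expression used by BOX9903, including its Gauss-Newton Hessian approximation, is given in [14].

\[
L_{\rho}(x,\lambda) \;=\; f(x) \;+\; \lambda^{\top}(Ax-d) \;+\; \frac{\rho}{2}\,\lVert Ax-d \rVert^{2},
\]

which is minimized subject to l ≤ x ≤ u, after which the multiplier estimate λ is updated.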

After solving a sequence of problems Pk and obtaining the best solution for the new problem, we need to solve a Bounded Knapsack Problem to find out whether there is a profitable column (i.e., a cutting pattern) to be appended to problem (P2). To obtain the simplex multipliers used as coefficients in the objective function of the Knapsack Problem, we work with problem (P3). In the Bounded Knapsack Problem, we determine the upper bounds on the variables (i.e., on the number of times each item may appear in the new cutting pattern) using the following reasoning. Let the setup number be S, S < m, which means we have S different cutting patterns. Our interest, when adding a new column, is to reduce not only the trim loss but also the setup number. Assume we want to reduce the current setup number by 20%. Therefore, after adding the newly generated column to (P2), we expect the new setup number (denoted NSN) to be 0.8·S. Let NB be the number of production rolls processed in the current solution. Supposing this number remains constant, we should have, on average, MBP = NB/NSN processed objects per cutting pattern. Assuming the newly generated cutting pattern will appear in the solution with frequency equal to MBP, and that the items in this cutting pattern will not appear in the other nonzero cutting patterns, then, to guarantee a feasible solution, each item in this cutting pattern must be limited by bi = ⌊di / MBP⌋. And, if bi > W/wi, we fix bi = ⌊W/wi⌋, so that only feasible cutting patterns are obtained.
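A small sketch of this bound computation follows; the 20% reduction target (factor 0.8) is the value used in the text, and everything else follows the formulas above, with illustrative example data.

    import math

    def knapsack_bounds(demands, widths, W, current_setup, rolls_processed, target=0.8):
        """Upper bounds b_i for the pricing knapsack: aim for a setup number of
        target*S, spread the NB processed rolls over that many patterns (MBP),
        and cap each item both by demand/MBP and by how many copies fit in a roll."""
        NSN = target * current_setup              # desired new setup number
        MBP = rolls_processed / NSN               # average rolls per pattern
        bounds = []
        for d_i, w_i in zip(demands, widths):
            b_i = math.floor(d_i / MBP)
            b_i = min(b_i, math.floor(W / w_i))   # never exceed what fits in a roll
            bounds.append(b_i)
        return bounds

    print(knapsack_bounds(demands=[120, 75, 40], widths=[320, 450, 180],
                          W=1000, current_setup=10, rolls_processed=60))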

Finally, we used the BRURED method described in [23] to round the solution found at the end of the process. This method does not interfere with the setup number and it is fast. First, we round the nonzero variables up, that is, x̄j = ⌈xj⌉. This procedure may generate an excess of production. The excess can be reduced by checking which variables can be decreased by one unit without making the problem infeasible. So, for each k ∈ {1, 2, …, n}, if the rounded-up variable x̄k can be decreased by one unit with all demands still met, then we set x̄k = x̄k - 1.
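A minimal sketch of this round-up-then-reduce step, assuming the rounded plan must satisfy A·x̄ ≥ d; the order in which variables are scanned is not specified in the text and is an assumption here.

    import math
    import numpy as np

    def brured_round(x, A, d):
        """Round a fractional cutting plan: A[i][j] = copies of item i in pattern j,
        d[i] = demand of item i, x[j] = fractional frequency of pattern j.
        Round nonzero frequencies up, then try to remove one roll from each
        pattern as long as every demand stays covered."""
        A = np.asarray(A)
        d = np.asarray(d)
        xr = np.array([math.ceil(xj) if xj > 0 else 0 for xj in x])
        for k in range(len(xr)):
            if xr[k] >= 1 and np.all(A @ xr - A[:, k] >= d):
                xr[k] -= 1                        # still feasible with one roll less
        return xr

    # Tiny example with two items and two patterns (illustrative data):
    print(brured_round([2.3, 1.1], A=[[2, 0], [1, 3]], d=[5, 6]))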

5 Computational experiments

In order to evaluate our approach, we generated several random instances using CUTGEN1, the one-dimensional cutting stock problem generator developed by Gau and Wascher [8]. The input parameters for CUTGEN1 are:

  • m = problem size;

  • W = standard length;

  • v1 = lower bound for the relative size of order lengths in relation to W, i.e. wi ≥ v1 × W (i = 1, …, m);

  • v2 = upper bound for the relative size of order lengths in relation to W, i.e. wi ≤ v2 × W (i = 1, …, m);

  • d̄ = average demand per order length.

Using these parameters, we generate the following data set:

  • wi ∈ [v1 × W, v2 × W], i = 1, …, m;

  • di, i = 1, …, m, such that the total demand is D = m × d̄.
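CUTGEN1 itself is described in [8]; purely as an illustration of the parameters above, the simplified stand-in below draws widths uniformly in [v1·W, v2·W] and demands whose total is approximately m·d̄. The rounding scheme is an assumption made here and is not CUTGEN1's.

    import numpy as np

    def generate_instance(m, W, v1, v2, dbar, seed=1994):
        """Simplified stand-in for CUTGEN1 (illustrative only): m order widths
        uniform in [v1*W, v2*W] and m demands totalling roughly D = m*dbar."""
        rng = np.random.default_rng(seed)
        widths = rng.integers(int(v1 * W), int(v2 * W) + 1, size=m)
        raw = rng.random(m)
        demands = np.maximum(1, np.round(raw / raw.sum() * m * dbar)).astype(int)
        return widths, demands

    # Class-1-like instance: small items, low average demand (see Table 1).
    w, d = generate_instance(m=10, W=1000, v1=0.01, v2=0.2, dbar=10)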

As in Foester and Wascher's work [7], we generated 18 classes of random problems by combining different values of the generator's parameters:

  • v1 assumed values 0.01 and 0.2;

  • v2 assumed values 0.2 and 0.8;

  • the number of items in the original cutting plan, denoted by m, was set to 10 (small instances), 20 (mid-size instances) and 40 (large instances);

  • problems with low average demand (d̄ = 10) and high average demand (d̄ = 100);

  • In all the classes W = 1000.

Each class contains 100 instances. We generated six classes with small items (v1 = 0.01 and v2 = 0.2), six classes with wide-spread items (v1 = 0.01 and v2 = 0.8) and another six classes with large items (v1 = 0.2 and v2 = 0.8). Table 1 shows the 18 classes and their parameters.

The solutions obtained by NANLCP were compared with the solutions of the following methods: SHP [11], KOMBI234 [7] and ANLCP [17]. These methods (not including ANLCP) were also used as a basis of comparison by Umetami et al. [19] for the heuristic APG. We ran CUTGEN1 with the same seed (i.e., 1994) and parameters defined in [7] and [19] to generate all 1800 instances (i.e., 18 classes with 100 instances each).

KOMBI234 uses the solution obtained by the heuristic developed by Stadler [18] as a starting solution. This combination, Stadler + KOMBI234, produced the best results in the literature at the time Foester and Wascher published their work.

The methods NANLCP and ANLCP and the heuristic SHP were implemented by the authors in Fortran, compiled with g77 for Linux, and run on an Athlon XP 1800 MHz with 512 MB of RAM. The results for KOMBI234 were obtained from an implementation in MODULA-2 under MS-DOS on an IBM 486/66. Therefore, computational time was not considered in the comparison with KOMBI234.

Below, we present the computational results. We fix c1, the cost of each object used, to 1 and c2, the setup cost, to 100. The value of c2 can be seen as a penalization parameter for the setup number.

Tables 2, 3 and 4 show the average setup number and the average number of objects used in the final solutions of the 100 instances in each class: small items, wide-spread items and large items. The averages of setup and number of objects reported at the end of each table suggest that NANLCP performs better than ANLCP, KOMBI234 and SHP.

Table 5 presents the variation in setup and in number of objects of our method (NANLCP) relative to SHP. For instance, to compute the variation in the setup number we use the formula

100 × (SetupNANLCP - SetupSHP)/SetupSHP,

therefore, a negative number indicates that NANLCP was better than SHP. The average setup of NANLCP was better than that of SHP in all classes except classes 5 and 6. And the average number of objects of NANLCP was better than that obtained by SHP in 14 classes out of 18.

Table 6 presents the variation in setup and in number of objects of our method (NANLCP) relative to KOMBI234. In all classes NANLCP obtained a better setup average than KOMBI. However, the average number of objects used by KOMBI was better than that of NANLCP in 15 classes.

When comparing the quality of the solutions of each method, we use c2 equal to 5 and 10 in the objective function. These are "real life" values for c2, and they make the comparison with the other methods fairer. In general, NANLCP obtained better objective function values than SHP and KOMBI.

In the literature, Diegel et al. [4] were the only authors to mention practical values for c1 and c2. According to Diegel, an exact relation between c1 and c2 depends on the data at hand, but, they say, c2 is never much bigger than c1. However, if the main goal is to minimize the setup number, then c2 must be bigger than c1. Therefore, the relation between these two costs depends on several factors such as demand, deadlines, labor costs, etc.

The results obtained by NANLCP when c1 = 1 and c2 = 5 are presented in Table 7. The average objective function values for NANLCP were better than those obtained by SHP in all 18 classes and better than KOMBI in 15 classes.

The computational results confirm the good performance of the NANLCP method. As shown in Table 8, NANLCP obtained average costs better than SHP in 16 classes and better than KOMBI in all classes.

6 Conclusions and future work

Based on the computational results presented, we may affirm that NANLCP has a better average performance than ANLCP [17] and, therefore, it is a good method to use when the objective is to minimize the combined cost of processed objects and setups. Specifically with respect to setup number, NANLCP performs better than SHP and KOMBI in almost all classes.

It is important to say that another advantage of NANLCP is the possibility of working with explicit values of c1 and c2 in the objective function. In fact, for real life problems these costs depend on many factors such as demand, deadlines, labor costs, etc. We do not know of any other method that treats the problem of minimizing the setup and the number of processed objects in the same way NANLCP does.

However, the NANLCP method has one disadvantage in relation to SHP and KOMBI: the computational time. Although we cannot compare computational times with KOMBI, since it was not implemented by the authors and therefore was not run in the same computational environment, it is easy to see that NANLCP has a higher computational time. This happens because, each time a new column is added to the problem, we need to solve (Pk) with 20 initial solutions in order to (hopefully) escape local minima. For classes with a large number of items, such as Classes 11, 12, 17 and 18, the computational time for NANLCP was high.

Implementing better strategies to obtain global solutions of the nonlinear problems could improve NANLCP. Also, a better strategy to obtain an integer solution from a fractional solution would be welcome, since the BRURED method used here is very naive, although fast.

Acknowledgements. The authors appreciate the constructive comments of the referee, resulting in an improved presentation.

Received: 01/III/07. Accepted: 13/VI/07.

#728/07.

  • [1] J.M. Allwood and C.N. Goulimins, Reducing the number of patterns in one-dimensional cutting stock problems. Control Section Report EE/CON/IC/88/10, Industrial Systems Group, Department of Electrical Engineering, Imperial College, London (1988).
  • [2] G. Belov and G. Scheithauer, The number of setups (different patterns) in one-dimensional stock cutting. Technical Report, Dresden University (2003).
  • [3] A. Diegel, M. Chetty, S. Van Schalkwyck and S. Naidoo, Setup combining in the trim loss problem - 3-to-2 & 2-to-1. Working paper, Business Administration, University of Natal, Durban, First Draft (1993).
  • [4] A. Diegel, E. Montocchio, E. Walters, S. Schalkwyk and S. Naidoo, Setup minimising conditions in the trim loss problem. European J. Oper. Res., 95 (1996), 631-640.
  • [5] H. Dyckhoff, A typology of cutting and packing problems. European J. Oper. Res., 44 (1990), 145-159.
  • [6] H. Dyckhoff and U. Finke, Cutting and Packing in Production and Distribution: A Typology and Bibliography. Springer-Verlag, Berlin Heidelberg (1992).
  • [7] H. Foester and G. Wascher, Pattern Reduction in One-dimensional Cutting-Stock Problems. International Journal of Prod. Res., 38 (2000), 1657-1676.
  • [8] T. Gau and G. Wascher, CUTGEN1: A Problem Generator for the Standard One-dimensional Cutting Stock Problem. European J. Oper. Res., 84 (1995), 572-579.
  • [9] P.C. Gilmore and R.E. Gomory, A Linear Programming Approach to the Cutting Stock Problem. Oper. Res., 9 (1961), 849-859.
  • [10] P.C. Gilmore and R.E. Gomory, A Linear Programming Approach to the Cutting Stock Problem. Oper. Res., 11 (1963), 863-888.
  • [11] R. Haessler, Controlling Cutting Pattern Changes in One-Dimensional Trim Problems. Oper. Res., 23 (1975), 483-493.
  • [12] R. Haessler, A Note on Computational Modifications to the Gilmore-Gomory Cutting Stock Algorithm. Oper. Res., 28 (1980), 1001-1005.
  • [13] C.J. Hardley, Optimal cutting of zinc-coated steel strip. Oper. Res., 4 (1976), 92-100.
  • [14] N. Krejic and J.M. Martinez, Validation of an Augmented Lagrangian algorithm with a Gauss-Newton Hessian approximation using a set of hard-spheres problems. Comput. Optim. Appl., 16 (2000), 247-263.
  • [15] R.E. Johnston, Rounding algorithms for cutting stock problems. Asia-Pacific J. Oper. Res., 3 (1986), 166-171.
  • [16] J.M. Martinez, Minimization of discontinuous cost functions by smoothing. Acta Applicandae Mathematicae, 71 (2001), 245-260.
  • [17] A.C. Moretti and L.L. Salles Neto, Modelo não-linear para minimizar o número de objetos processados e o setup num problema de corte unidimensional. Anais do XXXVII Simpósio Brasileiro de Pesquisa Operacional, Gramado (2005), 1677-1688.
  • [18] H. Stadler, A one-dimensional cutting stock problem in the aluminium industry and its solution. European J. Oper. Res., 44 (1990), 209-223.
  • [19] S. Umetami, M. Yagiura and T. Ibaraki, One Dimensional Cutting Stock Problem to Minimize the Number of Different Patterns. European J. Oper. Res., 146 (2003), 388-402.
  • [20] F. Vanderbeck, On Dantzig-Wolfe decomposition in integer programming and ways to perform branching in a branch-and-price algorithm. Research Papers in Management Studies, University of Cambridge, 1994-1995, no. 29.
  • [21] F. Vanderbeck, Computational Study of a Column Generation Algorithm for Bin Packing and Cutting Stock Problems. Mathematical Programming, 86(3) (1999), 565-594.
  • [22] F. Vanderbeck, Exact Algorithm for Minimizing the Number of Setups in the One-dimensional Cutting Stock Problem. Operations Research, 48 (2000), 915-926.
  • [23] G. Wascher and T. Gau, Heuristics for the Integer One-dimensional Cutting Stock Problem: a computational study. OR Spektrum, 18 (1996), 131-144.
