Step-size Estimation for Unconstrained Optimization Methods

Zhen-Jun Shi (I, II); Jie Shen (III)

I College of Operations Research and Management, Qufu Normal University, Rizhao, Shandong 276826, P.R. China. E-mail: zjshi@qrnu.edu.cn

II Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, P.O. Box 2719, Beijing 100080, China. E-mail: zjshi@lsec.cc.ac.cn

III Department of Computer & Information Science, University of Michigan, Dearborn, MI 48128, USA. E-mail: shen@umich.edu

ABSTRACT

Some computable schemes for descent methods without line search are proposed. Convergence properties are presented. Numerical experiments concerning large scale unconstrained minimization problems are reported.

Mathematical subject classification: 90C30, 65K05, 49M37.

Key words: unconstrained optimization, descent methods, step-size estimation, convergence.

1 Introduction

A well-known algorithm for the unconstrained minimization of a function f(x) in n variables having Lipschitz continuous first partial derivatives is the steepest descent method (Fiacco and McCormick, 1990; Polak, 1997; [8, 17]). The iterations correspond to the equation

xk+1 = xk - αk∇f(xk),   (2)

where αk is the smallest nonnegative value of α that locally minimizes f along the direction -∇f(xk) starting from xk. Curry (1944, [5]) showed that any limit point x* of the sequence {xk} generated by (2) is a stationary point (∇f(x*) = 0).

The iterative scheme (2) is not practical because the step-size rule at each step involves an exact one-dimensional minimization problem. However, the steepest descent algorithm can be implemented using inexact one-dimensional minimization. The first efficient inexact step-size rule was proposed by Armijo (1966, [1]). It can be shown that, under mild assumptions and with different step-size rules, the iterative scheme (2) converges to a local minimizer x* or a saddle point of f(x), but the convergence is only linear, and sometimes slower than linear.

The steepest descent method is particularly useful when the dimension of the problem is very large. However, it may generate short zigzagging displacements in a neighborhood of a solution (Fiacco and McCormick, 1990, [8]).

For simplicity, we denote ∇f(xk) by gk, f(xk) by fk and f(x*) by f*, respectively, where x* denotes a local minimizer of f. In the algorithmic framework of steepest descent methods, Goldstein (1962, 1965, 1967, [10, 11, 12]) investigated the iterative formula

xk+1 = xk + αk dk,   (3)

where dk satisfies the relation

gk^T dk < 0,   (4)

which guarantees that dk is a descent direction of f(x) at xk (Cohen, 1981; Nocedal and Wright, 1999; [4, 14]). In order to guarantee global convergence, it is usually required that the descent condition

gk^T dk ≤ -c||gk||²,   (5)

holds, where c > 0 is a constant. The angle property

-gk^T dk ≥ η0||gk|| ||dk||,   (6)

with η0 ∈ (0, 1], is often used in many situations.

Observe that, if ||gk|| ≠ 0, then dk = -gk satisfies (4), (5) and (6) simultaneously. Throughout this paper, we take dk = -gk.

There are many alternative line-search rules to choose αk along the ray Sk = {xk + αdk | α > 0}. Namely:

(a) Minimization Rule. At each iteration, αk is selected so that

f(xk + αk dk) = min{f(xk + αdk) | α ≥ 0}.   (7)

(b) Approximate Minimization Rule. At each iteration, αk is selected so that

αk = min{α | g(xk + αdk)^T dk = 0, α ≥ 0}.   (8)
(c) Armijo Rule. Set scalars sk, γ, L and σ with sk = -gk^T dk/(L||dk||²), γ ∈ (0, 1), L > 0 and σ ∈ (0, 1/2). Let αk be the largest α in {sk, γsk, γ²sk, ...} such that

f(xk + αdk) - fk ≤ σα gk^T dk.   (9)

(d) Limited Minimization Rule. Set sk = -gk^T dk/(L||dk||²), where αk is defined by

f(xk + αk dk) = min{f(xk + αdk) | 0 ≤ α ≤ sk}   (10)

and L > 0 is a given constant.

(e) Goldstein Rule. A fixed scalar σ ∈ (0, 1/2) is selected, and αk is chosen in order to satisfy

fk + (1 - σ)αk gk^T dk ≤ f(xk + αk dk) ≤ fk + σαk gk^T dk.   (11)

(f) Strong Wolfe Rule. At the k-th iteration, αk satisfies simultaneously

f(xk + αk dk) - fk ≤ σαk gk^T dk   (12)

and

|g(xk + αk dk)^T dk| ≤ -β gk^T dk,   (13)

where σ ∈ (0, 1/2) and β ∈ (σ, 1).

(g) Wolfe Rule. At the k-th iteration, αk satisfies (12) and

g(xk + αk dk)^T dk ≥ β gk^T dk.   (14)
Some important global convergence results for methods using the above mentioned specific line search procedures have been given in the literature ([25, 15, 16, 23, 24]).
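As a concrete illustration, the Armijo backtracking rule (c) can be sketched as a simple loop. This is a minimal Python sketch; the function name and default parameter values are our own choices, not the paper's:

```python
import numpy as np

def armijo_step(f, x, g, d, s=1.0, gamma=0.5, sigma=0.25, max_backtracks=50):
    """Return the largest alpha in {s, gamma*s, gamma^2*s, ...} satisfying
    f(x + alpha*d) <= f(x) + sigma*alpha*g^T d  (g^T d < 0 for descent)."""
    fx = f(x)
    slope = float(np.dot(g, d))  # directional derivative; negative for descent
    alpha = s
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= fx + sigma * alpha * slope:
            return alpha
        alpha *= gamma
    return alpha
```

For instance, for f(x) = ||x||² at x = (1,) with d = -g, the first trial step overshoots and one backtrack is accepted.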

This paper is organized as follows. In the next section we describe some descent algorithms without line search. In Sections 3 and 4 we analyze their global convergence and convergence rate respectively. In Section 5 we give some numerical experiments and conclusions.

2 Descent Algorithm without Line Search

We assume that

(H1). f(x) is bounded below. We denote L(x0) = {x ∈ Rn | f(x) ≤ f(x0)}.

(H2). The gradient g(x) is uniformly continuous on an open convex set B that contains L(x0).

We sometimes further assume that the following condition holds.

(H3). The gradient g(x) is Lipschitz continuous on an open convex set B that contains the level set L(x0), i.e., there exists L > 0 such that

||g(x) - g(y)|| ≤ L||x - y||  for all x, y ∈ B.
Obviously, (H3) implies (H2).

We shall implicitly assume that the constant L in (H3) is easy to estimate.

Algorithm (A).

Step 0. Choose x0 ∈ Rn, δ ∈ (0, 2) and L0 > 0, and set k := 0;

Step 1. If ||gk|| = 0 then stop; else go to Step 2;

Step 2. Estimate Lk > 0;

Step 3. Set xk+1 = xk - (δ/Lk) gk;

Step 4. Set k := k + 1 and go to Step 1.

Note. In the above algorithm, the line search procedure is avoided at each iteration, which may reduce the cost of computation. However, we must estimate Lk at each iteration. Certainly, if the Lipschitz constant L of the gradient of the objective function is known a priori, then we can take Lk ≡ L in the algorithm. In many practical problems, however, L is not known a priori, and we must estimate it and find an approximation Lk to L at each step.
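When L is known a priori, Algorithm (A) reduces to a fixed-step gradient loop. The following Python sketch (names and defaults are ours) assumes Lk ≡ L:

```python
import numpy as np

def descent_without_linesearch(grad, x0, L, delta=1.0, tol=1e-8, max_iter=100000):
    """Sketch of Algorithm (A) with a constant estimate Lk = L:
    the update is x_{k+1} = x_k - (delta/L) * g_k, with delta in (0, 2)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:   # Step 1: stop near a stationary point
            break
        x = x - (delta / L) * g        # Step 3 with Lk = L
    return x
```

On a convex quadratic f(x) = x^T A x / 2, taking L equal to the largest eigenvalue of A makes the iteration contract toward the minimizer.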

For estimating Lk, set dk-1 = xk - xk-1 and yk-1 = gk - gk-1, where ||·|| denotes the Euclidean norm, and define

Lk = ||yk-1|| / ||dk-1||.

We can also estimate Lk in Algorithm (A) by solving the minimization problem

min over L of ||L dk-1 - yk-1||².

In this case,

Lk = dk-1^T yk-1 / ||dk-1||².

Similarly, Lk can be found by solving the problem

min over L of ||dk-1 - L⁻¹ yk-1||²

and setting

Lk = ||yk-1||² / (dk-1^T yk-1).
The last two formulae are useful because they arise from the classical quasi-Newton condition (e.g., [14]) and from Barzilai and Borwein's idea (1988, [2]). Some recent observations on Barzilai and Borwein's method are very encouraging (Fletcher, 2001, [7]; Raydan, 1993, 1997, [20, 21]; Dai and Liao, 2002, [6]).

If the Hessian matrix ∇²f(xk) is easy to evaluate, then we can take Lk = ||∇²f(xk)||.
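The secant-based estimates above can be computed from one stored step. The particular formulas in this sketch are standard BB-/quasi-Newton-style candidates and are our assumption, not a quotation of the paper's displays:

```python
import numpy as np

def lipschitz_estimates(x_prev, x, g_prev, g):
    """Candidate estimates of Lk from the last step, in the spirit of the
    quasi-Newton condition and Barzilai-Borwein [2] (our reconstruction)."""
    d = x - x_prev            # d_{k-1} = x_k - x_{k-1}
    y = g - g_prev            # y_{k-1} = g_k - g_{k-1}
    return {
        "ratio": np.linalg.norm(y) / np.linalg.norm(d),
        # argmin_L ||L*d - y||^2  ->  L = d^T y / ||d||^2
        "least_squares": float(np.dot(d, y)) / float(np.dot(d, d)),
        # from min ||d - (1/L)*y||^2  ->  L = ||y||^2 / (d^T y)
        "inverse_ls": float(np.dot(y, y)) / float(np.dot(d, y)),
    }
```

For a quadratic with Hessian 2I, all three estimates agree and return the exact Lipschitz constant 2.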

3 Convergence Analysis

The following lemma can be found in many text books. See, for example, [14].

Lemma 3.1 (mean value theorem). Suppose that the objective function f(x) is continuously differentiable on an open convex set B. Then

f(xk + αdk) = f(xk) + α g(xk + θαdk)^T dk  for some θ ∈ (0, 1),

where xk, xk + αdk ∈ B and dk ∈ Rn. Further, if f(x) is twice continuously differentiable on B, then

g(xk + αdk) = g(xk) + α ∫₀¹ ∇²f(xk + tαdk) dk dt

and

f(xk + αdk) = f(xk) + α gk^T dk + α² ∫₀¹ (1 - t) dk^T ∇²f(xk + tαdk) dk dt.

3.1 Convergence of Algorithm (A)

Theorem 3.1. If (H1) and (H3) hold, Algorithm (A) generates an infinite sequence {xk}, and

then

Proof. By Lemma 3.1 and (H3) we have

Taking dk = -gk and α = δ/Lk in the above formula, we have

Noting that Lk ≥ ρL, we obtain

Therefore, {fk} is a monotone decreasing sequence. So, by (H1), {fk} has a lower bound and, thus, {fk} has a finite limit. It follows from the above inequality that

By (25) we have

The above inequality and (27) show that (26) holds. The proof is finished.
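Assuming condition (25) is a lower bound of the form Lk ≥ ρL with δ < 2ρ, the descent-lemma computation behind this proof can be sketched as follows (our reconstruction under that assumption, not a quotation of the paper's displays):

```latex
f_{k+1} \le f_k - \frac{\delta}{L_k}\,\|g_k\|^2
             + \frac{L}{2}\cdot\frac{\delta^2}{L_k^2}\,\|g_k\|^2
       = f_k - \frac{\delta}{L_k}\Bigl(1 - \frac{\delta L}{2L_k}\Bigr)\|g_k\|^2
     \le f_k - \frac{\delta}{L_k}\Bigl(1 - \frac{\delta}{2\rho}\Bigr)\|g_k\|^2 .
```

Summing this inequality over k and using the lower bound on f then yields the summability of ||gk||²/Lk, from which the stated limit follows.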

Corollary 3.1. If the conditions of Theorem 3.1 hold and ρL ≤ Lk ≤ M (M > 0 is a fixed large constant) for all k, then

and, thus,

Remark. The above theorem shows that we can set a large Lk to guarantee global convergence. However, if Lk is very large, then αk will be very small, which slows the convergence rate of descent methods. On the other hand, very small values of Lk may fail to guarantee global convergence. Thus, it is better to use an adequate estimate Lk at each iteration.

3.2 Comparing with other step-sizes

Theorem 3.2. Assume that the hypotheses of Theorem 3.1 hold. Denote by ᾱk the exact step-size (obtained by the exact line search rules (a) and (b)). Then

Proof. By the line search rules (a) and (b), (H3) and the Cauchy-Schwarz inequality, we have:

Therefore,

Noting that Lk ≥ ρL, we have

Theorem 3.3. Assume that the hypotheses of Theorem 3.1 hold. Denote by ᾱk the step-size defined by the line search rule (c), with L being the Lipschitz constant of ∇f(x). Then,

where K1 = {k | ᾱk = sk} and K2 = {k | ᾱk < sk}.

Proof. If k ∈ K1, then

If k ∈ K2, then ᾱk < sk and thus ᾱk/γ ≤ sk; by the line search rule (c), we have

Using the mean value theorem on the left-hand side of the above inequality, we see that there exists θk ∈ [0, 1] such that

and, thus,

By (H3), the Cauchy-Schwarz inequality and (32), we obtain:

Since dk = -gk, by the above inequality, we get:

Theorem 3.4. Assume that the hypotheses of Theorem 3.1 hold. Denote by ᾱk the step-size defined by the line search rule (d), with L being the Lipschitz constant of ∇f(x). Then,

where αk is still the step-size of Algorithm (A).

Proof. If ᾱk = sk, then

If ᾱk < sk, then g(xk + ᾱk dk)^T dk = 0 and, thus,

Since dk = -gk, by the above inequality, we get:

Theorem 3.5. Assume that the hypotheses of Theorem 3.1 hold and let ᾱk be the step-size defined by the line search rule (e). Then,

Proof. By the line search rule (e), we have

By the mean value theorem, there exists θk such that

So,

By (H3), the Cauchy-Schwarz inequality and (36), we have:

Since dk = -gk, we get:

Theorem 3.6. Assume that the hypotheses of Theorem 3.1 hold and that ᾱk is the step-size defined by the line search rule (f) or (g). Then,

Proof. By (13), (14) and the Cauchy-Schwarz inequality, we have:

Since dk = -gk we get:

4 Convergence Rate

In order to analyze the convergence rate of the algorithm, we make use of Assumption (H4) below.

(H4). {xk} → x* (k → ∞), f(x) is twice continuously differentiable on N(x*, ε0), and ∇²f(x*) is positive definite.

Lemma 4.1. Assume that (H4) holds. Then (H1), (H3) and thus (H2) hold automatically for k sufficiently large, and there exist 0 < m′ ≤ M′ and 0 < ε′ ≤ ε0 such that

and thus

By (39) and (40) we can also obtain, from the Cauchy-Schwarz inequality, that

and

The proof of this lemma follows from Lemma 2.2.7 of [25]. See also Lemma 3.1.4 of [26].

Lemma 4.2. If (H1) and (H3) hold and Algorithm (A), with Lk ≤ M and L ≤ M, generates an infinite sequence {xk}, then there exists η > 0 such that

Proof. As in the proof of Theorem 3.1, we have:

Taking

we obtain the desired result.

Theorem 4.1. If assumption (H4) holds and Algorithm (A), with ρL ≤ Lk ≤ M and L ≤ M, generates an infinite sequence {xk}, then {xk} converges to x* at least R-linearly.

Proof. By (H4), there exists k′ such that xk ∈ N(x*, ε0) for all k ≥ k′. By (43) and Lemma 4.1 we obtain

By (42) we can assume that M′ < L and prove that q < 1. In fact, by the definition of η in the proof of Lemma 4.2, we obtain

Set

Since, obviously, ω < 1, we obtain from the above inequality that

By Lemma 4.1 and the above inequality we have:

Thus,

This shows that {xk} converges to x* at least R-linearly.

5 Numerical Experiments

We give an implementable version of this descent method.

Algorithm (A)'.

Step 0. Choose x0 ∈ Rn, δ ∈ (0, 2) and M ≫ L0 > 0, and set k := 0;

Step 1. If ||gk|| = 0 then stop; else go to Step 2;

Step 2. Estimate Lk ∈ [L0, M];

Step 3. Set xk+1 = xk - (δ/Lk) gk;

Step 4. Set k := k + 1 and go to Step 1.

The following formulae for αk = δ/Lk (k ≥ 1):

1.

2.

3.

4.

were tried in order to compare the new algorithm against the PR conjugate gradient method with restart and the BB method ([2]). In conjugate gradient methods one has

dk = -gk + βk dk-1,  d0 = -g0,

where

βk = ||gk||²/||gk-1||² (FR),  βk = gk^T(gk - gk-1)/||gk-1||² (PR),  βk = gk^T(gk - gk-1)/(dk-1^T(gk - gk-1)) (HS).

The corresponding methods are called the Fletcher-Reeves (FR), Polak-Ribière (PR) and Hestenes-Stiefel (HS) conjugate gradient methods, respectively. Among them, the PR method is regarded as the best in practical computation. However, the PR conjugate gradient method is not globally convergent in many situations. Some modified PR conjugate gradient methods with global convergence have been proposed (e.g., Grippo and Lucidi [9]; Shi [22]). In the PR conjugate gradient method, if gk^T dk > 0 occurs, we set dk = -gk, or we set dk = -gk every n iterations. This is called the restart conjugate gradient method (Powell [18]).
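The restarted PR direction just described can be sketched as follows (Python; the function name and the periodic-restart convention are our own):

```python
import numpy as np

def pr_direction(g, g_prev, d_prev, k, n):
    """Polak-Ribiere direction with restart: fall back to -g when the
    direction fails to be a descent direction, or every n iterations."""
    if k % n == 0:
        return -g                       # periodic restart
    beta = float(np.dot(g, g - g_prev)) / float(np.dot(g_prev, g_prev))
    d = -g + beta * d_prev
    if np.dot(g, d) > 0:                # uphill direction: restart
        d = -g
    return d
```

The safeguard guarantees g^T d ≤ -||g||² < 0 whenever a restart is triggered, so every accepted direction is a descent direction.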

We tested our algorithms with the termination criterion ||gk|| < eps, where eps = 10⁻⁸. The number of iterations needed to reach this precision is denoted by IN, and the number of function evaluations by FN.

We chose 18 test problems from the literature (Moré, Garbow and Hillstrom, 1981, [13]) and compared the new algorithms with the BB gradient method combined with Raydan and Svaiter's CBB method [19], and with the PR conjugate gradient algorithm with restart. Raydan and Svaiter's CBB method is an efficient nonmonotone gradient method [2, 20, 21], which is sometimes superior to some conjugate gradient methods [2]. The initial iterative points were also taken from the literature [13].

For the PR conjugate gradient method with restart, we use the Wolfe rule (g) with β = 0.75 and σ = 0.125. For our descent algorithms without line search, we choose the parameters δ = 1, M = 10⁸, and Lk defined by (44), (45), (46) and (47), respectively. The corresponding algorithms are denoted by A1, A2, A3 and A4.

Failures of the new method may occur, mainly due to inadequate estimates of Lk and sometimes due to roundoff errors. In order to avoid failure, we check f(xk - (δ/Lk)gk) < fk in our numerical experiments. Observe that δ ∈ (0, 2) is an adjustable parameter in the gradient method without line search. We can adjust δ to enforce the descent property of the objective function and improve the performance of the new method. If f(xk - (δ/Lk)gk) < fk holds, then we continue the iteration; otherwise, we reduce δ by setting δ := γδ with γ ∈ (0, 1) until the descent property holds.
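The δ-reduction safeguard can be sketched as follows (Python; names and the minimum-δ cutoff are ours):

```python
import numpy as np

def safeguarded_update(f, g, x, Lk, delta=1.0, gamma=0.5, delta_min=1e-12):
    """Try x - (delta/Lk)*g; shrink delta until f decreases (or delta is tiny)."""
    fx = f(x)
    while delta > delta_min:
        x_new = x - (delta / Lk) * g
        if f(x_new) < fx:               # descent achieved: accept the step
            return x_new, delta
        delta *= gamma                  # shrink delta and retry
    return x, delta                     # give up; keep the current iterate
```

When Lk underestimates the true Lipschitz constant, the loop simply halves δ until the trial point decreases f.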

In numerical experiments, by Theorem 3.1, Lk may tend to +∞. Thus, restricting Lk ∈ [L0, M] may seem unreasonable. In fact, we have no precise criteria for determining L0 and M. Generally, a very large L0 may lead to a slow convergence rate, and a very large M may violate the global convergence. As a result, L0 and M should be chosen carefully in practical computation, in order to obtain both global convergence and a fast convergence rate. Actually, we can combine the step-size estimation with a line search procedure to produce efficient descent algorithms. For example, in Armijo's line search rule, L > 0 is a constant at each iteration, and we can take the initial step-size s = sk = 1/Lk at the k-th iteration. In this case, the steepest descent method has the same numerical performance as our corresponding descent algorithm. Accordingly, choosing an adequate initial step-size is very important for Armijo's line search, and this is precisely the aim of step-size estimation.

We use a Pentium IV portable computer and Visual C++ to implement our algorithms and test the 18 problems. The numerical results are reported in Table 1.

The computational results show that the new method is efficient in practice. In some situations it needs far fewer iterations, fewer function evaluations, and hence less CPU time (in seconds) than the PR conjugate gradient method with restart and the CBB method. The results also show that the CBB method is promising, since it is superior to the new algorithms A1, A2 and A4 on these test problems.

The gradient method with Lk defined by (45) seems to be the best algorithm for these test problems. Thus, estimating Lk well is the key to constructing gradient methods without line search. If we take

Lk = dk-1^T yk-1 / ||dk-1||²

or

Lk = ||yk-1||² / (dk-1^T yk-1),

where dk-1 = xk - xk-1 and yk-1 = gk - gk-1, we obtain Barzilai and Borwein's method ([2]), which is an effective method for solving large scale unconstrained minimization problems.

Conclusion

In future research, we should seek further approaches for estimating the step-size as accurately as possible, and find techniques that guarantee both the global convergence and a fast convergence rate of gradient methods. Step-size estimation approaches can also be used to improve the original BB method and conjugate gradient methods; see, for example, [19].

Acknowledgements. This work was supported in part by NSF grant DMI-0514900, the Postdoctoral Fund of China, and the K.C. Wong Postdoctoral Fund of CAS (grant 6765700). The authors thank the anonymous referees and the editor for many suggestions and comments.

Received: 29/XI/04. Accepted: 16/V/05.

#620/04.

  • [1] L. Armijo, Minimization of functions having Lipschitz continuous first partial derivatives, Pacific J. Math., 16 (1966), 1-13.
  • [2] J. Barzilai and J.M. Borwein, Two-point step size gradient methods, IMA J. Numer. Anal., 8 (1988), 141-148.
  • [3] E.G. Birgin and J. M. Martinez, A spectral conjugate gradient method for unconstrained optimization, Appl. Math. Optim., 43 (2001), 117-128.
  • [4] A. I. Cohen, Stepsize analysis for descent methods, J. Optim. Theory Appl., 33(2) (1981), 187-205.
  • [5] H. B. Curry, The method of steepest descent for non-linear minimization problems, Quart. Appl. Math., 2 (1944), 258-261.
  • [6] Y. H. Dai and L. Z. Liao, R-linear convergence of the Barzilai and Borwein gradient method, IMA J. Numer. Anal., 22 (2002), 1-10.
  • [7] R. Fletcher, The Barzilai-Borwein method - steepest descent method resurgent? Report at the International Workshop on "Optimization and Control with Applications", Erice, Italy, July 9-17, 2001.
  • [8] A. V. Fiacco and G. P. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization Techniques, SIAM, Philadelphia (1990).
  • [9] L. Grippo and S. Lucidi, A globally convergent version of the Polak-Ribière conjugate gradient method, Math. Prog., 78 (1997), 375-391.
  • [10] A. A. Goldstein, Cauchy's method of minimization, Numer. Math. 4 (1962), 146-150.
  • [11] A. A. Goldstein, On steepest descent, SIAM J. Control, 3 (1965), 147-151.
  • [12] A. A. Goldstein and J. F. Price, An effective algorithm for minimization, Numer. Math., 10 (1967), 184-189.
  • [13] J. J. Moré, B. S. Garbow and K. E. Hillstrom, Testing unconstrained optimization software, ACM Trans. Math. Software, 7 (1981), 17-41.
  • [14] J. Nocedal and S. J. Wright, Numerical Optimization, Springer-Verlag, New York (1999).
  • [15] J. Nocedal, Theory of algorithms for unconstrained optimization, Acta Numerica, 1 (1992), 199-242.
  • [16] M. J. D. Powell, Direct search algorithms for optimization calculations, Acta Numerica, 7 (1998), 287-336.
  • [17] E. Polak, Optimization: Algorithms and Consistent Approximations, Springer, New York, (1997).
  • [18] M. J. D. Powell, Restart procedures for the conjugate gradient method, Math. Prog., 12 (1977), 241-254.
  • [19] M. Raydan and B. F. Svaiter, Relaxed steepest descent and Cauchy-Barzilai-Borwein method, Comput. Optim. Appl., 21 (2002), 155-167.
  • [20] M. Raydan, The Barzilai and Borwein method for the large scale unconstrained minimization problem, SIAM J. Optim., 7 (1997), 26-33.
  • [21] M. Raydan, On the Barzilai Borwein gradient choice of steplength for the gradient method, IMA J. Numer. Anal., 13 (1993), 321-326.
  • [22] Z. J. Shi, Restricted PR conjugate gradient method and its convergence (in Chinese), Advances in Mathematics, 31(1) (2002), 47-55.
  • [23] P. Wolfe, Convergence conditions for ascent methods. II: Some corrections, SIAM Rev., 13 (1971), 185-188.
  • [24] P. Wolfe, Convergence conditions for ascent methods, SIAM Rev., 11 (1969), 226-235.
  • [25] Y. Yuan and W. Y. Sun, Optimization Theory and Methods, Science Press, Beijing (1997).
  • [26] Y. Yuan, Numerical Methods for Nonlinear Programming, Shanghai Scientific & Technical Publishers, (1993).

Publication Dates

  • Publication in this collection
    20 Apr 2006
  • Date of issue
    Dec 2005

Sociedade Brasileira de Matemática Aplicada e Computacional - SBMAC, Rua Maestro João Seppe, nº 900, 16º andar - Sala 163, 13561-120 São Carlos - SP, Brazil. Tel./Fax: 55 16 3412-9752. E-mail: sbmac@sbmac.org.br