ABSTRACT
In this paper we develop a generic mixed bi-parametric barrier-penalty method, based upon generic barrier and penalty algorithms, for constrained nonlinear programming problems. When the feasible set is defined by equality and inequality functional constraints, it is possible to provide explicit barrier and penalty functions. In such a case, the continuity and differentiability properties of the restrictions and objective function are inherited by the penalized function.
The main contribution of this work is a constructive proof of the global convergence of the sequence generated by the proposed mixed method. The proof uses separately the main global convergence results of barrier and penalty methods. Finally, for a simple nonlinear problem, we deduce the mixed barrier-penalty function explicitly and illustrate all functions defined in this work. We also implement MATLAB code to generate the iterative points of the mixed method.
Keywords:
nonlinear programming; mixed barrier-penalty methods; convergence of mixed algorithm
1 INTRODUCTION
Mathematical optimization is one of the concepts widely used to analyze many complex decision or allocation problems. In order to make better use of available resources, optimization techniques allow the selection of values for a certain number of interrelated variables, with which we can measure the performance and quality of a decision by focusing on some objective function.
Specifically, a mathematical optimization problem consists of minimizing or maximizing an objective function f(x) subject to restrictions , where f is a real-valued continuous function defined on . In this work, we consider the feasible set Ω having three types of restrictions
where Ω1 can be any restriction set that is difficult to handle, Ω2 is a robust set, and Ω3 could be a simple set such as sign or bound restrictions. A robust set is one with a dense nonempty interior subset. In other words, the set has an interior, and any boundary point can be reached as the limit of a sequence of interior points, Luenberger & Ye (2008).
According to specifications above, we consider the following optimization problem,
One of the most common formulations of nonlinear programming problems arises when the restrictions are characterized by equality and inequality functional constraints, Bazaraa et al. (2013), Luenberger & Ye (2008), Wright & Nocedal (1999), Griva et al. (2009). Given the continuous functions , the classical nonlinear optimization problem is
where the restriction sets are given by .
For many decades, authors have proved theoretical results and proposed several algorithms for solving nonlinear optimization problems by penalty or barrier function methods. Luenberger & Ye (2008) and Fiacco & McCormick (1990) state convergence for both methods; Polyak (1971) showed the convergence rate of the penalty function method in Hilbert space; Bertsekas (1976) obtained convergence and rate-of-convergence results for the sequences of primal and dual variables generated by penalty and Lagrange multiplier methods, showing that the multiplier method is faster than the pure penalty method. Fiacco & McCormick (1990) demonstrate, by contradiction, the global convergence of the mixed penalty-barrier method. Also, Breitfeld & Shanno (1995) proposed a composite algorithm of augmented Lagrangian, modified log-barrier, and classical log-barrier methods, for which they demonstrated global convergence to a first-order stationary point of the constrained problem, based on Breitfeld & Shanno (1994).
In this work, we develop the mixed barrier-penalty method for solving a general nonlinear problem (2), and we provide a generic bi-parametric algorithm. The main contribution is a constructive proof of global convergence of the sequence generated by this mixed method, as an alternative to existing proofs under slightly different assumptions. Suñagua & Oliveira (2017) showed that computational experiments on NETLIB problems work successfully for large-scale linear optimization problems.
2 BARRIER METHODS OVERVIEW
Barrier methods are also called interior point or internal penalty methods. Some of their theoretical results were developed by Martínez & Santos (1995), Luenberger & Ye (2008), Nash & Sofer (1993), and Wright (1992). These methods are applicable to problems of the form
where f is a continuous function and Ω is a robust restriction set. This kind of set
often arises from inequality constraints, that is, , for which there is a point such that .
Barrier methods work by establishing a barrier on the boundary of the restriction set that prevents a search procedure from leaving the feasible region. A barrier function is a function B(·) defined on the interior of Ω such that (i) B is continuous, (ii) B(x) ≥ 0, and (iii) B(x) → ∞ as x approaches the boundary of Ω. For inequality constraints in many practical applications, the barrier functions commonly used are the logarithmic and the inverse barrier function. They are defined on Int(Ω) respectively by
Now, the problem (4) can be transformed into a penalized subproblem
where µ > 0 is called the barrier parameter and we take µ small (going to zero). In this approach, the main assumption is that the original problem (4) has a global solution x*. Let x(µ) be a global solution of subproblem (5). When µ_k → 0, we expect x(µ_k) to converge to x*.
Given , a generic barrier algorithm is presented in Algorithm 1.
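To make the scheme concrete, here is a minimal Python sketch of the generic barrier algorithm (the paper's own code is in MATLAB) on a toy problem: minimize f(x) = x subject to x ≥ 1, whose solution is x* = 1. The toy problem, the bracketing interval, and the update factor 0.1 are illustrative assumptions, not taken from the paper; each subproblem is solved by a simple ternary search.

```python
import math

def minimize_1d(phi, lo, hi, tol=1e-10):
    """Minimize a strictly convex function on [lo, hi] by ternary search."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if phi(m1) < phi(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

# Toy problem: minimize f(x) = x subject to x >= 1, so x* = 1.
# Log-barrier subproblem: phi(x) = x - mu*log(x - 1), defined on Int(Omega).
mu = 1.0                       # initial barrier parameter
for k in range(10):
    phi = lambda x, mu=mu: x - mu * math.log(x - 1.0)
    x_mu = minimize_1d(phi, 1.0 + 1e-12, 10.0)   # x(mu) = 1 + mu here
    mu *= 0.1                  # drive the barrier parameter to zero

print(x_mu)  # close to the constrained minimizer x* = 1
```

Each subproblem minimizer stays strictly inside the feasible region, and the sequence x(µ_k) approaches the boundary solution as µ_k → 0.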
The following Lemma gives a set of inequalities that follow directly from the steps of Algorithm 1. The proof is based on Luenberger & Ye (2008) and Martínez & Santos (1995).
Lemma 2.1. Let {x^k} be a sequence generated by Algorithm 1; then
-
-
-
.
Proof. Since {µ_k} is a monotone decreasing sequence, x^{k+1} is a global minimizer of (6), and, recalling (ii) of the barrier conditions, B is a non-negative function, then
To establish the second inequality, we also have
now, using (8) and (7), we get
eliminating the common factor , we prove item 2.
Finally, by previous inequality,
hence . □
The global convergence of the barrier method, in the sense that any limit point of the sequence is a solution of problem (4), can be verified from the previous Lemma.
Theorem 2.1. Let {x^k} be a sequence generated by Algorithm 1, in which µ_k → 0. Then, any limit point of the sequence is a global minimizer of problem (4).
Proof. Let be the global minimum value of on Int(Ω), whose solution is x^{k+1}. By Lemma 2.1, for all k. If , then
First, we prove only that the sequence {f^k} converges to f*; then we show that any convergent subsequence converges to some global minimizer.
Indeed, {f^k} is a monotone decreasing sequence bounded below; hence it converges to its infimum, say . If , then . Recalling that x* is the global minimizer of (4), and since f is a continuous function, there is an open ball ℬ centered at x* such that, for all , we have
Since for all , and , we have . Therefore,
Thus, for any , and k large enough, we get
Then, from (9) and (11), we have
which contradicts . Therefore, . That is
Now, let be any subsequential limit of {x^k}; more precisely, there is a subsequence such that . If with , then by the continuity of f the subsequence cannot converge to zero, which contradicts . Therefore, or , but . Thus, every limit point of the sequence generated by Algorithm 1 is a global solution of problem (4). □
3 PENALTY METHODS OVERVIEW
Given a continuous function, we consider the problem
where Ω1 and Ω2 are arbitrary subsets of ℝn. In most applications Ω1 is defined implicitly by functional restrictions as , where . In some cases, we assume that f and h are twice differentiable functions. A basic assumption is that problem (GP) admits a global minimizer; some theoretical results were established by Polyak (1971), Breitfeld & Shanno (1995), Nash (2010), and Luenberger & Ye (2008).
Given a restriction set Ω1, a penalty function is defined as a function satisfying (i) 𝒫 is continuous, (ii) , and (iii) .
In order to solve the problem (13), the penalty function method solves the following penalized subproblem
where ρ > 0 is a constant called the penalty parameter. For large ρ, it is clear that a solution of (14) will lie in a region where 𝒫 is small. Thus, as ρ → ∞, the corresponding optimal points are expected to approach the feasible set Ω1.
For C 2 class functions, , some useful penalty functions 𝒫 based on the type of restrictions may be
-
, quadratic penalty,
-
,
-
-
,
in the first item the quadratic penalty function preserves the C2 property, but in the last three items 𝒫 is only C1.
Given , we have a generic penalty algorithm, given in Algorithm 2, for solving problem (13); it works by iteratively updating the parameter ρ before solving the penalized subproblem (14).
In general, one suggestion to compute ρ_k is taking and , Fletcher (2013). However, when Ω1 is the set of equality constraints , a basic rule that works in practice is: if , then ; otherwise ρ does not change. That approach was successfully tested for linear programming problems by Suñagua & Oliveira (2017).
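As an illustrative sketch (again in Python, on a toy problem not taken from the paper), the generic penalty algorithm with the quadratic penalty and a tenfold update rule can be written as follows; here we minimize f(x) = x² subject to the equality constraint x − 1 = 0, whose solution is x* = 1.

```python
def minimize_1d(phi, lo, hi, tol=1e-12):
    """Minimize a strictly convex function on [lo, hi] by ternary search."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if phi(m1) < phi(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

# Toy problem: minimize f(x) = x^2 subject to h(x) = x - 1 = 0, so x* = 1.
f = lambda x: x * x
h = lambda x: x - 1.0

rho = 1.0                      # initial penalty parameter
x_k = 0.0
for k in range(10):
    q = lambda x, rho=rho: f(x) + rho * h(x) ** 2   # quadratic penalty subproblem
    x_k = minimize_1d(q, -2.0, 2.0)                 # here x(rho) = rho/(1+rho)
    if abs(h(x_k)) > 1e-8:     # still infeasible: increase rho tenfold
        rho *= 10.0

print(x_k)  # approaches x* = 1 as rho grows
```

The iterates x(ρ) = ρ/(1+ρ) are infeasible for every finite ρ and approach the feasible set only in the limit, which is the characteristic behavior of exterior penalty methods.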
The following Lemma gives a set of inequalities that follow directly from the steps of Algorithm 2. The proof is based on Martínez & Santos (1995) and Luenberger & Ye (2008).
Lemma 3.1. Let {x^k} be a sequence generated by Algorithm 2, in which x^{k+1} is a global solution of problem (Q_k). Then
-
-
-
.
Proof. Since {ρ k } is a monotone increasing sequence and x k is a global minimizer of subproblem (15), then
To establish the second inequality, recalling the optimalities of x k and x k+1 , we have
using (17) and (16), we get
as , then . Finally, using this inequality
hence . □
Lemma 3.2. If x* is a global minimizer of (GP), then for
Consequently, , if and only if, x^k is the global solution of (GP).
Proof. Since and and x k is the global minimizer of (Q k−1 ), then
where . □
The global convergence of the penalty method, in the sense that any limit point of the sequence is a solution, can be verified from the two previous Lemmas.
Theorem 3.1 (Global convergence for the penalty method). Let {x^k} be a sequence of global minimizers of (Q_k) generated by Algorithm 2, in which ρ_k → ∞. Then, any limit point of the sequence is a global minimizer of problem (13).
Proof. With a slight change of notation, the proof is based on the demonstration of Martínez & Santos (1995). Indeed, let be a subsequence of {x^k} such that . By the continuity of f, we have
Let f* be an optimal value of problem (GP). By Lemma 3.1 and Lemma 3.2, the sequence is nondecreasing and bounded above by f ∗, then
Thus, using (18) and (19), yields
Since and , we conclude that . Using the continuity of 𝒫, , thereby . To prove the optimality of , just note that by Lemma 3.2, , then
which completes the proof, because obviously and then . □
Furthermore, by definition of ψ and (19)
Therefore , then
And using (19)
4 MIXED BARRIER-PENALTY METHOD
For continuous function , we consider the general programming problem
where Ω1, Ω2 and Ω3 are restriction sets that are defined in (1).
As in the previous sections, we assume that problem (22) admits a global minimizer. Now, let 𝒫 be a penalty function related to Ω1 and B a barrier function related to Ω2. Then, taking the penalty parameter and the barrier parameter , we have the associated mixed barrier-penalty subproblem,
Since the general problem (NLP) admits a global minimizer, the problem (BP ρ,µ) in (23) also admits a global solution for any feasible parameter values. Therefore, we define
In order to solve the general problem (22), we provide a generic algorithm, given in Algorithm 3, that works by iteratively updating the parameters ρ and µ before solving the penalized subproblem (23).
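A minimal sketch of the mixed scheme, assuming a hypothetical one-dimensional problem (not from the paper): minimize f(x) = x² subject to the equality constraint x = 1 (handled by the quadratic penalty) and the inequality x > 0 (handled by the logarithmic barrier). Driving ρ_k → ∞ and µ_k → 0 simultaneously, the minimizers of Φ(x, ρ, µ) = x² + ρ(x − 1)² − µ ln x approach x* = 1.

```python
import math

def minimize_1d(phi, lo, hi, tol=1e-12):
    """Minimize a strictly convex function on [lo, hi] by ternary search."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if phi(m1) < phi(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

rho, mu = 1.0, 1.0             # initial penalty and barrier parameters
for k in range(8):
    # Mixed barrier-penalty subproblem: Phi(x) = f(x) + rho*P(x) + mu*B(x)
    Phi = lambda x, rho=rho, mu=mu: (x * x + rho * (x - 1.0) ** 2
                                     - mu * math.log(x))
    x_k = minimize_1d(Phi, 1e-12, 3.0)
    rho *= 10.0                # penalty parameter -> infinity
    mu *= 0.1                  # barrier parameter -> zero

print(x_k)  # approaches x* = 1 as rho -> infinity and mu -> 0
```

The update factors 10 and 0.1 are illustrative choices; the convergence theorem below only requires ρ_k → ∞ and µ_k → 0.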
To establish the global convergence of Algorithm 3, we first associate the additive terms in two convenient ways
Therefore, fixing respectively ρ and µ, we define , then we associate to (NLP) the following two problems
Since the problem (NLP) admits a global minimizer, both (GP ρ ) and (GP µ ) in (27) also admit global minimizers. Therefore, defining
we have respectively the barrier and penalty subproblems
By fixing one of the parameters according to (27), the two problems in (28) are equivalent to (BP ρ,µ ). In fact
therefore, we can apply the results obtained in the preceding two sections.
In order to understand more clearly the ideas of the mixed problem, we consider the following particular quadratic problem
According to the contours of the objective function and the graph of the restrictions in Figure 1, the optimal point is . First, if we consider the Lagrangian function , the Karush-Kuhn-Tucker conditions (Kuhn & Tucker, 1951) are
whose unique solution for the variables and Lagrange multipliers is
Now, we associate to (30) the mixed barrier-penalty subproblem
where the penalized objective function is
where M is a large enough positive number such that and ; surely this region lies within the inequality constraints . This condition ensures that the barrier function is non-negative in the region that contains the optimal point.
It is easy to see that Φ is a smooth function; thereby, from the first-order necessary conditions for optimal points, we have
Solving this nonlinear system, subject to and , by the substitution method, we obtain
Thus, for each optimal point in (33), the optimal value of problem (QP ρ,µ) in (31) is , whose graph is shown in Figure 2 with .
We can see that, for fixed , is an increasing function and, for fixed , is a decreasing function. This fact will be shown theoretically in Theorem 4.1.
Furthermore, using (33), when and , the following gradient coefficients in (32) converge to the optimal Lagrange multipliers
where . In addition, the Hessian matrix for Φ is
Then ∇2Φ is a positive definite matrix, which guarantees the minimality of x1 and x2 in (33). Moreover, the approximate condition number of this matrix is
hence ∇2Φ is ill-conditioned for very small µ and small ρ; however, for large ρ, that condition number can be reduced.
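The ill-conditioning for small µ can be seen already in one dimension: for a log-barrier term −µ ln(x − 1) enforcing a hypothetical constraint x ≥ 1 (an illustration, not the problem above), the curvature at the subproblem minimizer x(µ) = 1 + µ equals 1/µ, so it blows up as µ → 0.

```python
# Curvature of the 1-D barrier function phi(x) = x - mu*log(x - 1)
# at its minimizer x(mu) = 1 + mu: phi''(x(mu)) = mu/(x(mu) - 1)**2 = 1/mu.
for mu in (1e-2, 1e-4, 1e-6, 1e-8):
    x_mu = 1.0 + mu
    curvature = mu / (x_mu - 1.0) ** 2
    print(mu, curvature)   # curvature grows like 1/mu as mu -> 0
```

This is the one-dimensional analogue of the ill-conditioned Hessian ∇²Φ observed above.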
Next, we have the global convergence theorem for the mixed barrier-penalty algorithm.
Theorem 4.1 (Global convergence for the mixed method). Let {x^k} be a sequence of global minimizers of the (BP k) problem in (25) generated by the mixed Algorithm 3, in which ρ_k → ∞ and µ_k → 0. Then any limit point of the sequence is a global minimizer of the (NLP) problem.
Proof. In order to apply the results of the preceding sections, the idea is to fix one of the parameters in the (BP ρ,µ) subproblem in (23) one at a time, and apply the corresponding results for each subproblem in (27).
First, fixing ρ, let be the sequence generated by Algorithm 1 for solving the (GP ρ) subproblem in (27). By applying Lemma 2.1, we get
By the monotonicity in (34) and by (12), the sequence converges to the global optimal value of the problem (GP ρ) in (27), that is,
In addition, from Theorem 2.1, every convergent subsequence of converges to a global minimizer of the problem (GP ρ) in (27).
Similarly, fixing µ, let be the sequence generated by the associated Algorithm 2 for solving the (GP µ) subproblem in (27). By applying Lemma 3.1, we get
By the monotonicity in (36) and by (21), the sequence converges to the global optimal value of the (GP µ) problem in (27), that is,
In addition, from Theorem 3.1, every convergent subsequence of converges to a global minimizer of the problem (GP µ) in (27). And by Lemma 3.2, we get .
Now let {x k } be a sequence of minimizers obtained by Algorithm 3 for a mixed problem. More precisely, let which also minimizes (28), because according to (29), we have
From (35) and (37), we have
Let x(ρ_k, µ_k) be the solution of (BP ρ,µ) for and . For , we additionally solve (BP ρ,µ) for and , whose solution is called x(ρ_k, µ_{k−1}). Using (34),
and by (29), for and , we have
Using (39) and (40), we get
Similarly, for , we additionally consider a solution of (BP ρ,µ) for and , whose solution is called x(ρ_{k−1}, µ_k). Using (36),
and by (29) for and , we have
Using (41) and (42), we get
Let x* be a global minimizer of (NLP). Recalling a solution of the problem in (27), with the additional assumption , we can conclude that . Moreover, x* is a feasible point of the problem (GP ρ); then . Therefore, is a monotone nonincreasing sequence that is bounded below by f(x*), and by (12) this sequence converges to its infimum f(x*). Also, is a monotone nondecreasing sequence that is bounded above by f(x*), and by (21) that sequence converges to its supremum f(x*), that is,
By applying the squeeze theorem1 to (38) and (43), we show
Let be any subsequence of {x^k} such that . By the continuity of f, we get . The final part of the demonstration is done by contradiction, under the assumption with . Using (10) for the problem (GP ρ), we have , for any . Furthermore, using (20) for the problem (GP µ), we also have , and by the continuity of f, the sequence cannot converge to zero, which contradicts (44). □
5 APPLICATIONS
5.1 Barrier-Penalty applied to convex problem
Algorithm 4, based on the generic Algorithm 3, solves the nonlinear problem (30).
For , we write a MATLAB script for Algorithm 4 in order to compute a sequence of optimal points that approaches . The iterative results are shown in Table 1.
The path of the iterates is shown in Figure 3, where the last points are close to x*. The exact results, also computed with MATLAB, are and .
5.2 Penalized standard linear programming problem
We consider the standard linear programming problem where several variables are upper bounded
where A is an m × n matrix, , and E is formed by the rows of the n × n identity matrix corresponding to the bounded variables; thereby Ex is the vector of bounded variables and u is the vector of upper bounds. In this case it is usual to add a slack variable v such that Ex + v = u, where v ≥ 0.
In most computational packages that implement Interior Point Methods for solving linear programming problems, only the barrier parameter is considered.
In order to solve the LP problem (46) by using the quadratic penalty and logarithmic barrier functions, the objective function is penalized as follows:
where µ and ρ are respectively the barrier and penalty parameters and nb is the number of bounded variables. Then the associated mixed barrier-penalty subproblem is
Since Φ(x, v, ρ, µ) is a smooth function on the open set , applying the first-order necessary condition we have
Defining , we get
Taking , we rewrite . Thus .
Therefore, the optimality conditions for subproblem (LPP ρ,µ ) on and are
In the Interior Point Methods reviewed in Suñagua & Oliveira (2017), we find a search direction by applying Newton's method to the nonlinear system (49). In fact, the Newton directions satisfy
solving these block linear equations, we find
replacing this in the third group of equations,
where , then
using (52) and first group of equations of (50), we get the normal equations
Close to the optimal point, the matrix D is very badly scaled, and then ADAT is also very ill-conditioned. In this case, the penalty parameter δ improves that condition number, which is helpful for solving the symmetric positive definite system, for instance by the conjugate gradient method.
As an alternative to (52) and (53), dx and dy can also be obtained by solving the following augmented system
where . This system is also symmetric and indefinite, with a better condition number due to the penalty parameter.
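As a generic numerical illustration of this point (with hypothetical 2 × 2 data, since the exact form of D is elided above), shifting the eigenvalues of a badly scaled matrix of the form A D Aᵀ by a small δ drastically reduces its condition number:

```python
import math

def eig_sym2(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    t = 0.5 * (a + c)
    d = math.sqrt((0.5 * (a - c)) ** 2 + b * b)
    return t - d, t + d

# Hypothetical diagonal scaling near the optimum: entries of D spanning
# many orders of magnitude, as is typical for interior point iterates.
d1, d2 = 1e-6, 1e6
# For A = [[1, 1], [0, 1]], the normal-equations matrix A D A^T is
# [[d1 + d2, d2], [d2, d2]]:
a, b, c = d1 + d2, d2, d2

lam_min, lam_max = eig_sym2(a, b, c)
cond_raw = lam_max / lam_min           # huge: the system is ill-conditioned

delta = 1e-2                            # illustrative regularization value
cond_reg = (lam_max + delta) / (lam_min + delta)

print(cond_raw, cond_reg)  # the shifted spectrum is far better conditioned
```

The shift moves the tiny eigenvalue away from zero while barely changing the large one, which is why a δ-type regularization helps iterative solvers such as conjugate gradients.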
For computational experiments, we use the open-source package PCx (Czyzyk et al., 1997), which implements Mehrotra's predictor-corrector algorithm, in which the barrier parameter µ is already incorporated for solving linear programming problems. By adding appropriate code to PCx, we incorporate the penalty parameter δ, thus obtaining a modified PCx called the predictor-corrector mixed algorithm with barrier and penalty parameters. Numerical results for several NETLIB LP problems were computed for the approaches proposed in Suñagua & Oliveira (2017), where the quality of the approaches was compared according to the performance profile criteria of Dolan & Moré (2002).
6 CONCLUSIONS
First, we present a brief summary of the main concepts and results on the barrier and penalty methods, where for each method we show the global convergence theorems, in order to use these strategies in the proof of the global convergence theorem for the mixed algorithm.
In Section 4, we provide a mixed algorithm for solving the mixed barrier-penalty subproblem (23), and we give a constructive proof of the global convergence theorem for mixed barrier-penalty methods, as an alternative to the ones shown in Fiacco & McCormick (1990) and Breitfeld & Shanno (1995). For simple convex nonlinear problems, we write MATLAB code in order to generate iterative points that illustrate the penalty and barrier functions.
Finally, we develop applications for nonlinear programming problems with equality and inequality functional constraints, namely a quadratic programming problem and a standard linear programming problem. Since the functions involved are smooth on an open set, the optimality conditions for each class of problems are stated; these can be solved by applying interior point methods.
ACKNOWLEDGEMENTS
Thanks to CNPq, FAPESP (grant number 2010/06822-4) and Universidad Mayor de San Andrés (UMSA) for their financial support.
References
1. BAZARAA MS, SHERALI HD & SHETTY CM. 2013. Nonlinear programming: theory and algorithms. John Wiley & Sons.
2. BERTSEKAS DP. 1976. On penalty and multiplier methods for constrained minimization. SIAM Journal on Control and Optimization, 14(2): 216-235.
3. BREITFELD MG & SHANNO DF. 1994. A globally convergent penalty-barrier algorithm for nonlinear programming and its computational performance. Rutgers University, Rutgers Center for Operations Research (RUTCOR).
4. BREITFELD MG & SHANNO DF. 1995. A Globally Convergent Penalty-Barrier Algorithm for Nonlinear Programming. In: Operations Research Proceedings 1994. pp. 22-27. Springer.
5. CZYZYK J, MEHROTRA S, WAGNER M & WRIGHT SJ. 1997. PCx user guide (Version 1.1). Optimization Technology Center, Northwestern University.
6. DOLAN ED & MORÉ JJ. 2002. Benchmarking optimization software with performance profiles. Mathematical Programming, 91(2): 201-213.
7. FIACCO AV & MCCORMICK GP. 1990. Nonlinear programming: sequential unconstrained minimization techniques. vol. 4. SIAM.
8. FLETCHER R. 2013. Practical methods of optimization. John Wiley & Sons, Chichester.
9. GRIVA I, NASH SG & SOFER A. 2009. Linear and nonlinear optimization. vol. 108. SIAM.
10. KUHN HW & TUCKER AW. 1951. Nonlinear Programming. In: Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability. pp. 481-492. Berkeley, California: University of California Press. Available at: http://projecteuclid.org/euclid.bsmsp/1200500249
11. LUENBERGER DG & YE Y. 2008. Linear and nonlinear programming. 3rd ed. Springer New York.
12. MARTÍNEZ JM & SANTOS SA. 1995. Métodos computacionais de otimização. Colóquio Brasileiro de Matemática, Apostilas, 20. Available at: https://www.ime.unicamp.br/~martinez/mslivro.pdf
13. NASH SG. 2010. Penalty and barrier methods. Wiley Encyclopedia of Operations Research and Management Science.
14. NASH SG & SOFER A. 1993. A barrier method for large-scale constrained optimization. ORSA Journal on Computing, 5(1): 40-53.
15. POLYAK BT. 1971. The convergence rate of the penalty function method. USSR Computational Mathematics and Mathematical Physics, 11(1): 1-12.
16. SUÑAGUA P & OLIVEIRA AR. 2017. A new approach for finding a basis for the splitting preconditioner for linear systems from interior point methods. Computational Optimization and Applications, 67(1): 111-127.
17. WRIGHT MH. 1992. Interior methods for constrained optimization. Acta Numerica, 1: 341-407.
18. WRIGHT SJ & NOCEDAL J. 1999. Numerical optimization. vol. 2. Springer New York.

1 Formulated in modern terms by Carl Friedrich Gauss.
Publication Dates
- Publication in this collection: 18 May 2020
- Date of issue: 2020

History
- Received: 07 Dec 2018
- Accepted: 31 Oct 2019