A Review of Gradient Algorithms for Numerical Computation of Optimal Trajectories



INTRODUCTION
In the last four decades, a great number of numerical methods have been developed for computing solutions to optimal control problems. These methods are divided into two main classes: indirect and direct. The indirect ones involve the solution of a nonlinear two-point boundary value problem obtained from a set of necessary conditions for a local extremum of the objective function, provided by Pontryagin's Maximum Principle (Pontryagin et al., 1962) or the calculus of variations (Bliss, 1946; Hestenes, 1966). For instance, the quasilinearization method, also known as the generalized Newton-Raphson method (McGill and Kenneth, 1964), and the shooting method (Sage and White, 1977) belong to this class of numerical methods. On the other hand, the direct methods use only the equations of motion and the terminal conditions, and they attempt to minimize the objective function through an iterative procedure. One of the first direct methods was proposed by Kelley (1960) and is referred to as the gradient method.
Other direct methods are the steepest descent method (Bryson and Denham, 1962) and the ones based upon the second variation theory (Bullock and Franklin, 1967; Longmuir and Bohn, 1969; Bryson and Ho, 1975). These direct methods use the calculus of variations, but there is another class of direct methods that transform optimal control problems into nonlinear programming ones, such as the direct transcription or collocation method (Betts, 1993; 1994) and the direct methods based on differential inclusion concepts (Seywald, 1994; Coverstone-Carroll and Williams, 1994).
In this paper, the steepest descent method and the direct method based upon the second variation theory, which will be referred to as the second variation method, are discussed, considering algorithms that differ from the original ones proposed by Bryson and Denham (1962) and Longmuir and Bohn (1969). The steepest descent method was developed for a Mayer problem of optimal control; terminal constraints on the state variables were considered through the penalty function method (Hestenes, 1969; O'Doherty and Pierson, 1974). A procedure to adjust the step size in control space is proposed to improve the convergence rate and avoid the divergence of the method as the optimal solution is approached. On the other hand, the second order gradient method involves the determination of closed-loop solutions of a linear quadratic optimal control problem using the Riccati transformation. The algorithm was developed for a Bolza problem of optimal control, with a constraint on the control corrections introduced to satisfy the Legendre condition and avoid the divergence of the method (Bullock and Franklin, 1967). In both methods, problems with free final time are treated through a transformation approach that introduces a new independent variable and an additional control variable, and transforms a time-free problem into a fixed-time one. The methods are applied in solving two classic optimization problems, the Brachistochrone and Zermelo problems, and their main advantages and disadvantages are discussed. Finally, an algorithm that combines the main positive characteristics of these methods is presented and applied to space trajectory optimization for transfers between coplanar circular orbits.

STEEPEST DESCENT METHOD
The steepest descent method is an iterative direct method widely used for computing an m-vector of control variables u(t), which minimizes a scalar performance index in an optimization problem (Kelley, 1960; Bryson and Denham, 1962; McIntyre, 1968; McDermott and Fowler, 1977).
The algorithm is here presented for a Mayer problem of optimal control: terminal constraints on the state variables are treated by using the penalty function method (Hestenes, 1969; O'Doherty and Pierson, 1974), and problems with free final time are treated by using a transformation approach, which introduces new independent and additional control variables. The algorithm has two main features: it is very simple, requiring a single numerical integration of the adjoint equations at each step, and some of the typical divergence problems described by McDermott and Fowler (1977) are circumvented.
Consider the system of differential equations (Eq. 1):

dx/dt = f(x, u, t),  t_0 ≤ t ≤ t_f,  (1)

where x is an n-vector of state variables, u is an m-vector of control variables, and the functions f_i(.), i = 1, ..., n, are continuous. It is assumed that there are no constraints on the state or control variables. The problem consists in determining the control u*(t) that transfers the system (Eq. 1) from the initial conditions (Eq. 2):

x(t_0) = x_0,  (2)

to the terminal conditions at t_f (Eq. 3):

ψ(x(t_f)) = 0,  (3)

and minimizes the performance index (Eq. 4):

J = g(x(t_f)),  (4)

with g: R^n → R. For completeness, a brief description of the steepest descent method is presented. The development of the algorithm was based on the classic calculus of variations (Bliss, 1946; Hestenes, 1966; Gelfand and Fomin, 1963; Elsgolts, 1977). Introducing the n-vector λ of Lagrangian multipliers (adjoint variables), the augmented performance index is formed as Eq. 5:

J̄ = g(x(t_f)) + ∫ from t_0 to t_f [H − λ^T dx/dt] dt,  (5)

where H is the Hamiltonian function (Eq. 6):

H(x, λ, u, t) = λ^T f(x, u, t).  (6)

Let u_0(t), t_0 ≤ t ≤ t_f, be an arbitrary starting approximation of the control u*(t), t_0 ≤ t ≤ t_f, and x_0(t), t_0 ≤ t ≤ t_f, the corresponding trajectory, obtained by integrating the system (Eq. 1). Let u_1(t) = u_0(t) + δu(t) be the next iterate, where the corrections δu(t) are assumed to be small, in order that the linearity assumptions are valid, and satisfy the constraint in Eq. 7:

∫ from t_0 to t_f δu^T W δu dt = K,  (7)

where W(t) is a positive definite symmetric m × m matrix of weighting functions chosen to improve convergence of the steepest descent method, and K is the step size in control space. Both W(t) and K must be chosen by the user of the algorithm.

The control variation δu(t), t_0 ≤ t ≤ t_f, causes perturbations δx(t) in the state vector, so that the trajectory corresponding to the control u_1(t), t_0 ≤ t ≤ t_f, can be expressed as x_1(t) = x_0(t) + δx(t). Accordingly, the change in J̄, which is assumed to be differentiable, is given to first order by Eq. 8 (McIntyre, 1968):

δJ̄ = [(∂g/∂x) δx] at t_f + ∫ from t_0 to t_f [H_x δx + H_u δu − λ^T d(δx)/dt] dt.  (8)

Since δx(t_0) = 0, taking the n-vector λ of Lagrangian multipliers as the solution of the system of differential equations (Eq. 9):

dλ/dt = −H_x^T,  (9)

with the boundary conditions (Eq. 10):

λ(t_f) = [∂g/∂x]^T at t = t_f,  (10)

Equation 8 reduces to Eq. 11:

δJ̄ = ∫ from t_0 to t_f H_u δu dt.  (11)

The minimization of δJ̄ subject to the constraint (Eq. 7) is an isoperimetric problem of the calculus of variations, the solution of which is given by Eqs. 12 and 13 (Gelfand and Fomin, 1963; Elsgolts, 1977):

δu = −(1/2ν) W^(−1) H_u^T,  (12)

ν = (1/2) (I/K)^(1/2),  with  I = ∫ from t_0 to t_f H_u W^(−1) H_u^T dt,  (13)

where ν is the Lagrangian multiplier corresponding to the constraint (Eq. 7).

Thus, from Eqs. 11 to 13, it follows Eq. 14:

δJ̄ = −(K I)^(1/2) ≤ 0,  (14)

and the new value of the performance index will be smaller than the previous one.
The step-by-step computing procedure to be used in the steepest descent method is summarized as: (i) integrate the state equations (Eq. 1) forward, from t_0 to t_f, with the nominal control u_0(t), t_0 ≤ t ≤ t_f, and the initial conditions x(t_0) = x_0; (ii) integrate the adjoint equations (Eq. 9) backward, from t_f to t_0, with the terminal conditions (Eq. 10); (iii) compute the multiplier ν from Eq. 13; (iv) compute the control correction δu(t), t_0 ≤ t ≤ t_f, from Eq. 12; (v) set u_1(t) = u_0(t) + δu(t) and repeat the process until the integral I becomes sufficiently small. It should be noted that, as the nominal control u_n(t) approaches the optimal one u*(t), the integral I approaches zero and the Lagrangian multiplier ν tends to zero; as a consequence, the correction δu(t), obtained from Eq. 12, can become too large and the process diverges. In order to avoid this drawback, the step size in control space K must be adjusted during the iterations: supposing that K_m is the step size in control space at the m-th iterate, if I < L, then K_(m+1) = ε K_m, with 0 < ε < 1 as a reduction factor. L and ε must be chosen by the user of the algorithm. Numerical experiments have shown that this approach provides good results.
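The iteration above can be sketched in a few lines of code. The following is a minimal illustration on a toy Mayer problem of our own choosing (dx/dt = u, x(0) = 0, J = (x(1) − 1)^2, no terminal constraint), not one of the paper's examples; the grid, weights and parameter values are all assumptions made for the sketch.

```python
import math

# Sketch of the steepest descent (first-order gradient) iteration on a toy
# Mayer problem: dx/dt = u, x(0) = 0, minimize J = (x(1) - 1)^2.
# Here H = lam*u, so H_u = lam, and the adjoint equation dlam/dt = -H_x = 0
# gives lam(t) constant, with lam(1) = dg/dx = 2*(x(1) - 1).

N = 100                      # Euler grid points on [0, 1]
dt = 1.0 / N
u = [0.0] * N                # starting approximation u0(t) = 0
K, eps, L = 0.04, 0.5, 1e-4  # step size, reduction factor, critical value

for it in range(100):
    # (i) integrate the state equation forward with the nominal control
    x = 0.0
    for k in range(N):
        x += u[k] * dt
    # (ii) integrate the adjoint backward (lam is constant in t here)
    lam = 2.0 * (x - 1.0)
    Hu = [lam] * N
    # convergence integral I = ∫ H_u W^{-1} H_u dt, with W = 1
    I = sum(h * h for h in Hu) * dt
    if I < 1e-12:
        break
    # (iii)-(iv) correction du = -(1/2nu) W^{-1} H_u with nu = 0.5*sqrt(I/K),
    # i.e. du = -sqrt(K/I) * H_u, which satisfies ∫ du^2 dt = K
    scale = math.sqrt(K / I)
    for k in range(N):
        u[k] -= scale * Hu[k]
    # (v) reduce the step size in control space once I drops below L
    if I < L:
        K *= eps

x = sum(u) * dt
print(abs(x - 1.0) < 1e-3)  # converged close to the optimum
```

Each iteration moves x(1) toward 1 by sqrt(K), and the step-size reduction prevents the oscillation around the optimum that a fixed K would produce.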

SECOND VARIATION METHOD
The direct method based upon the second variation is also an iterative method used for computing an m-vector of control variables u(t), which minimizes a scalar performance index in an optimization problem (Bullock and Franklin, 1967; Longmuir and Bohn, 1969; Bryson and Ho, 1975; Imae, 1998). In the present work, this method will be simply referred to as the second variation method.
The second variation method is developed for a Bolza problem of optimal control. The sweep method proposed by Longmuir and Bohn (1969) is applied for solving the linear two-point boundary value problem associated with the accessory minimization problem obtained from the second variation of the augmented performance index of the original optimization problem. Therefore, the algorithm here presented is different from the one described by Bullock and Franklin (1967), which was developed for a Mayer problem and involves a different set of transformation matrices for solving the linear two-point boundary value problem associated with the accessory minimization problem. The present algorithm is also different from the one proposed by Bryson and Ho (1975), since it does not include the stopping condition for time-free problems, which are treated by a transformation approach, as mentioned. However, the algorithm is closely based on the approach for second variation methods described by Longmuir and Bohn (1969), with the inclusion of the additional control constraint introduced by Bullock and Franklin (1967) in their algorithm in order to assure the convergence and to satisfy Legendre's condition.
For completeness, a brief description of the development of the second variation method is presented. Consider the system of differential equations (Eq. 15):

dx/dt = f(x, u, t),  t_0 ≤ t ≤ t_f,  (15)

where x is an n-vector of state variables and u is an m-vector of control variables. It is assumed that there are no constraints on the state or control variables.
The problem consists in determining the control u*(t), which transfers the system (Eq. 15) from the initial conditions (Eq. 16):

x(t_0) = x_0,  (16)

to the terminal conditions at t_f (Eq. 17):

ψ(x(t_f)) = 0,  (17)

and minimizes the performance index (Eq. 18):

J = g(x(t_f)) + ∫ from t_0 to t_f F(x, u, t) dt,  (18)

with g: R^n → R, g and ∂g/∂x_i, i = 1, ..., n, continuous on R^n; F(.) and ∂F/∂x_i, i = 1, ..., n, continuous on R^n × R^m; and ψ: R^n → R^q, q < n, with ψ_i(.) and ∂ψ_i/∂x_j, i = 1, ..., q, j = 1, ..., n, continuous on R^n. Furthermore, it is assumed that the matrix ψ_x has maximum rank.
The second variation method is an extension of the steepest descent one already presented and it is also based on the classic calculus of variations.The main difference is the inclusion of the second-order terms in the expansion of the augmented performance index about a nominal solution.By virtue of this fact, the method is also known as second-order gradient method (Bryson and Ho, 1975).
Introducing the n-vector λ of Lagrangian multipliers (adjoint variables) and the q-vector μ of Lagrangian multipliers, the augmented performance index is formed by Eq. 19:

J̄ = g(x(t_f)) + μ^T ψ(x(t_f)) + ∫ from t_0 to t_f [H − λ^T dx/dt] dt.  (19)

Here, the Hamiltonian function H is defined as H(x, λ, u, t) = F(x, u, t) + λ^T f(x, u, t).

In order to derive the algorithm of the second variation method, one proceeds as in the steepest descent method. Let u_0(t), t_0 ≤ t ≤ t_f, be an arbitrary starting approximation of the control u*(t), t_0 ≤ t ≤ t_f; x_0(t), t_0 ≤ t ≤ t_f, the corresponding trajectory, obtained by integrating the system (Eq. 15); and μ_0, an arbitrary starting approximation of the Lagrange multiplier μ. Let u_1(t) = u_0(t) + δu(t), t_0 ≤ t ≤ t_f, be the second iterate, such that J[u_1] ≤ J[u_0]. The control variation δu(t), t_0 ≤ t ≤ t_f, causes perturbations in the state vector x(t), as well as in the adjoint vector λ(t) and in the Lagrange multiplier μ, so that the nominal values of x(t), λ(t) and μ, corresponding to the control u_1(t), t_0 ≤ t ≤ t_f, can be expressed as Eq. 20:

x_1(t) = x_0(t) + δx(t),  λ_1(t) = λ_0(t) + δλ(t),  μ_1 = μ_0 + δμ.  (20)

It is assumed that the perturbations δx, δλ, δu and δμ are small. Therefore, the change in J̄, which is assumed to be twice differentiable, is given, to second order in the perturbations, by the expansion of Eq. 19 about the nominal solution (Eq. 21) (Longmuir and Bohn, 1969). Considering that the linear perturbations of the state equations (Eq. 15) satisfy Eq. 22:

d(δx)/dt = f_x δx + f_u δu,  (22)

with the initial conditions δx(t_0) = 0, taking the adjoint equations (Eq. 23):

dλ/dt = −H_x^T,  (23)

with the boundary conditions (Eq. 24):

λ(t_f) = [G_x]^T at t = t_f,  (24)

and taking the state equations in Hamiltonian form, dx/dt = H_λ^T (Eq. 25), Eq. 21 reduces to the form of Eq. 26, in which only the perturbations δx, δu and δμ appear, since δx(t_0) = 0. Notice that δλ(t) satisfies the linear perturbation equation corresponding to Eq. 23 (Eq. 27):

d(δλ)/dt = −[f_x^T δλ + H_xx δx + H_xu δu].  (27)

The change in the performance index, including the second order terms, can then be written as Eq. 28:

ΔJ̄ = ∫ from t_0 to t_f [H_u δu + (1/2)(δx^T H_xx δx + 2 δx^T H_xu δu + δu^T H_uu δu)] dt + (1/2)[δx^T G_xx δx] at t_f + [δμ^T ψ_x δx] at t_f,  (28)

where (Eq. 29):

G = g + μ^T ψ.  (29)
The minimization of ΔJ̄ with respect to δu(t) defines the following new problem of optimal control, known as the accessory problem (Bullock and Franklin, 1967): determine δu(t), t_0 ≤ t ≤ t_f, to minimize the performance index ΔJ̄ given by Eq. 28, subject to Eqs. 30 to 32:

d(δx)/dt = f_x δx + f_u δu,  (30)

δx(t_0) = 0,  (31)

[ψ_x δx] at t_f = δψ,  (32)

where δψ is the prescribed change in the terminal constraint. Following Pontryagin's Maximum Principle (Pontryagin et al., 1962), the n-vector p of adjoint variables is introduced and the Hamiltonian function H′(δx, p, δu) is formed by using Eq. 30 (Eq. 33):

H′ = p^T (f_x δx + f_u δu) + H_u δu + (1/2)(δx^T H_xx δx + 2 δx^T H_xu δu + δu^T H_uu δu).  (33)

The optimal control δu*(t) (Eq. 34) must be chosen to maximize the Hamiltonian H′ (Eq. 33):

δu* = −H_uu^(−1) [H_ux δx + f_u^T p + H_u^T].  (34)

The adjoint vector p(t) must satisfy the differential equation (Eq. 35):

dp/dt = −[f_x^T p + H_xx δx + H_xu δu],  (35)

with the boundary conditions (Eq. 36):

p(t_f) = [G_xx δx + ψ_x^T δμ] at t_f,  (36)

where δμ is the Lagrangian multiplier corresponding to the constraint (Eq. 32). Note that Equations 35 and 36 are the linear perturbation ones corresponding to Eqs. 23 and 24, respectively; thus, p(t) = δλ(t), and the multiplier of Eq. 32 is identified with δμ.
Equation 40 means that, on each step, only "partial corrections" of the terminal constraint violation are obtained. The parameter k must be chosen by the user of the algorithm. Numerical experiments show that this parameter can be used to prevent the divergence of the method.
The solution of the two-point boundary value problem is obtained by the sweep method, which uses the generalized Riccati transformation (Longmuir and Bohn, 1969; Bryson and Ho, 1975), as in Eqs. 42 and 43:

p(t) = R(t) δx(t) + L(t) δμ + s(t),  (42)

δψ = L^T(t) δx(t) + Q(t) δμ + r(t),  (43)

where R is an n × n symmetric matrix, L is an n × q matrix, Q is a q × q symmetric matrix, s is an n × 1 matrix, and r is a q × 1 matrix.
In order that Eqs. 42 and 43 be consistent with Eqs. 37 to 41, the matrices R, L, Q, s and r must satisfy a set of differential equations (Eqs. 44 to 48), with the boundary conditions given by Eqs. 49 to 53. The step-by-step computing procedure to be used in the second variation (or second order gradient) method is then analogous to the one of the steepest descent method: the state and adjoint equations are integrated with the nominal control and multipliers; the transformation matrices are integrated backward, from t_f to t_0, starting from the boundary conditions (Eqs. 49 to 53); and the control correction δu is computed from Eq. 34 with p given by Eq. 42. Moreover, if a constraint of the same form as Eq. 7 is imposed on the control corrections, which further constrains the control effort, the resulting functional satisfies the Legendre condition if the m × m matrix W_2 is chosen large enough. Therefore, H_uu must be replaced by H_uu + W_2 in the algorithm. This constraint plays a role analogous to that of the constraint (Eq. 7) introduced in the steepest descent method (Bullock and Franklin, 1967).
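The backward sweep that produces the closed-loop solution of the accessory (linear quadratic) problem can be illustrated on a scalar example. The sketch below is our own toy problem, not one from the paper: it is the scalar, terminal-constraint-free special case of the Riccati transformation (Eqs. 42 and 43), where p(t) = S(t) δx(t) and a single Riccati equation replaces Eqs. 44 to 48; all coefficient values are assumptions.

```python
# Scalar illustration of the backward Riccati sweep used to obtain a
# closed-loop solution of a linear quadratic problem:
#   minimize (1/2) ∫ (q x^2 + r u^2) dt  subject to  dx/dt = a x + b u.
# The sweep p(t) = S(t) x(t) reduces the linear two-point boundary value
# problem to one backward integration of a scalar Riccati equation.

a, b, q, r = 1.0, 1.0, 1.0, 0.1   # assumed plant and weight values
x0, T, N = 1.0, 1.0, 1000
dt = T / N

# backward sweep: dS/dt = -2 a S - q + (b^2 / r) S^2, with S(T) = 0
S = [0.0] * (N + 1)
for k in range(N, 0, -1):
    dS = -2.0 * a * S[k] - q + (b * b / r) * S[k] ** 2
    S[k - 1] = S[k] - dS * dt

# forward pass with the closed-loop (feedback) control u = -(b/r) S x
x, J_lqr = x0, 0.0
for k in range(N):
    u = -(b / r) * S[k] * x
    J_lqr += 0.5 * (q * x * x + r * u * u) * dt
    x += (a * x + b * u) * dt

# cost of the trivial feasible control u = 0, for comparison
x, J_zero = x0, 0.0
for k in range(N):
    J_zero += 0.5 * q * x * x * dt
    x += a * x * dt

print(J_lqr < J_zero)  # the swept closed-loop control does better
```

The same structure carries over to the matrix case: R, L, Q, s and r are integrated backward once per iteration, after which δu follows pointwise from the feedback law of Eq. 34.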

THE COMBINED ALGORITHM
As discussed in the book by Bryson and Ho (1975), the main characteristic of the steepest descent method is the great improvement obtained in the first iterations, but slow convergence as the optimal solution is approached. On the other hand, the second variation method presents excellent convergence characteristics as the optimal solution is approached, but it requires a good starting approximation, satisfying convexity conditions; moreover, numerical experiments have shown that the accuracy of the solution is very sensitive to the choice of the matrix W_2. Considering these remarks, an algorithm that combines the best characteristics of each method is implemented: in the first phase, the steepest descent method is applied to bring the nominal solution to a point where the problem is convex; then, in the second phase, the second variation method is applied, using the last control iterate of the first phase as the starting approximation. The point where the algorithm commutes from one method to the other is prescribed by the user and is based on the value desired for the integral I. The resulting algorithm will be simply referred to as the combined algorithm, and its implementation is a combination of the step-by-step computing procedures of the steepest descent and second variation methods described in the preceding sections. As it will be later presented, the combined algorithm provides good results, which have motivated its application to the analysis of optimal low-thrust limited-power transfers between coplanar circular orbits.
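The rationale of the switch can be seen already in a finite-dimensional analogy (our own illustration, not the paper's algorithm): a first-order gradient phase far from the optimum, commuting to a second-order (Newton) phase once a gradient measure falls below a user-chosen threshold. The test function and threshold below are assumptions.

```python
# Finite-dimensional analogy of the combined strategy. The function
# f(x) = sqrt(1 + x^2) has its minimum at x = 0; pure Newton iteration
# diverges from |x| > 1 (the step maps x to -x^3), while gradient steps
# always make progress. Switching combines global progress with the
# fast terminal convergence of the second-order method.

def fp(x):   # f'(x)
    return x / (1.0 + x * x) ** 0.5

def fpp(x):  # f''(x)
    return 1.0 / (1.0 + x * x) ** 1.5

x = 10.0        # "poor" starting approximation; Newton alone would diverge
switch = 0.5    # commute to the second-order phase when |f'| < switch
iters = 0
while abs(fp(x)) > 1e-12 and iters < 200:
    if abs(fp(x)) >= switch:
        x -= 1.0 * fp(x)          # first-order (steepest descent) phase
    else:
        x -= fp(x) / fpp(x)       # second-order (Newton) phase
    iters += 1

print(abs(x) < 1e-6, iters < 200)
```

In the trajectory setting, the role of |f'| is played by the integral I: when it falls below the prescribed value, the problem is near enough to the optimum for the second variation iteration to take over.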

SIMPLE NUMERICAL EXAMPLES
In order to clarify some aspects of the algorithms of the methods described, two classical examples were considered: the Zermelo and the Brachistochrone problems.

Zermelo problem
It is a classical minimum-time navigation problem (Leitmann, 1981). Consider a boat moving with velocity v of constant magnitude v = 1 relative to a stream of constant velocity s. The problem consists in determining the steering program that transfers the boat from a given initial position to a given terminal position in minimum time. The statement of the optimization problem is as follows: determine the control θ(t) that transfers the system described by the differential equations (Eq. 54):

dx_1/dt = cos θ + s,  dx_2/dt = sin θ,  (54)

from the initial conditions (Eq. 55) to the terminal conditions (Eq. 56), and minimizes the performance index (Eq. 57):

J = t_f.  (57)

In order to solve this time-free problem by means of the algorithms of the steepest descent and the second variation methods, a transformation approach must be used: the time-free problem is transformed into a fixed-time one by introducing a new independent variable τ, τ in [0, 1], and an auxiliary control variable c through Eq. 58:

t = c τ.  (58)

In order to apply the algorithm of the steepest descent method, derived for a Mayer problem, the penalty function method (Hestenes, 1969; O'Doherty and Pierson, 1974) is used: the transformed problem is described by the state equations (Eq. 59), the initial conditions (Eq. 55), and a new performance index obtained from Eqs. 56 and 57, in which the terminal conditions appear as penalty terms with weights k_1, k_2 >> 1 that must be chosen by the user of the algorithm. In the penalty function method as described by O'Doherty and Pierson (1974), the penalty constants k_1 and k_2 are progressively increased during the iterations; in the present work, a fixed value for these constants is adopted, as done by Lasdon et al. (1967) in the conjugate gradient method.
Both methods use the partial derivatives of the Hamiltonian function H of the transformed problem, which is given by H = c[λ_1(cos θ + s) + λ_2 sin θ + λ_3].
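A direct numerical cross-check is available for this example: with a constant stream the optimal steering angle is constant, so the minimum time follows from the two terminal conditions alone. The sketch below (the stream speed and displacement are our own assumed values, not the ones used in the paper) recovers t_f by bisection.

```python
import math

# For a constant stream the optimal Zermelo steering angle th is constant,
# so the minimum time tf satisfies the two terminal conditions
#   (cos(th) + s) * tf = dx   and   sin(th) * tf = dy.
# s, dx and dy below are assumed values chosen for illustration.

s = 0.5                 # assumed stream speed (boat speed = 1)
dx, dy = 2.0, 1.0       # assumed displacement to the target

def miss(th):
    # eliminate tf = dy / sin(th) and measure the error in the x equation
    tf = dy / math.sin(th)
    return (math.cos(th) + s) * tf - dx

# miss(th) decreases monotonically on (0, pi/2); bisect for its root
lo, hi = 1e-6, math.pi / 2 - 1e-6
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if miss(mid) > 0.0:
        lo = mid
    else:
        hi = mid
th = 0.5 * (lo + hi)
tf = dy / math.sin(th)
print(tf)
```

For these assumed values the terminal conditions give sin θ = (2 + sqrt(19))/10, hence t_f = 10/(2 + sqrt(19)), and the bisection reproduces this figure; a reference value of this kind is what the convergence tables of the gradient algorithms are checked against.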
As an example, consider a given value of the stream velocity s and boundary conditions for which the exact minimum time t_f is known; the starting approximation of the control variables is chosen as in Eqs. 61 and 62 for both algorithms. The numerical results are shown in Table 1 and Figs. 1 to 3. In both cases, the convergence criterion is given by |I_(n+1) − I_n| < 1.0 × 10^(-6). For the steepest descent method, the numerical results are obtained with the following set of parameters: weights of the penalty function, k_1 = k_2 = 500; reduction factor for the step size in control space, ε = 0.50; initial step size in control space, K_0 = 0.05; critical value of the integral I, L = 1000. For the second variation method, the numerical results are presented with the following set of parameters: reduction factor for partial corrections, k = 1.0; reduction factor for δu corrections (0 < α ≤ 1), α = 1.0; elements of the diagonal matrix W_2, W_ii,2 = −2000. For simplicity, W_2 has been chosen as a diagonal matrix. For this set of parameters, the second variation method requires fewer iterations than the steepest descent method, and the accuracy of the solution is much better. The last column of Table 1 provides the difference between the value of the performance index obtained through the algorithms and the exact solution. In Fig. 1, the rates of convergence for the steepest descent and second variation methods are presented. Figures 2 and 3 show the time history of the optimal control and trajectory for both methods.
The results of the combined algorithm are also presented in Table 1. The algorithm provides a very accurate solution in few iterations; in this example, its performance is the same as that of the second variation method. The algorithm will prove more effective in the next example, in which the Brachistochrone problem is solved.
The number of iterations obtained in all algorithms is closely related to the choice of the starting approximation, especially when a "poor" one is chosen. For instance, when one takes the starting approximation as θ(τ) = 0.3 and c(τ) = 3.0, 0 ≤ τ ≤ 1, the number of iterations required by the steepest descent method, with the same set of parameters, increases.

Brachistochrone problem
One of the most famous issues in the history of Mathematics is the Brachistochrone problem, which consists in determining the curve y(x) along which a particle of mass m slides, without friction and under the action of gravity alone, from a point A to a lower point B in minimum time. The statement of the problem as an optimal control one is as follows (Williamson and Tapley, 1972; Bryson and Ho, 1975): determine the control u(t) that transfers the system described by the differential equations (Eq. 63) from the initial conditions (Eq. 64) to the terminal conditions (Eq. 65), and minimizes the performance index (Eq. 66). In order to solve this time-free problem by means of the algorithms of the steepest descent and the second variation methods, one proceeds as described in the previous section, introducing a new independent variable τ, τ in [0, 1], and an auxiliary control variable c through Eq. 67:

t = c τ.  (67)

The penalty function method (Hestenes, 1969; O'Doherty and Pierson, 1974) must be used in order to apply the algorithm of the steepest descent method, derived for a Mayer problem; a new performance index is obtained from Eqs. 65 and 66, where the penalty constant k_1 >> 1 must be chosen by the user of the algorithm. Both methods use the partial derivatives of the Hamiltonian function H of the transformed problem, which is given by Eq. 70. As an example, consider the boundary conditions given by Williamson and Tapley (1972). The exact solution to these boundary conditions yields x_2(t_f) = 0.634974; the starting approximation of the control variables is chosen as in Eqs. 71 and 72 for the two algorithms.
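The exact solution of the Brachistochrone problem is a cycloid, which provides an independent check on the numerical methods. The short sketch below, our own illustration with an assumed endpoint and unit gravity (not the boundary conditions of the paper), computes the descent time along the cycloid through that endpoint and verifies that it beats the straight line.

```python
import math

# The exact Brachistochrone curve is a cycloid x = R*(phi - sin(phi)),
# y = R*(1 - cos(phi)) (y measured downward), with descent time
# T = phi_f * sqrt(R / g). Endpoint (a, b) and g = 1 are assumed values.

g = 1.0
a, b = 1.0, 0.6      # assumed horizontal distance and vertical drop to B

def ratio(phi):
    # y/x along the cycloid as a function of the parameter phi
    return (1.0 - math.cos(phi)) / (phi - math.sin(phi))

# ratio(phi) decreases monotonically on (0, 2*pi); bisect for b/a
lo, hi = 1e-9, 2.0 * math.pi - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if ratio(mid) > b / a:
        lo = mid
    else:
        hi = mid
phi_f = 0.5 * (lo + hi)
R = a / (phi_f - math.sin(phi_f))
t_cycloid = phi_f * math.sqrt(R / g)

# straight-line (inclined plane) descent time, for comparison
d = math.hypot(a, b)
t_line = d * math.sqrt(2.0 / (g * b))

print(t_cycloid < t_line)
```

Any converged iterate of the gradient algorithms should produce a descent time between t_cycloid and that of whatever suboptimal curve it started from.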
The numerical results are shown in Table 2. In both cases, the convergence criterion is given by |I_(n+1) − I_n| < 1.0 × 10^(-6).
The numerical results are obtained taking: k_1 = k_2 = 1000; ε = 0.95; K_0 = 0.02; L = 400, for the steepest descent method, and k = 0.25; α = 0.75; W_ii,2 = −2000, for the second variation method. For this set of parameters, the second variation method requires fewer iterations than the steepest descent method, but the accuracy of the solution is worse. In Fig. 4, the rates of convergence for the steepest descent and second variation methods are presented. For the steepest descent method, Fig. 4 shows an oscillatory behaviour, which is produced by the change of the step size in control space. Figures 5 and 6 show the time history of the optimal control and trajectory for both methods. For the sake of simplicity, the x_2-axis is taken upwards. The results for the combined algorithm are also presented in Table 2. It should be noted that the combined algorithm converges in fewer iterations and presents a good accuracy of solution.

APPLICATION TO SPACE TRAJECTORIES OPTIMIZATION: TRANSFER BETWEEN COPLANAR CIRCULAR ORBITS
Low-thrust limited-power propulsion systems are characterized by low thrust acceleration level and high specific impulse (Marec, 1976). The ratio between the maximum thrust acceleration and the gravity acceleration on the ground is between 10^(-4) and 10^(-2). For such systems, the fuel consumption is described by a consumption variable J. The consumption variable J obtained with the proposed (combined) algorithm is compared with the one given by a linear theory (Silva Fernandes and Golfetto, 2007). One can verify that the curves match almost precisely, showing good agreement between the numerical and analytical results. The method provides a good approximation for the solution of low-thrust limited-power transfers between coplanar circular orbits in a central Newtonian gravitational field. In Figs. 9 and 10, respectively, one can verify the radial and circumferential acceleration evolutions for t_f − t_0 = 3 and a ratio of orbital radii of 1.523 (Earth-Mars).

CONCLUSIONS
In this paper, two classic direct methods for computing optimal trajectories, steepest descent and second variation, are reviewed, and the main advantages and disadvantages of these methods are discussed. An algorithm that combines the good characteristics of each method is also presented. The algorithms are applied for solving two classic optimization problems: the Zermelo and Brachistochrone problems. Finally, the proposed algorithm is used to solve the problem of optimization of space trajectories, namely transfers between coplanar circular orbits, illustrating the good results that the algorithm can provide.

Figure 4. Convergence rate of the steepest descent method for Brachistochrone problem.

Figure 5. Time history of optimal control for Brachistochrone problem.

Figure 6. Time history of optimal trajectory for Brachistochrone problem.

Table 1. Numerical results for Zermelo problem.

Table 2. Numerical results for Brachistochrone problem.