
## Computational & Applied Mathematics

*On-line version* ISSN 1807-0302

### Comput. Appl. Math. vol.31 no.1 São Carlos 2012

#### http://dx.doi.org/10.1590/S1807-03022012000100003

**Numerical solution of the variational PDEs arising in optimal control theory**

**Vicente Costanza ^{I,} ^{*}; Maria I. Troparevsky^{II}; Pablo S. Rivadeneira^{I, III}**

^{I}Grupo de Sistemas No Lineales, INTEC, CONICET-UNL, Santa Fé, Argentina E-mail: tsinoli@santafe-conicet.gov.ar

^{II}Departamento de Matemática, Facultad de Ingeniería, UBA, Buenos Aires, Argentina E-mail: mitropa@fi.uba.ar

^{III}IRCCyN, 1 Rue de la Noë, 44321 Nantes Cedex 03, France E-mail: psrivade@santafe-conicet.gov.ar

**ABSTRACT**

An iterative method based on Picard's approach to ODEs' initial-value problems is proposed to solve first-order quasilinear PDEs with matrix-valued unknowns, in particular, the recently discovered variational PDEs for the missing boundary values in Hamilton equations of optimal control. As illustrations the iterative numerical solutions are checked against the analytical solutions to some examples arising from optimal control problems for nonlinear systems and regular Lagrangians in finite dimension, and against the numerical solution obtained through standard mathematical software. An application to the (*n* + 1)-dimensional variational PDEs associated with the *n*-dimensional finite-horizon time-variant linear-quadratic problem is discussed, due to the key role the LQR plays in two-degrees-of-freedom control strategies for nonlinear systems with generalized costs.

**Mathematical subject classification:** Primary: 35F30; Secondary: 93C10.

**Key words:** numerical methods, first-order PDEs, nonlinear systems, optimal control, Hamiltonian equations, boundary-value problems.

**1 Introduction**

Hamilton's canonical equations (HCEs) are key objects in optimal control theory when convexity is present. If the problem concerning an *n*-dimensional control system and an additive cost objective is regular, i.e. when the Hamiltonian H(*t, x*, λ, *u*) of the problem is smooth enough and can be uniquely optimized with respect to *u* at a control value *u*^{0}(*t, x*, λ) (depending on the remaining variables), then HCEs appear as a set of 2*n* first-order ordinary differential equations whose solutions are the optimal state-costate time-trajectories.

In the general nonlinear finite-horizon optimization set-up, allowing for a free final state, the cost penalty *K*(*x*) imposed on the final deviation generates a two-point boundary-value situation. This often leads to a rather difficult numerical problem. In the linear-quadratic regulator (LQR) case there exist well-known methods (see for instance [3, 19]) to transform the boundary-value into a final-value problem, giving rise to the differential Riccati equation (DRE).

For a quadratic *K*(*x*) (in this paper it will always be *K*(*x*) = *s*||*x*||^{2}; the extension to a general positive-semidefinite matrix *S* can be worked out along the lines of [10]), the problem has been imbedded into a whole (*T, s*)-family (see [5, 6, 11]). As a result, two first-order, quasilinear, uncoupled PDEs with classical initial conditions were discovered, where the dependent variables are the missing boundary conditions *x*(*T*) and λ(0) of the HCEs. This approach is completely disjoint from Riccati equations, and more in the line of the early ideas on 'invariant imbedding' introduced by Bellman [2]. An analogous approach was reformulated for the multidimensional case, in the light of the symplectic properties inherent to Hamiltonian dynamics (see [6]). More involved PDEs appear in this case, called 'the variational PDEs'. They were solved numerically for linear, bilinear, and other nonlinear systems, but their complexity had impeded analytical confirmation until recently. In [12] such an analytical checking is developed for a nonlinear version of a well-known case-study, namely the time-variant LQR problem. It has been found that, in the general nonlinear case, additional relevant information can be recovered from the variational PDEs. For instance, the initial condition *P*(0) for the DRE associated with the linearized HCEs can be calculated from the derivative of the Hamiltonian flow, which is part of the solution to the variational PDEs. This is of particular importance in implementing two-degrees-of-freedom (2DOF) control in the Hamiltonian context (see [15]), because *P*(*t*) enters the compensation gain of the control scheme, and it can be obtained on-line from knowledge of *P*(0) (see [4, 9]).

When the variational PDEs were solved using standard software such as MATLAB^{®} or MATHEMATICA^{®}, some inaccuracies appeared, which made it necessary to develop robust integration methods. In this work, a novel iterative scheme based on Picard's approach to ODEs' initial-value problems is proposed to numerically integrate these variational PDEs. This convergent iterative method actually applies to general first-order quasilinear PDEs of the form

where *x* ∈ ℝ^{n}, *z* is the unknown real function of *x*, and the {*a _{i}*, *i* = 1, ..., *n*; *b*} are *C*^{1} functions. The 'Cauchy problem' is to find the *C*^{1} function (denoted for simplicity *z*(*x*)) that satisfies Eq. (1) in a neighborhood of a given *C*^{1} 'initial hypersurface' *S* ⊂ ℝ^{n} where the 'initial data' are prescribed, i.e.

for a given *C*^{1} function *g*: *S* → ℝ.

Very few results are known concerning the numerical treatment of first-order PDEs. In [17], it is proved that the Picard method can be used to generate the Taylor series solution to any ODE in ℝ^{n} with polynomial generator. Later, in [16] the result was extended to PDEs with polynomial generators and analytic initial conditions, showing that the power-series solution to such a PDE can be generated to an arbitrary degree of approximation by the Picard iteration method. The method can be applied to a large class of PDEs through polynomial projections, and from the proof of the main theorem in [16] an algorithm to find the solution follows. A similar result for nonlinear PDEs, based on the 'Adomian Decomposition Method', has recently been reported [1], although the procedure to find the appropriate Adomian polynomials is somewhat involved. In [20], a variant of the dressing method for solving matrix quasilinear PDEs, of a more theoretical character, is advanced. In the present article the objective is to approximate the solution via a convergent iterative procedure, no matter the degree of the Maclaurin series reached in each iteration. It is shown that this new method actually applies to first-order quasilinear PDEs with matrix-valued unknowns. As illustrations the iterative method is checked against: (i) the analytical solutions to examples in dimension one and higher, some arising from optimal control problems for nonlinear systems and regular Lagrangians, and (ii) the numerical solution obtained from straightforward use of standard mathematical software. An application to the (*n* + 1)-dimensional variational PDEs associated with the *n*-dimensional finite-horizon time-variant linear-quadratic problem is specially discussed due to its relation to 2DOF control strategies for nonlinear systems with generalized costs.

The paper is organized as follows: In Section 2, the proposed method for solving quasilinear PDEs is substantiated. The PDEs for boundary conditions of the Hamilton canonical equations are introduced in Section 3. A solution to these equations by means of the proposed iterative scheme is also included. In Section 4 some examples are discussed and illustrated. Finally, the conclusions are presented.

**2 An iterative method for first-order quasilinear PDEs**

Let us consider first-order quasilinear PDEs of the form given in Eq. (1) and the related Cauchy problem. Under the usual hypothesis that *S* is non-characteristic, i.e., that the vector (*a*_{1}(*x, g*(*x*)), *a*_{2}(*x, g*(*x*)), ..., *a _{n}*(*x, g*(*x*))) is not tangential to *S* at any *x* ∈ *S*, there exists a unique *C*^{1} solution *z*(*x*) of Eq. (1), defined on a neighborhood of *S* (see [13] for details). The graph of *z*(*x*) is the union of the parametric family of integral curves of

where *v* is a vector-parameter running along *S*, i.e. *S* is assumed to be given in parametric form

with *v* ∈ *O*, for some bounded open set *O* in ℝ^{n-1}, and *h* a valid 'parametrization' of *S*.

For each value of *v* the Eqs. (3-4) are actually ODEs with independent variable *t*. Then the Cauchy problem for the original quasilinear PDE is basically answered by the theorem on existence and uniqueness of solutions for ODEs (see [14]), which in the positive case provides a unique *C*^{1} solution (*x*(*v, t*), *y*(*v, t*)) for small |*t*|. The *C*^{1} dependence of the ODE solutions on the bounded set of parameters *v* guarantees that the solution curves are defined for |*t*| ≤ *t*_{0} for some positive *t*_{0}, i.e. uniformly for all *v* ∈ *O*. In addition, since *S* is non-characteristic, by the inverse-function theorem the function (*v, t*) → *x*(*v, t*) can be inverted in a neighborhood Ω of *S* to obtain *v* and *t* as *C*^{1} functions of *x* in Ω so as to meet the initial condition

*t*(*x*) = 0 , *h*(*v*(*x*)) = *x* , ∀*x* ∈ *S*

Therefore the desired solution to the original problem will be

as can be checked by taking derivatives, using the chain rule and the matrix equality (see [13])

The problem remains formally the same when the *a _{i}*, *i* = 1, ..., *n*, are *n* × *n*-matrix-valued, and *z*, *b* are *n*-vector-valued mappings.

For a fixed *v*, Eqs. (3-4) can be solved by using the Picard method for autonomous ODEs [14]. The extension of the Picard algorithm to treat Eqs. (3-4) as PDEs reads as follows.

For each *v*_{0} ∈ *O* where the function *x*(*v, t*) is invertible, the ODE system (3-4) can be written

where *Y* = (*x*_{1}, *x*_{2}, ..., *x _{n}*, *y*), *F* = (*a*_{1}, *a*_{2}, ..., *a _{n}*, *b*), and *F*: ℝ^{n+1} → ℝ^{n+1} is *C*^{1}.

If *M _{v_{0}}* is an upper bound for *F* in the compact set *B*(*v*_{0}) = {|*Y* - *Y*_{0}(*v*_{0})| ≤ 1}, then the Picard sequence, for all *t* ∈ *J _{v_{0}}* = (-ε_{*v*_{0}}, ε_{*v*_{0}}) and *v*_{0} fixed, is denoted by {*Y _{n}*(*v*_{0}, ·), *n* = 0, 1, ...}, and

where ε_{*v*_{0}} can be chosen ε_{*v*_{0}} < min{1/*M _{v_{0}}*, 1/*K _{v_{0}}*}, with *K _{v_{0}}* a Lipschitz constant for *F* in *B*(*v*_{0}).

It is well-known that Picard iterations converge to the solution *Y* in some neighborhood of *t* = 0 whenever the solution to Eq. (7) exists and is unique. Then, by choosing a compact (as big as possible) set *O*_{0} ⊂ *O*, and using the fact that *Y*_{0} is continuous, it follows that the union *W* of the sets *B*(*v*_{0}) over *v*_{0} ∈ *O*_{0} is bounded, and therefore there will exist a Lipschitz constant *K* and a bound *M* for *F* in the closure of *W*, with

Then it turns out to be possible to choose an ε < min{1/*M*, 1/*K*} ensuring that Eq. (9) will be valid for all *t* ∈ *J* = [-ε, ε] and for all *v*_{0} ∈ *O*_{0}. Therefore, in the compact set *O*_{0} × *J*, the Picard sequence *Y _{n}*(*v*_{0}, ·) will converge uniformly to *Y*(*v*_{0}, ·), the solution of the ODE (7), implying that the last coordinate *y _{n}*(*v*_{0}, *t*) of *Y _{n}*(*v*_{0}, ·) will converge (uniformly) to *y*(*v*_{0}, *t*) = *z*(*x*(*v*_{0}, *t*)). By calling Ω_{0} = *x*(*O*_{0} × *J*), the following proposition becomes true.

**Proposition 2.1.** *Let us consider a first-order PDE*

*where x* ∈ ℝ^{n} *and S is a hypersurface in* ℝ^{n}*, parameterized by the smooth function h defined on an open set O* ⊂ ℝ^{n-1}*. Suppose that the a _{i}, g are C*^{1} *functions and S is non-characteristic for* (1)*. Let z*(*x*) *be the unique solution to* (11) *and z _{n}*(*x*) *the sequence of functions generated by the application of the Picard iterative method to* (3)-(4) *for v* ∈ *O. Then, if x* ∈ Ω_{0}*, the sequence z _{n}*(*x*) = *y _{n}*(*v*(*x*)*, t*(*x*)) *converges uniformly to z*(*x*).

Proposition 2.1 states that the sequence *y _{n}*(*v, t*) generated by the proposed iterative scheme converges uniformly to *y*(*v, t*) for (*v, t*) ∈ *O*_{0} × *J*. In addition, for each *x* ∈ Ω_{0} there is a unique (*v, t*) ∈ *O*_{0} × *J* such that *x* = *x*(*v, t*). Then *y _{n}*(*v, t*) = *z _{n}*(*x*) converges uniformly on Ω_{0} to the solution *z*(*x*) = *y*(*v, t*) of the PDE, i.e., ∀ε > 0, ∃ *n*_{0} such that *n* ≥ *n*_{0} ⇒ |*y _{n}*(*v*(*x*), *t*(*x*)) - *z*(*x*)| < ε, ∀*x* ∈ Ω_{0} (*i.e.*, ∀(*v, t*) ∈ *O*_{0} × *J*).

Since the convergence is uniform, the graphs (surfaces) of the iterates *y _{n}* uniformly approximate the graph (surface) of the solution *z*(*x*) to the PDE. Thus the difference |*z _{n}*(*x*) - *z*(*x*)| is, for each *n*, uniformly bounded for every *x* ∈ Ω_{0}, i.e. the accuracy of the approximated values for *z*(*x*) can be controlled.

Figure 1 illustrates some of the objects appearing in the proof of Proposition 2.1 for *n* = 2. The label *S _{z}* denotes the graph of the unique solution *z*(*x*) to (1) defined on Ω, i.e. *S _{z}* = {(*x, z*(*x*)), *x* ∈ Ω}. The initial data *S** = {(*x, z*(*x*)), *x* ∈ *S*} ⊂ *S _{z}* can be described by means of the parametrization of *S* as *S** = {(*x, g*(*x*)) : *x* ∈ *S*}. The left subplot in Figure 1 shows the set (*x*_{1}, *x*_{2}, *z*(*x*_{1}, *x*_{2})). The initial point *P*_{0} ∈ *S** of the curve *C _{x}*, and a typical point *P* = (*x*_{1}, *x*_{2}, *z*(*x*_{1}, *x*_{2})) ∈ *S _{z}*, can be observed. On the right, the corresponding surface solution *S _{y}* = {(*v, t, y*(*v, t*)) : *v* ∈ *O*, *t* ≥ 0} in the (*v, t*) space is presented. We show the points *P̃*, *P̃*_{0} ∈ *S _{y}*, where *P̃* = (ν, *t, y*(ν, *t*)) corresponds to *P*, i.e. *y*(ν, *t*) = *z*(*x*_{1}, *x*_{2}).

Concerning the rate of convergence of Picard's sequences in the first-order PDEs' context, the following inequality can be advanced for all (*v, t*) in *O*_{0} × *J*:

where *m _{n}*(*v*) = max_{*t* ∈ *J*} |*Y _{n}*(*v, t*) - *Y*(*v, t*)|. Then, by induction,

revealing that the iterations converge to the solution at a geometric rate.

**3 Application to PDEs for boundary values of the Hamilton canonical equations**

This section relates first-order PDEs to optimal control problems defined by autonomous, nonlinear systems

coupled to cost functionals of the form

with smooth Lagrangians *L* and piecewise continuous admissible control strategies *u*(·).

The problems will be assumed regular and smooth, i.e. that the value (or Bellman) function

satisfies the Hamilton-Jacobi-Bellman (HJB) equation with boundary condition

where *H*^{0} is the minimized Hamiltonian defined as

*H* being the usual Hamiltonian of the problem, namely

and *u*^{0} is the unique *H*-minimal control satisfying

or equivalently,

Under these conditions, the optimal costate variable λ* results in

and the optimal state and costate trajectories are solutions to the Hamiltonian Canonical Equations (HCE)

which in concise form read as one 2*n*-dimensional ODE

with two-point mixed boundary conditions

When the problem is not completely smooth, i.e. when the resulting value function is not differentiable, it is still possible to generalize the interpretation of the costate (in Eq. 24) in terms of 'viscosity' solutions (see discussion and references quoted in [19], pages 394, 443); but those cases will not be explored here.

The following notation for the missing boundary conditions will be used in what follows:

The introduction of the variational PDEs is done through the following objects. First of all, ϕ : [0, *T*] × ℝ^{n} × ℝ^{n} → ℝ^{n} × ℝ^{n} will denote the flow of the Hamiltonian equations (25-26), i.e.

and ϕ^{t} is the *t*-advance map defined for each *t* as

The flow must verify the ODEs of Hamiltonian dynamics, i.e.:

Let us denote then, for a fixed *s*,

It can be shown, by differentiating the ODE condition in Eq. (34) with respect to the (*x*, λ)-variables, that the following 'variational (ODE) equation' (see [14]) applies:

where

and (*t, s*) stands for (*t, s*). Now it should be noted that *V*(*t, s*) is the fundamental matrix Φ(*t*, 0) of the linear system

i.e. its solution is *y*(*t*) = Φ(*t*, τ)*y*(τ), Φ(τ, τ) = *I*, and it can be shown (see [19]) that the matrix *V*(*T, s*) = Φ(*T*, 0) verifies

a first-order PDE which, given the form of Eq. (37), must be solved together with a pair of equations for ρ(*T, s*), σ(*T, s*). It turns out that several equivalent equations may play this role, for instance [6]

where the *V _{i}*, *i* = 1, ..., 4, are the partitions into *n* × *n* submatrices of

In the one-dimensional case Eqs. (39, 40, 41) can be replaced [5] by

It is possible first to calculate ρ(*T, s*) by solving Eq. (43), and afterwards σ(*T, s*) from the solution of Eq. (43) inserted in Eq. (44).

Now, the iterative method proposed in Section 2 will be applied to solve Eq. (43). The Picard scheme uses an arbitrary guess ρ_{0}(*T, s*) for the solution and attempts to solve

where only the variable ρ is replaced by ρ_{0}(*T, s*), the derivatives of ρ remaining unknown. It is enough to search for the integral curves of the vector field *C*(*T, s*) = (1, Γ(ρ_{0}, *s*), *F*(ρ_{0}, *s*)), where

i.e. to solve the system of ODEs

The first equation implies *T = t*, then

can be solved alone, to find *s*(*t, s*_{0}), and finally the ODE for *y* takes the form

A grid (*T _{i}*, *s _{k}*) can be constructed on the (*T, s*) space, but with *s*(*T _{i}*) = *s _{k}*. Afterwards Eq. (49) can be numerically solved and the value *y*(*T _{i}*, *s*(*T _{i}*)) is picked up. According to (6),

Finally an interpolation is made to obtain an approximation of ρ(*T, s*). The result of this interpolation is ρ_{1}, and so on. Were the calculations exact, the constructed sequence of functions {ρ _{n}} would then converge to ρ.

In the multidimensional case, the same approach can be pursued after writing the equations in component-by-component form.
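The grid-and-interpolation step described above can be sketched as follows. The sketch is our own: for concreteness it uses a hypothetical scalar example (Γ(*y, s*) = *y*, *F* ≡ 0, i.e. the quasilinear PDE *r _{T}* + *r r _{s}* = 0 with *r*(0, *s*) = *s*, whose exact solution is *r* = *s*/(1 + *T*)) rather than Eq. (43), and forward Euler on the characteristic ODEs.

```python
# Minimal sketch of the characteristic-grid-and-interpolation step.
# Hypothetical example (not the paper's): Gamma(y, s) = y, F(y, s) = 0,
# i.e. r_T + r r_s = 0 with r(0, s) = s, exact solution r(T, s) = s / (1 + T).

def characteristics(Gamma, F, g, s0_grid, T, steps=400):
    """Integrate ds/dt = Gamma(y, s), dy/dt = F(y, s) from (s0, g(s0))
    and return the (s(T), y(T)) pairs, one per characteristic."""
    h = T / steps
    pairs = []
    for s0 in s0_grid:
        s, y = s0, g(s0)
        for _ in range(steps):
            s, y = s + h * Gamma(y, s), y + h * F(y, s)
        pairs.append((s, y))
    return pairs

def interp(pairs, s_query):
    """Piecewise-linear interpolation of y over the deformed s-grid."""
    pairs = sorted(pairs)
    for (sl, yl), (sr, yr) in zip(pairs, pairs[1:]):
        if sl <= s_query <= sr:
            w = (s_query - sl) / (sr - sl)
            return (1 - w) * yl + w * yr
    raise ValueError("query outside the covered s-range")

s0_grid = [k / 20.0 for k in range(21)]                  # s0 in [0, 1]
pairs = characteristics(lambda y, s: y, lambda y, s: 0.0,
                        lambda s: s, s0_grid, T=1.0)
r_approx = interp(pairs, 0.5)                            # exact value: 0.25
```

In the scheme of this section the same two steps are repeated with Γ and *F* frozen at the current guess ρ_{n}, and the interpolated surface becomes ρ_{n+1}.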

**4 Numerical examples**

Three examples are treated in this section. One of them is just an abstract PDE whose analytical solution is known. The remaining examples come from optimal control problems (regularity and smoothness properties hold, as can be checked in the references quoted for each case).

*4.1 Case-study 1: a one-dimensional nonlinear control system with a quadratic cost*

The results of the previous sections will be applied here to the following initialized dynamics

subject to the quadratic optimality criterion

The problem presented in this subsection has been treated in [18] as a hyper-sensitive boundary value problem (HSBP), since small perturbations seriously affect its numerical solution for long time-horizons *T*. The corresponding variational PDEs were posed in [11] and they were solved numerically by using the MATHEMATICA^{®} software package (specifically, the command "NDSolve") and MATLAB^{®} (using the command "pdepe"). For the regulation case they read as follows

In Figure 2 the approximate solutions obtained by the proposed Picard scheme and the one-step numerical solution provided by MATHEMATICA^{®} are shown. In this case the solution obtained by standard software is acceptable.

*4.2 Case-study 2: a one-dimensional PDE with analytical solution*

This is an example that can be solved analytically. The PDE and its initial condition are the following:

*r _{t}* + *r _{s}* (*s*(*t* - 1) + *r* - 1) = -2*s*, *r*(0, *s*) = 1 ∀*s*,

whose exact solution reads

In Figure 3 the solutions obtained by the fourth (in brown) and fifth (in blue) iterations of the proposed Picard scheme are compared against the solution provided by MATHEMATICA^{®}. It can be noted that Picard's iterations converge monotonically to the analytical solution (in yellow), in this case significantly improving on the numerical response of MATHEMATICA^{®} (in orange).

The validity of the geometrical rate of convergence for Picard iterations developed in Section 2 can be analytically checked in this example. For a fixed *v*_{0} the system (3-4) takes the form

and, in the notation of Section 2

*Y*(*t*) = (*T*(*t*), *s*(*t*), *r*(*t*)),

*F*(*Y*(*t*)) = *F*(*T*(*t*), *s*(*t*), *r*(*t*)) = (1, *s*(*T* – 1) + *r* – 1,–2*s*),

* Y*_{0} = (0, *v*_{0}, 1).

For *T* ∈ [0, 1], *v* ∈ [1, 2], *r* ∈ [0, 1], the Lipschitz constant can be chosen as an upper bound for the derivative. Admissible values of *K* and *M* are found, and therefore

From the exact solution *Y*(*v, t*) = (*t*, *v*(1 - *t*), 1 - *v* + *v*(1 - *t*)^{2}), and using an ε smaller than both 1/*K* and 1/*M*, the analytical expressions of the iterations allow one to check, for increasing *n*,
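This convergence can also be reproduced numerically. The sketch below (the uniform grid and trapezoidal quadrature are our own discretization choices) runs the Picard iteration on the characteristic system of this case-study and measures the sup-norm distance to the exact solution.

```python
# Numerical check (a sketch; grid and quadrature choices are illustrative)
# that the Picard iterates for the characteristic system of Case-study 2,
#   T' = 1,  s' = s(T - 1) + r - 1,  r' = -2 s,   Y(0) = (0, v0, 1),
# approach the exact solution Y(v0, t) = (t, v0 (1-t), 1 - v0 + v0 (1-t)^2).

def F(Y):
    T, s, r = Y
    return (1.0, s * (T - 1.0) + r - 1.0, -2.0 * s)

def picard_iterates(v0, t_max, n_iter, n_grid=200):
    h = t_max / n_grid
    Y0 = (0.0, v0, 1.0)
    Y = [Y0] * (n_grid + 1)                  # constant initial guess
    for _ in range(n_iter):
        f = [F(Yk) for Yk in Y]
        new = [Y0]
        acc = [0.0, 0.0, 0.0]
        for k in range(n_grid):              # trapezoidal cumulative integral
            acc = [a + 0.5 * h * (f[k][i] + f[k + 1][i])
                   for i, a in enumerate(acc)]
            new.append(tuple(Y0[i] + acc[i] for i in range(3)))
        Y = new
    return Y

def sup_error(v0, t_max, n_iter, n_grid=200):
    """Sup-norm distance between the n_iter-th iterate and the exact flow."""
    h = t_max / n_grid
    err = 0.0
    for k, (T, s, r) in enumerate(picard_iterates(v0, t_max, n_iter, n_grid)):
        t = k * h
        exact = (t, v0 * (1 - t), 1 - v0 + v0 * (1 - t) ** 2)
        err = max(err, abs(T - exact[0]), abs(s - exact[1]), abs(r - exact[2]))
    return err
```

Evaluating `sup_error` for increasing iteration counts on a small interval shows the error decreasing at each step, consistent with the geometric rate established in Section 2.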

*4.3 Case-study 3: a two-dimensional nonlinear control system with a quadratic cost*

The well-known time-variant Linear-Quadratic-Regulator (LQR) problem is here revisited (see [12] for related results). Such a problem is defined by

The problem is transformed into an autonomous one through the usual change of variables

and for simplicity the old symbol *x* will be maintained for the new state, i.e., in what follows,

and, in the new set-up the (autonomous) dynamics will read

The transformed cost functional, Lagrangian, and final penalty will then be

It is clear that the optimal control for both (the time-variant and the autonomous) problems will be the same, since

however, the new dynamics is nonlinear (actually Eq. (62), having an exponential, implies that all powers of *x*_{1} are present, and they are multiplied by the control). New expressions for the Hamiltonian *H _{a}*, the *H*-minimal control, and the minimized (or control) Hamiltonian will apply, namely

and the HCE for this case will read in component-by-component form

These ODEs can be solved analytically, to obtain

Then, the linear time-variant variational equation (36) can also be integrated analytically, and its solution is

The PDEs (39)-(41) were solved numerically by using MATHEMATICA^{®}, and their solutions compared against the following analytical expressions

In Figure 4 the results of applying the new method to Eqs. (39-41) are illustrated. The relative error of the seventh Picard iteration with respect to the analytical solution is shown in Figure 5. It is of the same order as the one reached through standard software calculations.

**5 Conclusions**

A novel numerical scheme to integrate first-order quasilinear PDEs has been developed, and its application has been illustrated through several examples. The method, based on Picard's iterations, has provided accurate solutions. Although no systematic comparisons against specialized numerical approaches (like advanced finite-difference and finite-element schemes) have been attempted in the paper, the method proposed here has worked better than standard mathematical software like MATHEMATICA^{®} and MATLAB^{®} in all cases.

The fact that Picard's method always converges in the ODE context (assuming the existence of a unique solution and exact calculations at each iteration step) was used to prove convergence of the scheme for the PDEs under study. The solution surfaces corresponding to each Picard step converge uniformly to the solution surface of the original PDE. This convergence guarantees that the accuracy of results can be improved with respect to non-iterative methods (like finite differences in their different versions). Analytical bounds for the rate of convergence have been found, and checked in a case-study whose explicit solution is known.

The variational PDEs associated with the Hamiltonian formulation of the optimal control problem for nonlinear systems have been solved numerically in a satisfactory way by the new method. This application is particularly useful given the hyper-sensitivity of the Hamilton Canonical Equations (HCEs), whose boundary values are obtained from the variational PDEs' solutions and therefore need to be highly accurate.

**REFERENCES**

[1] A. Abdelrazec and D. Pelinovsky, *Convergence of the Adomian decomposition method for initial-value problems*. Numerical Methods for Partial Differential Equations, (2009), DOI 10.1002/num.20549.

[2] R. Bellman and R. Kalaba, *A note on Hamilton's equations and invariant imbedding*. Quarterly of Applied Mathematics, **XXI** (1963), 166-168.

[3] P. Bernhard, *Introducción a la Teoría de Control Óptimo*, Cuaderno Nro. 4, Instituto de Matemática "Beppo Levi", Rosario, Argentina (1972).

[4] V. Costanza, *Optimal two Degrees of Freedom Control of Nonlinear systems*. Latin American Applied Research, (2012), to appear.

[5] V. Costanza, *Finding initial costates in finite-horizon nonlinear-quadratic optimal control problems*. Optimal Control Applications & Methods, **29** (2008), 225-242.

[6] V. Costanza, *Regular Optimal Control Problems with Quadratic Final Penalties*. Revista de la Unión Matemática Argentina, **49**(1) (2008), 43-56.

[7] V. Costanza, R.A. Spies and P.S. Rivadeneira, *Equations for the Missing Boundary Conditions in the Hamiltonian Formulation of Optimal Control Problems*. Journal of Optimization Theory and Applications, **149** (2011), 26-46.

[8] V. Costanza and C.E. Neuman, *Partial Differential Equations for Missing Boundary Conditions in the Linear-Quadratic Optimal Control Problem*. Latin American Applied Research, **39** (2008), 207-211.

[9] V. Costanza and P.S. Rivadeneira, *Hamiltonian Two-Degrees-of-Freedom Control of Chemical Reactors*. Optimal Control Applications & Methods, **32** (2011), 350-368.

[10] V. Costanza and P.S. Rivadeneira, *Feedback Óptimo del Problema Lineal-Cuadrático con Condiciones Flexibles*. Actas del Congreso AADECA (in CD format), (2008), Buenos Aires.

[11] V. Costanza and P.S. Rivadeneira, *Finite-horizon dynamic optimization of nonlinear systems in real time*. Automatica, **44**(9) (2008), 2427-2434.

[12] V. Costanza and P.S. Rivadeneira, *Initial values for Riccati ODEs from variational PDEs*. Computational and Applied Mathematics, **30** (2011), 331-347.

[13] G.B. Folland, *Introduction to Partial Differential Equations*. Second Edition. Princeton University Press (1995).

[14] M. Hirsch and S. Smale, *Differential Equations, Dynamical Systems and Linear Algebra*. Academic Press, New York (1974).

[15] R. Murray, *Optimization Based-Control*. California Institute of Technology, California, in press (2009). http://www.cds.caltech.edu/~murray/amwiki/Supplement:_Optimization-Based_Control

[16] G.E. Parker and J.S. Sochacki, *A Picard-Maclaurin theorem for initial value PDEs*. Abstract and Applied Analysis, **5**(1) (2000), 47-63.

[17] G.E. Parker and J.S. Sochacki, *Implementing the Picard iteration*. Neural, Parallel and Scientific Computations, **4**(1) (1996), 97-112.

[18] A.V. Rao and K.D. Mease, *Eigenvector approximate dichotomic basis method for solving hyper-sensitive optimal control problems*. Optimal Control Applications & Methods, **21** (2000), 1-19.

[19] E.D. Sontag, *Mathematical Control Theory*. Springer, New York (1998).

[20] A.I. Zenchuk and P.M. Santini, *Dressing method based on homogeneous Fredholm equation: quasilinear PDEs in multidimensions*, (2007), http://arxiv.org/pdf/nlin/0701031

Received: 18/XI/10.

Accepted: 10/XI/11.

#CAM-282/10.

* Corresponding author.