
Discrete approximations for strict convex continuous time problems and duality

Abstract

We propose a discrete approximation scheme to a class of Linear Quadratic Continuous Time Problems. It is shown, under positiveness of the matrix in the integral cost, that optimal solutions of the discrete problems provide a sequence of bounded variation functions which converges almost everywhere to the unique optimal solution. Furthermore, the method of discretization allows us to derive a number of interesting results based on finite dimensional optimization theory, namely, Karush-Kuhn-Tucker conditions of optimality and weak and strong duality. A number of examples are provided to illustrate the theory.

Key words: Linear Quadratic problems; Continuous time optimization; discrete approximation; strict convexity


Discrete approximations for strict convex continuous time problems and duality*

R. Andreani^I,†; P. S. Gonçalves^II,‡; G. N. Silva^II,†

^I Departamento de Matemática Aplicada, IMECC, UNICAMP, Campinas, SP, Brasil
^II Universidade Estadual Paulista, São José do Rio Preto, SP, Brasil
E-mail: andreani@ime.unicamp.br / goncalves@dcce.ibilce.unesp.br / gsilva@dcce.ibilce.unesp.br

* Partially supported by CNPq and FAPESP of Brasil. † Professor. ‡ Postgraduate student.


Mathematical subject classification: 49M25, 49N10, 90C20.


1 Introduction

We propose a discretization scheme to compute approximate solutions of linear-quadratic continuous time problems (LQCTP) of the form

minimize ∫₀ᵀ [(1/2)x′(t)Q(t)x(t) + a′(t)x(t)] dt

subject to B(t)x(t) − ∫₀ᵗ K(t,s)x(s) ds ≤ c(t), x(t) ≥ 0, t ∈ [0,T].    (1)

The matrix Q associated with the integral cost has time-varying continuous functions as entries, and the remaining parameters of the problem a, c, B and K are functions of bounded variation; x(t) is the decision variable.

The method of discretization consists of taking a sequence of partitions of the interval [0,T] into equidistant sub-intervals and, on each sub-interval of a particular partition, applying the trapezoidal rule to the cost and to the constraints of the original problem. Each discretization yields a finite dimensional quadratic programming problem. We are able to prove that any sequence of continuous solutions obtained from the optimal solutions of the discretized problems converges almost everywhere to the optimal solution of the original continuous time problem. We point out that this convergence is independent of the particular integration rule adopted. What may differ is the order of convergence, which will be discussed elsewhere.
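As a small illustration of the scheme (not the authors' code), the trapezoidal rule on refining equidistant partitions approximates the integral cost; the sketch below uses a hypothetical scalar integrand t² + t standing in for (1/2)x′(t)Q(t)x(t) + a′(t)x(t):

```python
import numpy as np

def trapezoid_cost(f, T, ln):
    """Trapezoidal approximation of the integral of f over [0, T]
    on an equidistant partition with ln sub-intervals."""
    t = np.linspace(0.0, T, ln + 1)   # partition points t_i = i*(T/ln)
    h = T / ln
    # endpoint values get weight h/2, interior values weight h
    return h * (0.5 * f(t[0]) + f(t[1:-1]).sum() + 0.5 * f(t[-1]))

# hypothetical integrand: with x(t) = t, Q(t) = 2, a(t) = 1
# the cost integrand becomes t**2 + t
integrand = lambda t: t**2 + t
exact = 1.0 / 3.0 + 1.0 / 2.0          # its integral over [0, 1]

for ln in (10, 100, 1000):             # refining the partition
    approx = trapezoid_cost(integrand, 1.0, ln)
    print(ln, abs(approx - exact))     # error shrinks as the mesh refines
```

The quadrature error decreases with the square of the mesh size, which is why only the order of convergence, not the convergence itself, depends on the integration rule.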

We illustrate the power of our discretization method to approximately solve linear-quadratic continuous time problems with a series of examples. In these examples we are able to actually find analytical solutions, where the discrete optimal solutions play a major role in suggesting what the solutions should look like. Optimality is confirmed by using the Karush-Kuhn-Tucker conditions provided in Section 6.

The method of solving (LQCTP) presented here is an extension of the purely linear cases studied by Buie and Abrham [3] and Pullan [10] to linear-quadratic problems.

The theoretical approach to prove convergence of a subsequence of continuous approximate solutions to an optimal solution of the LQCTP enables us to also derive some duality results for LQCTP. The finite dimensional pair of problems obtained in our discretization scheme are convex and concave, respectively. Then, there is no duality gap between them [4]. Due to weak duality, Theorem 5.1 below, the value of the maximization problem (D) is at most equal to the value of the minimization problem (LQCTP). We show that the gap closes up when we refine the partition of [0,T]. In the limit we have optimality (strong duality).

As is well known, finite dimensional quadratic problems have duals involving only dual variables. This is achieved by eliminating the primal variables from the dual. A direct approach to eliminating the primal variables of the dual in a continuous time setting is considerably more complicated. However, due to the approximation scheme and the duality relationships between the finite dimensional problems, we can, as explained above, provide a duality theory for the linear-quadratic problem.

A duality theory for purely linear continuous time problems was first investigated in [11]. Since then, duality theory has been extended in a number of directions; see, for instance, [12, 5, 6, 1, 2]. Pullan [10] considers a special class of linear problems, called separated continuous linear programs, proposes a discretization method to solve problems in this class and derives a strong duality result. Philpott [9] considers a linear continuous time problem modelling shortest path problems in graphs, where edge distances can vary with time. Under some regularity on the edge distances he also establishes strong duality.

It seems that continuous time problems with quadratic integral cost and a linear (integral) operator in the constraints, in the form considered in this paper, have not been studied elsewhere in the literature.

We now fix the notation used in this paper. The space of real matrices with k rows and l columns is denoted by ℝ^{k×l}. We will use the following definitions for the norms of vectors and matrices and for the variation of a function. Let x = (x₁, …, x_k)′ be a vector and A = (a_{ij}) a matrix, and let ‖x‖ and ‖A‖ denote the norms of x and A, respectively. Let abs(x) mean (|x₁|, …, |x_k|). Let f : [0,T] → ℝ be a given function. The number

V[0,T; f] := sup { Σᵢ |f(tᵢ) − f(tᵢ₋₁)| : 0 = t₀ < t₁ < … < t_p = T },

finite or infinite, where the supremum is taken over all finite partitions of [0,T], is called the variation of the function f on the interval [0,T]. We say that f is a function of bounded variation on [0,T] if V[0,T; f] < +∞. Matrix functions A(·) : [0,T] → ℝ^{k×l} are of bounded variation if each entry of the matrix is a function of bounded variation. We denote the space of matrix functions of bounded variation by BV([0,T]; ℝ^{k×l}).
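The variation over any fixed partition underestimates V[0,T; f] and increases under refinement; a minimal numerical sketch (illustrative only, not from the paper):

```python
import numpy as np

def variation_on_grid(f, T, n):
    """Sum of |f(t_i) - f(t_{i-1})| over an equidistant partition:
    a lower bound for the variation V[0, T; f] that increases under
    refinement and converges to it for f of bounded variation."""
    t = np.linspace(0.0, T, n + 1)
    return np.abs(np.diff(f(t))).sum()

# for a monotone function the variation equals |f(T) - f(0)|
print(variation_on_grid(lambda t: t**2, 1.0, 1000))       # 1.0
# for sin on [0, 2*pi] the variation is 4; the grid sum approaches it
print(variation_on_grid(np.sin, 2 * np.pi, 100000))       # just below 4
```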

We finish this section by summarizing the organization of the paper. Section 2 is devoted to recalling the LQCTP and introducing its discretization. In Section 3 the dual problem and its discretization are introduced. Some facts about duality for finite dimensional quadratic problems are gathered in Proposition 3.1 below. The standing hypotheses and convergence results are the subject of Section 4. Duality results are presented in Section 5 and the Karush-Kuhn-Tucker conditions of optimality in Section 6. Numerical examples are discussed in Section 7. Finally, in Section 8, some comments and final considerations are made.

2 The linear-quadratic problem and its discretization

We start by recalling the LQCTP from the introduction:

minimize ∫₀ᵀ [(1/2)x′(t)Q(t)x(t) + a′(t)x(t)] dt

subject to B(t)x(t) − ∫₀ᵗ K(t,s)x(s) ds ≤ c(t), x(t) ≥ 0, t ∈ [0,T].    (1)

Here a(t) : [0,T] → ℝᴺ, B(t) : [0,T] → ℝ^{m×N} and c(t) : [0,T] → ℝᵐ are functions of bounded variation which are supposed to be continuous on the right on the interval [0,T), Q : [0,T] → ℝ^{N×N} is a continuous function, K : [0,T]×[0,T] → ℝ^{m×N} is a constant matrix and x(·) : [0,T] → ℝᴺ is the choice variable. Precise assumptions will be made clear in Section 4. We point out that right continuous bounded variation functions can jump at the right endpoint T of the interval [0,T], but cannot jump at its left endpoint.

We now introduce the discretization for LQCTP. Let πn := {tn0, tn1, …, tnln} be a partition of the interval [0,T] into ln sub-intervals, in which tni := i(T/ln), for i = 1, 2, …, ln. Let A : [0,T] → ℝ^{k×l} and a : [0,T] → ℝᵏ be given matrix valued and vector valued functions, respectively. We use the notation Ani := A(tni) and ani := a(tni) for i = 1, 2, …, ln, unless stated differently. The resulting discrete quadratic problem is given by

(Pn): minimize (T/2ln) x′Qn x + b′n x subject to Gn x ≤ cn, x ≥ 0,

where Qn, bn, Gn and cn are assembled from the data of LQCTP by the trapezoidal rule, with Qn0 = Q(0), Qnln = Q(T) and Qni = Q(tni), i = 1, 2, …, ln − 1; Bn0 = B(0), Bnln = B(T) and Bni = B(tni), i = 1, 2, …, ln − 1.
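A hedged sketch of how the discrete cost data could be assembled, assuming the decision vector stacks the values x(tn0), …, x(tnln) (the paper's exact stacking is not reproduced here); the trapezoidal rule gives the endpoint blocks half weight:

```python
import numpy as np
from scipy.linalg import block_diag

def discrete_cost_data(Q_of_t, a_of_t, T, ln):
    """Assemble a block-diagonal Qn and a stacked bn so that
    (T/(2*ln)) x' Qn x + bn' x is the trapezoidal approximation of the
    integral cost, with half weights at the endpoints t = 0 and t = T."""
    t = np.linspace(0.0, T, ln + 1)
    h = T / ln
    weights = np.ones(ln + 1)
    weights[0] = weights[-1] = 0.5
    Qn = block_diag(*[w * Q_of_t(ti) for w, ti in zip(weights, t)])
    bn = np.concatenate([h * w * a_of_t(ti) for w, ti in zip(weights, t)])
    return Qn, bn
```

With the hypothetical scalar data Q(t) = 2 and a(t) = 1, plugging the grid values of x(t) = t into (T/2ln)x′Qnx + b′nx reproduces the trapezoidal value of ∫₀¹ (t² + t) dt = 5/6.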

3 The dual problem, its discretization and basic results

The dual of (LQCTP) is defined by means of the operators

G(t)x(t) := B(t)x(t) − ∫₀ᵗ K(t,s)x(s) ds and G′(t)w(t) := B′(t)w(t) − ∫ₜᵀ K(s,t)w(s) ds.

Here the dual variables w : [0,T] → ℝᵐ and μ : [0,T] → ℝᴺ are functions of bounded variation. We now write the dual problem in a simplified form:

(D): maximize ∫₀ᵀ [−(1/2)q′(t)M(t)q(t) + N′(t)q(t) + R(t)] dt subject to q(t) ≥ 0,

where q = (w, μ) and M(t), N(t) and R(t) are obtained from the data Q, a, B, c and K of (LQCTP) by eliminating the primal variables.

Analogously, we introduce the discretization of the dual problem. Again, Gn, Qn, cn and an are the same as in the primal discrete problem, μn := (μ(tn0), …, μ(tnln)) and wn := (w(tn0), …, w(tnln)). The discrete dual can also be written in a simplified form,

(Dn): maximize (T/ln)[−(1/2)q′n Mn qn + N′n qn + Rn] subject to qn ≥ 0,

where qn = (wn, μn) and Mn, Nn and Rn are the discrete counterparts of M, N and R.

For a fixed n, the finite dimensional problems (Pn) and (Dn) form the usual primal-dual pair of the classical optimization theory for quadratic problems.

We state some basic facts related to this pair of primal-dual problems that will be needed in the sequel.

Proposition 3.1. See, for instance, [8].

a. Let xn and (wn, μn) be (Pn) and (Dn) feasible, respectively. If the (primal) cost of xn coincides with the (dual) cost of (wn, μn), then xn and (wn, μn) are (Pn) and (Dn) optimal, respectively.

b. Let xn be (Pn) optimal. If there exists (wn, μn) ≥ 0 such that

(a) Qn xn + an + G′n wn − μn = 0

(b) w′n (Gn xn − cn) − μ′n xn = 0

then (wn, μn) is (Dn) optimal. Conversely, if (wn, μn) is (Dn) optimal then (a) and (b) are satisfied at (xn, wn, μn).
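As a concrete illustration of these finite dimensional optimality conditions, take a hypothetical two-variable instance with Q = 2I, a = (−2, −2)′, one constraint x₁ + x₂ ≤ 1 and x ≥ 0; the multipliers w = 1, μ = 0 certify optimality of x = (0.5, 0.5)′:

```python
import numpy as np

# hypothetical small instance: min 1/2 x'Qx + a'x, s.t. Gx <= c, x >= 0
Q = 2.0 * np.eye(2)
a = np.array([-2.0, -2.0])
G = np.array([[1.0, 1.0]])
c = np.array([1.0])

x = np.array([0.5, 0.5])     # candidate optimal point
w = np.array([1.0])          # multiplier for Gx <= c
mu = np.zeros(2)             # multiplier for x >= 0

# stationarity: Qx + a + G'w - mu = 0
stationarity = Q @ x + a + G.T @ w - mu
# complementary slackness: w'(Gx - c) - mu'x = 0
comp = w @ (G @ x - c) - mu @ x

print(stationarity, comp)    # both vanish, certifying primal and dual optimality
```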

4 Hypotheses and convergence results

We now introduce some definitions and hypotheses which will be used throughout the remainder of this paper. Given α ∈ ℝ, let

Sα = {x ∈ BV([0,T]; ℝᴺ) : x(·) satisfies (1) and ∫₀ᵀ [(1/2)x′(t)Q(t)x(t) + a′(t)x(t)] dt ≤ α}

and

Wα = {q ∈ BV([0,T]; ℝ^{m+N}) : q ≥ 0 and ∫₀ᵀ [−(1/2)q′(t)M(t)q(t) + N′(t)q(t) + R(t)] dt ≥ α}

be the α-sub-level sets of the primal and dual problems, respectively.

We assume that

(H1) a(t), c(t) and B(t) are matrix functions of bounded variation, Q(t) is a continuous positive definite matrix function and K is a constant matrix;

(H2) For every positive integer n, there exist feasible solutions to problems (Pn) and (Dn);

(H3) There exists α > 0 such that ∫₀ᵀ [(1/2)x′(t)Q(t)x(t) + a′(t)x(t)] dt ≤ α, ∀ x satisfying (1).

We observe that (H1) implies that LQCTP is a convex programming problem whose cost is strictly convex. In that case the optimal solution is unique (up to a zero-measure set). This observation has an important implication for the convergence of the approximate solutions, since all subsequences will converge to the same unique optimal solution (a.e.). In fact, this ensures the convergence of the whole sequence obtained via discretization to the optimal solution.

We can only guarantee that a perturbation of the discrete primal quadratic problem, which takes into account the error generated by the integration rule, is feasible. However, to keep the arguments of the proof easier to understand, and also to parallel some results in reference [1], which is the basis of our work, we assume hypothesis (H2) without loss of generality. Actually, our hypothesis (H2) corresponds to an algebraic condition imposed in [1] to ensure feasibility.

Note that in hypothesis (H1), except for Q(·), which is supposed to be continuous, all functions are of bounded variation. The continuity of Q(·) is essential to guarantee the boundedness of the sub-level sets of the cost functional of LQCTP. This boundedness is central to the convergence proof which, in turn, is based on compactness arguments over sub-level sets of the cost functional of the problem.

The proof of the expected boundedness is in Lemma 4.1; right after Lemma 4.1 we show why the continuity of Q(·) cannot be relaxed. It will be clear from the proofs that if we assume the sets of feasible solutions of (LQCTP) and (D) to be bounded, we get convergence of a subsequence even when Q(·) is only of bounded variation. So, the continuity of Q(·) is the price paid for the generality of the method.

We say that πn+1 is a refinement of πn if πn ⊂ πn+1. Hereafter, we consider only sequences of partitions {πn} in which πn+1 is a refinement of πn.

We introduce some notation needed in the sequel. Given α ∈ ℝ, let

Sn,α = {x ∈ ℝⁿ : Gn x ≤ cn, x ≥ 0, (T/2ln) x′Qn x + b′n x ≤ α}.

Let xn and qn be the primal and dual optimal solutions of the discrete problems, respectively. We define their continuous extensions xn(t) and qn(t) to the whole interval [0,T] by

xn(t) := xni for t ∈ [tni, tn,i+1), i = 0, …, ln − 1, and xn(T) := xnln,    (2)

qn(t) := qni for t ∈ [tni, tn,i+1), i = 0, …, ln − 1, and qn(T) := qnln.    (3)

It is easy to verify that the extension qn(t) obtained from the discrete dual optimal solution is a dual feasible solution (not necessarily optimal). However, xn(t) is not necessarily primal feasible.
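A sketch of such an extension, assuming a right continuous piecewise-constant interpolation of the grid values (one natural choice, used here only for illustration):

```python
import numpy as np

def step_extension(values, T):
    """Extend grid values (x at t_0, ..., t_ln) to a right-continuous
    step function on [0, T]: x_n(t) equals the value at the last grid
    point not exceeding t."""
    values = np.asarray(values)
    ln = len(values) - 1
    def x_n(t):
        # index of the sub-interval containing t (clamped at t = T)
        i = min(int(np.floor(t * ln / T)), ln)
        return values[i]
    return x_n

xn = step_extension([0.0, 1.0, 4.0], 1.0)   # grid values at t = 0, 0.5, 1
print(xn(0.25), xn(0.75), xn(1.0))          # 0.0 1.0 4.0
```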

Our next step is to state and prove two technical lemmas needed for the main results of this section.

Lemma 4.1. Let α ∈ ℝ be given. Suppose hypotheses (H1)-(H3) are valid. Then Sα is a bounded set. Furthermore, Sn,α is uniformly bounded for n ∈ ℕ.

Proof. The matrix Q(t) ∈ ℝ^{N×N} is symmetric positive definite for all t ∈ [0,T]; hence the minimum eigenvalue λmin(t) of Q(t) is positive and

x′Q(t)x ≥ λmin(t)‖x‖², ∀ x ∈ ℝᴺ.

Define the function λmin : [0,T] → ℝ by

λmin(t) = min {eigenvalues of Q(t)}.

Since all entry functions of Q(t) are continuous, λmin is a continuous function. Therefore, using the fact that [0,T] is compact, there exists λ̄ > 0 such that

x(t)′Q(t)x(t) ≥ λ̄ ‖x(t)‖², ∀ t ∈ [0,T].

It follows from the definition of Sα that

(λ̄/2) ∫₀ᵀ ‖x(t)‖² dt ≤ α + ∫₀ᵀ ‖a(t)‖ ‖x(t)‖ dt.    (4)

Since a(t) ∈ BV([0,T]; ℝᴺ), there exists L > 0 such that L ≥ ‖a(t)‖ for all t ∈ [0,T]. Using again that a(t) ∈ BV([0,T]; ℝᴺ), it follows from (4) that Sα is bounded. It is now easy to see that the set Sn,α is also bounded uniformly for n ∈ ℕ.
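The uniform bound λ̄ used above can be estimated numerically by sampling the smallest eigenvalue of Q(t) on a fine grid; a sketch with a hypothetical continuous Q(t):

```python
import numpy as np

def uniform_eig_lower_bound(Q_of_t, T, samples=1000):
    """Approximate min over t in [0, T] of the smallest eigenvalue of
    the symmetric matrix Q(t), by sampling on an equidistant grid."""
    ts = np.linspace(0.0, T, samples + 1)
    # eigvalsh returns eigenvalues in ascending order, so [0] is the smallest
    return min(np.linalg.eigvalsh(Q_of_t(t))[0] for t in ts)

# hypothetical continuous, positive definite Q(t) on [0, 1]
Q = lambda t: np.array([[2.0 + t, 0.0], [0.0, 1.0 + t * t]])
print(uniform_eig_lower_bound(Q, 1.0))   # the minimum, 1, is attained at t = 0
```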

Example 4.1. This example shows that the continuity of Q(·), which ensures the boundedness of the sub-level sets, cannot be dropped. Let

and B(t) = , K = 1 and c(t) = 1 + (1/2)t^{1/2}. Clearly Q(t) is of bounded variation and so are B(·) and c(·). Observe that the set

contains the unbounded sequence {xn} given by

Indeed,

Moreover, a simple calculation shows that

B(t)xn(t) − ∫₀ᵗ xn(s) ds ≤ c(t), ∀ n,

proving the feasibility of xn for all n. Therefore, the sub-level set S1 of this example is unbounded, showing that the continuity of Q(·) cannot be left out.

Lemma 4.2. Let (xn(t)) and (qn(t)) be the sequences generated from the optimal solutions xn and qn of (Pn) and (Dn), respectively. Then there exist subsequences (xnk(t)) of (xn(t)) and (qnk(t)) of (qn(t)) which converge almost everywhere in [0,T] to functions of bounded variation x̄(t) and q̄(t), respectively.

Proof. The proof follows immediately from Lemma 4.1 and Helly's Theorem ([7], Theorem 1.4.5).

We point out that the sequence of discretized problems (Pn) and related duals (Dn) is arbitrary. This implies that if, in Lemma 4.2, we replace the sequences of problems (Pn) and (Dn) by subsequences (Pn′) and related duals (Dn′), and consider their respective sequences of optimal solutions {xn′} and {wn′}, we can also extract convergent subsequences. The importance of this feature will be made clear in Remark 5.3.

We are now in a position to state the main results of this section, Theorems 4.3 and 4.4 below. Theorem 4.3 states that the sequences of costs generated from sequences of continuous extensions of discrete optimal solutions converge to the costs of their limits, while Theorem 4.4 states that the adjusted costs of the discrete problems converge to the costs of the limits of the sequences of continuous extensions of the discrete optimal solutions. The proofs of both theorems are based on ideas taken from Abrham and Buie [1].

Theorem 4.3. Let (xnk(t)) and (qnk(t)) be the subsequences obtained according to Lemma 4.2 and let x̄(t), q̄(t) be their limits. Then

a) ∫₀ᵀ [(1/2)x′nk(t)Q(t)xnk(t) + a′(t)xnk(t)] dt → ∫₀ᵀ [(1/2)x̄′(t)Q(t)x̄(t) + a′(t)x̄(t)] dt;

b) ∫₀ᵀ [−(1/2)q′nk(t)M(t)qnk(t) + N′(t)qnk(t) + R(t)] dt → ∫₀ᵀ [−(1/2)q̄′(t)M(t)q̄(t) + N′(t)q̄(t) + R(t)] dt;

c) q̄(t) is a feasible solution of the dual problem (D).

Proof. Parts a) and b) follow from Helly's Theorem and the Lebesgue dominated convergence theorem ([7], Theorem 6.2.10).

To prove part c), we first observe that q̄(t) ≥ 0, since qnk(t) → q̄(t) almost everywhere in [0,T] and qnk(t) ≥ 0 almost everywhere in [0,T], ∀ k ∈ ℕ. The feasibility of q̄(t) will follow if we prove that e(t) ≤ 0, where e(t) := lim ek(t) almost everywhere in [0,T] and ek(t) denotes the corresponding dual feasibility residual of qnk(t).

Let πk be a partition of [0,T] and let πk+1 be its refinement. Set p := ∪k πk.

Under hypothesis (H1), ek(·) is continuous almost everywhere in [0,T], which implies that e(·) is also continuous almost everywhere in [0,T]. It follows, then, that 0 ≥ e(t) := lim ek(t), ∀ t ∈ p. It is also easy to see that p = ∪k πk is dense in [0,T]. Let t̄ ∈ [0,T] be a point of continuity of e(·) and {tj} ⊂ p be a sequence such that tj → t̄. This implies that e(t̄) ≤ 0 (e(t̄) = limⱼ e(tj) ≤ 0).

Theorem 4.4. Let (xnk(t)) and (qnk(t)) be subsequences obtained according to Lemma 4.2 and x̄(t), q̄(t) their limits. Then

a) (T/lnk)[(1/2)x′nk Qnk xnk + a′nk xnk] → ∫₀ᵀ [(1/2)x̄′(t)Q(t)x̄(t) + a′(t)x̄(t)] dt, as k → ∞;

b) (T/lnk)[−(1/2)q′nk Mnk qnk + N′nk qnk + Rnk] → ∫₀ᵀ [−(1/2)q̄′(t)M(t)q̄(t) + N′(t)q̄(t) + R(t)] dt, as k → ∞.

Proof. a) For this proof we will introduce some additional notation: let x̄nk denote the vector of values of x̄(t) at the points of the partition πnk. Using the trapezoidal approximation in

∫₀ᵀ [(1/2)x̄′(t)Q(t)x̄(t) + a′(t)x̄(t)] dt

and using the above notation, we see that

It needs to be shown that

We have

|(T/lnk) a′nk (x̄nk − xnk)| ≤ (T/lnk) abs(a′nk) abs(x̄nk − xnk)

and

Since a(t) is bounded, there exists a constant l > 0 such that abs(ank) ≤ lE for any nk, where E is the vector with lnk components equal to 1. We also have, because of the uniform bounds on ‖Qnk‖, x̄nk and xnk, that there exist constants a and b such that abs(x̄′nk Qnk) ≤ aE and abs(Qnk xnk) ≤ bE, for any nk. Hence,

and

It follows from Lebesgue's dominated convergence theorem that

and

Therefore,

b) The proof of this part is similar to the proof of part (a).

5 Duality

In this section we discuss duality for LQCTP. It is a feature of our approach that the continuous dual problem contains no primal variables. Further, it is not of max-min type. We show here that problem (D), as defined in Section 3, is in fact a dual of (LQCTP). The proof follows from approximations of the continuous time pair of optimization problems by finite dimensional ones whose duality theory is already known. Taking limits we get the desired weak (Theorem 5.1) and strong (Theorem 5.2) duality results.

Theorem 5.1 [Weak Duality]. If x(t) and q(t) are feasible solutions for the primal (LQCTP) and dual (D) problems, respectively, then

∫₀ᵀ [(1/2)x′(t)Q(t)x(t) + a′(t)x(t)] dt ≤ ∫₀ᵀ [−(1/2)q′(t)M(t)q(t) + N′(t)q(t) + R(t)] dt.

Proof. Let x(t) ∈ BV([0,T]; ℝᴺ) and q(t) ∈ BV([0,T]; ℝ^{m+N}) be feasible solutions of LQCTP and D, respectively. Let πn be a partition of the interval [0,T] and {xn}, {qn} be the values of x(t) and q(t) at the points of the partition πn. It is easy to see that {xn}, {qn} are feasible solutions of (Pn) and (Dn), respectively. Now consider their continuous extensions xn(t) and qn(t) to the whole interval, as given by (2) and (3).

Because of Helly's convergence theorem [7], there exist x̄(t), q̄(t) such that xnk(t) → x̄(t) and qnk(t) → q̄(t), where xnk(t), qnk(t) are subsequences of xn(t), qn(t), respectively.

We now show that x̄(t) = x(t) and q̄(t) = q(t) almost everywhere in [0,T]. Let S := {t ∈ [0,T] : xnk(t) → x̄(t)}. S is dense in [0,T]. We also know that the set p := ∪n πn is dense in [0,T]. Let s ∈ p ∩ S. Then s ∈ πn0 for some n0. This implies that xnk(s) = xn0(s) = x(s) for all nk ≥ n0. But xnk(s) → x̄(s). So x(s) = x̄(s). Since s ∈ p ∩ S is arbitrary, it follows that x̄(s) = x(s) almost everywhere in [0,T]. Analogously, we prove that q̄(t) = q(t) almost everywhere in [0,T].

By Theorem 4.4, it is known that

(T/lnk)[(1/2)x′nk Qnk xnk + a′nk xnk] → ∫₀ᵀ [(1/2)x′(t)Q(t)x(t) + a′(t)x(t)] dt, for k → ∞

and

(T/lnk)[−(1/2)q′nk Mnk qnk + N′nk qnk + Rnk] → ∫₀ᵀ [−(1/2)q′(t)M(t)q(t) + N′(t)q(t) + R(t)] dt, for k → ∞.

Since

(T/lnk)[(1/2)x′nk Qnk xnk + a′nk xnk] ≤ (T/lnk)[−(1/2)q′nk Mnk qnk + N′nk qnk + Rnk], ∀ k,

by [4], taking the limit as k → ∞ we obtain

∫₀ᵀ [(1/2)x′(t)Q(t)x(t) + a′(t)x(t)] dt ≤ ∫₀ᵀ [−(1/2)q′(t)M(t)q(t) + N′(t)q(t) + R(t)] dt.

We now state the result that provides the optimality of the algorithm of discretization.

Theorem 5.2 [Strong Duality]. Let x(t) ∈ BV([0,T]; ℝᴺ) and q(t) ∈ BV([0,T]; ℝ^{m+N}) be feasible solutions of LQCTP and D, respectively, such that

∫₀ᵀ [(1/2)x′(t)Q(t)x(t) + a′(t)x(t)] dt = ∫₀ᵀ [−(1/2)q′(t)M(t)q(t) + N′(t)q(t) + R(t)] dt.

Then, x(t) and q(t) are primal and dual optimal solutions, respectively.

Proof. It is immediate from Theorem 5.1.

Remark 5.3. Under hypothesis (H1) the optima of the primal and dual problems (LQCTP) and (D) are uniquely achieved (except on a zero-measure set) at x(t) and q(t), respectively. This observation implies that the sequences (themselves) {xn(t)} and {qn(t)} generated by the optimal solutions of the discrete problems (Pn) and (Dn) converge to x(t) and q(t) almost everywhere.

To verify the second assertion of Remark 5.3, take any subsequences {xn′(t)} and {qn′(t)} of {xn(t)} and {qn(t)}, respectively. By Lemma 4.2 and Theorems 4.4 and 5.2 there exist further subsequences {xn′k(t)} and {qn′k(t)} that converge to the unique optimal solutions of (LQCTP) and (D), respectively. So, all subsequences have further subsequences that converge to the same limits. Therefore, the sequences (themselves) {xn(t)} and {qn(t)} converge to x(t) and q(t), respectively, as required.

Remark 5.3 will be taken into consideration in the next section.

6 Karush-Kuhn-Tucker conditions (KKT)

The KKT conditions play no role in the convergence analysis of the approximating scheme we propose in this work. However, they are crucial in the process of obtaining analytical optimal solutions against which the numerical experiments can be compared.

On the other hand, the convergence setup provides the means by which it is possible to establish the KKT optimality conditions. Let us now state and prove them.

Proposition 6.1. Suppose hypotheses (H1)-(H3) are in force. Let x(t) be a function of bounded variation which minimizes LQCTP. Then there exist functions of bounded variation w : [0,T] → ℝᵐ and λ : [0,T] → ℝᴺ such that

1. w(t), λ(t) ≥ 0 a.e. t ∈ [0,T];

2. ∫₀ᵀ [Q(t)x(t) + a(t) + B′(t)w(t) − ∫ₜᵀ K(s,t)w(s) ds − λ(t)] dt = 0;

3. w′(t)[B(t)x(t) − ∫₀ᵗ K(t,s)x(s) ds − c(t)] = 0 a.e. t ∈ [0,T];

4. x′(t)λ(t) = 0 a.e. t ∈ [0,T].

Proof. Let x(t) minimize LQCTP and consider the discretized problem (Pn). Let xn be the optimal solution of (Pn). By Proposition 3.1 there exist multipliers wn, λn ≥ 0 satisfying the discrete optimality conditions (5)-(7), the finite dimensional counterparts of conditions (a) and (b) of Proposition 3.1. Moreover, qn := (wn, λn) is the optimal solution of the discrete dual (Dn).

Let xn(t) and qn(t) be defined as in (2) and (3), respectively. As was pointed out previously, qn(t) is a continuous time dual feasible solution, but xn(t) may not be. Nonetheless, because of Lemma 4.2 and Theorems 4.4 and 5.2, we have that xn(t) → x(t) and qn(t) → q(t), where x(t) and q(t) are the unique solutions of the continuous time primal and dual problems, respectively (except for a zero-measure set).

We are now in a position to prove the KKT conditions for LQCTP. First, assertion 1. is satisfied, since q(t) is dual feasible. To prove assertions 2., 3. and 4., consider the discrete identities (8), (9) and (10).

Since qn(t) → q(t) and xn(t) → x(t) almost everywhere, the right-hand sides of (8), (9) and (10) converge to the left-hand sides of 2., 3. and 4. (the first convergence relies on the Lebesgue dominated convergence theorem). The proof will be complete if we show that, whenever πn+1 is a refinement of πn for every n, we have εn → 0, en(t) → 0 and rn(t) → 0 as n → ∞.

The proofs that rn(t) and en(t) converge to zero almost everywhere follow from (6) and (7), making use of arguments similar to those in the proof of Theorem 4.3 (c).

To prove that εn goes to zero as n → ∞, we use (5) and arguments similar to those in the proof of Theorem 4.4 (a) and (b).

7 Examples

In this section we provide a series of examples to show that our approach is a reasonable way to solve difficult problems in the LQCTP class. To make sure our approach gives good approximate solutions, we selected examples whose analytical optimal solutions can be obtained with the help of Proposition 6.1, and compared the numerical solutions with the analytical ones.

We solved analytically a family of problems with varying parameters, checking the optimality of the candidates via the KKT conditions. Then we solved the same family of problems numerically, making use of the proposed approximation scheme, and verified that the numerical solutions approximate the true analytical solutions very well.

In order to compute the numerical solutions and make sure our theory works in practice, we solved the discretized quadratic problems with the commercial package CPLEX 4.0 on a 700 MHz Pentium III PC with 128 MB of RAM.

A comparison between the analytical solutions and the numerical ones, obtained for partitions of 1600 points of the interval, gave an approximation error no greater than 10⁻⁶ in the L¹ norm.
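The runs above used CPLEX; as a self-contained sketch (not the authors' code), a small hypothetical instance of the discrete problem min (1/2)x′Qx + b′x subject to Gx ≤ c, x ≥ 0 can be solved with SciPy's SLSQP solver:

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical small instance of (Pn): min 1/2 x'Qx + b'x, Gx <= c, x >= 0
Q = 2.0 * np.eye(2)
b = np.array([-4.0, -2.0])
G = np.array([[1.0, 1.0]])
c = np.array([1.0])

res = minimize(
    lambda x: 0.5 * x @ Q @ x + b @ x,
    x0=np.zeros(2),
    jac=lambda x: Q @ x + b,
    method="SLSQP",
    bounds=[(0.0, None)] * 2,                                    # x >= 0
    constraints=[{"type": "ineq", "fun": lambda x: c - G @ x}],  # Gx <= c
)
print(res.x)   # approximately [1., 0.], the KKT point of this instance
```

For this instance the KKT conditions give w = 2, λ = (0, 0)′ and the minimizer (1, 0)′, which the solver recovers to solver tolerance.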

For ease of understanding and visual comparison, we plotted the graphs of the analytical and numerical solutions on the same axes; see Figures 1-5. For each particular example we chose to plot only a sample of points of the numerical solution because, had we plotted the numerical solution alongside the analytical solution using a great quantity of points, the two curves would have merged.






It is also worth noting that in each figure, the drawing on the left is for x1, the first component of the optimal solution, while the drawing on the right is for x2.

The five examples we explore here are, in fact, a family of problems parameterized by (a, b)′, which we introduce now.

By using Proposition 6.1 we calculate the analytical solutions for each case below. Observe that we change the values of the parameters a and b in each example. In the first four examples the analytical solutions are continuous functions. In Example 7.5 the optimal solution contains discontinuous functions; see Figure 5. It can be observed in Figures 1, 2, 3, 4 and 5 that the graphs of the numerical and analytical solutions are visually identical.

Example 7.1. If a > 2 and b > 2, then x1(t) = 2 and x2(t) = 2 are optimal solutions.

Example 7.2. If we make 2a + b < 6 with a > b, 0 < b < 2, the following analytical solution has been obtained:

Example 7.3. If 2a + b > 6 with a > b, 0 < b < 2, the following analytical solution has been obtained:

Example 7.4. If a > 2 and b = 0.0, the following analytical solution has been obtained:

and x2(t) ≡ 0.

Example 7.5. In this example take function c(t) to be the vector (a, b)', where

Now, consider the following numbers

where

Taking a > b, 2a0 + [] < 6, t0 < t2, t3 > t4, 2[] + [] < 6, the following analytical solution has been obtained:

The values for which this example works are a0 = 1, a1 = 0.5, b0 = b1 = 0.5, and t0 = 0.25.

8 Final considerations

In this work we considered a general class of LQCTP and proposed a computational scheme that is able to provide, via interpolation, an approximate continuous time optimal solution. We showed through examples that in a number of cases it is possible to actually find an analytical optimal solution of the problem. We also established weak and strong duality results for this class of problems, in which the dual problems contain only dual variables, as opposed to the min-max formulation of duality. For the purpose of checking the optimality of candidate analytical solutions we provided the so-called Karush-Kuhn-Tucker conditions of optimality. The proof of these conditions relies on the finite dimensional Karush-Kuhn-Tucker conditions and on the convergence analysis of the continuous solutions obtained from the optimal solutions of the discrete approximating problems. Our set of examples makes it clear that the proposed approximating scheme for linear quadratic continuous time problems works well.

However, some questions remain unanswered. 1) Is it possible to relax the positive definiteness of Q in the quadratic integral cost? 2) Is it possible to generalize the method employed here to address general nonlinear continuous time convex problems?

9 Acknowledgment

We thank the anonymous referee for his helpful comments and suggestions for improving this work.

Received: 19/II/04. Accepted: 04/V/04.

#601/04.

  • [1] J. Abrham and R. N. Buie, Kuhn-Tucker conditions and duality in continuous programming, Utilitas Mathematica, 16 (1979), pp. 15-37.
  • [2] J. Abrham and R. N. Buie, Duality in continuous programming and optimal control, Utilitas Mathematica, 17 (1980), pp. 103-109.
  • [3] R. N. Buie and J. Abrham, Numerical solutions to continuous linear programming problems, Zeitschrift für Operations Research, 17 (1973), pp. 107-117.
  • [4] R. Fletcher, Practical Methods of Optimization, 2nd Ed., John Wiley & Sons, New York, 1987.
  • [5] R. C. Grinold, Continuous programming, part one: linear objectives, Journal of Mathematical Analysis and Applications, 28 (1969), pp. 32-51.
  • [6] P. Levine and J. C. Pomerol, C-closed mappings and Kuhn-Tucker vectors in convex programming, Discussion Paper 7620, Center for Operations Research and Econometrics, Université Catholique de Louvain, Heverlee, Belgium, 1976.
  • [7] S. Lojasiewicz, An Introduction to the Theory of Real Functions, John Wiley & Sons, New York, 1988.
  • [8] D. G. Luenberger, Introduction to Linear and Nonlinear Programming, Addison-Wesley, Boston, 1973.
  • [9] A. B. Philpott, Continuous-time shortest path problems and linear programming, SIAM Journal on Control and Optimization, 32 (1994), pp. 538-552.
  • [10] M. C. Pullan, An algorithm for a class of continuous linear programs, SIAM Journal on Control and Optimization, 31 (1993), pp. 1558-1577.
  • [11] W. F. Tyndall, A duality theorem for a class of continuous linear programming problems, SIAM Journal on Applied Mathematics, 13 (1965), pp. 644-666.
  • [12] W. F. Tyndall, An extended duality theorem for continuous linear programming problems, SIAM Journal on Applied Mathematics, 15 (1967), pp. 1294-1298.
Publication dates: published in this collection 26 Nov 2004; date of issue 2004.

Sociedade Brasileira de Matemática Aplicada e Computacional (SBMAC), São Carlos, SP, Brazil. E-mail: sbmac@sbmac.org.br