
## Computational & Applied Mathematics

*Print version* ISSN 2238-3603 · *On-line version* ISSN 1807-0302

### Comput. Appl. Math. vol.22 no.3 Petrópolis 2003

**Infinite horizon differential games for abstract evolution equations**

**A.J. Shaiju**^{*}

Department of Mathematics, Indian Institute of Science, Bangalore, 560 012, India, E-mail: shaiju@math.iisc.ernet.in

**ABSTRACT**

Berkovitz's notion of strategy and payoff for differential games is extended to study two player zero-sum infinite dimensional differential games on the infinite horizon with discounted payoff. After proving dynamic programming inequalities in this framework, we establish the existence and characterization of value. We also construct a saddle point for the game.

**Mathematical subject classification:** 91A23, 49N70, 49L20, 49L25.

**Key words:** differential game, strategy, value, viscosity solution, saddle point.

**1 Introduction **

In [1], Berkovitz has introduced a novel approach to differential games of fixed duration. He has extended this framework to cover games of generalized pursuit-evasion [2] and games of survival [3] in finite dimensional spaces. Motivated by these developments we define strategies and payoff for infinite horizon discounted problems whose state is governed by a controlled semi-linear evolution equation in a Hilbert space. In this setup, we show the existence of value and then characterize it as the unique viscosity solution of the associated Hamilton-Jacobi-Isaacs (HJI for short) equation. To achieve this, we follow a dynamic programming method and hence we differ from Berkovitz's approach for finite horizon problems [4]. We also establish the existence of a saddle point for the game by constructing it in a feedback form.

The rest of this paper is organized as follows. The description of the game and some important preliminary results are given in Section 2. In Section 3, we deal with dynamic programming and characterization of the value function. Section 4 contains the construction of saddle point equilibrium. We conclude the paper with some remarks in Section 5.

**2 Preliminaries **

Let the compact metric spaces *U* and *V* be the control sets for players 1 and 2 respectively. For 0 ≤ *s* < *t*, let

𝒰[*s, t*] := {*u* : [*s, t*] → *U* | *u*(·) measurable},  𝒱[*s, t*] := {*v* : [*s, t*] → *V* | *v*(·) measurable}.

The sets 𝒰[*s, t*] and 𝒱[*s, t*] are called the control spaces on the time interval [*s, t*] for players 1 and 2 respectively. The functions *u*(·) ∈ 𝒰[*s, t*] and *v*(·) ∈ 𝒱[*s, t*] are referred to as the precise or usual controls (or simply 'controls') on the time interval [*s, t*] for players 1 and 2 respectively. We denote 𝒰[0, ∞) and 𝒱[0, ∞) by 𝒰 and 𝒱 respectively.

Let *H*, a real Hilbert space, be the state space. Let *x*(*t*) ∈ *H* denote the state at time *t*. The state *x*(·) with initial point x₀ ∈ *H* is governed by the following controlled semi-linear evolution equation:

ẋ(*t*) = −*Ax*(*t*) + *f*(*x*(*t*), *u*(*t*), *v*(*t*)), *x*(0) = x₀, (2.1)

where *f* : *H* × *U* × *V* → *H*, *u*(·) ∈ 𝒰, *v*(·) ∈ 𝒱 and −*A* : *H* ⊃ *D*(*A*) → *H* is the generator of a contraction semigroup {*S*(*t*)} on *H*. We assume that

**(A1)** The function *f* is continuous and, for all *x, y* ∈ *H* and (*u, v*) ∈ *U × V*,

||*f*(*x, u, v*) − *f*(*y, u, v*)|| ≤ *K*||*x − y*||.

Under the assumption (A1), for each *u*(·) ∈ 𝒰, *v*(·) ∈ 𝒱 and x₀ ∈ *H*, (2.1) has a unique global mild solution (see e.g., Proposition 5.3, p. 66 in [10]), which is denoted by φ(·, x₀, *u*(·), *v*(·)) and is referred to as the trajectory corresponding to the pair of controls (*u*(·), *v*(·)) with initial point x₀.
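The mild solution referred to above is, as usual, the solution of the variation-of-constants (integral) form of (2.1):

```latex
\varphi(t, x_0, u(\cdot), v(\cdot))
  \;=\; S(t)x_0 \;+\; \int_0^t S(t-s)\,
  f\bigl(\varphi(s, x_0, u(\cdot), v(\cdot)),\, u(s),\, v(s)\bigr)\, ds ,
  \qquad t \ge 0 .
```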

Following Warga [13], we now describe relaxed controls and relaxed trajectories. Let

ℳ[*s, t*] := {μ : [*s, t*] → 𝒫(*U*) | μ(·) measurable},  𝒩[*s, t*] := {ν : [*s, t*] → 𝒫(*V*) | ν(·) measurable},

where 𝒫(*U*) and 𝒫(*V*) are the spaces of probability measures on *U* and *V* respectively with the topology of weak convergence. The sets ℳ[*s, t*] and 𝒩[*s, t*] are called the relaxed control spaces on the time interval [*s, t*] for players 1 and 2 respectively. These relaxed control spaces, equipped with the weak^{*} topology, are compact metric spaces. The relaxed control spaces ℳ[0, ∞) and 𝒩[0, ∞) are denoted by ℳ and 𝒩 respectively. Note that, by identifying *u*(·) and *v*(·) with the Dirac measures δ_{u(·)} and δ_{v(·)} respectively, precise controls can be treated as relaxed controls.

For μ(·) ∈ ℳ and ν(·) ∈ 𝒩, the state equation in the relaxed control framework is given by

ẋ(*t*) = −*Ax*(*t*) + f̃(*x*(*t*), μ(*t*), ν(*t*)), *x*(0) = x₀, (2.2)

where f̃(*x*, μ, ν) := ∫_{U} ∫_{V} *f*(*x, u, v*) ν(*dv*) μ(*du*).

Since *f* satisfies (A1), it follows that f̃ also satisfies (A1) with *u* ∈ *U*, *v* ∈ *V* replaced respectively by μ ∈ 𝒫(*U*), ν ∈ 𝒫(*V*). Therefore for each x₀ ∈ *H*, μ(·) ∈ ℳ and ν(·) ∈ 𝒩, the existence and uniqueness of a global mild solution to (2.2) follows analogously. This solution is called a relaxed trajectory and is denoted by ψ(·, x₀, μ(·), ν(·)).
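A toy numerical illustration (ours, not from the paper): for the scalar dynamics *f*(*x, u, v*) = *u* with control values ±1, rapidly chattering between +1 and −1 approximates the relaxed control μ(*t*) ≡ (δ₊₁ + δ₋₁)/2, whose averaged drift f̃(*x*, μ) = 0 freezes the state; the chattering trajectories converge to this relaxed trajectory as the switching frequency grows.

```python
# Toy sketch (hypothetical example, not from the paper): chattering precise
# controls approximate a relaxed control.  Dynamics: x' = u, u in {-1, +1}.
import math

def chattering_state(n, T=1.0, steps=20000):
    """Euler-integrate x' = u_n(t), where u_n flips sign 2n times per unit time."""
    x, dt = 0.0, T / steps
    for k in range(steps):
        t = k * dt
        u = 1.0 if int(2 * n * t) % 2 == 0 else -1.0
        x += u * dt
    return x

# The relaxed trajectory is x(t) = 0; the deviation shrinks like O(1/n).
print(abs(chattering_state(7)), abs(chattering_state(70)))
```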

We now begin the description of the game by defining the strategies of the players. A strategy for player 1 is a sequence π = {π_{n}} of partitions of [0, ∞), with ||π_{n}|| → 0, and a sequence Γ = {Γ_{n}} of instructions described as follows.

Let π_{n} = {0 = t₀ < t₁ < ⋯}. The nth stage instruction Γ_{n} is given by a sequence (Γ_{n,1}, Γ_{n,2}, …), where Γ_{n,1} ∈ 𝒰[t₀, t₁) and, for *j* ≥ 2,

Γ_{n,j} : 𝒰[t₀, t_{j−1}) × 𝒱[t₀, t_{j−1}) → 𝒰[t_{j−1}, t_{j}).

Similarly, a strategy for player 2 is a sequence σ = {σ_{n}} of partitions of [0, ∞), with ||σ_{n}|| → 0, and a sequence Δ = {Δ_{n}} of instructions described as follows.

Let σ_{n} = {0 = s₀ < s₁ < ⋯}. The nth stage instruction Δ_{n} is given by a sequence (Δ_{n,1}, Δ_{n,2}, …), where Δ_{n,1} ∈ 𝒱[s₀, s₁) and, for *j* ≥ 2,

Δ_{n,j} : 𝒰[s₀, s_{j−1}) × 𝒱[s₀, s_{j−1}) → 𝒱[s_{j−1}, s_{j}).

We suppress the dependence of the sequence of partitions on a strategy and, by an abuse of notation, denote a strategy by Γ or Δ. In what follows, Γ stands for a strategy for player 1 and Δ stands for a strategy for player 2.

Note that a pair (Γ_{n}, Δ_{n}) of nth stage instructions uniquely determines a pair (u_{n}(·), v_{n}(·)) ∈ 𝒰 × 𝒱 as follows. Let {0 = r₀ < r₁ < ⋯} be the common refinement of π_{n} and σ_{n}. The control functions u_{n}(·) and v_{n}(·) are given by the sequences (u_{n,1}(·), u_{n,2}(·), …) and (v_{n,1}(·), v_{n,2}(·), …) respectively, where u_{n,j}(·) ∈ 𝒰[r_{j−1}, r_{j}) and v_{n,j}(·) ∈ 𝒱[r_{j−1}, r_{j}). Let u(·), v(·) denote respectively the restrictions of u_{n}(·), v_{n}(·) to the interval [r₀, r_{j}).

On [r₀, r₁), set u_{n,1}(·) = Γ_{n,1} and v_{n,1}(·) = Δ_{n,1}.

Let *j* ≥ 1. If r_{j} = t_{i}, then on [r_{j}, r_{j+1}) we take u_{n,j+1}(·) = Γ_{n,i+1}(u(·), v(·)) and v_{n,j+1}(·) = Δ_{n,l+1}(u(·), v(·)), where *l* is the greatest integer such that s_{l} ≤ r_{j} (so that s_{l} = r_{j′} for some j′ ≤ j).

If r_{j} = s_{m}, then on [r_{j}, r_{j+1}) we take u_{n,j+1}(·) = Γ_{n,k+1}(u(·), v(·)) and v_{n,j+1}(·) = Δ_{n,m+1}(u(·), v(·)), where *k* is the greatest integer such that t_{k} ≤ r_{j} (so that t_{k} = r_{j′} for some j′ ≤ j).

The pair (u_{n}(·), v_{n}(·)) determined this way is called the nth stage outcome of the pair (Γ, Δ) of strategies.
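As a toy sketch (our illustration; the names `gamma`, `delta` and `outcome` are ours), the bookkeeping above — play on the common refinement, each player updating only at its own partition points, instructions reading the full past — can be mimicked with piecewise-constant values:

```python
# Hypothetical piecewise-constant toy of the outcome construction: the maps
# gamma/delta stand in for instructions Gamma_{n,j}, Delta_{n,j}; they read the
# lists of past (u, v) values and return the next constant value.
def common_refinement(p1, p2):
    return sorted(set(p1) | set(p2))

def outcome(p1, p2, gamma, delta, horizon):
    r = [x for x in common_refinement(p1, p2) if x < horizon]
    us, vs = [], []
    for j, rj in enumerate(r):
        if j == 0:
            us.append(gamma([], []))   # first piece: no information yet
            vs.append(delta([], []))
        else:
            # a player updates only at its own partition points,
            # otherwise it keeps the previous piece
            us.append(gamma(us, vs) if rj in p1 else us[-1])
            vs.append(delta(us, vs) if rj in p2 else vs[-1])
    return list(zip(r, us, vs))

p1 = [k / 2 for k in range(10)]    # player 1 updates every 1/2
p2 = [k / 3 for k in range(15)]    # player 2 updates every 1/3
gamma = lambda us, vs: (vs[-1] + 1) if vs else 0   # react to opponent's last value
delta = lambda us, vs: -us[-1] if us else 0
print(outcome(p1, p2, gamma, delta, horizon=2.0))
```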

Let *c* : *H* × *U* × *V* → ℝ be the running payoff function and let λ > 0 be the discount factor. We assume that

**(A2)** The function *c* is bounded, continuous and, for all *x, y* ∈ *H* and (*u, v*) ∈ *U × V*,

|*c*(*x, u, v*) – *c*(*y, u, v*)| ≤ *K*||*x – y*||.

Without any loss of generality, we take *c* to be nonnegative. For x₀ ∈ *H* and (*u*(·), *v*(·)) ∈ 𝒰 × 𝒱, let φ⁰(·, x₀, *u*(·), *v*(·)) denote the solution of

ẋ⁰(*t*) = e^{–λt} *c*(φ(*t*, x₀, *u*(·), *v*(·)), *u*(*t*), *v*(*t*)), *x*⁰(0) = 0.

Let φ̂(·, x₀, *u*(·), *v*(·)) denote the extended trajectory (φ⁰(·, x₀, *u*(·), *v*(·)), φ(·, x₀, *u*(·), *v*(·))). The running cost in the relaxed framework is defined by

c̃(*x*, μ, ν) := ∫_{U} ∫_{V} *c*(*x, u, v*) ν(*dv*) μ(*du*).

Note that c̃ satisfies (A2) with (*u, v*) ∈ *U × V* replaced by (μ, ν) ∈ 𝒫(*U*) × 𝒫(*V*). Now the relaxed extended trajectory ψ̂(·, x₀, μ(·), ν(·)) = (ψ⁰(·), ψ(·)) is interpreted in an analogous way.
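In this notation, the total discounted payoff generated by a pair of controls is the limit of the augmented component:

```latex
\varphi^0(\infty, x_0, u(\cdot), v(\cdot))
  \;=\; \lim_{t \to \infty} \varphi^0(t, x_0, u(\cdot), v(\cdot))
  \;=\; \int_0^\infty e^{-\lambda t}\,
  c\bigl(\varphi(t, x_0, u(\cdot), v(\cdot)),\, u(t),\, v(t)\bigr)\, dt ,
```

and the limit exists since *c* is bounded and nonnegative.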

The next result is helpful in defining the concept of motion in the game. To achieve this, we make the following assumption.

**(A3) ** The semigroup {*S*(*t*)} is compact.

**Lemma 2.1. ** *Assume* (A1)-(A3). *Let* {(u_{n}(·), v_{n}(·))} *be the sequence of nth stage outcomes corresponding to a pair* (Γ, Δ) *of strategies and* {x_{0n}} *a sequence converging to* x₀. *Then the sequence* {φ(·, x_{0n}, u_{n}(·), v_{n}(·))} *of nth stage trajectories is relatively compact in* *C*([0, ∞); *H*).

**Proof. ** Let φ_{n}(·) = φ(·, x_{0n}, u_{n}(·), v_{n}(·)), h_{n}(·) = *f*(φ_{n}(·), u_{n}(·), v_{n}(·)) and ξ_{n}(·) = *S*(·)x_{0n}.

It is enough to show that for each *T* > 0, the sequence of nth stage trajectories, when restricted to [0, *T*], is relatively compact in *C*([0, *T*]; *H*). Fix *T* > 0. Let *Q* : *L*²([0, *T*]; *H*) → *C*([0, *T*]; *H*) be the operator defined by

*Q*(η)(*t*) = ∫₀^{t} *S*(*t* − *s*) η(*s*) *ds*.

Then

φ_{n}(·) = ξ_{n}(·) + *Q*(h_{n})(·).

Since the sequence ξ_{n}(·) converges to *S*(·)x₀ uniformly, it is sufficient to prove the relative compactness of {*Q*(h_{n})(·)}. To achieve this we show that the operator *Q* is compact. This will imply the desired result since, by (A1) and (A2), {h_{n}(·)} is bounded in *L*²([0, *T*]; *H*). Let {η_{k}(·)} be a sequence in the unit ball of *L*²([0, *T*]; *H*). We need to show that {*Q*(η_{k})(·)} is relatively compact. By the Arzelà–Ascoli theorem, the proof will be complete if we establish the pointwise relative compactness and the equicontinuity of the sequence {*Q*(η_{k})(·)}.

_{k}Let *t* Î [0, *T*]. We first prove the relative compactness of {*Q*h* _{k}*(·)(

*t*)}. This is trivial if

*t*= 0. So we assume that

*t*> 0. Let > 0 be given. Since {h

*(·)} is in the unit ball, there exists d Î (0,*

_{k}*t*) such that for all

*k*

Note that

where

Since (d) is compact and {*y _{k}*} is bounded in , there exist

*y*

_{1},¼,

*y*Î such that . Therefore {

_{m}*Q*(h

*(·))(*

_{k}*t*)} Ì

*B*(

*y*, ). Thus we have established the relative compactness of {

_{i}*Q*(h

*(·))(*

_{k}*t*)}.

Next we prove the equicontinuity of {*Q*(η_{k})(·)}. Let *t*, *s* ∈ [0, *T*] and *s* < *t*. The case when *s* = 0 is trivial. Assume that 0 < *s*. Now for δ small enough,

*Q*(η_{k})(*t*) – *Q*(η_{k})(*s*) = I₁ + I₂ + I₃,

where

I₁ = ∫₀^{s−δ} [*S*(*t* − *r*) − *S*(*s* − *r*)] η_{k}(*r*) *dr*,  I₂ = ∫_{s−δ}^{s} [*S*(*t* − *r*) − *S*(*s* − *r*)] η_{k}(*r*) *dr*,  I₃ = ∫_{s}^{t} *S*(*t* − *r*) η_{k}(*r*) *dr*.

By the Cauchy–Schwarz inequality, we get

||I₁|| ≤ *C* sup_{0 ≤ r ≤ s−δ} ||*S*(*t* − *r*) − *S*(*s* − *r*)||,  ||I₂|| ≤ *C*√δ,  ||I₃|| ≤ *C*√(*t* − *s*),

where *C* is a constant independent of *k*. The map *t* ↦ *S*(*t*) is continuous in the uniform operator topology on (0, ∞) because {*S*(*t*)} is a compact semigroup. Thus from the above, we get the equicontinuity of {*Q*(η_{k})(·)}.

We now define the concept of motion. Let {x_{0n}} be a sequence converging to x₀ and let {(u_{n}(·), v_{n}(·))} be the sequence of nth stage outcomes corresponding to the pair of strategies (Γ, Δ). By Lemma 2.1, the sequence of nth stage trajectories {φ(·, x_{0n}, u_{n}(·), v_{n}(·))} is relatively compact in *C*([0, ∞); *H*). We define a motion to be the local uniform limit of a subsequence of a sequence of nth stage trajectories.

A motion is denoted by φ[·, x₀, Γ, Δ]. Let Φ[·, x₀, Γ, Δ] denote the set of all motions corresponding to (Γ, Δ) which start from x₀. A motion, together with its augmented component, can be written as

φ̂[·, x₀, Γ, Δ] = (φ⁰[·, x₀, Γ, Δ], φ[·, x₀, Γ, Δ]).

Let Φ⁰[·, x₀, Γ, Δ], Φ[·, x₀, Γ, Δ] respectively denote the set of all φ⁰[·, x₀, Γ, Δ], φ[·, x₀, Γ, Δ]. The set of all φ[*t*, x₀, Γ, Δ], where φ[·, x₀, Γ, Δ] runs over Φ[·, x₀, Γ, Δ], is denoted by Φ[*t*, x₀, Γ, Δ]. Similarly, the sets Φ⁰[*t*, x₀, Γ, Δ] and Φ̂[*t*, x₀, Γ, Δ] are defined. Since *c* ≥ 0, for any φ⁰[·] = φ⁰[·, x₀, Γ, Δ], lim_{t→∞} φ⁰[*t*] exists and is denoted by φ⁰[∞, x₀, Γ, Δ]. As above, Φ⁰[∞, x₀, Γ, Δ] is the set of all φ⁰[∞, x₀, Γ, Δ]. If the initial point x̂₀ = (x⁰₀, x₀) of the extended trajectory has augmented component x⁰₀ ≠ 0, the corresponding extended trajectory is denoted by φ̂(·, x̂₀, *u*(·), *v*(·)). By φ̂(·, t₀, x̂₀, *u*(·), *v*(·)) we mean the extended trajectory which starts from x̂₀ at time t₀. Similarly, the relaxed trajectories ψ̂(·, x̂₀, μ(·), ν(·)) and ψ̂(·, t₀, x̂₀, μ(·), ν(·)) are defined. To complete the description of the game, we need to define the payoff. The payoff associated with the pair of strategies (Γ, Δ) is set valued and is given by

*P*(x₀, Γ, Δ) = Φ⁰[∞, x₀, Γ, Δ].

Player 1 tries to choose Γ so as to maximize all elements of *P*(x₀, Γ, Δ) and player 2 tries to choose Δ so as to minimize all elements of *P*(x₀, Γ, Δ). This gives rise to the upper and lower value functions, which are respectively given by

*W*⁺(x₀) = inf_{Δ} sup_{Γ} *P*(x₀, Γ, Δ),  *W*⁻(x₀) = sup_{Γ} inf_{Δ} *P*(x₀, Γ, Δ).

(If {D_{α}} is a collection of subsets of ℝ, then sup_{α} D_{α} := sup ⋃_{α} D_{α} and inf_{α} D_{α} := inf ⋃_{α} D_{α}.) Therefore the upper and lower value functions are real valued functions. Clearly *W*⁺ ≥ *W*⁻. If *W*⁺ = *W*⁻ = *W*, then we say that the game has a value and *W* is referred to as the value function.

A pair of strategies (Γ^{*}, Δ^{*}) is said to constitute a saddle point for the game starting from x₀ if, for all (Γ, Δ),

*P*(x₀, Γ, Δ^{*}) ≤ *P*(x₀, Γ^{*}, Δ^{*}) ≤ *P*(x₀, Γ^{*}, Δ).

(By D₁ ≤ D₂ we mean r₁ ≤ r₂ for all (r₁, r₂) ∈ D₁ × D₂.) Note that if (Γ^{*}, Δ^{*}) is a saddle point, then *P*(x₀, Γ^{*}, Δ^{*}) is a singleton and

*W*⁺(x₀) = *W*⁻(x₀) = *P*(x₀, Γ^{*}, Δ^{*}).

By a constant component strategy Γ^{c} for player 1 corresponding to the sequence {u_{n}(·)} of controls, we mean a strategy where, for each *n*, player 1 chooses the open loop control u_{n}(·) at the nth stage. If u_{n}(·) ≡ *u*(·) for all *n*, then this strategy is referred to as a constant strategy corresponding to the open loop control *u*(·). Constant component strategies and constant strategies for player 2 are defined in a similar fashion.

In view of Lemma 2.1, the following result may be obtained by modifying the arguments in [1]. Hence we omit the proof.

**Lemma 2.2. ** *Assume *(A1)-(A3).

(i) *Let* x₀ ∈ *H*, *let* Γ^{c} *be a constant strategy corresponding to* ū(·) ∈ 𝒰 *and let* Δ *be any strategy for player 2. Then for any motion* φ[·, x₀, Γ^{c}, Δ], *there exists a relaxed control* ν(·) ∈ 𝒩 *such that*

φ[·, x₀, Γ^{c}, Δ] = ψ(·, x₀, ū(·), ν(·)). (2.4)

*Conversely, given any relaxed trajectory* ψ(·, x₀, ū(·), ν(·)), *there exists a motion* φ[·, x₀, Γ^{c}, Δ] *such that* (2.4) *holds.*

(ii) *For any* 0 < *t* < ∞ *and constant strategy* Γ^{c}, *the set* ⋃_{Δ} Φ[*t*, x₀, Γ^{c}, Δ] *is compact.*

Analogous results hold with Γ^{c}, Δ replaced respectively by Δ^{c}, Γ.

**3 Dynamic programming and viscosity solution **

Before proving the dynamic programming inequalities, we show the continuity properties of *W*^{+} and *W*^{–}. To this end, we first compare the trajectories with different initial points.

**Lemma 3.1. ** *Assume (A1) and (A2). For any *α ∈ (0, 1] ∩ (0, λ/*K*)*, there exists C*_{α}* > *0* such that *

|φ⁰(*t*, x₀, *u*(·), *v*(·)) – φ⁰(*t*, y₀, *u*(·), *v*(·))| ≤ C_{α}||x₀ – y₀||^{α},

*for all t > *0*,* x₀, y₀ ∈ *H* *and* (*u*(·), *v*(·)) ∈ 𝒰 × 𝒱.

**Proof. ** Let φ₁(·) = φ(·, x₀, *u*(·), *v*(·)) and φ₂(·) = φ(·, y₀, *u*(·), *v*(·)). Obviously,

||φ₁(*t*) – φ₂(*t*)|| ≤ ||x₀ – y₀|| + *K* ∫₀^{t} ||φ₁(*s*) – φ₂(*s*)|| *ds*.

From this, it follows by using the Gronwall inequality that

||φ₁(*t*) – φ₂(*t*)|| ≤ ||x₀ – y₀|| e^{Kt}.

We have

|φ⁰(*t*, x₀, *u*(·), *v*(·)) – φ⁰(*t*, y₀, *u*(·), *v*(·))| ≤ ∫₀^{∞} e^{–λs} min{2||*c*||_{∞}, *K*||φ₁(*s*) – φ₂(*s*)||} *ds*.

Therefore for any α ∈ (0, 1] ∩ (0, λ/*K*), we obtain

|φ⁰(*t*, x₀, *u*(·), *v*(·)) – φ⁰(*t*, y₀, *u*(·), *v*(·))| ≤ (2||*c*||_{∞})^{1−α} *K*^{α} ||x₀ – y₀||^{α} ∫₀^{∞} e^{–(λ−αK)s} *ds*.

Henceforth we take α ∈ (0, 1] ∩ (0, λ/*K*) and C_{α} = (2||*c*||_{∞})^{1−α} *K*^{α}/(λ − α*K*).
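A quick numeric sanity check of the Gronwall step above (a toy of ours, with the hypothetical scalar dynamics *f*(*x, u, v*) = sin *x* + *u*, which is Lipschitz in *x* with constant *K* = 1):

```python
# Toy check (our example, not from the paper): Gronwall bound
# |phi1(t) - phi2(t)| <= |x0 - y0| * exp(K t)  for x' = sin(x) + u, K = 1.
import math

def euler(x0, u, T=2.0, steps=4000):
    """Forward-Euler trajectory of x' = sin(x) + u on [0, T]."""
    x, dt = x0, T / steps
    for _ in range(steps):
        x += (math.sin(x) + u) * dt
    return x

x0, y0, u, K, T = 0.3, 0.7, 0.2, 1.0, 2.0
gap = abs(euler(x0, u, T) - euler(y0, u, T))
print(gap, abs(x0 - y0) * math.exp(K * T))
```

The printed gap stays below the Gronwall bound |x₀ − y₀| e^{KT}.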

**Lemma 3.2. ** *Assume *(A1)-(A3)*. Let* x₀, y₀ ∈ *H* *and* (Γ, Δ) *a pair of strategies. Then for any motion* φ̂[·, x₀, Γ, Δ]*, there is a motion* φ̂[·, y₀, Γ, Δ] *with the property that*

|φ⁰[∞, x₀, Γ, Δ] – φ⁰[∞, y₀, Γ, Δ]| ≤ C_{α}||x₀ – y₀||^{α}.

**Proof. ** Consider a motion φ̂[·, x₀, Γ, Δ] and, without any loss of generality, let it be the local uniform limit of a sequence {φ̂(·, x_{0n}, u_{n}(·), v_{n}(·))} of nth stage trajectories. Let φ̂[·, y₀, Γ, Δ] be the local uniform limit of a subsequence {φ̂(·, y₀, u_{n_k}(·), v_{n_k}(·))} of the sequence {φ̂(·, y₀, u_{n}(·), v_{n}(·))} of nth stage trajectories. From Lemma 3.1, it follows that for *t* > 0,

|φ⁰(*t*, x_{0n_k}, u_{n_k}(·), v_{n_k}(·)) – φ⁰(*t*, y₀, u_{n_k}(·), v_{n_k}(·))| ≤ C_{α}||x_{0n_k} – y₀||^{α}.

Letting *k* → ∞, we get

|φ⁰[*t*, x₀, Γ, Δ] – φ⁰[*t*, y₀, Γ, Δ]| ≤ C_{α}||x₀ – y₀||^{α}.

The required result now follows by letting *t* → ∞ in the above inequality.

**Lemma 3.3. ** *Assume *(A1)-(A3).* The upper and lower value functions are bounded and Hölder continuous on* *H* *with exponent* α ∈ (0, 1] ∩ (0, λ/*K*).

**Proof. ** The boundedness of *c* gives the boundedness of *W*⁺ and *W*⁻. The Hölder continuity of *W*⁺ and *W*⁻ follows immediately from Lemma 3.2.

Having established the continuity of *W*^{+} and *W*^{–}, we now prove dynamic programming inequalities.

**Lemma 3.4. ** *Assume *(A1)-(A3). *For* x₀ ∈ *H* *and* 0 < *t* < ∞,

*W*⁻(x₀) ≥ sup_{ū(·) ∈ 𝒰} inf_{ν(·) ∈ 𝒩} [ ψ⁰(*t*, x₀, ū(·), ν(·)) + e^{–λt} *W*⁻(ψ(*t*, x₀, ū(·), ν(·))) ]. (3.2)

**Proof. ** Take an arbitrary ū(·) ∈ 𝒰 and keep it fixed. It is enough to show that

*W*⁻(x₀) ≥ inf_{ν(·) ∈ 𝒩} [ ψ⁰(*t*, x₀, ū(·), ν(·)) + e^{–λt} *W*⁻(ψ(*t*, x₀, ū(·), ν(·))) ]. (3.3)

Let E₀ = {x̄ : x̄ = ψ(*t*, x₀, ū(·), ν(·)) for some ν(·) ∈ 𝒩} and ε > 0. For any x̄ ∈ E₀,

*W*⁻(x̄) = sup_{Γ} inf_{Δ} *P*(x̄, Γ, Δ).

Therefore for each x̄ ∈ E₀, there exists a strategy Γ(x̄) such that for all Δ,

Φ⁰[∞, x̄, Γ(x̄), Δ] ≥ *W*⁻(x̄) − ε. (3.4)

Let δ(x̄) > 0 be such that whenever ||ȳ − x̄|| < δ(x̄),

|*W*⁻(ȳ) − *W*⁻(x̄)| < ε.

Now E₀ is compact (by Lemma 2.2 (ii)) and the collection {*B*(x̄, δ(x̄)) : x̄ ∈ E₀} is an open cover for E₀. Let x̄₁, x̄₂, …, x̄_{k} ∈ E₀ be such that E₀ ⊂ ⋃_{i=1}^{k} *B*(x̄_{i}, δ(x̄_{i})).

In order to prove (3.3), it is sufficient to construct a strategy Γ̂ with the property that for all Δ and all motions φ̂[·, x₀, Γ̂, Δ],

φ⁰[∞, x₀, Γ̂, Δ] ≥ inf_{ν(·) ∈ 𝒩} [ ψ⁰(*t*, x₀, ū(·), ν(·)) + e^{–λt} *W*⁻(ψ(*t*, x₀, ū(·), ν(·))) ] − 3ε. (3.5)

We first define Γ̂. Let π′(*i*) = {π′_{n}(*i*)} be the sequence of partitions associated with Γ(x̄_{i}); *i* = 1, 2, …, *k*. Let π″_{n} be the common refinement of π′_{n}(1), …, π′_{n}(*k*). Let π_{n} be the partition of [0, ∞) such that *t* is a partition point, [0, *t*] is partitioned into *n* equal intervals and the interval [*t*, ∞) is partitioned by the translation of π″_{n} to the right by *t*. We take π = {π_{n}} to be the sequence of partitions associated with Γ̂ = {Γ̂_{n}}. Let π_{n} = {0 = τ₀ < τ₁ < ⋯}.

We now define Γ̂_{n} = (Γ̂_{n,1}, Γ̂_{n,2}, …). For *i* = 1, …, *n*, define Γ̂_{n,i} to be the map which always selects ū(·) on [τ_{i−1}, τ_{i}).

For *i* ≥ *n* + 1, we define Γ̂_{n,i} as follows. Let *u*(·) ∈ 𝒰[τ₀, τ_{i−1}), *v*(·) ∈ 𝒱[τ₀, τ_{i−1}) and φ̄(·) = φ(·, x₀, *u*(·), *v*(·)). If φ̄(*t*) ∉ ⋃_{j} *B*(x̄_{j}, δ(x̄_{j})), then we define Γ̂_{n,i}(*u*(·), *v*(·)) to be a fixed element u₀ ∈ *U*.

Let φ̄(*t*) ∈ ⋃_{j} *B*(x̄_{j}, δ(x̄_{j})) and let *j* be the least integer such that φ̄(*t*) ∈ *B*(x̄_{j}, δ(x̄_{j})). We then take Γ̂_{n,i} to be Γ_{n}(x̄_{j}) in the following sense.

Let π′_{n}(*j*) = {0 = t′₀ < t′₁ < ⋯}. Let ū₁(·) be the control that Γ_{n,1}(x̄_{j}) selects on [t′₀, t′₁). For any *i* such that *n* < *i* ≤ *n* + i₁, where τ_{n+i₁} = *t* + t′₁, the map Γ̂_{n,i} selects ū₁(·) on [τ_{i−1}, τ_{i}). If ū₂(·) denotes the control that Γ_{n,2}(x̄_{j}) selects on [t′₁, t′₂), then for *i* with *n* + i₁ < *i* ≤ *n* + i₂, where τ_{n+i₂} = *t* + t′₂, the map Γ̂_{n,i} selects ū₂(·) on [τ_{i−1}, τ_{i}); and so on. Now the definition of Γ̂ is complete.

It remains to prove (3.5). To this end, let Δ be any strategy for player 2, φ̂[·, x₀, Γ̂, Δ] a motion and {(u_{n}(·), v_{n}(·))} the sequence of nth stage outcomes corresponding to (Γ̂, Δ). Without any loss of generality, we assume that this motion is the uniform limit of a sequence {φ̂(·, x_{0n}, u_{n}(·), v_{n}(·))} of nth stage trajectories. Since u_{n}(·) = ū(·) on [τ₀, τ_{n}] = [0, *t*], we have x̄ := φ[*t*, x₀, Γ̂, Δ] ∈ E₀. Let *j* be the smallest integer such that x̄ ∈ *B*(x̄_{j}, δ(x̄_{j})) and let x̄_{(n)} := φ(*t*, x_{0n}, u_{n}(·), v_{n}(·)). Since x̄_{(n)} → x̄, for *n* large enough, x̄_{(n)} ∈ *B*(x̄_{j}, δ(x̄_{j})).

Let Δ^{c} be the constant component strategy corresponding to {v_{n}(·)}, with the associated sequence of partitions the same as that of Δ restricted to [*t*, ∞) and translated back to [0, ∞). Then, for large *n*, the pair (u_{n}(·), v_{n}(·)) restricted to [*t*, ∞) is the outcome of (Γ(x̄_{j}), Δ^{c}). Hence by (3.4) and the choice of δ(x̄_{j}), we obtain (3.5).

By arguing analogously, we can prove the next result.

**Lemma 3.5. ** *Assume *(A1)-(A3). *For* x₀ ∈ *H* *and* 0 < *t* < ∞,

*W*⁺(x₀) ≤ inf_{v̄(·) ∈ 𝒱} sup_{μ(·) ∈ ℳ} [ ψ⁰(*t*, x₀, μ(·), v̄(·)) + e^{–λt} *W*⁺(ψ(*t*, x₀, μ(·), v̄(·))) ]. (3.6)

The strict inequality can hold in (3.2) and (3.6). The following example illustrates this fact for (3.6).

**Example: ** *H* = ℝ, *U* = *V* = [–1, 1], *A* = 0, *f*(*x, u, v*) = *u + v*, *c*(*x, u, v*) = *c*(*x*), a bounded, nonnegative and Lipschitz continuous function.

Note that *W*(*x*) = *c*(*x*), since *H*⁺(*x, p*) = *H*⁻(*x, p*) = –*c*(*x*). Furthermore, take *c* such that *c*(*x*) = |*x*| on [–2, 2]. Let x₀ = 0. For 0 < *t* < 1, one obtains strict inequality in (3.6).

Using dynamic programming inequalities, we next show that the upper (resp. lower) value function is a viscosity sub- (resp. super) solution of the HJI lower (resp. upper) equation. The HJI lower and upper equations are, respectively, the following:

λ*W*(*x*) + ⟨*Ax*, *DW*(*x*)⟩ + *H*⁻(*x*, *DW*(*x*)) = 0, (3.7)

λ*W*(*x*) + ⟨*Ax*, *DW*(*x*)⟩ + *H*⁺(*x*, *DW*(*x*)) = 0, (3.8)

where for *x*, *p* ∈ *H*,

*H*⁻(*x*, *p*) = max_{v ∈ V} min_{u ∈ U} { ⟨−*p*, *f*(*x, u, v*)⟩ − *c*(*x, u, v*) },  *H*⁺(*x*, *p*) = min_{u ∈ U} max_{v ∈ V} { ⟨−*p*, *f*(*x, u, v*)⟩ − *c*(*x, u, v*) }.

We take the definition of viscosity solution given by Crandall and Lions in [5] and [6]. We first recall their definition of viscosity solution. To this end, let

S₀ := {Ψ ∈ *C*¹(*H*) | Ψ is weakly sequentially lower semi-continuous and *A*^{*}*D*Ψ ∈ *C*(*H*)},

𝒢₀ := {*g* ∈ *C*¹(*H*) | *g*(*x*) = ρ(||*x*||) for some ρ ∈ *C*¹(ℝ) with ρ′ > 0}.

**Definition 3.6. ** *An upper (resp. lower) semi-continuous function W* : *H* → ℝ *is called a viscosity sub- (resp. super) solution of* (3.7) (*resp.* (3.8)) *if whenever W* – Ψ – *g* (*resp. W* – Ψ + *g*) (Ψ ∈ S₀, *g* ∈ 𝒢₀) *has a local maximum (resp. minimum) at x* ∈ *H*, *we have*

λ*W*(*x*) + ⟨*x*, *A*^{*}*D*Ψ(*x*)⟩ + *H*⁻(*x*, *D*Ψ(*x*) + *Dg*(*x*)) ≤ 0

(*resp.*

λ*W*(*x*) + ⟨*x*, *A*^{*}*D*Ψ(*x*)⟩ + *H*⁺(*x*, *D*Ψ(*x*) − *Dg*(*x*)) ≥ 0).

*If W* ∈ *C*(*H*) *is both a viscosity subsolution and a viscosity supersolution of an equation, then we call it a viscosity solution.*

**Lemma 3.7. ** *Assume *(A1)-(A3). *The upper value function W*⁺ *is a viscosity subsolution of* (3.7) *and the lower value function W*⁻ *is a viscosity supersolution of* (3.8).

**Proof.** We prove that *W*⁺ is a viscosity subsolution of (3.7). The other part can be proved in a similar fashion. Let Ψ ∈ S₀, *g* ∈ 𝒢₀ and let x₀ be a local maximum of *W*⁺ – Ψ – *g*. Without any loss of generality we assume that *W*⁺(x₀) = Ψ(x₀) and *g*(x₀) = 0.

We need to show that

λ*W*⁺(x₀) + ⟨x₀, *A*^{*}*D*Ψ(x₀)⟩ + *H*⁻(x₀, *D*Ψ(x₀) + *Dg*(x₀)) ≤ 0. (3.9)

Fix an arbitrary v̄ ∈ *V*. It is enough to show that

λ*W*⁺(x₀) + ⟨x₀, *A*^{*}*D*Ψ(x₀)⟩ + min_{μ ∈ 𝒫(U)} { ⟨−*D*Ψ(x₀) − *Dg*(x₀), f̃(x₀, μ, δ_{v̄})⟩ − c̃(x₀, μ, δ_{v̄}) } ≤ 0.

Let ε > 0. By Lemma 3.5, for each *t* > 0, there exists μ_{t}(·) ∈ ℳ such that

*W*⁺(x₀) ≤ ψ⁰(*t*, x₀, μ_{t}(·), δ_{v̄}) + e^{–λt} *W*⁺(ψ_{t}(*t*)) + ε*t*, (3.10)

where we denote ψ(·, x₀, μ_{t}(·), δ_{v̄}) by ψ_{t}(·). Hence, for small enough *t*, using that x₀ is a local maximum of *W*⁺ – Ψ – *g* with *W*⁺(x₀) = Ψ(x₀) and *g*(x₀) = 0,

Ψ(x₀) ≤ ψ⁰(*t*, x₀, μ_{t}(·), δ_{v̄}) + e^{–λt} [Ψ(ψ_{t}(*t*)) + *g*(ψ_{t}(*t*))] + ε*t*. (3.11)

This implies that for *t* small enough,

(1 − e^{–λt}) Ψ(x₀) ≤ ψ⁰(*t*, x₀, μ_{t}(·), δ_{v̄}) + e^{–λt} [Ψ(ψ_{t}(*t*)) − Ψ(x₀) + *g*(ψ_{t}(*t*)) − *g*(x₀)] + ε*t*. (3.12)

It can be shown that (see e.g., Lemmas 3.3, 3.4 in pp. 240-241, [10]) for *t* small enough,

Ψ(ψ_{t}(*t*)) − Ψ(x₀) ≤ ∫₀^{t} [ −⟨ψ_{t}(*s*), *A*^{*}*D*Ψ(ψ_{t}(*s*))⟩ + ⟨*D*Ψ(ψ_{t}(*s*)), f̃(ψ_{t}(*s*), μ_{t}(*s*), δ_{v̄})⟩ ] *ds* + ε*t*, (3.13)

and similarly for *g*. Combining (3.10), (3.11), (3.12), (3.13) and letting *t* → 0, we get, for some μ̄ ∈ 𝒫(*U*),

λΨ(x₀) + ⟨x₀, *A*^{*}*D*Ψ(x₀)⟩ + ⟨−*D*Ψ(x₀) − *Dg*(x₀), f̃(x₀, μ̄, δ_{v̄})⟩ − c̃(x₀, μ̄, δ_{v̄}) ≤ 2ε.

Since v̄ ∈ *V* and ε > 0 are arbitrary, we get the required inequality (3.9).

We next show the existence of value and characterize it as the unique viscosity solution of the associated HJI equation. To achieve this we make the following assumption.

**(A0) ** There exists a positive symmetric linear operator *B* : *H* → *H* and a constant c₀ such that *R*(*B*) ⊂ *D*(*A*^{*}) and (*A*^{*} + c₀*I*)*B* ≥ *I*.

Let |*x*|_{B}² = ⟨*Bx*, *x*⟩ and let ℱ denote the class of all bounded functions *W* : *H* → ℝ with the property that for all *x*, *y* ∈ *H*, |*W*(*x*) – *W*(*y*)| ≤ *w*(|*x – y*|_{B}) for some modulus *w*. We shall prove the characterization in this class ℱ. Note that the class ℱ is contained in the class of bounded uniformly continuous functions.

We also require the so-called Isaacs min-max condition. By the 'local game' at (x̂, p̂) ∈ (ℝ × *H*) × (ℝ × *H*), with x̂ = (x⁰, *x*) and p̂ = (p⁰, *p*), we mean the zero-sum static game, in which player 1 is the minimizer and player 2 the maximizer, with payoff x⁰ + ⟨−*p*, *f*(*x, u, v*)⟩ − p⁰*c*(*x, u, v*). The Isaacs condition is that for each (x̂, p̂), the associated local game has a saddle point. In other words, we assume that

**(A4)** For all (x̂, p̂) ∈ (ℝ × *H*) × (ℝ × *H*),

min_{u ∈ U} max_{v ∈ V} { ⟨−*p*, *f*(*x, u, v*)⟩ − p⁰*c*(*x, u, v*) } = max_{v ∈ V} min_{u ∈ U} { ⟨−*p*, *f*(*x, u, v*)⟩ − p⁰*c*(*x, u, v*) }.

**Remark 3.8. ** In (A4) it is enough to take p⁰ = ±1. For proving the existence of value, we only need (A4) with p⁰ = +1. But in the next section we want p⁰ = ±1.
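A finite sanity check (our sketch, with hypothetical helper names): for the bilinear example of this paper, *f*(*x, u, v*) = *u + v* with *c* independent of (*u, v*), the local game reduces to the matrix game with entries −*p*(u_{i} + v_{j}), and a grid search confirms that the min-max and max-min values coincide, i.e. (A4) holds there:

```python
# Grid check of the Isaacs condition for the local game of the example:
# payoff <-p, f(x,u,v)> - p0*c(x) with f(x,u,v) = u + v; the c-term is constant
# in (u, v), so it cannot affect the min/max order and is dropped here.
import numpy as np

def local_game_matrix(p, grid):
    # rows: player 1's u (minimizer); columns: player 2's v (maximizer)
    return np.array([[-p * (u + v) for v in grid] for u in grid])

def minmax_maxmin(G):
    minmax = G.max(axis=1).min()   # min over u of max over v
    maxmin = G.min(axis=0).max()   # max over v of min over u
    return minmax, maxmin

grid = np.linspace(-1.0, 1.0, 21)      # discretized U = V = [-1, 1]
for p in (0.7, -1.3):
    lo, hi = minmax_maxmin(local_game_matrix(p, grid))
    assert abs(lo - hi) < 1e-12        # min-max equals max-min on the grid
print("Isaacs min-max condition verified on the grid")
```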

**Theorem 3.9. ** *Assume *(A0)-(A4). *The differential game has a value, and this value function is the unique viscosity solution of *(3.7) (*or* (3.8)) *in the class* ℱ.

**Proof. ** We first show that *W*⁺ and *W*⁻ belong to ℱ. Boundedness of *W*⁺ and *W*⁻ has been proved in Lemma 3.3.

Let x₀, y₀, *u*(·), *v*(·), φ₁(·), φ₂(·) be as in Lemma 3.1, and write φ₁⁰(·), φ₂⁰(·) for the corresponding augmented components. Let ε > 0. Since *c* is bounded, there exists *T* = *T*(ε) large enough such that

|φ₁⁰(∞) − φ₁⁰(*T*)| + |φ₂⁰(∞) − φ₂⁰(*T*)| < ε/2.

It can be shown that (see e.g., Lemma 2.5, p. 233 in [10])

||φ₁(*t*) − φ₂(*t*)|| ≤ *C*e^{Ct} |x₀ – y₀|_{B}

for some constant *C*. Therefore, we obtain

|φ₁⁰(∞) − φ₂⁰(∞)| < ε/2 + *KCT*e^{CT} |x₀ – y₀|_{B}.

This implies that we can choose δ = δ(ε) > 0 such that |φ₁⁰(∞) − φ₂⁰(∞)| < ε whenever |x₀ – y₀|_{B} < δ. Hence there is a modulus *w* with the property that |φ₁⁰(∞) − φ₂⁰(∞)| ≤ *w*(|x₀ – y₀|_{B}). Now we can mimic the arguments in Lemmas 3.2 and 3.3 to get the fact that the upper and lower value functions are in the class ℱ.

Under (A4), the equations (3.7) and (3.8) coincide. Therefore *W*⁺ and *W*⁻ are respectively sub- and supersolutions of this equation (the HJI equation). Now, we have the comparison result for the HJI equation in the class ℱ (see [6] and Chapter 6 in [10]). Therefore *W*⁺ ≤ *W*⁻. But we always have *W*⁺ ≥ *W*⁻. Hence *W*⁺ = *W*⁻. The uniqueness follows from the same comparison result.

**4 Saddle point **

Under the Isaacs condition (A4), we prove the existence of a saddle point for the game. To achieve this we use only the dynamic programming inequalities in Section 3. We don't use the fact that the game has a value.

Fix an arbitrary x₀ ∈ *H*. Let r₀ = *W*⁻(x₀) and r⁰ = *W*⁺(x₀). Consider the sets

*C*(r₀) = {(*t*, x̂) ∈ [0, ∞) × (ℝ × *H*) : x⁰ + e^{–λt} *W*⁻(*x*) ≤ r₀},

*C*(r⁰) = {(*t*, x̂) ∈ [0, ∞) × (ℝ × *H*) : x⁰ + e^{–λt} *W*⁺(*x*) ≥ r⁰}.

Clearly (0, x̂₀) ∈ *C*(r₀) ∩ *C*(r⁰), where x̂₀ = (0, x₀), and, by the continuity of *W*⁺ and *W*⁻, the sets *C*(r₀) and *C*(r⁰) are closed.

The next two results are very crucial in constructing the optimal strategies. These results follow respectively from Lemmas 3.4 and 3.5.

**Lemma 4.1. ** *Assume *(A1)-(A3).* Let *(*t*, x̂) ∈ *C*(r₀)* and *δ > 0. *Then for any u*(·) ∈ 𝒰[*t*, *t* + δ]*, there exists *ν(·) ∈ 𝒩[*t*, *t* + δ]* such that *(*t* + δ, ψ̂(*t* + δ, *t*, x̂, *u*(·), ν(·))) ∈ *C*(r₀).

**Proof. ** Suppose that the result is not true. Then there exist (*t*, x̂) ∈ *C*(r₀), δ > 0 and *u*(·) ∈ 𝒰[*t*, *t* + δ] such that for all ν(·) ∈ 𝒩[*t*, *t* + δ], (*t* + δ, ψ̂(*t* + δ, *t*, x̂, *u*(·), ν(·))) ∉ *C*(r₀).

That is, for all ν(·) ∈ 𝒩[*t*, *t* + δ],

ψ⁰(*t* + δ, *t*, x̂, *u*(·), ν(·)) + e^{–λ(t+δ)} *W*⁻(ψ(*t* + δ, *t*, x̂, *u*(·), ν(·))) > r₀.

This together with Lemma 2.2 (ii) implies that

inf_{ν(·)} [ ψ⁰(*t* + δ, *t*, x̂, *u*(·), ν(·)) + e^{–λ(t+δ)} *W*⁻(ψ(*t* + δ, *t*, x̂, *u*(·), ν(·))) ] > r₀.

Applying Lemma 3.4, we obtain

x⁰ + e^{–λt} *W*⁻(*x*) > r₀.

This contradicts the fact that (*t*, x̂) ∈ *C*(r₀).

In an analogous manner, by using Lemmas 2.2 (ii) and 3.5, we can establish the next result.

**Lemma 4.2. ** *Assume *(A1)-(A3).* Let *(*t*, x̂) ∈ *C*(r⁰)* and *δ > 0. *Then for any v*(·) ∈ 𝒱[*t*, *t* + δ]*, there exists *μ(·) ∈ ℳ[*t*, *t* + δ]* such that *(*t* + δ, ψ̂(*t* + δ, *t*, x̂, μ(·), *v*(·))) ∈ *C*(r⁰).

We now define extremal strategies and, using Lemmas 4.1 and 4.2, show that they constitute a saddle point. Any sequence *F* = {F_{n}}, F_{n} : [0, ∞) × ([0, ∞) × *H*) → *U*, defines a strategy Γ = Γ(*F*) for player 1 in the following way. We take the nth stage partition π_{n} = {0 = t₀ < t₁ < ⋯} to be the one which divides [0, ∞) into subintervals of length 1/*n*. Set Γ_{n,1} ≡ F_{n}(0, x̂₀). Let *j* ≥ 2, (*u*(·), *v*(·)) ∈ 𝒰[t₀, t_{j−1}) × 𝒱[t₀, t_{j−1}), and φ̂(·) = φ̂(·, x̂₀, *u*(·), *v*(·)). We define Γ_{n,j}(*u*(·), *v*(·)) = F_{n}(t_{j−1}, φ̂(t_{j−1})).

For any sequence *G* = {G_{n}}, where G_{n} : [0, ∞) × ([0, ∞) × *H*) → *V*, a strategy Δ = Δ(*G*) for player 2 is defined in an analogous manner. The strategies Γ(*F*) and Δ(*G*) are referred to as feedback strategies associated with *F* and *G* respectively. The extremal strategies Γ_{e} and Δ_{e} which we define now are of this feedback form. That is, Γ_{e} = Γ(*F*_{e}) and Δ_{e} = Δ(*G*_{e}). We define the sequence *G*_{e} = {G_{en}}; the definition of *F*_{e} = {F_{en}} is similar.

Let (*t*, x̂) ∈ [0, ∞) × ([0, ∞) × *H*). If (*t*, x̂) ∈ *C*(r₀), then we define G_{en}(*t*, x̂) to be a fixed element v₀ ∈ *V*. Let (*t*, x̂) ∉ *C*(r₀). Let C_{t}(r₀) = {ŷ : (*t*, ŷ) ∈ *C*(r₀)} and let ȳ ∈ C_{t}(r₀) be such that ||x̂ − ȳ|| ≤ d(x̂, C_{t}(r₀)). We then define G_{en}(*t*, x̂) to be v_{*}, where (u_{*}, v_{*}) is a saddle point for the local game at (ȳ, x̂ − ȳ).

The next result compares trajectories governed by two special pairs of controls. The proof may be obtained by modifying the proof of the analogous finite dimensional result in [9].

**Lemma 4.3. ** *Assume *(A1)-(A4). *Let* x̂, ŷ *belong to a bounded subset M of* [0, ∞) × *H*, τ ∈ [0, *T*), μ(·) ∈ ℳ[τ, *T*] *and* ν(·) ∈ 𝒩[τ, *T*]. *Let* (u_{*}, v_{*}) *be a saddle point for the local game at* (ŷ, x̂ − ŷ). *Let* ψ̂₁(·) = ψ̂(·, τ, x̂, μ(·), v_{*}), ψ̂₂(·) = ψ̂(·, τ, ŷ, u_{*}, ν(·)) *and* ζ̂(·) = ψ̂₁(·) − ψ̂₂(·). *Then there exist a modulus* ω̃ *and* β > 0*, depending only on M and T, such that for* 0 ≤ δ ≤ *T* − τ,

||ζ̂(τ + δ)||² ≤ (1 + βδ) ||ζ̂(τ)||² + δ ω̃(δ).

Using Lemmas 4.1, 4.2 and 4.3, we now establish the optimality of (Γ_{e}, Δ_{e}).

**Lemma 4.4. ** *Assume *(A1)-(A4). *Let* Γ *be any strategy for player 1 and let* φ̂[·] = φ̂[·, x₀, Γ, Δ_{e}] *be a motion corresponding to* (Γ, Δ_{e}). *Then for all t* > 0, (*t*, φ̂[*t*]) ∈ *C*(r₀).

**Proof. ** Let ê(*t*) = dist((*t*, φ̂[*t*]), *C*(r₀)). Without any loss of generality, let φ̂[·] be the local uniform limit of the sequence {φ̂(·, x_{0n}, u_{n}(·), v_{n}(·))} of nth stage trajectories. Let φ̂_{n}(·) = φ̂(·, x_{0n}, u_{n}(·), v_{n}(·)) and ê_{n}(*t*) = dist((*t*, φ̂_{n}(*t*)), *C*(r₀)). Clearly for each *t*, ê_{n}(*t*) → ê(*t*) as *n* → ∞. Therefore it suffices to show that for all *t* > 0, lim_{n→∞} ê_{n}(*t*) = 0. Fix *t* > 0 and an integer *N* > *t*. We now estimate ê_{n}(*t*).

Let σ_{n} = {0 = τ_{n,0} < τ_{n,1} < ⋯ < τ_{n,Nn} = *N* < ⋯} be the nth stage partition associated with Δ_{e}. Let *t* ∈ (τ_{n,j}, τ_{n,j+1}], 0 ≤ *j* ≤ *Nn* – 1. Choose ŷ ∈ C_{τ_{n,j}}(r₀) such that ||φ̂_{n}(τ_{n,j}) – ŷ|| ≤ ê_{n}(τ_{n,j}). Let (u_{*}, v_{*}) be a saddle point for the local game at (ŷ, φ̂_{n}(τ_{n,j}) – ŷ). Now by Lemma 4.1, there exists ν(·) ∈ 𝒩[τ_{n,j}, *t*] such that the relaxed trajectory ψ̂(·) = ψ̂(·, τ_{n,j}, ŷ, u_{*}, ν(·)) has the property that (*t*, ψ̂(*t*)) ∈ *C*(r₀). Therefore

ê_{n}(*t*) ≤ ||φ̂_{n}(*t*) – ψ̂(*t*)||.

Applying Lemma 4.3, we get

||φ̂_{n}(*t*) – ψ̂(*t*)||² ≤ (1 + β(*t* – τ_{n,j})) ê_{n}(τ_{n,j})² + (*t* – τ_{n,j}) ω̃(*t* – τ_{n,j}).

Therefore

ê_{n}(*t*)² ≤ (1 + β(*t* – τ_{n,j})) ê_{n}(τ_{n,j})² + (*t* – τ_{n,j}) ω̃(*t* – τ_{n,j}).

Iterating this estimate over the partition points of σ_{n} in [0, *t*] gives ê_{n}(*t*)² ≤ e^{βt} [ê_{n}(0)² + *t* ω̃(||σ_{n}||)]. Since ê_{n}(0) → 0 and ||σ_{n}|| → 0, letting *n* → ∞ we get the desired result.

Similarly, we can prove the next result.

**Lemma 4.5. ** *Assume *(A1)-(A4). *Let* Δ *be any strategy for player 2 and let* φ̂[·] = φ̂[·, x₀, Γ_{e}, Δ] *be a motion corresponding to* (Γ_{e}, Δ). *Then for all t* > 0, (*t*, φ̂[*t*]) ∈ *C*(r⁰).

Now we can show that the pair of strategies (Γ_{e}, Δ_{e}) constitutes a saddle point equilibrium for the game.

**Theorem 4.6. ** *Assume *(A1)-(A4). *The pair* (Γ_{e}, Δ_{e}) *is a saddle point for the game with initial point* x₀.

**Proof. ** From Lemmas 4.4 and 4.5, it follows that for any (Γ, Δ) and motions φ̂[·, x₀, Γ, Δ_{e}], φ̂[·, x₀, Γ_{e}, Δ], we have

φ⁰[*t*, x₀, Γ, Δ_{e}] + e^{–λt} *W*⁻(φ[*t*, x₀, Γ, Δ_{e}]) ≤ r₀,  φ⁰[*t*, x₀, Γ_{e}, Δ] + e^{–λt} *W*⁺(φ[*t*, x₀, Γ_{e}, Δ]) ≥ r⁰.

This holds for all *t* > 0. Letting *t* → ∞ and using the boundedness of *W*⁻ and *W*⁺, we get

φ⁰[∞, x₀, Γ, Δ_{e}] ≤ r₀ and φ⁰[∞, x₀, Γ_{e}, Δ] ≥ r⁰.

Hence we obtain

*P*(x₀, Γ, Δ_{e}) ≤ r₀ ≤ r⁰ ≤ *P*(x₀, Γ_{e}, Δ).

In particular, taking Γ = Γ_{e} and Δ = Δ_{e} shows that *P*(x₀, Γ_{e}, Δ_{e}) is squeezed between r⁰ and r₀, so r₀ = r⁰ = *P*(x₀, Γ_{e}, Δ_{e}). The required result now follows.

**5 Conclusions **

We have extended Berkovitz's framework to study infinite horizon discounted problems. In this setup, following a dynamic programming approach, we have shown that the two-player zero-sum infinite dimensional differential game on the infinite horizon with discounted payoff has a value. This value function is then characterized as the unique viscosity solution of the associated HJI equation. This has been achieved by using the notion of viscosity solution proposed by Crandall and Lions in [5] and [6]. By using our dynamic programming inequalities and mimicking the arguments in [8], without using (A0), we can also characterize the value function in the class of bounded uniformly continuous functions by taking the definition of viscosity solution as in [7], which is a refinement of Tataru's notion (see [11] and [12]). In the Elliott-Kalton framework, this has been established by Kocan et al. [8] under more general assumptions on *A*.

**6 Acknowledgements **

The author wishes to thank M.K. Ghosh for suggesting the problem and for useful discussions. The author is grateful to an anonymous referee for important comments.

**REFERENCES**

[1] L.D. Berkovitz, *The existence of value and saddle point in games of fixed duration*, SIAM J. Control Optim., **23** (1985), 173-196. Errata and addendum, ibid., **26** (1988), 740-742.

[2] L.D. Berkovitz, *Differential games of generalized pursuit and evasion*, SIAM J. Control Optim., **24** (1986), 361-373.

[3] L.D. Berkovitz, *Differential games of survival*, J. Math. Anal. Appl., **129** (1988), 493-504.

[4] L.D. Berkovitz, *Characterizations of the values of differential games*, Appl. Math. Optim., **17** (1988), 177-183.

[5] M.G. Crandall and P.L. Lions, *Viscosity solutions of Hamilton-Jacobi equations in infinite dimensions*, Part IV, J. Func. Anal., **90** (1990), 237-283.

[6] M.G. Crandall and P.L. Lions, *Viscosity solutions of Hamilton-Jacobi equations in infinite dimensions*, Part V, J. Func. Anal., **97** (1991), 417-465.

[7] M.G. Crandall and P.L. Lions, *Viscosity solutions of Hamilton-Jacobi equations in infinite dimensions*, Part VI, 'Evolution Equations, Control Theory and Biomathematics', Lecture Notes in Pure and Appl. Math., **155** (1994), Dekker, New York, 51-89.

[8] M. Kocan, P. Soravia and A. Swiech, *On differential games for infinite-dimensional systems with nonlinear unbounded operators*, J. Math. Anal. Appl., **211** (1997), 395-423.

[9] N.N. Krasovskii and A.I. Subbotin, *Game-Theoretical Control Problems*, Springer-Verlag, (1988).

[10] X. Li and J. Yong, *Optimal Control Theory for Infinite Dimensional Systems*, Birkhauser, (1995).

[11] D. Tataru, *Viscosity solutions for Hamilton-Jacobi equations*, J. Math. Anal. Appl., **163** (1992), 345-392.

[12] D. Tataru, *Viscosity solutions for Hamilton-Jacobi equations with unbounded nonlinear terms: A simplified approach*, J. Differential Equations, **111** (1994), 123-146.

[13] J. Warga, *Optimal Control of Differential and Functional Equations*, Academic Press, (1972).

Received: 04/II/03.

Accepted: 09/VI/03.

#563/03.

*The author is a CSIR research fellow and the financial support from CSIR is gratefully acknowledged.