
## Computational & Applied Mathematics

*On-line version* ISSN 1807-0302

### Comput. Appl. Math. vol.28 no.2 São Carlos 2009

#### http://dx.doi.org/10.1590/S1807-03022009000200006

**Lavrentiev-prox-regularization for optimal control of PDEs with state constraints**

**Martin Gugat **

Lehrstuhl 2 für Angewandte Mathematik, Martensstr. 3, 91058 Erlangen, Germany, E-mail: gugat@am.uni-erlangen.de

**ABSTRACT**

A Lavrentiev prox-regularization method for optimal control problems with pointwise state constraints is introduced where both the objective function and the constraints are regularized. The convergence of the controls generated by the iterative Lavrentiev prox-regularization algorithm is studied. For a sequence of regularization parameters that converges to zero, strong convergence of the generated control sequence to the optimal control is proved. Due to the prox character of the proposed regularization, the feasibility of the iterates for a given parameter can be improved compared with the non-prox Lavrentiev regularization.

**Mathematical subject classification: **49J20, 49M37.

**Key words: **optimal control, pointwise state constraints, prox regularization, Lavrentiev regularization, PDE-constrained optimization, convergence, feasibility.

**1 Introduction **

In the applications modelled by optimal control problems, pointwise state constraints are important since practical considerations often require certain restrictions on the state. Unfortunately, for problems of PDE-constrained optimal control with state constraints, in general the corresponding multipliers are not contained in a function space but only given as measures (see [1]). In order to obtain regular multipliers, the Lavrentiev regularization has been introduced, which transforms the pure state constraint into a mixed state-control constraint. This method is studied for example in [5, 7, 8, 9, 11] and in the references cited there. We do not claim to give a complete list of references on this subject, but want to mention in particular the paper [12], where problems of optimal boundary control are studied, and the references therein. Due to the regularization, for the regularized auxiliary problems, which are control problems with mixed pointwise control-state constraints, multipliers with *L*^{2}-regularity exist, see [13].

In the (non-prox) Lavrentiev regularization there is a single real-valued regularization parameter λ > 0. For each parameter λ, an auxiliary problem with a mixed state-control constraint is defined. To obtain convergence, this Lavrentiev regularization parameter λ must converge to 0^{+}. However, as λ decreases, the problems become more and more difficult to solve. Moreover, for each fixed λ > 0 the generated controls are in general infeasible for the original problem.

In this paper we introduce a Lavrentiev prox-regularization method where for a given parameter value λ, the feasibility is improved. In our regularization, apart from the real-valued regularization parameter λ, a control function appears as a second regularization parameter in the state constraints. If the zero control is chosen, the non-prox Lavrentiev regularization is obtained. During the algorithm, this control parameter is updated iteratively. Moreover, in our method a regularization parameter ε ≥ 0 also appears in the objective function, in the same way as in the classical prox-regularization algorithm (see for example [10, 4]). We show that for a sequence of regularization parameters (λ_{k}, ε_{k}) converging to zero, the new algorithm, where the control regularization parameter is updated iteratively, generates a sequence of controls that converges with respect to the *L*^{2}-norm to the optimal control.

We start by considering the elliptic optimal control problem with pointwise state constraints and pointwise control constraints (section 2) and the corresponding Lavrentiev prox-regularization (section 3). Then we turn to the elliptic optimal control problem with pointwise state constraints only (section 4) and the Lavrentiev prox-regularization (section 5) for this problem.

At the end of the paper we present examples where we compare the convergence of the Lavrentiev prox-regularization method with the non-prox Lavrentiev regularization. We give two numerical examples where the Lavrentiev prox iteration converges faster than the non-prox Lavrentiev regularization.
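The difference between the two iterations can be previewed with a scalar toy model (entirely hypothetical, not one of the paper's examples): take *K*(*u*) = (1/2)(*u* - *d*)^{2}, a scalar stand-in *S u* = *s u* for the solution operator, and the single constraint *S u* ≤ *y*_{b}. Both regularized problems then have closed-form solutions.

```python
def solve_regularized(d, s, y_b, lam, eps, center):
    # minimize 0.5*(u - d)**2 + 0.5*eps*(u - center)**2
    # subject to the regularized constraint s*u + lam*(u - center) <= y_b
    bound = (y_b + lam * center) / (s + lam)  # feasible set: u <= bound
    u_unc = (d + eps * center) / (1.0 + eps)  # unconstrained minimizer
    return min(u_unc, bound)

d, s, y_b = 2.0, 1.0, 1.0   # the unconstrained optimum d violates s*u <= y_b
u_star = y_b / s            # optimal control of the toy problem
lam = 0.5                   # a fixed regularization parameter

# non-prox Lavrentiev: the regularization center stays at 0 in every step
u_nonprox = solve_regularized(d, s, y_b, lam, 0.0, 0.0)

# prox variant: the center is the previous iterate
u = 0.0
for _ in range(30):
    u = solve_regularized(d, s, y_b, lam, 0.0, u)
```

In this toy setting the non-prox solution stays at *y*_{b}/(*s* + λ) ≈ 0.667 for the fixed λ, while the prox iterates approach *u*_{*} = 1; this mirrors the improved feasibility of the prox variant discussed below.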

**2 The Elliptic Problem with pointwise control constraints**

In this section we introduce an elliptic optimal control problem with state constraints and *L*^{∞}(Ω)-control constraints.

Let *N* ∈ {2, 3, 4, ...} and Ω ⊂ *R*^{N} be a bounded domain with *C*^{0,1} boundary Γ. Let a desired state *y*_{d} ∈ *L*^{∞}(Ω) be given. Let a real number κ > 0 be given. Define the objective function

*J*(*y*, *u*) = (1/2) ∫_{Ω} (*y* - *y*_{d})^{2} *dx* + (κ/2) ∫_{Ω} *u*^{2} *dx*.

In addition, let control bounds *u*_{a}, *u*_{b} ∈ *L*^{∞}(Ω) be given such that *u*_{a} ≤ *u*_{b} on Ω. Let state bounds *y*_{a}, *y*_{b} ∈ *L*^{∞}(Ω) be given such that *y*_{a} < *y*_{b} almost everywhere on Ω. Let ∂_{n} denote the normal derivative with respect to the outward unit normal vector. As in [2], let *A* be an elliptic differential operator of the form

*A y* = - ∑_{i,j=1}^{N} ∂_{x_{j}}(*a*_{ij}(*x*) ∂_{x_{i}} *y*) + *a*_{0}(*x*) *y*,

where the coefficients *a*_{ij} belong to *C*(Ω̄) and satisfy the inequality

*m* |ξ|^{2} ≤ ∑_{i,j=1}^{N} *a*_{ij}(*x*) ξ_{i} ξ_{j} ≤ *M* |ξ|^{2}

for all ξ ∈ *R*^{N} and for all *x* ∈ Ω for some *M* > 0, *m* > 0, and *a*_{0} ∈ *L*^{r}(Ω) is not identically zero with *r* ≥ *Np*/(*N* + *p*) for some fixed *p* > *N*, *a*_{0} ≥ 0 in Ω.

Define the following elliptic optimal control problem with distributed control, pointwise state constraints and pointwise control constraints:
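In the notation above, **Q** can be sketched as follows (the homogeneous Neumann boundary condition and the exact form are assumptions, following the setting of [5]):

```latex
(\mathbf{Q})\qquad
\min_{y,u}\ J(y,u)
\quad\text{subject to}\quad
\left\{
\begin{aligned}
A\,y &= u && \text{in } \Omega,\\
\partial_n y &= 0 && \text{on } \Gamma,\\
y_a \le y &\le y_b && \text{in } \Omega,\\
u_a \le u &\le u_b && \text{in } \Omega.
\end{aligned}
\right.
```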

Note that for a solution *u*_{*} of **Q**, we have *u*_{*} ∈ *L*^{∞}(Ω).

As in [5], the notation *G* is used for the control-to-state map that gives the state *y* as a function of the control *u*, *G* : *L*^{2}(Ω) → *H*^{1}(Ω). The notation *S* is used for the control-to-state map as an operator *L*^{2}(Ω) → *L*^{2}(Ω), which is the composition of *G* and the suitable embedding operator.

**3 Lavrentiev Prox Regularization**

For *u* ∈ *L*^{2}(Ω) define *K*(*u*) = *J*(*G*(*u*), *u*). Let the Lavrentiev regularization parameter λ > 0, the prox-regularization parameter ε ≥ 0 and ν ∈ *L*^{∞}(Ω) be given. We consider the regularized problem
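A sketch of the regularized problem, reconstructed from the mixed constraint *G*(ν_{k}) + λ_{k}(ν_{k} - *u*_{k}) ≥ *y*_{a} used in the proof of Theorem 3.2 (the precise form is an assumption):

```latex
(\mathbf{Q}_{\lambda,\varepsilon,\nu})\qquad
\min_{u}\ K(u) + \frac{\varepsilon}{2}\,\|u-\nu\|_{L^2(\Omega)}^2
\quad\text{subject to}\quad
y_a \le G(u) + \lambda\,(u-\nu) \le y_b,
\quad u_a \le u \le u_b.
```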

Let ν_{*} denote the optimal value of **Q**, and ν(λ, ε, ν) denote the optimal value of **Q**_{λ,ε,ν}. Let *F*_{*} denote the admissible set of **Q** and *F*(λ, ε, ν) denote the admissible set of **Q**_{λ,ε,ν}.

If ν ∈ *F*_{*} is a solution of **Q**_{λ,ε,ν}, then

.

Moreover, if *u*_{*} is the solution of **Q **we have

.

We consider the following

**Lavrentiev Prox-Regularization Algorithm: **
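The iteration can be summarized as a sketch (inferred from the update rule described in the text; the stopping criterion is left open):

```latex
\begin{aligned}
&\text{Step 0: choose } u_1 \in L^\infty(\Omega),\ \text{e.g. } u_1 = 0.\\
&\text{Step } k\ (k = 1, 2, \ldots):\ \text{choose } \lambda_k > 0 \text{ and } \varepsilon_k \ge 0,\\
&\qquad\text{and let } u_{k+1} \text{ be the solution of } \mathbf{Q}_{\lambda_k,\varepsilon_k,u_k}.
\end{aligned}
```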

For the convenience of the reader, we also describe the (non-prox) Lavrentiev regularization algorithm that has been considered in the literature, for example in [5, 7, 8, 9, 11]:

**(non-prox) Lavrentiev Regularization Algorithm: **

*3.1 Uniform boundedness of the feasible sets of* **Q**_{λ,ε,u}

Due to the pointwise control constraints, the feasible points of **Q**_{λ,ε,u} are uniformly bounded in *L*^{∞}(Ω):

**Lemma 3.1.** *Let u* ∈ *F*(λ, ε, ν) *be a feasible point of* **Q**_{λ,ε,ν}*. Then*

*3.2 Well-definedness and convergence of the generated sequence*

In this section we study the convergence of the solutions (*u*_{k})_{k} for *k* → ∞.

**Theorem 3.2.** *Assume that there exists a Slater control* ū *and* δ > 0 *such that*

*u*_{a} ≤ ū ≤ *u*_{b} *and* *y*_{a} + δ ≤ *G*(ū) ≤ *y*_{b} - δ *on* Ω.

*Define M as the uniform bound for the feasible controls from Lemma* 3.1*. Let p* ∈ (0, 1)*. Assume that in each step,* λ_{k} *is chosen such that* λ_{k} ≤ 1 *and* λ_{k}^{1-p} ≤ δ/(2*M*)*. Then* **Q** *has a solution, the Lavrentiev prox-regularization algorithm is well-defined, and if* λ_{k} → 0 *and* ε_{k} → 0 *we have*

lim_{k→∞} ||*u*_{k} - *u*_{*}||_{*L*^{2}(Ω)} = 0.

*Moreover, there exists a constant C*_{10} > 0 *such that for all k the estimate* (4) *holds.*

*For a real number z we use the notation z*^{+} = (*z* + |*z*|)/2*. Hence we have z*^{+} = max{*z*, 0}*. For the constraint violation we have the upper bound*

||(*G*(*u*_{k+1}) - *y*_{b})^{+}||_{*L*^{2}(Ω)}^{2} + ||(*y*_{a} - *G*(*u*_{k+1}))^{+}||_{*L*^{2}(Ω)}^{2} ≤ λ_{k}^{2} ||*u*_{k+1} - *u*_{k}||_{*L*^{2}(Ω)}^{2}.
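The positive-part notation *z*^{+} can be checked with a two-line sketch:

```python
def pos_part(z):
    # z+ = (z + |z|) / 2 = max(z, 0)
    return (z + abs(z)) / 2

values = [pos_part(z) for z in (3.0, -2.0, 0.0)]
```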

**Proof.** First we show the existence of a solution of **Q**. Since ū is feasible for **Q**, we have ν_{*} < ∞. Let (*m*_{k})_{k} denote a minimizing sequence for **Q**, that is, the points *m*_{k} ∈ *L*^{2}(Ω) are feasible for **Q** and lim_{k→∞} *K*(*m*_{k}) = ν_{*}. Since the sequence (*m*_{k})_{k} is bounded in *L*^{∞}(Ω), we can choose a subsequence that converges weakly^{*} in *L*^{∞}(Ω) to a limit point ũ ∈ *L*^{∞}(Ω). Then this subsequence converges also weakly in *L*^{2}(Ω) to ũ. Since the subsequence converges weakly in *L*^{2}(Ω) and *K* is convex and continuous, hence weakly lower semicontinuous, we have *K*(ũ) ≤ ν_{*}. Moreover, the weak^{*} convergence in *L*^{∞}(Ω) implies that ũ is feasible for **Q**. Hence ũ is a solution of **Q**. Due to the strong convexity of the objective function, this solution is uniquely determined.

Now we consider the sequence (*u*_{k}) generated by the Lavrentiev prox-regularization algorithm. Due to the control constraints, this sequence is bounded. Choose *p* ∈ (0, 1). Define τ_{k} = λ_{k}^{p} and the function ν_{k} = (1 - τ_{k}) *u*_{*} + τ_{k} ū, where *u*_{*} denotes the solution of **Q**. Then *u*_{a} ≤ ν_{k} ≤ *u*_{b} and we have

*G*(ν_{k}) + λ_{k}(ν_{k} - *u*_{k}) ≤ *y*_{b}.

On the other hand, we have *G*(ν_{k}) + λ_{k}(ν_{k} - *u*_{k}) ≥ *y*_{a}. Hence ν_{k} ∈ *F*(λ_{k}, ε_{k}, *u*_{k}). This implies that the iteration is well defined.

_{k}Now we assume that the sequences (λ_{k })_{k }and (ε_{k})_{k }converge to zero. Then we have τ_{k} 0 and thus . Thus we have

Let ũ ∈ *L*^{∞}(Ω) denote a weak^{*} limit point of the sequence (*u*_{k})_{k}. Then ũ ∈ *F*_{*} and we have

.

Since ũ ∈ *F*_{*}, the uniqueness of the solution of **Q** implies ũ = *u*_{*}. Hence the sequence (*u*_{k})_{k} converges weakly^{*} to *u*_{*}. This implies the equation

Note that the convergence of *S*(*u*_{k}) to *S*(*u*_{*}) is also an immediate consequence of the compactness of the solution operator *S* and the weak convergence of *u*_{k} to *u*_{*} with respect to the *L*^{2}(Ω) topology.

The weak convergence of *u*_{k} to *u*_{*} in *L*^{2}(Ω) and the convergence of the norms imply the strong convergence *u*_{k} → *u*_{*} in *L*^{2}(Ω).

There exists a Lipschitz constant *C* > 0 such that for all points ν_{1}, ν_{2} ∈ *L*^{∞}(Ω) with ||ν_{1}||_{*L*^{∞}(Ω)} ≤ *M* and ||ν_{2}||_{*L*^{∞}(Ω)} ≤ *M*, respectively, we have *K*(ν_{1}) ≤ *K*(ν_{2}) + *C* ||ν_{1} - ν_{2}||_{*L*^{2}(Ω)}. Hence we have

*K*(ν_{k}) ≤ *K*(*u*_{*}) + 2*MC* √(µ(Ω)) τ_{k},

where µ(Ω) = ∫_{Ω} 1 *dx*. For all *k* > 1, the point ν_{k} is in *F*_{*}. Hence

ν_{*} ≤ *K*(ν_{k}).

Define *C*_{10} = 2*MC* √(µ(Ω)). Then (4) follows.

To obtain the bound for the constraint violation we have used the fact that the lower and the upper state bound cannot be violated simultaneously; hence for all controls *u* the sets *M*_{1} = {*x* ∈ Ω : (*G*(*u*) - *y*_{b})(*x*) > 0} and *M*_{2} = {*x* ∈ Ω : (*y*_{a} - *G*(*u*))(*x*) > 0} are disjoint. On the set *M*_{1} we have (*G*(*u*_{k+1}) - *y*_{b})^{+} ≤ λ_{k} |*u*_{k+1} - *u*_{k}| and on the set *M*_{2} we have (*y*_{a} - *G*(*u*_{k+1}))^{+} ≤ λ_{k} |*u*_{k+1} - *u*_{k}|. Since *M*_{1} ∩ *M*_{2} = ∅, the assertion follows by integration.

**Remark 3.3. **Note that we have the inequality

where the prox-parameter ε does not appear explicitly. Hence for the optimal value we have the upper bound

.

**4 The Elliptic Problem without pointwise control constraints**

In this section we introduce an elliptic optimal control problem with state constraints. Here no *L*^{∞}(Ω)-control constraints are present.

Let *N* ∈ {2, 3} and Ω ⊂ *R*^{N} be a bounded domain with *C*^{0,1} boundary Γ. Let a desired state *y*_{d} ∈ *L*^{∞}(Ω) be given. Let a real number κ > 0 be given. Define the objective functions *J*(*y*, *u*) and *K*(*u*) as above. Let state bounds *y*_{a}, *y*_{b} ∈ *L*^{∞}(Ω) be given.

Define the following elliptic optimal control problem with distributed control and pointwise state constraints: minimize *J*(*y*, *u*) subject to

As in [5], the notation *G* is used for the control-to-state map that gives the state as a function of the control, *G* : *L*^{2}(Ω) → *H*^{1}(Ω) ∩ *L*^{∞}(Ω). The notation *S* is used for the control-to-state map as an operator *L*^{2}(Ω) → *L*^{2}(Ω), which is the composition of *G* and the suitable embedding operator.

**5 Lavrentiev Prox Regularization**

Let the Lavrentiev prox-regularization parameters λ > 0, ε ≥ 0 and ν ∈ *L*^{∞}(Ω) be given.

We consider the regularized problem

Let ω_{*} denote the optimal value of **P**, and ω(λ, ε, ν) denote the optimal value of **P**_{λ,ε,ν}. Let *F*_{*} denote the admissible set of **P** and *F*(λ, ε, ν) denote the admissible set of **P**_{λ,ε,ν}.

Concerning the regularity of the multipliers corresponding to the inequality constraints in **P**_{λ,ε,ν}, we can apply Theorem 2.1 in [5], which states that we find multipliers in the function space *L*^{2}(Ω).

We consider the following

**Lavrentiev Prox-Regularization Algorithm: **

As far as the regularization of the objective function is concerned, this algorithm is related to the prox-regularization as considered in [10, 4]. The difference is that for our state-constrained problem regularization terms appear both in the constraints and in the objective function.

In our discussion we use the choice *u*_{1} = 0. First we show that the iteration is well-defined. For *u*_{1} = 0, problem **P**_{λ_{1},ε_{1},u_{1}} is of the form studied in the papers about Lavrentiev regularization [9, 7, 6]; hence the corresponding existence results are applicable. As in Section 3, the non-prox Lavrentiev regularization corresponds to the choice of the regularization parameters *u*_{k+1} = 0, ε_{k+1} = 0 for all *k* ≥ 0; that is, the non-prox Lavrentiev regularization is the algorithm: in step *k*, solve **P**_{λ_{k},0,0}.

In step *k*, the function *u*_{k+1} satisfies the state constraint

*y*_{a} ≤ *G*(*u*_{k+1}) + λ_{k}(*u*_{k+1} - *u*_{k}) ≤ *y*_{b}.

Hence *u*_{k+1} ∈ *L*^{∞}(Ω) and the function

is feasible for **P**_{λ_{k+1},ε_{k+1},u_{k+1}}. Therefore the iteration is well-defined.

*5.1 Properties of* λ*I* + *S*

The following lemma states that the operators λ(λ*I* + *S*)^{-1} are uniformly bounded. Moreover, these operators converge pointwise to the zero operator for λ → 0^{+}. We use this lemma in Example 2.

**Lemma 5.1.** *Let* ||*S*|| *denote the operator norm of S as a map from L*^{2}(Ω) *to L*^{2}(Ω)*. For all* λ > 0 *we have the inequality*

||λ(λ*I* + *S*)^{-1}|| ≤ 1.

*Let u* ∈ *L*^{2}(Ω)*, u* ≠ 0*, and let* λ_{k} > 0 *with* λ_{k} → 0*. Then*

lim_{k→∞} ||λ_{k}(λ_{k}*I* + *S*)^{-1} *u*||_{*L*^{2}(Ω)} = 0.

**Proof. **See [5].
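A finite-dimensional analogue may make the statement concrete: with *S* replaced by a positive semidefinite diagonal matrix whose eigenvalues σ_{n} = 1/*n*^{2} mimic a compact operator (an assumption purely for illustration), the maps λ(λ*I* + *S*)^{-1} are non-expansive and annihilate a fixed vector as λ → 0:

```python
import math

sigmas = [1.0 / (n * n) for n in range(1, 51)]  # stand-in spectrum of S
u = [1.0 / n for n in range(1, 51)]             # a fixed nonzero vector

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def apply_map(lam):
    # components of lam * (lam*I + S)^(-1) u in the eigenbasis of S
    return [lam / (lam + sig) * x for sig, x in zip(sigmas, u)]

norms = [norm(apply_map(lam)) for lam in (1.0, 1e-2, 1e-4, 1e-6)]
```

Each spectral factor λ/(λ + σ_{n}) lies in (0, 1), which gives the uniform bound; letting λ → 0 drives every factor, and hence the norm, to zero.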

*5.2 Boundedness of the generated sequence*

The iteration of the Lavrentiev prox-regularization method generates a bounded sequence if the regularization parameters are chosen sufficiently small. This can be seen as follows:

**Lemma 5.2.** *Assume that there exists a Slater control* ū ∈ *L*^{∞}(Ω) *and* δ > 0 *such that*

*y*_{a} + δ ≤ *G*(ū) ≤ *y*_{b} - δ

*on* Ω*. Assume that in each step,* λ_{k} *is chosen such that* (8) *holds, and that in each step,* ε_{k} *is chosen such that the sequence in* (9) *is bounded. Then the sequence* (*u*_{k})_{k} *generated by the Lavrentiev Prox-Regularization Algorithm is bounded in L*^{2}(Ω).

**Remark 5.3.** Note that the conditions (8) and (9) can easily be satisfied during the iteration by choosing λ_{k} and ε_{k} sufficiently small, since the functions ū and *u*_{k} are known.

**Proof.** For all *k* we have the inequalities

hence ū ∈ *F*(λ_{k}, ε_{k}, *u*_{k}), which implies the inequality

and the assertion follows due to the boundedness of the sequence in (9).

*5.3 Convergence of the generated sequence*

We study now the convergence of the sequence (*u*_{k})_{k} generated by the Lavrentiev Prox-Regularization Algorithm.

**Theorem 5.4.** *Assume that the solution u*_{*} *of* **P** *is in L*^{∞}(Ω) *and that there exists a Slater control* ū ∈ *L*^{∞}(Ω) *and* δ > 0 *such that*

*y*_{a} + δ ≤ *G*(ū) ≤ *y*_{b} - δ

*on* Ω*. Let p* ∈ (0, 1) *be given. Assume that in each step,* λ_{k} ∈ (0, 1) *is chosen such that* (10) *holds, and that in addition the sequence in* (11) *is bounded.*

*If* λ_{k} → 0 *and* ε_{k} → 0*, we have* (12)*. For the constraint violation we have the upper bound*

||(*G*(*u*_{k+1}) - *y*_{b})^{+}||_{*L*^{2}(Ω)}^{2} + ||(*y*_{a} - *G*(*u*_{k+1}))^{+}||_{*L*^{2}(Ω)}^{2} ≤ λ_{k}^{2} ||*u*_{k+1} - *u*_{k}||_{*L*^{2}(Ω)}^{2}.

**Remark 5.5.** Condition (10) can easily be satisfied during the iteration by choosing λ_{k} sufficiently small, since the functions ū and *u*_{k} are known. Condition (11) can be satisfied if an *a priori* bound for *u*_{*} is known. For the problem with additional pointwise control constraints, this problem does not occur; see Section 2.

Theorem 5.4 states that if the λ_{k} and the ε_{k} decrease sufficiently fast, we obtain convergence.

**Proof.** Since (10) holds and λ_{k} < 1, condition (8) also holds; hence, since in addition condition (9) holds, Lemma 5.2 implies that the sequence is bounded.

Let ũ denote a weak limit point in *L*^{2}(Ω) of the sequence (*u*_{k})_{k}. Then ũ ∈ *F*_{*}. Moreover, we have the inequality

.

Define τ_{k} = λ_{k}^{p} and the function ν_{k} = (1 - τ_{k}) *u*_{*} + τ_{k} ū. Then we have

On the other hand, we have

Hence ν_{k} ∈ *F*(λ_{k}, ε_{k}, *u*_{k}). Moreover, ν_{k} → *u*_{*} in *L*^{2}(Ω). Thus we have

Hence we have *K*(ũ) ≤ ω_{*}. This implies that *K*(ũ) = ω_{*}. Since ũ ∈ *F*_{*}, the uniqueness of the solution of **P** implies ũ = *u*_{*}. Hence the sequence (*u*_{k})_{k} converges weakly to *u*_{*}.

As in the proof of Theorem 3.2 we obtain (12). For all *k *we have the inequalities

.

This implies the *L*^{2}-bound for the constraint violation

In Theorem 5.4 we have stated that λ_{k} ||*u*_{k+1} - *u*_{k}||_{*L*^{2}(Ω)} is a bound for the constraint violation. In the non-prox Lavrentiev regularization (in step *k*, solve **P**_{λ_{k},0,0}) we have the corresponding bound λ_{k} ||*u*_{k+1}||_{*L*^{2}(Ω)}.

If *u*_{*} ≠ 0 and *u*_{k} → *u*_{*} in *L*^{2}(Ω), for *k* sufficiently large we have the inequality

λ_{k} ||*u*_{k+1} - *u*_{k}||_{*L*^{2}(Ω)} < λ_{k} ||*u*_{k+1}||_{*L*^{2}(Ω)},

which indicates that, at least asymptotically, the Lavrentiev prox-regularization method yields smaller bounds for the constraint violation.

**6 Examples**

In this section we study two examples that allow us to compare the performance of the Lavrentiev prox-regularization method and the non-prox Lavrentiev regularization method.

**Example 1.** Consider a problem **P** where for the optimal control we have *u*_{*} ∈ *L*^{∞}(Ω) and both inequality constraints are *not* active, that is, we have *y*_{a} < *G*(*u*_{*}) < *y*_{b} in the sense that

.

In this case, *u*_{*} is an unconstrained local minimal point of *K* and the convexity of *K* implies that ω_{*} = *K*(*u*_{*}) = min_{*u* ∈ *L*^{2}(Ω)} *K*(*u*). Let ν ∈ *L*^{2}(Ω) be given. Since *F*(λ, ε, ν) ⊂ *L*^{2}(Ω), for all λ > 0 we have the inequality

ω(λ, ε, ν) ≥ ω_{*}.

Since *u*_{*} ∈ *F*(λ, ε, *u*_{*}), we have ω(λ, ε, *u*_{*}) ≤ *K*(*u*_{*}) = ω_{*}; hence in this case ω(λ, ε, *u*_{*}) = ω_{*}. Thus with the choice *u*_{1} = *u*_{*}, the Lavrentiev prox-regularization method generates the constant sequence *u*_{k} = *u*_{*} for all *k* and all λ_{k} > 0, ε_{k} > 0, even if the sequences (λ_{k})_{k}, (ε_{k})_{k} do *not* converge to zero.

More generally, we have *u*_{*} ∈ *F*(λ_{k}, ε_{k}, *u*_{k}) if

.

In this case

.

If ε_{k} = 0, this yields ω_{*} = ω(λ_{k}, ε_{k}, *u*_{k}), hence in this case *u*_{*} solves **P**_{λ_{k},ε_{k},u_{k}}.

If ε_{k} > 0, the method reduces to a classical prox regularization for problem **P**, where the constraints are not regularized. For the non-prox Lavrentiev regularization method, *u*_{*} is the solution with the parameter λ_{k} if *u*_{*} ∈ *F*(λ_{k}, 0, 0), which is the case if

.

If *u*_{*} ≠ 0 and ε_{k} = 0, the Lavrentiev prox-regularization method can find *u*_{*} with larger parameter values λ_{k} than the non-prox Lavrentiev regularization.

**Example 2.** Consider a problem **P** where for the solution both inequality constraints are active almost everywhere in Ω, that is, we have *y*_{a} = *G*(*u*_{*}) = *y*_{b}, and the Slater condition is violated. Assume that *y*_{a} ∈ *C*^{2}(Ω̄) satisfies the boundary condition ∂_{n}*y*_{a} = 0 on Γ. In this case, we have *S*(*u*_{*}) = *y*_{a}.

The non-prox Lavrentiev regularization method computes the solution ũ_{k+1} of **P**_{λ_{k},0,0}, for which we have the following equation: (λ_{k}*I* + *S*)ũ_{k+1} = *y*_{a}. Hence (λ_{k}*I* + *S*)(ũ_{k+1} - *u*_{*}) = *y*_{a} - λ_{k}*u*_{*} - *y*_{a} = -λ_{k}*u*_{*}.

This yields

ũ_{k+1} - *u*_{*} = -λ_{k}(λ_{k}*I* + *S*)^{-1} *u*_{*},

hence if λ_{k} → 0, due to Lemma 5.1 we have

ũ_{k+1} → *u*_{*} in *L*^{2}(Ω).

The Lavrentiev prox-regularization method computes the solution *u*_{k+1} of **P**_{λ_{k},ε_{k},u_{k}}, for which we have the following equation: (λ_{k}*I* + *S*)*u*_{k+1} - λ_{k}*u*_{k} = *y*_{a}. Hence we have (λ_{k}*I* + *S*)(*u*_{k+1} - *u*_{*}) = *y*_{a} + λ_{k}*u*_{k} - λ_{k}*u*_{*} - *y*_{a} = λ_{k}(*u*_{k} - *u*_{*}).

Thus if *u*_{k} ≠ *u*_{*}, due to Lemma 5.1 we have

||*u*_{k+1} - *u*_{*}||_{*L*^{2}(Ω)} < ||*u*_{k} - *u*_{*}||_{*L*^{2}(Ω)}

(see the proof of Lemma 5.1); hence the algorithm generates a bounded sequence with strictly decreasing distance to *u*_{*} also if (λ_{k})_{k} does *not* converge to zero.

We have (λ_{k}*I* + *S*)(*u*_{k+1} - *u*_{*}) - λ_{k}*u*_{k} = -λ_{k}*u*_{*}, hence if λ_{k} → 0 we have

*u*_{k+1} → *u*_{*} in *L*^{2}(Ω).

**Example 3.** Let κ = 0, *A y* = -Δ*y* + *y*, *y*_{d} ≡ 1 and *J*(*y*, *u*) = (1/2) ∫_{Ω} (*y* - 1)^{2} *dx*. Choose *y*_{a} = 0, *y*_{b} = 1. Then the optimal control that solves **P** is *u*_{*} ≡ 1 and we have ω_{*} = 0.

For all λ > 0 we have the inequality

*G*(*u*_{*}) + λ(*u*_{*} - 0) = 1 + λ > 1 = *y*_{b},

hence *u*_{*} is infeasible for the auxiliary problem **P**_{λ,0,0} used in the non-prox Lavrentiev regularization method. However, the constant control 1/(1 + λ) is in *F*(λ, 0, 0), hence we have the inequality

ω(λ, 0, 0) ≤ *K*(1/(1 + λ)) = (µ(Ω)/2) (λ/(1 + λ))^{2}.

For every ν ∈ *L*^{∞}(Ω) with 1 ≤ ν ≤ 1 + 1/λ on Ω we have the inequality

*y*_{a} = 0 ≤ *G*(*u*_{*}) + λ(*u*_{*} - ν) = 1 + λ(1 - ν) ≤ 1 = *y*_{b},

hence *u*_{*} is feasible for **P**_{λ,ε,ν} and we have the inequality

ω(λ, ε, ν) ≤ *K*(*u*_{*}) + (ε/2) ||1 - ν||_{*L*^{2}(Ω)}^{2} = (ε/2) ||1 - ν||_{*L*^{2}(Ω)}^{2}.

So if 1 ≤ *u*_{1} ≤ 1 + 1/λ, with the choice ε = 0 the Lavrentiev prox-regularization algorithm finds the optimal control *u*_{*} in one step. For ε > 0 with a fixed parameter λ, we can make the regularization error arbitrarily small by choosing ε sufficiently small.

**Example 4.** Let Ω = [0, π] × [0, π]. Define the desired state

.

Consider the problem

The state constraint *y *__<__ 1 - implies that for *x*_{2} __>__ π/6 we have

.

This yields the optimal state

Figure 1 shows the desired state *y*_{d} and the optimal state *y*_{*} that is generated by the optimal control shown in Figure 2.

The optimal control *u*_{*} has a jump discontinuity at *x*_{2} = π/6. The optimal value ω_{*} of **P** is given by the equation

We use a discretization based upon Fourier-expansions. In the general case, this corresponds to a representation of the control as a series of eigenfunctions of the operator *A*. Here we write the control function *u *as a cosine series of the form

.

Then

and we obtain the following series representation for the state:

.

Hence we have

.

For the objective function, this yields

So we see that for controls of the form *u* = ∑_{j} *u*_{j} cos(*j x*_{2}), the problem **P**_{λ,ε,ν} is equivalent to the problem

By replacing the infinite series by a finite sum we obtain a semi-infinite optimization problem with a quadratic objective function and linear constraints.
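The truncation step can be sketched numerically (the damping factor 1/(1 + *j*^{2}) assumes, purely for illustration, the operator *A y* = -Δ*y* + *y* of Example 3 with Neumann boundary conditions; the coefficients below are hypothetical):

```python
import math

def state(u_coeffs, x2):
    # truncated cosine series for the state y = S u, with u = sum_j u_j cos(j*x2);
    # each mode is damped by the assumed eigenvalue factor 1/(1 + j**2)
    return sum(uj * math.cos(j * x2) / (1.0 + j * j)
               for j, uj in enumerate(u_coeffs))

u_coeffs = [1.0, 0.5]  # hypothetical truncated control coefficients
grid = [0.001 * math.pi * j for j in range(1001)]  # constraint grid from the text
# largest violation of an upper state bound y <= 1 on the grid
violation = max(max(state(u_coeffs, x) - 1.0, 0.0) for x in grid)
```

Evaluating the state only at the grid points is exactly what turns the semi-infinite constraint into finitely many linear inequalities in the coefficients *u*_{j}.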

In our numerical implementation we used the finite sums and a finite number of inequality constraints corresponding to the 1001 grid points 0.001π *j* for *j* ∈ {0, ..., 1000}. We solved the finite-dimensional optimization problems with the program fmincon from the Matlab Optimization Toolbox.

Let *u*_{1} = 0 and let ω(λ) denote the optimal value of **P**_{λ,0,u_{1}}, which is used in the non-prox Lavrentiev regularization.

Table 1 contains the optimal values of the discretized problems for various values of λ. The state error *e*_{y} has been computed as in (17), with the computed state *y*_{c} and the grid points *x*_{ij}. The constraint violation *e*_{ν} has been computed as in (18).

Starting with *u*_{1} = 0, we have computed the Lavrentiev prox-regularization iterates with λ_{k} = 10 · 10^{(1-k)/2}. Let ω_{k} denote the optimal value of **P**_{λ_{k},0,u_{k}}, which is used in the Lavrentiev prox iteration. Table 2 shows the results.

In this case the Lavrentiev-prox regularization iterates shown in Table 2 converge faster than the iterates generated with the non-prox Lavrentiev regularization shown in Table 1.

Figure 3 shows the optimal state *y*_{11} that is computed in step 11 of the Lavrentiev-prox iteration with λ_{11} = 10^{-4}.

**Example 5.** As in Example 4, let Ω = [0, π] × [0, π] and define the desired state *y*_{d}(*x*_{1}, *x*_{2}) = 1 - cos(*x*_{2}).

The state constraint *y* ≤ 1 implies that for *x*_{2} ≥ π/2 we have

.

This yields the optimal state

Figure 4 shows the desired state *y*_{d} and the optimal state *y*_{*} that is generated by the optimal control shown in Figure 5. The optimal control *u*_{*} is continuous. The optimal value ω_{*} of **P** is given by the equation

.

As in Example 4, we use a discretization based upon Fourier expansions. For controls of the form *u* = ∑_{j} *u*_{j} cos(*j x*_{2}), the problem **P**_{λ,ε,ν} is equivalent to the problem

By replacing the infinite series by a finite sum we obtain a semi-infinite optimization problem with a quadratic objective function and linear constraints. Again, in our numerical implementation we used the finite sums and a finite number of inequality constraints corresponding to the 1001 grid points 0.001π *j* for *j* ∈ {0, ..., 1000}. Again we used the program fmincon from the Matlab Optimization Toolbox to solve the finite-dimensional optimization problems.

Let *u*_{1} = 0 and let ω(λ) denote the optimal value of **P**_{λ,0,u_{1}}, which is used in the non-prox Lavrentiev regularization. Table 3 contains the results for the solution of the discretized problems for various values of λ. The state error *e*_{y} has been computed as in (17) and the constraint violation *e*_{ν} has been computed as in (18).

Starting with *u*_{1} = 0, we have computed the Lavrentiev prox-regularization iterates with λ_{k} = 10 · 10^{(1-k)/2}. Let ω_{k} denote the optimal value of **P**_{λ_{k},0,u_{k}}, which is used in the Lavrentiev prox iteration. Table 4 shows the results.

Figure 6 shows the optimal state *y*_{11} that is computed in step 11 of the Lavrentiev-prox iteration with λ_{11} = 10^{-4}.

In this example the Lavrentiev-prox regularization iterates shown in Table 4 converge faster than the iterates generated with the non-prox Lavrentiev regularization shown in Table 3.

**7 Conclusion **

In this paper we have introduced the Lavrentiev prox-regularization method for elliptic optimal control problems. The cost for the solution of the parametric auxiliary problems in each step of the method is the same as for the non-prox Lavrentiev regularization method, since the auxiliary problems are of exactly the same form. Hence the same numerical methods can also be used for the solution, for example primal-dual active set methods, interior point methods or semismooth Newton methods, see [5, 6, 8, 11]. Our numerical examples indicate that the Lavrentiev prox iteration gives approximations of the same quality as the non-prox Lavrentiev regularization method in fewer steps and with larger regularization parameters. We have also applied the method successfully for the solution of optimal control problems with hyperbolic systems; see [3].

**Acknowledgements. **The author wants to thank the anonymous referee for the helpful hints that have substantially improved this paper.

**REFERENCES **

[1] E. Casas, *Control of an Elliptic Problem with Pointwise State Constraints*. SIAM J. Control Optim., **24** (1986), 1309-1318.

[2] E. Casas and M. Mateos, *Second Order Optimality Conditions for Semilinear Elliptic Control Problems with Finitely Many State Constraints*. SIAM J. Control Optim., **40**(5) (2002), 1431-1454.

[3] M. Gugat, A. Keimer and G. Leugering, *Optimal Distributed Control of the Wave Equation subject to State Constraints*. To appear in ZAMM.

[4] A. Kaplan and R. Tichatschke, *Stable Methods for Ill-Posed Variational Problems*. Akademie Verlag, Berlin (1994).

[5] C. Meyer, U. Prüfert and F. Tröltzsch, *On two numerical methods for state-constrained elliptic control problems*. Optimization Methods and Software, **22** (2007), 871-899.

[6] C. Meyer, *Optimal control of semilinear elliptic equations with application to sublimation crystal growth*. Dissertation, Technische Universität Berlin (2006).

[7] C. Meyer and F. Tröltzsch, *On an elliptic optimal control problem with pointwise mixed control-state constraints*. In: Recent Advances in Optimization, A. Seeger, Ed., Springer-Verlag (2006), 187-204.

[8] C. Meyer, A. Rösch and F. Tröltzsch, *Optimal control of PDEs with regularized pointwise state constraints*. Computational Optimization and Applications, **33** (2006), 209-228.

[9] I. Neitzel and F. Tröltzsch, *On Convergence of Regularization Methods for Nonlinear Parabolic Optimal Control Problems with Control and State Constraints*, submitted.

[10] R.T. Rockafellar, *Augmented Lagrange multiplier functions and applications of the proximal point algorithm in convex programming*. Math. Oper. Res., **1** (1976), 97-116.

[11] F. Tröltzsch and I. Yousept, *A regularization method for the numerical solution of elliptic boundary control problems with pointwise state constraints*. Computational Optimization and Applications, **42** (2009), 43-66.

[12] F. Tröltzsch and I. Yousept, *Source representation strategy for optimal boundary control problems with state constraints*. Zeitschrift für Analysis und ihre Anwendungen, **28** (2009), 189-203.

[13] F. Tröltzsch, *Regular Lagrange multipliers for control problems with mixed pointwise control-state constraints*. SIAM Journal on Optimization, **15** (2005), 616-634.

Received: 16/X/08. Accepted: 23/IV/09.
