Pesquisa Operacional

Print version ISSN 0101-7438

Pesqui. Oper. vol.33 no.1, Rio de Janeiro, Jan./Apr. 2013

A non-standard optimal control problem arising in an economics application



Alan Zinober*; Suliadi Sufahani

School of Mathematics and Statistics, University of Sheffield, Sheffield S3 7RH, United Kingdom. E-mails: /




A recent optimal control problem in the area of economics has mathematical properties that do not fall into the standard optimal control problem formulation. In our problem the state value at the final time, y(T) = z, is free and unknown, and additionally the Lagrangian integrand in the functional is a piecewise constant function of the unknown value y(T). This is not a standard optimal control problem and cannot be solved using Pontryagin's Minimum Principle with the standard boundary conditions at the final time. In the standard problem a free final state y(T) yields the necessary boundary condition p(T) = 0, where p(t) is the costate. Because the integrand is a function of y(T), the new necessary condition is that p(T) should be equal to a certain integral that is a continuous function of y(T). We introduce a continuous approximation of the piecewise constant integrand function by using a hyperbolic tangent approach, and solve an example using a C++ shooting algorithm with Newton iteration for the Two Point Boundary Value Problem (TPBVP). The optimal free value y(T) is calculated in an outer loop iteration using the Golden Section or Brent algorithm. Comparative nonlinear programming (NP) discrete-time results are also presented.

Keywords: optimal control, non-standard optimal control, piecewise constant integrand, economics, comparative nonlinear programming results.




Calculus of Variations (CoV) provides the mathematical theory for extremizing functional problems, in which a given functional has a stationary value, either a minimum or a maximum [13]. Optimal control is an extension of CoV and is a mathematical optimization method for deriving optimal control policies. Optimal control shapes the paths of the control variables to optimize the cost functional whilst satisfying (in this paper) ordinary differential equations. Economics is a source of interesting applications of the theory of CoV and optimal control [7]. A few classical examples that reflect the use of optimal control are the drug bust strategy, optimal production, optimal control in discrete mechanics, policy arrangement and the royalty payment problem [5,7,9].

In a recently studied economics problem, a firm facing persistently low demand for its product will deliberately increase its marginal cost by reducing production and raising the selling price [11]. The requirement to pay a flat-rate royalty on sales has the effect of increasing the marginal cost, thereby decreasing output while simultaneously increasing the price. Permitting a nonlinear royalty leads to a non-standard CoV problem not previously considered. In this paper we demonstrate some ways of solving the problem without discussing the precise economic details.

Let us start by considering a simple problem. We wish to determine the control function u(t), where t ∈ [0, T], that maximises the integral functional

subject to satisfying the state ordinary differential equation ẏ(t) = u(t), with y(0) known and the endpoint state value y(T) at time t = T unknown. The Lagrangian integrand depends on the unknown final value y(T), so this is not a standard optimal control problem. In the standard theory, a problem with the final state y(T) free yields the boundary condition p(T) = 0 [12]. The Hamiltonian of optimal control theory arises when using Pontryagin's Minimum Principle (PMP) [13]; it is used to find the optimal control taking a dynamical system from one state to another subject to satisfying the underlying ordinary differential equations [12]. We introduce a continuous approximation of the piecewise constant integrand function by using a hyperbolic tangent approach. The present paper indicates how such problems can be solved. In the next section we discuss the theoretical approach to solving the problem, following the work of Malinowska and Torres [6] in proving the boundary condition for CoV [2]. We then demonstrate a numerical example solved in C++, compare the results with a nonlinear programming (NP) discrete-time approach, and present some conclusions.
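In the notation above, the problem takes the following general form (a sketch consistent with the surrounding text; the specific integrand of equation (1) is not reproduced here):

```latex
\max_{u(\cdot)} \; J[u] \;=\; \int_0^T f\bigl(t,\, y(t),\, u(t),\, y(T)\bigr)\,dt,
\qquad \text{subject to } \dot y(t) = u(t),\quad y(0) \text{ given},\quad y(T) \text{ free}.
```

The dependence of f on the terminal value y(T) is exactly what takes the problem outside the classical formulation.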



In classical optimal control problems, the functional (1) does not depend on the free value y(T). However, in our case, the Lagrangian integrand function f depends on y(T). We present a recent theorem showing that p(T) is equal to a certain integral at time T. Malinowska and Torres [6] have proven this boundary condition for CoV on time scales [2]. The following theorem can be applied to our problem.

Theorem: [6,7]

If y(·) is the solution of the following problem


for all t ∈ [0, T]. Moreover

From an optimal control perspective [12,13] the costate p(T) equals this integral. Hence

where p(t) is the Hamiltonian multiplier or costate function. Theorem 1 shows that the necessary optimality condition does not have p(T) = 0, as is the case for the standard classical optimal control problem.
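In the notation of the theorem, with z = y(T), the non-standard transversality condition has the following general shape (a sketch based on the surrounding discussion; the exact displayed equations (6)-(7) are not reproduced here):

```latex
p(T) \;=\; \int_0^T \frac{\partial f}{\partial z}\bigl(t,\, y(t),\, u(t),\, z\bigr)\,dt,
\qquad z = y(T),
```

replacing the classical condition p(T) = 0.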

Suppose that we wish to consider a piecewise constant function f in (2). For example, consider

where z = y(T) and


with a and b real numbers. We have a discontinuous integrand that cannot be differentiated as required in the final boundary condition (7). We introduce a continuous approximation of the piecewise constant integrand function by using a hyperbolic tangent approach in order to have a continuous smooth function. Other approximations have been tried but they required hundreds of terms, for instance in a Fourier series approximation. The tanh(ky) approximation used here is very accurate. A numerical example will be presented in the next section.



The example is a simplified version of the proposed economics problems but has the same main features. The state equation (ODE system) is described by

We wish to maximise


is a continuous function and T = 10. W(y) is a piecewise constant function where

We approximate W(y) using a hyperbolic tangent function

and we select k suitably large, say k = 450. For larger values of k the tanh function gives a sharper, more accurate approximation of the discontinuous function W(y) (see Fig. 1). The known initial state is y(0) = 0 and the final state value z = y(T) is free. Therefore the Hamiltonian is H(t, y, u, p) = f + pu and



The costate satisfies

The stationarity condition is Hu = 0 and this yields

Equation (6) holds so



There are several necessary conditions that need to be satisfied: the state equation (9) and the costate equation (16), along with the stationarity condition (17). The initial condition y(0) is given and we select a guessed initial value p(0). We also need to ensure that the boundary condition (18) is satisfied at the final time T; this is a two point boundary value problem. We must further ensure that the iterated value of z used in (17) equals y(T) at the final time T. This is satisfied in our algorithm only when the p(T) boundary condition converges; then we have the optimal solution. The optimal free value y(T) is calculated in an outer loop iteration using the Golden Section (or Brent) maximising line search algorithm. We used the Newton shooting method with two guessed values v1 = p(0) and v2 = p(T). We implemented the shooting method and the other algorithms in C++, using routines from the Numerical Recipes library [8]. We iterate the system comprising the state equation y(t), the costate equation p(t), the integral η(t) and the cost function J(t).

An optimal solution was obtained with high accuracy. The results are

y(T) = 0.654221, p(T) = -0.022793, η(T) = -0.022793 and J(T) = 1.138440.

Figures 2-4 show the optimal curves of the state variable y(t), the adjoint variable p(t) together with the integral η(t), and the control variable u(t).







As a comparative approach, we used a different nonlinear programming discrete-time technique to solve the same problem [1,7]. We solved the problem using Euler and also Runge-Kutta discretisation, together with an optimization algorithm, to determine the unknown control variables u_k at each time t_k [1]. We used the AMPL modelling language [3] and the MINOS nonlinear solver with 45 time steps.
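With step length h = T/N, the simplest (Euler) discretisation of the state equation and of the cost functional reads as follows (a generic sketch; the Runge-Kutta variant in the AMPL model replaces the state update by a higher-order combination of stages):

```latex
y_{k+1} = y_k + h\,u_k, \qquad
J \approx \sum_{k=0}^{N-1} h\, f\bigl(t_k,\, y_k,\, u_k,\, y_N\bigr),
\qquad t_k = kh,\quad h = T/N .
```

The dependence of the discretised integrand on y_N mirrors the dependence of f on y(T) in the continuous problem.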

param N = 45;            # number of time steps
param x0 = 0;            # initial state y(0)
param T = 10;            # final time
param N1 = N-1;
param TN = T/N;          # step length h
param S = T/N/2;         # half step, used by the Runge-Kutta stages
param pi = atan(1.0)*4.0;

var x {0..N}, default x0;     # state
var u {0..N} >= 0.0001;       # control, kept positive for sqrt

# NOTE: the final term of this objective was lost in the source text;
# the sum is closed here so that the model parses.
maximize J: sum {j in 0..N1} (0.75*sqrt(u[j])
    - (1.1 + 0.1*tanh(450.0*x[j] - 25.00*x[N]))*0.95*u[j]);

subject to xinit: x[0] = x0;

# Runge-Kutta stage variables for the discretised dynamics
var xp1 {i in 0..N1} = u[i];
var xp2 {i in 0..N1} = u[i] + S*xp1[i];
var xp3 {i in 0..N1} = u[i] + S*xp2[i];
var xp4 {i in 0..N1} = u[i] + TN*xp3[i];

# State dynamics (Runge-Kutta style update of x)
subject to xdyn {i in 0..N1}: x[i+1] = x[i] + TN*(1/6*u[i]
    + 2/3*(u[i]+u[i+1])*0.5 + 1/6*u[i+1]);
subject to uN: u[N] = u[N1];

############# SOLVING OPTIONS ##############
option solver minos;
solve;
display J;
display x[N];
display xinit;
printf: " # N = %d\n", N;
printf: " # Data\n";
printf{i in 0..N-1}: " %24.16e %24.16e %24.16e %24.16e\n\n",
    i*TN, x[i], u[i], xdyn[i];
printf: " %24.16e %24.16e %24.16e\n",
    J, x[N], u[N-1];
display {i in 0..N1} i*TN, x, xdyn, u > resultzurichRK.txt;

Figures 5-7 show the optimal results for both the shooting and nonlinear programming (NP) approaches: the state variable y(t), the costate and the control variable u(t). The results are essentially the same; the NP results are good, but the Hamiltonian shooting approach is considerably more accurate.








In this paper we have shown how to solve a non-standard optimal control problem. We have presented the necessary conditions and the computational procedures for obtaining optimal solutions, and the optimal solution of a test problem. A shooting algorithm together with a maximising line search was used to obtain a highly accurate solution, which was compared with a discrete-time nonlinear programming solution. Our techniques can be applied to the actual, rather more complicated, economics problem in which the Lagrangian integrand is piecewise constant with many stages and depends upon y(T), which is a priori unknown.



[1] BETTS JT. 2001. Practical methods for optimal control using nonlinear programming. Advances in Design and Control. SIAM, Philadelphia, PA.

[2] FERREIRA RAC & TORRES DFM. 2008. Higher-order calculus of variations on time scales. In: Mathematical control theory and finance, pp. 149-159. Springer, Berlin.

[3] FOURER R, GAY DM & KERNIGHAN BW. 2002. AMPL: a modelling language for mathematical programming. Duxbury Press/Brooks/Cole Publishing Company.

[4] STEFANI G. 2009. Hamiltonian approach to optimality conditions in control theory and nonsmooth analysis. In: Nonsmooth analysis, control theory and differential equations. INdAM, Roma.

[5] LEONARD D & LONG NV. 1992. Optimal control theory and static optimization in economics. Cambridge University Press, Cambridge.

[6] MALINOWSKA AB & TORRES DFM. 2010. Natural boundary conditions in the calculus of variations. Mathematical Methods in the Applied Sciences, 33(14): 1712-1722. doi: 10.1002/mma.1289.

[7] CRUZ PAF, TORRES DFM & ZINOBER ASI. 2010. A non-classical class of variational problems with application in economics. International Journal of Mathematical Modelling and Numerical Optimisation, 1(3): 227-236. doi: 10.1504/IJMMNO.2010.031750.

[8] PRESS WH, TEUKOLSKY SA, VETTERLING WT & FLANNERY BP. 2007. Numerical recipes: the art of scientific computing. Third edition. Cambridge University Press, Cambridge.

[9] SETHI SP & THOMPSON GL. 2000. Optimal control theory: applications to management science and economics. Second edition. Kluwer Academic Publishers, Boston, MA.

[10] VINTER R. 2000. Optimal control. Systems & Control: Foundations & Applications. Birkhäuser, Boston.

[11] ZINOBER ASI & KAIVANTO K. 2008. Optimal production subject to piecewise continuous royalty payment obligations. Internal report.

[12] KIRK DE. 1970. Optimal control theory: an introduction. Prentice-Hall, New Jersey.

[13] PINCH ER. 1993. Optimal control and the calculus of variations. Oxford University Press, Oxford.



Received May 3, 2012
Accepted July 10, 2012



* Corresponding author

All the content of this journal, except where otherwise noted, is licensed under a Creative Commons License.