Preprint A80/2001
On the value function for control problems with infinite horizon

G. N. Silva | Baumeister, J. | Leitao, A.

**Keywords:**
dynamic programming | optimal control | optimality conditions

In this paper we consider optimal control problems of infinite horizon
type, whose control actions are given by $L^1$-functions. A characteristic
of problems of this type is the use of a discount factor in the
objective function. We provide an existence theorem, verify that the value
function is locally Lipschitz, and obtain necessary and sufficient
optimality conditions for the control problem in terms of upper Dini
solutions of the Hamilton-Jacobi-Bellman inequality (equation). For a
special class of control problems we prove that the value function is a
viscosity solution of the Hamilton-Jacobi-Bellman (HJB) equation satisfying
a certain decay condition. Finally, a "real life" example is provided to
show the strength of our existence result.
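For orientation, the discounted infinite-horizon problem and the associated HJB equation mentioned above can be sketched in standard form as follows; the symbols $f$, $L$, $\rho$, and $V$ are illustrative assumptions, not necessarily the paper's notation:

```latex
% Illustrative sketch (assumed standard form, not the paper's exact setting):
% minimize over controls u(.) in L^1 the discounted cost, rho > 0 the discount factor
\begin{equation*}
  V(x) \;=\; \inf_{u(\cdot)} \int_0^\infty e^{-\rho t}\, L\bigl(y(t), u(t)\bigr)\, dt,
  \qquad \dot y(t) = f\bigl(y(t), u(t)\bigr), \quad y(0) = x.
\end{equation*}
% The associated Hamilton-Jacobi-Bellman equation for the value function V reads
\begin{equation*}
  \rho\, V(x) \;=\; \inf_{u}\, \bigl\{\, L(x,u) + \nabla V(x) \cdot f(x,u) \,\bigr\}.
\end{equation*}
```

In this setting the discount factor $e^{-\rho t}$ is what makes the infinite-horizon integral finite, and the decay condition on $V$ mentioned in the abstract plays the role of a boundary condition at infinity.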