CONTROL THEORY

Markov Control Processes

 
  • Existence of Optimal Stationary Policies
    We seek the weakest sufficient conditions that ensure the existence of stationary optimal policies for Markov Control Processes on general Borel state and action spaces, with unbounded costs, non-compact action sets, and the expected average-cost criterion (recalled below).
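    For reference, a minimal statement of the expected average-cost criterion in standard notation (the symbols c, x_t, a_t and the expectation operator are the usual generic ones, not specific to our papers):

        J(\pi, x) \;=\; \limsup_{n \to \infty} \frac{1}{n}\, \mathbb{E}^{\pi}_{x}\!\left[\, \sum_{t=0}^{n-1} c(x_t, a_t) \right],

    and a policy \pi^* is expected average-cost optimal if J(\pi^*, x) = \inf_{\pi} J(\pi, x) for every initial state x.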

  • One way is to formulate the problem as a linear program (LP) in an (infinite-dimensional) space of probability measures, as sketched below.
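    In one common form of this formulation (a sketch in standard notation; the details may differ from the exact statement in our papers), the LP is posed over occupation measures \mu on the product X \times A of the state and action spaces:

        \text{minimize} \;\; \int_{X \times A} c \, d\mu \quad \text{subject to} \quad \mu(B \times A) = \int_{X \times A} P(B \mid x, a)\, \mu(d(x,a)) \;\; \forall B \in \mathcal{B}(X), \qquad \mu \ge 0, \;\; \mu(X \times A) = 1.

    The constraint says that the marginal of \mu on X is invariant under the transition kernel P; the optimal value of the LP is the optimal expected average cost.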
     
  • Computational Aspects
    We then want a numerical procedure to compute the optimal value of the infinite-dimensional LP. An approximation scheme based on constraint aggregation and inner approximation yields an iterative procedure in which a sequence of finite-dimensional LPs is solved (a toy instance of such a finite-dimensional LP is sketched below). Asymptotic convergence to the optimal value is guaranteed under very weak assumptions. We also prove asymptotic convergence of the optimal solutions to an optimal solution of the original problem in an appropriate weak topology.
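    To fix ideas, here is a minimal, self-contained sketch of the kind of finite-dimensional LP that arises after discretization, for a randomly generated toy MDP; the names (P, c, mu), the discretization, and the use of scipy's linprog are illustrative assumptions, not our actual scheme or implementation:

        import numpy as np
        from scipy.optimize import linprog

        # Toy finite MDP: n states, m actions.
        # P[a, x, :] is the transition distribution from state x under action a;
        # c[x, a] is the one-step cost.
        rng = np.random.default_rng(0)
        n, m = 5, 3
        P = rng.dirichlet(np.ones(n), size=(m, n))
        c = rng.random((n, m))

        # LP over occupation measures mu(x, a), flattened as mu[x*m + a]:
        #   minimize   sum_{x,a} c(x,a) mu(x,a)
        #   subject to sum_a mu(y,a) = sum_{x,a} P(y|x,a) mu(x,a)  for all y,
        #              sum_{x,a} mu(x,a) = 1,  mu >= 0
        A_eq = np.zeros((n + 1, n * m))
        for y in range(n):
            for x in range(n):
                for a in range(m):
                    A_eq[y, x * m + a] -= P[a, x, y]   # expected inflow into y
            for a in range(m):
                A_eq[y, y * m + a] += 1.0              # mass sitting at y
        A_eq[n, :] = 1.0                               # normalization row
        b_eq = np.zeros(n + 1)
        b_eq[n] = 1.0

        res = linprog(c.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
        mu = res.x.reshape(n, m)
        print("optimal expected average cost:", res.fun)
        # A stationary optimal policy randomizes at state x with
        # probabilities mu(x, a) / sum_a mu(x, a) (when the denominator is > 0).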
  • Other Optimality Criteria
    We are particularly interested in the sample-path average-cost criterion, a much stronger criterion than the expected average-cost criterion (made precise below). Under rather weak assumptions, we prove the existence of a stationary sample-path average-cost optimal policy.
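    One standard way to state the criterion (again in generic notation, as an illustration): with \rho^* the optimal expected average cost, a policy \pi^* is sample-path average-cost optimal if

        \limsup_{n \to \infty} \frac{1}{n} \sum_{t=0}^{n-1} c(x_t, a_t) \;\le\; \rho^* \quad P^{\pi^*}_{x}\text{-a.s.},

    whereas for every policy \pi the corresponding \liminf is at least \rho^*, P^{\pi}_{x}-almost surely. Thus optimality holds along almost every trajectory, not merely in expectation.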
  • Semi-Definite Programming (SDP)

    We consider linear programs (LPs) on the cone of positive semi-definite matrices. This formulation encompasses many problems in operations research and engineering, especially in H-infinity and robust control. The corresponding LMI-type models are very popular and are usually solved via interior-point methods. We have defined a simplex-like algorithm to solve such LMI problems (in fact, general PSD programs). Among other things, we have defined and characterized the analogues, for such LPs, of the usual notions of "extreme point", "basis", "degeneracy", etc., from standard LP (the standard form is recalled below).
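    For background, the standard form of such a PSD program, with C, A_1, ..., A_m symmetric matrices and \langle \cdot, \cdot \rangle the trace inner product:

        \text{minimize} \;\; \langle C, X \rangle \quad \text{subject to} \quad \langle A_i, X \rangle = b_i \;\; (i = 1, \dots, m), \qquad X \succeq 0.

    A well-known fact that motivates the analogy with LP bases (stated here as general background, not as our exact characterization): every extreme point X of the feasible set has rank r satisfying r(r+1)/2 \le m.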