Discrete-Time Markov Control Processes: Basic Optimality Criteria

Onésimo Hernández-Lerma and Jean B. Lasserre, Springer-Verlag, New York, 1996.


PREFACE


ABBREVIATIONS AND NOTATION


CHAPTER 1. Introduction and Summary

  • 1.1. Introduction
  • 1.2. Markov control processes
  • 1.3. Preliminary examples
  • 1.4. Summary of the following chapters

CHAPTER 2. Markov Control Processes

  • 2.1. Introduction
  • 2.2. Markov control processes
  • 2.3. Markov policies and the Markov property

CHAPTER 3. Finite-Horizon Problems

  • 3.1. Introduction
  • 3.2. Dynamic programming
  • 3.3. The measurable selection condition
  • 3.4. Variants of the DP equation
  • 3.5. LQ problems
  • 3.6. A consumption-investment problem
  • 3.7. An inventory-production system

CHAPTER 4. Infinite-Horizon Discounted-Cost Problems

  • 4.1. Introduction
  • 4.2. The discounted-cost optimality equation
  • 4.3. Complements to the DCOE
  • 4.4. Policy iteration and other approximations
  • 4.5. Further optimality criteria
  • 4.6. Asymptotic discount optimality
  • 4.7. The discounted LQ problem
  • 4.8. Concluding remarks

CHAPTER 5. Long-Run Average-Cost Problems

  • 5.1. Introduction
  • 5.2. Canonical triplets
  • 5.3. The vanishing discount approach
  • 5.4. The average-cost optimality inequality
  • 5.5. The average-cost optimality equation
  • 5.6. Value iteration
  • 5.7. Other optimality results
  • 5.8. Concluding remarks

CHAPTER 6. The Linear Programming Formulation

  • 6.1. Introduction
  • 6.2. Infinite-dimensional linear programming
  • 6.3. Discounted cost
  • 6.4. Average cost: preliminaries
  • 6.5. Average cost: solvability
  • 6.6. Further remarks

APPENDIX A: Miscellaneous Results

APPENDIX B: Conditional Expectation

APPENDIX C: Stochastic Kernels

APPENDIX D: Multifunctions and Selectors

APPENDIX E: Convergence of Probability Measures

REFERENCES

INDEX