Reward functions and Markov strategies in finite-stage stochastic decision problems

Date of Award




Degree Name

Doctor of Philosophy (Ph.D.)



First Committee Member

Victor Pestien, Committee Chair


Abstract

We consider the relationship between the reward function and the existence of (nearly) optimal Markov strategies in a finite-stage stochastic decision problem. This relationship is discussed mainly for a "finite-stage timed gambling model". A finite-stage reward function g has the "Markov adequacy property" if for every finite-stage timed gambling model with reward function g and for every strategy $\sigma$ defined on the model, there exists a Markov strategy $\bar\sigma$ defined on the same model, with the same initial distribution as $\sigma$, such that $\bar\sigma$ is nearly as good as $\sigma$. We formulate an easily checkable condition on reward functions, called the "linear sections property". We prove that the linear sections property implies the Markov adequacy property. We also prove the converse for permutation-invariant reward functions in finite-stage timed gambling models: the Markov adequacy property implies the linear sections property, provided the state space has more than two elements. Furthermore, an analytic expression for reward functions having the linear sections property is found, and several examples are presented. Finally, a general "decision model" is formulated, and a linear sections property for such a model is defined analogously to the one for timed gambling models. If the reward function in a decision model has the linear sections property, then Markov strategies are adequate. An analytic expression for such reward functions is also presented.
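The abstract does not reproduce the dissertation's definition of the "linear sections property" or the analytic expression it yields. On one natural reading (an assumption here, not the thesis's stated definition), the property requires each one-variable section of g to be affine, in which case g admits a standard multi-affine expansion:

```latex
% Hedged sketch: IF the "linear sections property" means that each section
%   x_i \longmapsto g(x_1,\dots,x_i,\dots,x_n)
% is affine for every fixed choice of the remaining coordinates,
% then g has the multi-affine analytic form
g(x_1,\dots,x_n) \;=\; \sum_{T \subseteq \{1,\dots,n\}} c_T \prod_{i \in T} x_i,
% where the c_T are real coefficients and c_\emptyset is the constant term.
```

For example, with $n = 2$ this reads $g(x_1, x_2) = c_\emptyset + c_{\{1\}} x_1 + c_{\{2\}} x_2 + c_{\{1,2\}} x_1 x_2$, which is affine in each variable separately; the dissertation's actual expression may differ.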



Link to Full Text