We consider the optimal stopping problem for a discrete-time Markov process on a Borel state space $X$. It is supposed that an unknown transition probability $p(\cdot |x)$, $x\in X$, is approximated by a transition probability $\widetilde{p}(\cdot |x)$, $x\in X$, and that the stopping rule $\widetilde{\tau}_*$, optimal for $\widetilde{p}$, is applied to the process governed by $p$. We give an upper bound for the difference between the total expected cost incurred when applying $\widetilde{\tau}_*$ and the minimal total expected cost. The bound is a constant times $\sup_{x\in X}\|p(\cdot |x)-\widetilde{p}(\cdot |x)\|$, where $\|\cdot\|$ is the total variation norm.
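The type of estimate described above can be illustrated numerically. The following is a minimal sketch, not taken from the paper: it uses a discounted, finite-state toy stopping problem (rather than the paper's Borel-space, total-cost setting), computes the stopping rule optimal for a perturbed kernel, evaluates that rule under the true kernel, and compares the resulting excess cost with $\sup_{x}\|p(\cdot |x)-\widetilde{p}(\cdot |x)\|$. The costs `g`, `c`, the discount factor `beta`, and the five-state chain are all illustrative assumptions, and no claim is made about the constant in the bound.

```python
# Illustrative sketch (not from the paper): a finite-state, discounted
# optimal stopping problem in which the rule computed for an approximate
# kernel p_tilde is applied to the "true" kernel p.
import numpy as np

rng = np.random.default_rng(0)
n, beta = 5, 0.9                       # number of states and discount factor (assumed)
g = rng.uniform(0.0, 1.0, size=n)      # stopping cost g(x) (assumed)
c = rng.uniform(0.0, 0.2, size=n)      # running cost c(x) (assumed)

def random_kernel(n, rng):
    """Random stochastic matrix with rows summing to one."""
    P = rng.uniform(size=(n, n))
    return P / P.sum(axis=1, keepdims=True)

p = random_kernel(n, rng)                            # "true" transition probabilities
p_tilde = p + 0.05 * (random_kernel(n, rng) - p)     # small perturbation, rows still sum to one

def optimal_value(P, g, c, beta, iters=2000):
    """Value iteration for V(x) = min( g(x), c(x) + beta * sum_y P(x,y) V(y) )."""
    V = np.zeros_like(g)
    for _ in range(iters):
        V = np.minimum(g, c + beta * P @ V)
    return V

def value_of_rule(P, stop, g, c, beta, iters=2000):
    """Expected cost under kernel P of the rule 'stop on the set stop'."""
    V = np.zeros_like(g)
    for _ in range(iters):
        V = np.where(stop, g, c + beta * P @ V)
    return V

V_opt = optimal_value(p, g, c, beta)                 # minimal cost under the true kernel
V_tilde = optimal_value(p_tilde, g, c, beta)
stop_tilde = g <= c + beta * p_tilde @ V_tilde       # stopping set optimal for p_tilde

excess = value_of_rule(p, stop_tilde, g, c, beta) - V_opt   # extra cost of using the approximate rule
tv = np.abs(p - p_tilde).sum(axis=1).max()                  # sup_x total variation norm of p(.|x) - p_tilde(.|x)

print("max excess cost:", excess.max())
print("sup_x ||p(.|x) - p_tilde(.|x)||:", tv)
```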
Keywords: discrete-time Markov process; optimal stopping rule; stability index; total variation metric; contractive operator; optimal asset selling
AMS: 60G40