Approximating Performance Measures for Slowly Changing Non-stationary Markov Chains
Z. Zheng, H. Honnappa, and P. W. Glynn. Submitted for publication.

This paper is concerned with the development of rigorous approximations to various expectations associated with Markov chains and processes having non-stationary transition probabilities. Such non-stationary models arise naturally in contexts in which time-of-day or seasonality effects need to be incorporated. Our approximations are valid asymptotically in regimes in which the transition probabilities change slowly over time. Specifically, we develop approximations for the expected infinite-horizon discounted reward, the expected reward to the hitting time of a set, the expected reward associated with the state occupied by the chain at time n, and the expected cumulative reward over the interval [0, n]. In each case, the approximation involves a linear system of equations identical in form to the one that must be solved to compute the corresponding quantity for a Markov model having stationary transition probabilities. In that sense, the theory provides an approximation no harder to compute than in the traditional stationary context. While most of the theory is developed for finite-state Markov chains, we also provide generalizations to continuous-state Markov chains and to finite-state Markov jump processes in continuous time. In the latter context, one of our approximations coincides with the uniform acceleration asymptotic due to Massey and Whitt (1998).
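The abstract's central computational claim is that each approximation reduces to a linear system of the same form as in the stationary setting. As a minimal sketch of that stationary system, the Python snippet below computes the expected infinite-horizon discounted reward v, which satisfies (I - alpha P) v = r for a transition matrix P, discount factor alpha, and reward vector r. Evaluating it with the transition probabilities frozen at a given time is only one plausible illustration of the "same form" claim; the function names and the slowly drifting P_t are hypothetical and are not the paper's actual construction.

import numpy as np

def discounted_reward(P: np.ndarray, r: np.ndarray, alpha: float) -> np.ndarray:
    """Solve (I - alpha P) v = r for the discounted reward vector v.

    This is the standard stationary-chain system; the abstract states that
    the slowly-changing approximation has the same form.
    """
    n = P.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * P, r)

def P_t(t: float) -> np.ndarray:
    """Hypothetical two-state chain whose transition matrix drifts slowly in t."""
    p = 0.3 + 0.1 * np.sin(t)          # slowly varying transition probability
    return np.array([[1.0 - p, p],
                     [0.5, 0.5]])

r = np.array([1.0, 0.0])               # reward of 1 in state 0, 0 in state 1
v_frozen = discounted_reward(P_t(0.0), r, alpha=0.9)
print(v_frozen)

Because alpha < 1 and P_t(t) is stochastic, the matrix I - alpha P is always invertible, so the solve is well posed; the cost is that of a single dense linear solve, matching the abstract's point that the approximation is no harder to compute than the stationary quantity.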