Large-Sample Theory for Standardized Time Series: An Overview
P. W. Glynn and D. L. Iglehart
Proceedings of the 1985 Winter Simulation Conference, 129-134 (1985)

There are two basic approaches to constructing confidence intervals for steady-state parameters from a single simulation run. The first is to consistently estimate the variance constant in the relevant central limit theorem; this is the approach used in the regenerative, spectral, and autoregressive methods. The second approach (standardized time series, STS), due to SCHRUBEN [10], is to "cancel out" the variance constant. This second approach contains the batch means method as a special case. Our goal in this paper is to discuss the large-sample properties of the confidence intervals generated by the STS method. In particular, the asymptotic (as run size becomes large) expected value and variance of the length of these confidence intervals are studied and shown to be inferior to the behavior of intervals constructed using the first approach.

[10] Schruben, L., "Confidence interval estimation using standardized time series." Ops. Res. 31 (1983), 1090-1108.
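To illustrate the STS idea in its batch means special case, the following is a minimal Python sketch, not taken from the paper: the run is split into contiguous batches, and the Student-t pivot built from the batch averages cancels the unknown variance constant of the central limit theorem. The AR(1) data generator, the batch count, and all other parameter choices are illustrative assumptions.

```python
import numpy as np
from scipy.stats import t as student_t

def batch_means_ci(x, num_batches=10, alpha=0.05):
    """100(1-alpha)% confidence interval for the steady-state mean of x.

    The run is split into num_batches contiguous batches; the batch
    averages are treated as approximately i.i.d. normal, so the unknown
    variance constant cancels out of the Student-t pivot.
    """
    n = len(x) // num_batches                 # observations per batch
    batches = np.asarray(x[:n * num_batches]).reshape(num_batches, n)
    means = batches.mean(axis=1)              # batch averages
    grand = means.mean()                      # overall point estimate
    se = means.std(ddof=1) / np.sqrt(num_batches)  # std. error of batch averages
    h = student_t.ppf(1 - alpha / 2, df=num_batches - 1) * se  # half-width
    return grand - h, grand + h

if __name__ == "__main__":
    # Illustrative AR(1) process with steady-state mean 0.
    rng = np.random.default_rng(42)
    x = np.empty(100_000)
    x[0] = 0.0
    for i in range(1, len(x)):
        x[i] = 0.9 * x[i - 1] + rng.standard_normal()
    print(batch_means_ci(x))
```

Because the number of batches stays fixed as the run length grows, the half-width is driven by a t statistic with few degrees of freedom; the paper's point is that this makes the interval length more variable than intervals based on consistent variance estimation.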